
Java Deep Learning Projects

eBook Details: Paperback, 436 pages | Publisher: WOW! eBook (June 29, 2018) | Language: English | ISBN-10: 178899745X | ISBN-13: 978-1788997454
eBook Description: Java Deep Learning Projects: Build and deploy powerful neural network models using the latest Java deep learning libraries

The post Java Deep Learning Projects appeared first on WOW! eBook: Free eBooks Download.


Beauty and Social Media: Meitu Announces Strategic Pathways for the Next Decade

News From Meitu Inc.
Transmitted by PR Newswire for Journalists on August 08, 2018 08:00 PM EST

Major Product Upgrades, New Organizational Structure, and a Refined Business Model
BEIJING, Aug. 9, 2018 /PRNewswire/ -- Meitu Inc., a leading photo/video editing and sharing company, announced the company's "beauty and social media" strategic pathways for its second decade. As Meitu embarks on the next 10 years, it is poised to enter the social media space, going beyond the beautification products that have shaped the way 1.5 billion users around the world create and share their beauty.
- Meitu app: Biggest upgrade in a decade slated to be released on September 21. Three evolutionary stages: from tool, to community, to social media platform.
- Meipai app: Repositioning as a pan-knowledge short video platform. New social feature "homework" soon to be launched.
- Reorganize into three business units, including a newly established Social Product Business Unit.
- Refine the Relationship between Infrastructure and Commercial Products.
Three Evolutionary Stages of the Meitu App and Biggest Upgrade in a Decade
Meitu is evolving into a social platform. The evolution takes three stages: from tool, to community, to social media. In the first "tool" stage, Meitu successfully dominated photo-related internet traffic by meeting users' needs for photo editing and image enhancement. Meitu is currently in its second "community" stage, focusing on helping users create and share engaging content within the community. In the third "social media" stage, the focus will be on building relational connections among users.

In an effort to test the market, Meitu soft launched the community feature "Social Circle" in May. Feedback has been encouraging: each day, core users spend an average of 25 minutes on Social Circle, launch the app 8 times, and view over 75 pages. Linking tools aimed at creating an interconnected community have also proven effective. For example, the "Collage of Hearts" feature drove a 29% increase in DAU and created dynamic interactions between users; 41% of new users posted their Collage of Hearts creation as their first action in the community.

Given such positive feedback, the Meitu app is poised for its biggest upgrade in a decade. Beginning September 21, Social Circle will be featured prominently on the Meitu app's homepage, right underneath the tool section. The new version will also allow users to scroll down for more community feeds.

"While image-based social media has a massive overseas market, there has been no such counterpart in China. Meitu is set to become a one-stop solution: after processing their photos, users can share right here on the Meitu app," said Wu Xinhong, CEO and Founder of Meitu.

Along with the Meitu app's social media development comes a new slogan that highlights individuality and an ideal way of life: "My Life, My Style". "My" refers to the individual: younger generations have a strong desire to project their internal self on the world and express their own personality and views. In doing so, they discover and attract people who share their interests, their community. "Life & Style" refers to an ideal way of life: users can share their own ideal way of life, while understanding that of others. The slogan is inspired by the revelation that everyone is looking for their vision of beauty and an ideal lifestyle, a need that is far from being met in today's social media landscape.

Evolving from community to social media involves three key aspects: making every user feel present, building relational connections, and facilitating interactions among users. "Meitu aims to deliver the best lifestyle community in 2018, and further explore the possibilities of social media in 2019," added Wu Xinhong.

Meipai: "Pan-Knowledge" Repositioning and New Social Features

Meipai will be repositioned as a "pan-knowledge" short video sharing platform. Differentiating itself from other "pan-entertainment" focused products, Meipai aims to add value for users by focusing on the sharing of people's talent, skills and experiences, i.e. "pan-knowledge". With the new slogan, "Talent worth sharing", Meipai enables its users to share and learn new dance moves, magic tricks, skateboarding, makeup, yoga, desserts, Rubik's Cube solves, crafts, and more. Meipai has a solid foundation for further development: its content library now includes over 300 sub-genres and more than 30,000 key opinion leaders, and it has partnered with more than 50 top short video MCN organizations.

Meipai will continue to evolve under the new positioning. The homepage is currently being revamped to be more discovery-oriented, which is better suited to pan-knowledge content, because pan-knowledge seekers are more self-motivated and interested in actively exploring things. Meipai will also release new features to encourage social interactions. A "homework" feature will be released in the coming month, allowing users to, for example, upload their own video as a comment beneath the original dance tutorial video.

"2017 has been a year of both excitement and reflection for the short video industry. We are repositioning Meipai under the guiding philosophy of developing in a sustainable and healthy manner," said Wu Xinhong.
Reorganize into Three Business Units, Including a Dedicated Social Product Business Unit

In conjunction with the strategic direction, Meitu will reform its corporate structure toward becoming a product-driven company. Effective August 8th, Meitu will be divided into three major business units: the Social Product Business Unit, the Beauty Product Business Unit and the Smart Hardware Product Business Unit. The newly established Social Product Business Unit will consist of both the Meitu and Meipai brands.

Each business unit is encouraged to innovate independently in every aspect of the business to maximize the value of capital and human resources.
Refine the Relationship between Infrastructure and Commercial Products

Meitu's new strategic direction also calls for refining the relationship between infrastructure and commercial products.

Photo editing has been Meitu's key entry point to the online world and essential to its success in the past decade. It has been instrumental in helping the company grow its user base and will drive traffic to the social platform in the future. In addition, social media products will increase Meitu's stickiness and traffic value, and they will generate data on additional dimensions that help Meitu further understand user behavior. This serves as Meitu's infrastructure.

The advertising platform serves as a traffic distribution mechanism, a business product that matches users with advertisements. The outer layer is Meitu's "beauty ecosystem" of commercial products, including smart hardware, an e-commerce platform, and value-added services, with more to come. "Beauty and social media are our strategic focus for Meitu's second decade and a long-term goal. There are bound to be bumps on the road ahead, with some things going smoother than others, but that will not shake our conviction," said Wu Xinhong.
About Meitu

Established in October 2008, Meitu is a leading AI-driven photo/video editing and sharing company headquartered in China. With a mission to inspire more people to express their beauty, the company has developed a rich portfolio of software and smart hardware products designed based on the concept of beauty, including Meitu, BeautyCam, Meipai (a short-form video community app) and Meitu Smartphones. Meitu has amassed over 1.5 billion unique users worldwide, with over one third coming from outside China, including the United States, Brazil, Japan, South Korea, India, Thailand, Indonesia, Malaysia, the Philippines, and Vietnam. As of February 2018, Meitu engages 454.7 million MAUs. With the largest database of human portraits on the planet, Meitu leverages AI in the pursuit of beauty. MTlab, Meitu's research and development hub, is dedicated to AI and imaging innovations such as Computer Vision, Deep Learning, and Computer Graphics.

According to App Annie, Meitu has repeatedly ranked as one of the top eight iOS non-game app developers globally between June 2014 and January 2017. More information about Meitu, Inc. may be found at http://corp.meitu.com/en/

Photo - https://photos.prnasia.com/prnh/20180808/2207131-1-a
Photo - https://photos.prnasia.com/prnh/20180808/2207131-1-b
Photo - https://photos.prnasia.com/prnh/20180808/2207131-1-c
SOURCE Meitu Inc.

[Photo caption: The new Meitu app is slated to be launched on September 21. It will be the app's biggest upgrade in a decade.]
[Photo caption: Meitu's strategy diagram that illustrates its refined business model]

CONTACT: Yin Li, ly39@meitu.com, +86-182-2446-2004
Web Site: http://corp.meitu.com/en

          Counting Craters: Come On, Come On, Look a Little Closer — at Solar System History

The moon has captured humans’ imaginations for thousands of years — but an astronomical number of questions remain about its history, and the history of our solar system. Some of the answers lie in the craters that pockmark the moon’s surface. And with deep learning, scientists are able to see these craters more clearly than ever. Read article >

The post Counting Craters: Come On, Come On, Look a Little Closer — at Solar System History appeared first on The Official NVIDIA Blog.


          Use AWS DeepLens to Determine If Hotdog or Not Hotdog
While at re:Invent, I got to take a deep learning workshop to learn about the new capabilities of AWS such as SageMaker and Greengrass. We used a new device created by AWS and Intel called DeepLens to build an image classification model, deploy it to the device, and use the model to predict image labels. […]
          Linux Foundation and Kernel News

read more


          IBM Power 9 scales to enterprise server


32 to 192 cores for the enterprise cloud

Data, and making sense of it, is one of the most valuable currencies for the modern enterprise, and IBM plays big in that market with its Power 9. Now the company has announced Power 9 solutions for enterprise customers.

Back in May, IBM announced AC922-based machines to target AI and deep learning tasks, and the new Power E950 and E980 are targeting incredibly big businesses including government, big banks and the financial sector, oil and gas, energy and utilities, public safety and healthcare. Let's take a moment to think about this. An oil and gas facility with 80,000 sensors can produce 15 petabytes of data.

Public safety - just in New York City - generates 520 terabytes of data from surveillance cameras. The energy and utility industry, with more than 680 million smart meters, produces over 280 petabytes of data, while healthcare produces the equivalent of 300 million books of health-related data per human over a lifetime.

Security remains a big concern, as there is a 28 percent likelihood that enterprise data might be breached within the next 24 months, with the average cost of a data breach in 2017 hitting $3.6 million.

Power Cloud 

Power Cloud has a big play in the modern enterprise, whether we are talking about the private or the public cloud. IBM has PowerVM and IBM private cloud with PowerVC, PowerVC Cloud for SDI and KVM with secure and accelerated live migration. Power 9 is cloud ready, as PowerVM is built into scale-out systems, and Power 8 and Power 9 support enterprise pools and run Linux workloads seamlessly in the PowerVC private cloud.

The existing Power 9 architecture delivers 2X the performance per core compared to an x86 Xeon SP, 2.6 times the memory per socket, and 1.8 times the memory bandwidth per socket. That's not all, as Power 9 offers 2X the I/O bandwidth and twice the performance in crypto engines.

IBM is launching Power Enterprise Systems in Q3 2018: the Power E950 for smaller customers and the Power E980 for bigger players. Power systems for the enterprise combine cloud management and security with Power 9 performance to enable the most data-intensive, mission-critical workloads, IBM claims.

99.9996 percent uptime

IBM Power Systems have been ranked the most reliable for 10 straight years, delivering 99.9996 percent uptime.

The Power 9 Enterprise E950 (MTM 9040-MR9) comes in a 4U form factor and offers 2S and 4S configurations. This enables 32-, 40-, 44- and 48-core models, as well as 128 DDR4 DIMM slots with a maximum of 16TB of memory. The system comes with built-in IBM PowerVM, has 10 PCIe Gen 4 slots, and runs AIX and Linux.


The bigger brother, the E980 (1-4 nodes, MTM 9080-M9S), comes with a 5U system node and a 2U system controller unit, with 4S processor sockets per node. Up to 192 cores can be configured in such systems, with up to 128 DDR4 CDIMMs and up to 64TB of memory.

This system also supports built-in IBM PowerVM and up to 32 PCIe Gen 4 slots for very fast expansion accelerators. The E980 runs AIX, IBM i and Linux.

Power E950

The Power E950's notable enhancements include more efficient processor communication and 4X memory capacity, up to 16TB per system and 4TB per socket, to improve the performance of large-scale in-memory database applications. The E950's 2X on-chip crypto engines protect data at rest and in motion. The I/O infrastructure, designed for PCIe Gen 4, provides 2X I/O bandwidth and concurrent maintenance to improve throughput and help maintain service levels, IBM claims.

The four integrated NVMe drive slots provide faster and more compact storage, while the system RAS features are improved with enhanced DC-DC regulator redundancy and full fan concurrent maintenance to provide more reliable power and cooling. Last but not least, the Power E950 offers improved cost and serviceability with options for 0-2 concurrently maintainable internal storage controllers, IBM claims.

Power E980 for big players

The Power E980 offers enterprise-class Power 9 processors that provide fast response times and increased system stability. Large-scale system throughput increases, with a 4X boost in internode bandwidth, and the 2X memory capacity, up to 64TB per system, can improve the performance of large-scale in-memory database applications.

Just like the E950, there is 2X I/O bandwidth with PCIe Gen 4 and four integrated NVMe slots per CEC. System RAS features are improved with support for guided FSP and SMP cable installation, as well as dynamic de-allocation of SMP cables, a ½ bandwidth mode and concurrent repair of SMP cabling in the event of a failure in SMP cabling components. A new distributed system clock design eliminates discrete clock cabling from each system drawer to the system control unit.

Upgrade to Power 9 can save money

IBM also promises that migrating to Power 9 can save over 50 percent within three years. A Power7-based 770 machine with 64 cores, 1TB of memory, a relative performance of 579 and a relative performance per core of 8.2 costs $230,000 in hardware maintenance over three years of 24x7 coverage, while software maintenance for the same period adds up to roughly $190,000, for a total of around $420,000 in maintenance.

The Power E950 offers 24 to 32 cores with 1TB of memory, a relative performance of 656 and a relative performance per core of 27.3. The hardware would cost $182,000, but hardware maintenance for three years of 24x7 coverage comes included, and software maintenance for the same period ends up at $27,000, for a total cost of ownership of $209,000.


This amounts to nearly 2X the performance per core and over 12 percent additional capacity with a 60 percent reduction in cores, saving on future software licensing and maintenance. This might be a great play for a private cloud deployment.
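
To make the savings arithmetic concrete, here is a minimal Python sketch of the three-year comparison implied by the figures above. The roughly $190,000 software-maintenance figure for the Power 770 is inferred from the quoted $230,000 hardware figure and the $420,000 total, so treat it as an assumption rather than an official IBM number.

# Rough three-year maintenance comparison based on the figures quoted above.
# The Power 770 software-maintenance number is inferred from the $420,000 total,
# so it is an assumption, not an official IBM figure.

power770 = {
    "hardware_maintenance": 230_000,   # 3 years, 24x7
    "software_maintenance": 190_000,   # inferred: 420,000 - 230,000
}
power_e950 = {
    "hardware": 182_000,               # hardware purchase, maintenance included
    "software_maintenance": 27_000,    # 3 years
}

cost_770 = sum(power770.values())      # maintenance-only cost of staying put
cost_e950 = sum(power_e950.values())   # cost of migrating to the E950

savings = 1 - cost_e950 / cost_770
print(f"Power 770 three-year maintenance: ${cost_770:,}")
print(f"Power E950 three-year cost:       ${cost_e950:,}")
print(f"Relative saving: {savings:.0%}")  # roughly 50 percent, as IBM claims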

Migrating to the Power E980 can save you 50 percent within five years, Big Blue claims.


Every E950 and E980 system comes with PowerVM, Power Virtual Cloud manager, the Cloud Management Console (CMC) and VMware vRealize suite integration. This, says IBM, is all "carefully tailored" for the Power cloud-native solution with IBM Cloud Private, as well as dynamic resource management across multiple Power cloud servers with enterprise pools.

The Power 9 enterprise line-up, including the E950 and E980, offers simplified multi-cloud delivery with security and built-to-scale performance, while staying affordable and reliable overall, IBM continues.


          Controllable Image-to-Video Translation: A Case Study on Facial Expression Generation. (arXiv:1808.02992v1 [cs.CV])

Authors: Lijie Fan, Wenbing Huang, Chuang Gan, Junzhou Huang, Boqing Gong

The recent advances in deep learning have made it possible to generate photo-realistic images by using neural networks and even to extrapolate video frames from an input video clip. In this paper, for the sake of both furthering this exploration and our own interest in a realistic application, we study image-to-video translation and particularly focus on the videos of facial expressions. This problem challenges deep neural networks with an additional temporal dimension compared to image-to-image translation. Moreover, its single input image fails most existing video generation methods that rely on recurrent models. We propose a user-controllable approach so as to generate video clips of various lengths from a single face image. The lengths and types of the expressions are controlled by users. To this end, we design a novel neural network architecture that can incorporate the user input into its skip connections and propose several improvements to the adversarial training method for the neural network. Experiments and user studies verify the effectiveness of our approach. In particular, we would like to highlight that even for face images in the wild (downloaded from the Web and the authors' own photos), our model can generate high-quality facial expression videos of which about 50% are labeled as real by Amazon Mechanical Turk workers.


          Object Detection in Satellite Imagery using 2-Step Convolutional Neural Networks. (arXiv:1808.02996v1 [cs.CV])

Authors: Hiroki Miyamoto, Kazuki Uehara, Masahiro Murakawa, Hidenori Sakanashi, Hirokazu Nosato, Toru Kouyama, Ryosuke Nakamura

This paper presents an efficient object detection method from satellite imagery. Among a number of machine learning algorithms, we proposed a combination of two convolutional neural networks (CNN) aimed at high precision and high recall, respectively. We validated our models using golf courses as target objects. The proposed deep learning method demonstrated higher accuracy than previous object identification methods.
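
The combination the authors describe, one network tuned for recall and a second tuned for precision, is essentially a detection cascade. The sketch below illustrates that idea only; the placeholder scoring functions and thresholds are assumptions, not the authors' trained CNNs.

# Minimal sketch of a two-stage detection cascade in the spirit described above:
# stage 1 favours recall (low threshold), stage 2 favours precision (high threshold).
# The scoring functions below are placeholders, not the authors' trained CNNs.
import numpy as np

rng = np.random.default_rng(0)

def high_recall_cnn(tile):
    # placeholder for a CNN tuned for high recall
    return float(rng.random())

def high_precision_cnn(tile):
    # placeholder for a CNN tuned for high precision
    return float(rng.random())

def detect(tiles, recall_thresh=0.2, precision_thresh=0.8):
    detections = []
    for idx, tile in enumerate(tiles):
        # Stage 1: keep almost everything that might be an object (high recall).
        if high_recall_cnn(tile) < recall_thresh:
            continue
        # Stage 2: confirm candidates with a stricter classifier (high precision).
        if high_precision_cnn(tile) >= precision_thresh:
            detections.append(idx)
    return detections

tiles = [np.zeros((64, 64, 3)) for _ in range(10)]  # dummy satellite image tiles
print(detect(tiles))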


          Radon Inversion via Deep Learning. (arXiv:1808.03015v1 [cs.CV])

Authors: Ji He, Jianhua Ma

The Radon transform is widely used in the physical and life sciences, and one of its major applications is X-ray computed tomography (X-ray CT), which is significant in modern health examination. Radon inversion, or image reconstruction, is challenging due to potentially defective Radon projections. Conventionally, the reconstruction process contains several ad hoc stages to approximate the corresponding Radon inversion, and each stage is highly dependent on the results of the previous one. In this paper, we propose a novel unified framework for Radon inversion via deep learning (DL). The Radon inversion can be approximated by the proposed framework in an end-to-end fashion instead of processing step-by-step with multiple stages. For simplicity, the proposed framework is shortened to iRadonMap (inverse Radon transform approximation). Specifically, we implement the iRadonMap as a dedicated neural network whose architecture can be divided into two segments. In the first segment, a learnable fully-connected filtering layer is used to filter the Radon projections along the view-angle direction, which is followed by a learnable sinusoidal back-projection layer to transfer the filtered Radon projections into an image. The second segment is a common neural network architecture to further improve the reconstruction performance in the image domain. The iRadonMap is optimized end-to-end by training on a large number of generic images from the ImageNet database. To evaluate the performance of the iRadonMap, clinical patient data is used. Qualitative results show promising reconstruction performance of the iRadonMap.
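
A rough sketch of the two-segment idea is shown below. The tensor shapes, layer sizes, the choice of filtering axis and the refinement CNN are all illustrative assumptions, not the authors' implementation.

# Rough sketch of the two-segment iRadonMap idea described above (not the authors' code).
# Assumed sinogram shape: (batch, n_angles, n_detectors); output image: img_size x img_size.
import torch
import torch.nn as nn

class IRadonMapSketch(nn.Module):
    def __init__(self, n_angles=180, n_detectors=256, img_size=128):
        super().__init__()
        # Segment 1a: learnable fully-connected filtering of each projection
        # (the exact filtering axis is an assumption here).
        self.filtering = nn.Linear(n_detectors, n_detectors)
        # Segment 1b: learnable back-projection from filtered sinogram to image.
        self.backprojection = nn.Linear(n_angles * n_detectors, img_size * img_size)
        self.img_size = img_size
        # Segment 2: a small CNN that refines the reconstruction in image space.
        self.refine = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1),
        )

    def forward(self, sinogram):                    # (B, A, D)
        filtered = self.filtering(sinogram)         # learnable filtering of projections
        coarse = self.backprojection(filtered.flatten(1))
        coarse = coarse.view(-1, 1, self.img_size, self.img_size)
        return coarse + self.refine(coarse)         # residual refinement

model = IRadonMapSketch()
print(model(torch.randn(2, 180, 256)).shape)        # torch.Size([2, 1, 128, 128])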


          On feature selection and evaluation of transportation mode prediction strategies. (arXiv:1808.03096v1 [cs.AI])

Authors: Mohammad Etemad, Amilcar Soares Junior, Stan Matwin

Transportation mode prediction is a fundamental task for decision making in smart cities and traffic management systems. Traffic policies designed based on trajectory mining can save money and time for authorities and the public. It may reduce fuel consumption and commute time and, moreover, may provide more pleasant moments for residents and tourists. Since the number of features that may be used to predict a user's transportation mode can be substantial, finding a subset of features that maximizes a performance measure is worth investigating. In this work, we explore wrapper and information retrieval methods to find the best subset of trajectory features. After finding the best classifier and the best feature subset, our results were compared with two related papers that applied deep learning methods, and the results showed that our framework achieved better performance. Furthermore, two types of cross-validation approaches were investigated, and the performance results show that the random cross-validation method provides optimistic results.


          Deep Video Color Propagation. (arXiv:1808.03232v1 [cs.CV])

Authors: Simone Meyer, Victor Cornillère, Abdelaziz Djelouah, Christopher Schroers, Markus Gross

Traditional approaches for color propagation in videos rely on some form of matching between consecutive video frames. Using appearance descriptors, colors are then propagated both spatially and temporally. These methods, however, are computationally expensive and do not take advantage of semantic information of the scene. In this work we propose a deep learning framework for color propagation that combines a local strategy, to propagate colors frame-by-frame ensuring temporal stability, and a global strategy, using semantics for color propagation within a longer range. Our evaluation shows the superiority of our strategy over existing video and image color propagation methods as well as neural photo-realistic style transfer approaches.


          Neural and Synaptic Array Transceiver: A Brain-Inspired Computing Framework for Embedded Learning. (arXiv:1709.10205v3 [cs.NE] UPDATED)

Authors: Georgios Detorakis, Sadique Sheik, Charles Augustine, Somnath Paul, Bruno U. Pedroni, Nikil Dutt, Jeffrey Krichmar, Gert Cauwenberghs, Emre Neftci

Embedded, continual learning for autonomous and adaptive behavior is a key application of neuromorphic hardware. However, neuromorphic implementations of embedded learning at large scales that are both flexible and efficient have been hindered by a lack of a suitable algorithmic framework. As a result, most neuromorphic hardware is trained off-line on large clusters of dedicated processors or GPUs and transferred post hoc to the device. We address this by introducing the neural and synaptic array transceiver (NSAT), a neuromorphic computational framework facilitating flexible and efficient embedded learning by matching algorithmic requirements and neural and synaptic dynamics. NSAT supports event-driven supervised, unsupervised and reinforcement learning algorithms including deep learning. We demonstrate the NSAT in a wide range of tasks, including the simulation of the Mihalas-Niebur neuron, dynamic neural fields, event-driven random back-propagation for event-based deep learning, event-based contrastive divergence for unsupervised learning, and voltage-based learning rules for sequence learning. We anticipate that this contribution will establish the foundation for a new generation of devices enabling adaptive mobile systems, wearable devices, and robots with data-driven autonomy.


          Semantic Segmentation of Human Thigh Quadriceps Muscle in Magnetic Resonance Images. (arXiv:1801.00415v2 [cs.CV] UPDATED)

Authors: Ezak Ahmad, Manu Goyal, Jamie S. McPhee, Hans Degens, Moi Hoon Yap

This paper presents an end-to-end solution for MRI thigh quadriceps segmentation. This is the first attempt at applying deep learning methods to the MRI thigh segmentation task. We use state-of-the-art Fully Convolutional Networks with a transfer learning approach for the semantic segmentation of regions of interest in MRI thigh scans. To further improve the performance of the segmentation, we propose a post-processing technique using basic image processing methods. With our proposed method, we have established a new benchmark for MRI thigh quadriceps segmentation with a mean Jaccard Similarity Index of 0.9502 and a processing time of 0.117 seconds per image.


          Multi-region segmentation of bladder cancer structures in MRI with progressive dilated convolutional networks. (arXiv:1805.10720v2 [cs.CV] UPDATED)

Authors: Jose Dolz, Xiaopan Xu, Jerome Rony, Jing Yuan, Yang Liu, Eric Granger, Christian Desrosiers, Xi Zhang, Ismail Ben Ayed, Hongbing Lu

Precise segmentation of bladder walls and tumor regions is an essential step towards non-invasive identification of tumor stage and grade, which is critical for treatment decision and prognosis of patients with bladder cancer (BC). However, the automatic delineation of bladder walls and tumors in magnetic resonance images (MRI) is a challenging task, due to important bladder shape variations, strong intensity inhomogeneity in urine and very high variability across the population, particularly in tumor appearance. To tackle these issues, we propose to use a deep fully convolutional neural network. The proposed network includes dilated convolutions to increase the receptive field without incurring extra cost or degrading its performance. Furthermore, we introduce progressive dilations in each convolutional block, thereby enabling extensive receptive fields without the need for large dilation rates. The proposed network is evaluated on 3.0T T2-weighted MRI scans from 60 pathologically confirmed patients with BC. Experiments show that the proposed model achieves high accuracy, with a mean Dice similarity coefficient of 0.98, 0.84 and 0.69 for inner wall, outer wall and tumor region, respectively. These results represent a very good agreement with reference contours and an increase in performance compared to existing methods. In addition, inference times are less than a second for a whole 3D volume, which is 2-3 orders of magnitude faster than related state-of-the-art methods for this application. We showed that a CNN can yield precise segmentation of bladder walls and tumors in bladder cancer patients on MRI. The whole segmentation process is fully automatic and yields results in very good agreement with the reference standard, demonstrating the viability of deep learning models for the automatic multi-region segmentation of bladder cancer MRI images.
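
The progressive-dilation idea, successive convolutions whose dilation rates grow gradually instead of one large rate, can be sketched as follows; the rates and channel counts are illustrative assumptions rather than the paper's exact configuration.

# Sketch of a "progressive dilation" convolutional block as described above:
# successive 3x3 convolutions with increasing dilation rates grow the receptive
# field without resorting to a single large dilation. Rates and channel counts
# here are illustrative assumptions, not the paper's exact configuration.
import torch
import torch.nn as nn

class ProgressiveDilatedBlock(nn.Module):
    def __init__(self, in_ch, out_ch, rates=(1, 2, 4)):
        super().__init__()
        layers, ch = [], in_ch
        for r in rates:
            layers += [
                nn.Conv2d(ch, out_ch, kernel_size=3, padding=r, dilation=r),
                nn.BatchNorm2d(out_ch),
                nn.ReLU(inplace=True),
            ]
            ch = out_ch
        self.block = nn.Sequential(*layers)

    def forward(self, x):
        return self.block(x)

block = ProgressiveDilatedBlock(1, 32)
print(block(torch.randn(1, 1, 96, 96)).shape)   # torch.Size([1, 32, 96, 96])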


          Java Deep Learning Projects

eBook Details: Paperback: 436 pages Publisher: WOW! eBook (June 29, 2018) Language: English ISBN-10: 178899745X ISBN-13: 978-1788997454 eBook Description: Java Deep Learning Projects: Build and deploy powerful neural network models using the latest Java deep learning libraries

The post Java Deep Learning Projects appeared first on eBookee: Free eBooks Download.


          Building a Globally Optimized Computational Intelligent Image Processing Algorithm for On-Site Inference of Nitrogen in Plants
Estimating nutrient content in plants is a crucial task in the application of precision farming. The task is more challenging when it is conducted nondestructively, based on plant images captured in the field, due to variations in lighting conditions. This paper proposes a computational intelligence image processing approach to analyze nitrogen status in wheat plants. We developed an ensemble of deep learning multilayer perceptrons, combined via committee machines, for color normalization and image segmentation. This paper also focuses on building a genetic-algorithm-based global optimization to fine-tune the color normalization and nitrogen estimation results. We discovered that the proposed method can successfully normalize plant images by reducing color variability compared to other color normalization techniques. Furthermore, this algorithm is able to enhance the nitrogen estimation results compared to other non-global optimization methods as well as the widely used SPAD-meter-based nitrogen measurement.
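
The committee-machine idea mentioned above, several networks trained on the same task whose outputs are averaged, can be sketched as follows; this uses scikit-learn and synthetic data purely for illustration and is not the authors' pipeline.

# Minimal sketch of a committee machine: several MLPs trained on the same task
# whose predictions are averaged. Synthetic data and scikit-learn are used here
# for brevity; this is not the authors' pipeline.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(42)
X = rng.random((200, 3))                 # e.g. normalized R, G, B of a plant pixel
y = X @ np.array([0.2, 0.7, 0.1]) + 0.05 * rng.standard_normal(200)  # toy target

committee = [
    MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=2000, random_state=s).fit(X, y)
    for s in range(5)
]

def committee_predict(X_new):
    # Average the members' outputs, the simplest committee-machine combiner.
    return np.mean([m.predict(X_new) for m in committee], axis=0)

print(committee_predict(X[:3]))
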
          What's new on arXiv
Rethinking Numerical Representations for Deep Neural Networks
With ever-increasing computational demand for deep learning, it is critical to investigate the …

Continue reading


          UCI Health Opens Center for Artificial Intelligence, Deep Learning
The team will focus on developing deep learning neural networks and applying them to diagnostics, disease prediction, and treatment planning.
          Intel Reinventing Xeon for AI – but Is It Too Late?
On Wednesday, Intel announced a new AI extension to Xeon that it calls Intel Deep Learning Boost, which will ship with the Cascade Lake processor ...
          Thierry Pellegrino on What’s New at the Dell HPC Community
Dell EMC and NVIDIA engineered this deep learning design to be built around Dell EMC PowerEdge servers with NVIDIA Tesla V100 Tensor Core ...
          Voices in AI – Episode 62: A Conversation with Atif Kureishy
Episode 62 of Voices in AI features host Byron Reese and Atif Kureishy discussing AI, deep learning, and the practical examples and implications in ...
          Speech-recognition Tool Can Distinguish ALS, May Offer Way of Evaluating Patients at Home
Researchers believe that system performance can also be improved by using deep learning methods, particularly on larger datasets. In the future ...
          NYU Social Entrepreneurship Program Honors Edtech Startup Smart Sparrow and Others in …
Judges were asked to select businesses that would promote postsecondary education in the U.S. with machine and deep learning algorithms, ...
          Choosing Between GAN Or Encoder Decoder Architecture For ML Applications Is Like Comparing …
Since the deep learning boom has started, numerous researchers have started building many architectures around neural networks. It is often ...
          Linux Deep Learning expands: answer is still 42
The Linux Foundation Deep Learning Foundation (LF DLF) has announced five new members: Ciena, DiDi, Intel, Orange and Red Hat.
          Intel hosts AI DevCon to help create AI-ready talent in India
Intel India hosted the AI (Artificial Intelligence) DevCon, aimed at bringing together the top minds in data science, machine and deep learning, ...
          Deep convolutional autoencoder for radar-based classification of similar aided and unaided human activities
Radar-based activity recognition is a problem that has been of great interest due to applications such as border control and security, pedestrian identification for automotive safety, and remote health monitoring. This paper seeks to show the efficacy of micro-Doppler analysis to distinguish even those gaits whose micro-Doppler signatures are not visually distinguishable. Moreover, a three-layer, deep convolutional autoencoder (CAE) is proposed, which utilizes unsupervised pretraining to initialize the weights in the subsequent convolutional layers. This architecture is shown to be more effective than other deep learning architectures, such as convolutional neural networks and autoencoders, as well as conventional classifiers employing predefined features, such as support vector machines (SVM), random forest, and extreme gradient boosting. Results show the performance of the proposed deep CAE yields a correct classification rate of 94.2% for micro-Doppler signatures of 12 different human activities measured indoors using a 4 GHz continuous wave radar—17.3% improvement over SVM.
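
The two-stage recipe described above, unsupervised pretraining of a convolutional autoencoder followed by reuse of its encoder in a classifier, can be sketched roughly as follows; the input size, channel counts and 12-way head are assumptions for illustration, not the paper's exact architecture.

# Sketch of the convolutional-autoencoder-then-classifier idea described above:
# the encoder is first trained to reconstruct micro-Doppler spectrograms, then its
# weights initialize a classifier. Input size, channel counts and the 12-way head
# are illustrative assumptions.
import torch
import torch.nn as nn

encoder = nn.Sequential(
    nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),   # 64x64 -> 32x32
    nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),  # 32x32 -> 16x16
    nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),  # 16x16 -> 8x8
)
decoder = nn.Sequential(
    nn.ConvTranspose2d(64, 32, 2, stride=2), nn.ReLU(),
    nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(),
    nn.ConvTranspose2d(16, 1, 2, stride=2),
)
autoencoder = nn.Sequential(encoder, decoder)      # stage 1: unsupervised pretraining

classifier = nn.Sequential(                        # stage 2: reuse pretrained encoder
    encoder,
    nn.Flatten(),
    nn.Linear(64 * 8 * 8, 12),                     # 12 human activities
)

x = torch.randn(4, 1, 64, 64)                      # batch of spectrogram crops
print(autoencoder(x).shape, classifier(x).shape)   # reconstruction and 12-way logits
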
          (USA-TX-Plano) Advertising & Analytics - Principal Data Scientist (AdCo)
The Data Scientist will be responsible for designing and implementing processes and layouts for complex, large- scale data sets used for modeling, data mining, and research purposes. The purpose of this role is to conceptualize, prototype, design, develop and implement large scale big data science solutions in the cloud and on premises, in close collaboration with product development teams, data engineers and cloud enterprise teams. Competencies in implementing common and new machine learning, text mining and other data science driven solutions on cloud based technologies such as AWS are required. The data scientist will be knowledgeable and skilled in the emerging data science trends and must be able to provide technical guidance to the other data scientists in implementing emerging and advanced techniques. The data scientist must also be able to work closely with the product and business teams to conceptualize appropriate data science models and methods that meet the requirements. Key Roles and Responsibilities + Uses known and emerging techniques and methods in data science (including statistical, machine learning, deep learning, text and language analytics and visualization) in big data and cloud based technologies to conceptualize, prototype, design, code, test, validate and tune data science centric solutions to address business and product requirements + Conceptualizes data science enablers required for supporting future product features based on business and product roadmaps, and guides cross functional teams in prototyping and validating these enablers + Mentors and guides other data scientists + Uses a wide range of existing and new data science and machine learning tools and methods as required to solve the problem on hand. Skilled in frameworks and libraries using but not limited to R, python, spark, scala, pig, hive, mllib, mxnet, tensorflow, keras, theanos etc. + Is aware of industry trends an collaborates with the platform and engineering teams to update the data science development stack for competitive advantage + Collaborates with third party data science capability vendors and provides appropriate recommendations to the product development teams + Works in a highly agile environment **Experience** Typically requires 10 or more years experience or PhD in an approved field with a minimum of 6 years of relevant experience. **Education** Preferred Masters of Science in Computer Science, Math or Scientific Computing; Data Analytics, Machine Learning or Business Analyst nanodegree; or equivalent experience.
          (USA-CO-Denver) Learning and Development Specialist
**92304BR** **Job title:** Learning and Development Specialist **Segment:** Upstream **Role synopsis:** The Learning & Development Specialist is responsible for creating, facilitating, monitoring, evaluating and documenting organizational learning solutions. As we build and maintain our talent management framework, we will rely on this person to help us create and then implement world-class learning programs. Deep learning programs are a key priority and this person must play a significant role in their creation, delivery, and everything in between. The position is considered proficient in adult learning, instructional design, and group facilitation. The incumbent is expected to collaborate with subject matter experts and stakeholders using independent judgment, problem solving, and analytical skills in identifying and assessing talent development needs, and designing and delivering educational programs and content. Utilizing performance consulting skills and best practices in learning & development, this role often advises leaders and peers in identifying learning needs and determining appropriate education and talent development solutions. This position requires the individual to be able to build strong connections and trusting relationships with current employees at all levels. This individual will be a member of the People Operations team and must support an environment that blends hard work, continuous learning, innovation, and personal development. The preferred location for this position is Houston, TX. Candidates from Denver, CO will also be considered. **Req ID:** 92304BR **Location:** United States - Colorado - Denver, United States - Texas - Houston **Is this a part time position?:** No **Relocation available:** Negotiable **Travel required:** Yes - up to 25% **Key accountabilities:** + Support delivery of our employee development strategy for all levels of the business (graduates, high potentials, leadership, technical, compliance training, field operators, etc.) + Lead the identification and assessment of training needs; develop, plan and organize training programs based upon determined needs. Work with SME (subject matter experts) on knowledge transfer/content. + Facilitate learning solutions in either a physical classroom or virtual setting. Serve as a role model for excellent instructional facilitation and develop those capabilities in others. + Assist with activities related to the modernization of our employee development philosophy, approach, and components + Assist with designing and delivering ongoing L&D programs: Identify participants and communicate with them and their managers, set up and monitor registrations, review course materials to ensure up-to-date/relevant, deliver instruction and/or coordinate with other presenters, prepare and execute follow-up evaluations. + Explore and administer modern learning and development technology to facilitate smooth execution, assignment and tracking of L&D activities. (i.e. LMS, authoring software) + Establish relationships and interface with vendors, as needed, to meet training program needs. + Communicate timely training throughout the organization; manage learning events calendar. + Create training schedules, and coordinate logistics with colleagues on the Talent and Learning Team in support of a well-orchestrated, holistic approach to employee development. + Collaborate with colleagues on the Talent and Learning Team on assigned projects. 
+ Other duties as assigned **Essential Education:** + Minimum of a Bachelor degree required. + PHR or SPHR preferred. + CPLP preferred. **Desirable criteria and qualifications:** + Foster an environment of safety-first operations. + Forward-looking thinker who actively seeks opportunities and solutions. + Demonstrated use of IMPACT principles: **I** - Innovated: Learns from new ideas and applies solutions to add value. **M** - Motivated: Overcomes obstacles with an intense desire to succeed. **P** - Performance Driven: Makes value-based decisions involving measured risk to deliver business objectives. **A** - Accountable: Takes responsibility and ownership of business performance. **C** - Collaborative: Shares knowledge and works together for the good of L48. **T** - Trustworthy: Keeps commitments, listens to others and authentically supports change necessary to achieve our Path to Premier. + Demonstrated business focus. Self-directed, process-oriented, strong business acumen, and experienced at driving change. + Ability to function well independently, demonstrate ownership and self-sufficiency for project management and the achievement of agreed upon results. + Ability to succeed in a fast-paced environment, working on multiple projects with constantly changing demands and deadlines. + Ability to manage and deliver high quality performance on multiple, simultaneous strategic, value-added tasks and priorities. + Numerate with strong analytical ability. + Relentless drive and determination – high energy and personable with an intense focus on doing the job right the first time. + Ability to build productive relationships with employees at all levels of the organization. + Ability to coach others and provide constructive challenge at all levels of the organization. + Ability to work collaboratively as part of a team. + Highly organized and attentive to detail. + Strong interpersonal and communication skills. + Sensitivity to confidential matters. **About BP:** We are a global energy business involved in every aspect of the energy system. We have 75,000 employees in 80 countries, working towards delivering light, heat and mobility to millions of people, every day. We are one of the very few companies equipped to solve some of the big complex challenges that matter for the future. We have a real contribution to make to the world's ambition of a low carbon future. Join us, and be part of what we can accomplish together. **Sub-category:** Learning & Development **Job category:** Human Resources **Countries (State/Region):** United States **Disclaimer:** If you are selected for a position in the United States, your employment will be contingent upon submission to and successful completion of a post-offer/pre-placement drug test (and alcohol screening/medical examination if required by the role) as well as pre-placement verification of the information and qualifications provided during the selection process. The drug screen requires a hair test for which BP must be able to obtain a sufficient hair sample for analysis (~4 cm/1 ½” scalp, or > 2 cm/¾” body – arms & armpits/legs/chest) As part of our dedication to the diversity of our workforce, BP is committed to Equal Employment Opportunity. Applicants will receive consideration for employment without regard for race, color, gender, religion, national origin, disability, veteran status, military status, age, marital status, sexual orientation, gender identity, genetic information or any other protected group status. 
We are also committed to providing reasonable accommodations for qualified individuals with disabilities and disabled veterans in our job application procedures. If you need assistance or an accommodation due to a disability, you may contact us or have one of your representatives contact us at BPUSApplicationAssis@bp.com or by telephone at 281.366.1999. Read the Equal Employment Opportunity is the Law poster and the poster supplement - for more information about Equal Employment Opportunities. ( Spanish version ) BP is an equal employment opportunity and affirmative action employer. View our policy statement . **Essential experience and job requirements:** + A minimum of 4 years in Human Resources, with an additional 2 years in organizational development + Strong needs assessment and instructional design/curriculum development experience. + Experience designing education for online/web-based learning environments. + Experience managing organizational training projects working with stakeholders at both leadership and staff levels **Other Requirements (eg Travel, Location):** Travel 15-20% **bp.com #tag:** \#lower48req **Eligibility Requirements:** If you are applying for a position in the United States, you must be at least 18 years of age, legally authorized to work in the United States; and not require sponsorship for employment visa status (e.g., TN, H1B status), now or in the future.
          (USA-TX-Houston) Learning and Development Specialist
**92304BR** **Job title:** Learning and Development Specialist **Segment:** Upstream **Role synopsis:** The Learning & Development Specialist is responsible for creating, facilitating, monitoring, evaluating and documenting organizational learning solutions. As we build and maintain our talent management framework, we will rely on this person to help us create and then implement world-class learning programs. Deep learning programs are a key priority and this person must play a significant role in their creation, delivery, and everything in between. The position is considered proficient in adult learning, instructional design, and group facilitation. The incumbent is expected to collaborate with subject matter experts and stakeholders using independent judgment, problem solving, and analytical skills in identifying and assessing talent development needs, and designing and delivering educational programs and content. Utilizing performance consulting skills and best practices in learning & development, this role often advises leaders and peers in identifying learning needs and determining appropriate education and talent development solutions. This position requires the individual to be able to build strong connections and trusting relationships with current employees at all levels. This individual will be a member of the People Operations team and must support an environment that blends hard work, continuous learning, innovation, and personal development. The preferred location for this position is Houston, TX. Candidates from Denver, CO will also be considered. **Req ID:** 92304BR **Location:** United States - Colorado - Denver, United States - Texas - Houston **Is this a part time position?:** No **Relocation available:** Negotiable **Travel required:** Yes - up to 25% **Key accountabilities:** + Support delivery of our employee development strategy for all levels of the business (graduates, high potentials, leadership, technical, compliance training, field operators, etc.) + Lead the identification and assessment of training needs; develop, plan and organize training programs based upon determined needs. Work with SME (subject matter experts) on knowledge transfer/content. + Facilitate learning solutions in either a physical classroom or virtual setting. Serve as a role model for excellent instructional facilitation and develop those capabilities in others. + Assist with activities related to the modernization of our employee development philosophy, approach, and components + Assist with designing and delivering ongoing L&D programs: Identify participants and communicate with them and their managers, set up and monitor registrations, review course materials to ensure up-to-date/relevant, deliver instruction and/or coordinate with other presenters, prepare and execute follow-up evaluations. + Explore and administer modern learning and development technology to facilitate smooth execution, assignment and tracking of L&D activities. (i.e. LMS, authoring software) + Establish relationships and interface with vendors, as needed, to meet training program needs. + Communicate timely training throughout the organization; manage learning events calendar. + Create training schedules, and coordinate logistics with colleagues on the Talent and Learning Team in support of a well-orchestrated, holistic approach to employee development. + Collaborate with colleagues on the Talent and Learning Team on assigned projects. 
+ Other duties as assigned **Essential Education:** + Minimum of a Bachelor degree required. + PHR or SPHR preferred. + CPLP preferred. **Desirable criteria and qualifications:** + Foster an environment of safety-first operations. + Forward-looking thinker who actively seeks opportunities and solutions. + Demonstrated use of IMPACT principles: **I** - Innovated: Learns from new ideas and applies solutions to add value. **M** - Motivated: Overcomes obstacles with an intense desire to succeed. **P** - Performance Driven: Makes value-based decisions involving measured risk to deliver business objectives. **A** - Accountable: Takes responsibility and ownership of business performance. **C** - Collaborative: Shares knowledge and works together for the good of L48. **T** - Trustworthy: Keeps commitments, listens to others and authentically supports change necessary to achieve our Path to Premier. + Demonstrated business focus. Self-directed, process-oriented, strong business acumen, and experienced at driving change. + Ability to function well independently, demonstrate ownership and self-sufficiency for project management and the achievement of agreed upon results. + Ability to succeed in a fast-paced environment, working on multiple projects with constantly changing demands and deadlines. + Ability to manage and deliver high quality performance on multiple, simultaneous strategic, value-added tasks and priorities. + Numerate with strong analytical ability. + Relentless drive and determination – high energy and personable with an intense focus on doing the job right the first time. + Ability to build productive relationships with employees at all levels of the organization. + Ability to coach others and provide constructive challenge at all levels of the organization. + Ability to work collaboratively as part of a team. + Highly organized and attentive to detail. + Strong interpersonal and communication skills. + Sensitivity to confidential matters. **About BP:** We are a global energy business involved in every aspect of the energy system. We have 75,000 employees in 80 countries, working towards delivering light, heat and mobility to millions of people, every day. We are one of the very few companies equipped to solve some of the big complex challenges that matter for the future. We have a real contribution to make to the world's ambition of a low carbon future. Join us, and be part of what we can accomplish together. **Sub-category:** Learning & Development **Job category:** Human Resources **Countries (State/Region):** United States **Disclaimer:** If you are selected for a position in the United States, your employment will be contingent upon submission to and successful completion of a post-offer/pre-placement drug test (and alcohol screening/medical examination if required by the role) as well as pre-placement verification of the information and qualifications provided during the selection process. The drug screen requires a hair test for which BP must be able to obtain a sufficient hair sample for analysis (~4 cm/1 ½” scalp, or > 2 cm/¾” body – arms & armpits/legs/chest) As part of our dedication to the diversity of our workforce, BP is committed to Equal Employment Opportunity. Applicants will receive consideration for employment without regard for race, color, gender, religion, national origin, disability, veteran status, military status, age, marital status, sexual orientation, gender identity, genetic information or any other protected group status. 
We are also committed to providing reasonable accommodations for qualified individuals with disabilities and disabled veterans in our job application procedures. If you need assistance or an accommodation due to a disability, you may contact us or have one of your representatives contact us at BPUSApplicationAssis@bp.com or by telephone at 281.366.1999. Read the Equal Employment Opportunity is the Law poster and the poster supplement - for more information about Equal Employment Opportunities. ( Spanish version ) BP is an equal employment opportunity and affirmative action employer. View our policy statement . **Essential experience and job requirements:** + A minimum of 4 years in Human Resources, with an additional 2 years in organizational development + Strong needs assessment and instructional design/curriculum development experience. + Experience designing education for online/web-based learning environments. + Experience managing organizational training projects working with stakeholders at both leadership and staff levels **Other Requirements (eg Travel, Location):** Travel 15-20% **bp.com #tag:** \#lower48req **Eligibility Requirements:** If you are applying for a position in the United States, you must be at least 18 years of age, legally authorized to work in the United States; and not require sponsorship for employment visa status (e.g., TN, H1B status), now or in the future.
          One-to-One at Scale: The Confluence of Behavioral Science and Technology and How It’s ...

Consumer and business customers have increasing expectations that businesses provide products and services customized for their unique needs. Adaptive intelligence and machine learning technology, combined with insights into behavior, make this customization possible. The financial services industry is moving aggressively to take advantage of these new capabilities. In March 2018, Bank of America launched Erica, a virtual personal assistant—a chatbot—powered by AI. In just three months, Erica surpassed one million users.

But to achieve personalization at scale requires an IT infrastructure that can handle huge amounts of data and process it in real time. Engineered systems purpose-built for these cognitive workloads provide the foundation that helps make this one-to-one personalization possible.

Bradley Leimer, Managing Director and Head of Fintech Strategy at Explorer Advisory & Capital, provides consulting and investment advisory services to start-ups, accelerators, and established financial services companies. As the former Head of Innovation and Fintech Strategy at Santander U.S., his team connected the bank to the fintech ecosystem. Bradley spoke with us recently about how behavioral science is evolving in the financial services industry and how new technological capabilities, when tied to human behavior, are changing the way organizations respond to customer needs.

I know you’re fascinated by behavioral science. How does it frame what you do in the financial sector?

Behavioral science is fascinating because the study of human behavior itself is so intriguing. One of the many books I was influenced by early in my career was Paco Underhill’s 1999 book Why We Buy. The science around purchase behavior and how companies leverage our behavior to create buying decisions that fall in their favor—down to where products are placed and the colors that are used to attract the eye—these are techniques that have been used since before the Mad Men era of advertising.

I’m intrigued by the psychology behind the decisions we make. People are a massive puzzle to solve at scale. Humans are known to be irrational, but they are irrational in predictable ways. Leveraging behavioral science, along with things like design thinking and human-computer interaction, have been a part of building products and customer experiences in financial services for some time. To nudge customers to sign up for a service or take an additional product or to perform behaviors that are sometimes painful like budgeting, saving more, investing, consolidating, or optimizing the use of credit all involve deeply understanding human behavior.

Student debt reached $1.5 trillion in Q1 2018. Can behavioral analytics be used to help students better manage their personal finances?

What’s driving this intersection between behavioral science and fintech?

Companies have been using the ideas of behavioral science in strategic planning and marketing for some time, but it’s only been in the last decade that the technology to act upon the massive amount of new data we collect has been available. The type of data we used to struggle to plug into a mainframe through data reels now flies freely within a cloud of shared service layers. So beyond new analytic tools and AI, there are a few other things that are important.

People interact with brands differently now. To become a customer now in financial services, it most often means that you’re interacting through an app, or a website, not in any physical form. It’s not necessarily how a branch is laid out anymore; it’s how the navigation works in your application, and what you can do in how few steps, how quickly you can onboard. This is what is really driving the future of revenue opportunity in the financial space.

At the same time, the competition for customers is increasing. Investments in the behavioral science area are a must-have now because the competition gets smarter every day and the applications to understand human behavior are simultaneously getting more accessible. We use behavioral science to understand and refine our precious opportunities to build empathy and relationships. 

You’ve mentioned the evolution of behavioral science in the financial services industry. How is it evolving and what’s the impact?

Behavioral science is nothing without the right type of pertinent, clean data. We have entered the era of engagement banking: a marketing, sales, and service model that deploys technology to achieve customer intimacy at scale. But humans are not just 1’s and 0’s. You need a variety of teams within banks and fintechs to leverage data in the right way, to make sure it addresses real human needs.

The real impact of these new tools has only started to be really felt. We have an opportunity to broaden the global use of financial services to reduce the number of the underbanked, to open new markets for payments and credit, to optimize every unit of currency for our customers more fully and lift up a generation by ending poverty and reducing wealth inequality.

40% of Americans could not come up with $400 for an emergency expense. Behavioral science can help people move out of poverty and reduce wealth inequality.

How does artificial intelligence facilitate this evolution?

Financial institutions are challenged with innovating a century-old service model, and the addition of advanced analytics, artificial intelligence tools and how they can be used within the enterprise is still a work in progress. Our metamorphosis has been slowed by the dual weight of digital transformation and the broader implications of ever-evolving customers.

Banks have vast amounts of unstructured and disparate data throughout their complicated, mostly legacy, systems. We used to see static data modeling efforts based on hundreds of inputs. That’s transitioned to an infinitely more complex set of thousands of variables. In response, we are developing and deploying applications that make use of machine learning, deep learning, pattern recognition, and natural language processing among other functionalities.

Using AI applications, we have seen efficiency gains in customer onboarding/know-your-customer (KYC), automation of credit decisioning and fraud detection, personalized and contextual messaging, supply-chain improvements, infinitely tailored product development, and more effective communication strategies based on real-time, multivariate data. AI is critical to improving the entire lifecycle of the customer experience.

What’s the role of behavioral analytics in this trend?

Behavioral analytics combines specific user data: transaction histories, where people shop, how they manage their spending and savings habits, the use of credit, historical trends in balances, how they use digital applications, how often they use different channels like ATMs and branches, along with technology usage data like navigation path, clicks, social media interactions, and responsiveness to marketing. It takes a more holistic and human view of data, connecting individual data points to tell us not only what is happening, but also how and why it is happening.

You’ve built out these customization and personalization capabilities in banks and fintechs. Tell us about the basic steps any enterprise can take to build these capabilities.

As an organization, you need to clearly define your business goals. What are the metrics you want to improve? Is it faster onboarding, lower cost of acquisition, quicker turn toward profitable products, etc.? And how can a more customer-centric, personalized experience assist those goals?

As you develop these, make sure you understand who needs to be in the room. Many banks don’t have a true data science team, or they are a sort of hybrid analytical marketing team that has many masters. That’s a mistake. You need deep understanding of advanced analytics to derive the most efficiencies out of these projects. Then you need a strong collaborative team that includes marketing, digital banking, customer experience, and representation from those teams that interacts with clients. Truly user-centric teams leverage data to create a complete understanding of their users’ challenges. They develop insight into what features their customers use and what they don’t and build knowledge of how customers get the most value out of their products. And then they continually iterate and adjust.

You also need to look at your partnerships, including those with fintechs. There are several lessons derived from fintech platforms such as attention to growth through business model flexibility, devotion to speed-to-market, and a focus on creating new forms of customer value through leveraging these tools to customize everything from onboarding to the new user experience as well as how they communicate and customize the relationship over time.

What would be the optimum technology stack to support real-time contextual messages, products, or services?

Choosing the right technology stack for behavioral analytics is not that different than for any other type of application. You have to find the solution that maps most economically and efficiently to your particular problem set. This means implementing a technology that can solve the core business problems, can be maintained and supported efficiently, and minimizes your total cost of ownership.

In banking, it has to reduce risk while maximizing your opportunities for success. The legacy systems that many banks still deploy were built on relational databases and were not designed for real-time processing, for providing access via RESTful APIs, or for the cloud-based data lakes we see today. Nor did they have the ability to connect and analyze any form of data. The types of data we now have to consider are just breathtaking and growing daily. In choosing technology partners, you want to make sure what you’re buying is built for this new world from the beginning, and that the platform is flexible. You have to be able to migrate from on-premises solutions to the cloud, along with the variety of virtual machines being used today.

If I can paraphrase what you’re saying, it’s that financial services companies need a big data solution to manage all these streams of structured and unstructured data coming in from AI/ML, and other advanced applications. Additionally, a big data solution that simplifies deployment by offering identical functionality on-premises, in the cloud, and in the Oracle public Cloud behind your firewall would also be a big plus.

Are there any other must-haves in terms of performance, analytics, etc., to build an effective AI-based solution?

Must-haves include flexibility to consume all types of data, especially that which is gathered from the web and from digital applications. It needs to be very good at data aggregation—that is, reducing large data sets down to more manageable proportions that are still representative. It must be good at transitioning from aggregation to the detail level and back to optimize different analytical tools. It should be strong in quickly identifying cardinality—how many types of variables can there be within a given field.

Some other things to look for in a supporting infrastructure are direct access through query tools (SQL), support for data transformation within the platform (ETL and ELT tools), flexible data model or unstructured access to all data, algorithmic data transformation, ability to add and access one-off data sets simply (like through ODBC), flexible ways to use APIs to load and extract information, that kind of thing. A good system needs to be real time to help customers in taking the most optimized journey within digital applications. 

To wrap up our discussion, what three tips would you give the enterprise IT chief about how to incorporate these new AI capabilities to help the organization reach its goals around delivering a better customer experience?

First, realize that this isn’t just a technology problem—it will require engineers, data scientists, system architects and data specialists sure, but you also need a collaborative team that involves many parts of the business and builds tools that are accessible.

Start with simple KPIs to improve. Reducing the cost of acquisition or improving onboarding workflows, improving release time for customer-facing applications, reducing particular types of unnecessary customer churn—these are good places to start. They improve efficiencies and impact the bottom line. They help build the case around necessary new technology spend and create momentum.

Understand that the future of the financial services model is all about the customer—understanding their needs and helping the business meet them. Our greatest source of innovation is, in the end, our empathy.

You’ve given us a lot to think about, Bradley. Based on our discussion, it seems that the world of financial services is changing and banks today will require an effective AI-based solution that leverages behavioral science and personalization capabilities.

Additionally, in order for banks to sustain a competitive advantage and lead in the market, they need to invest in an effective big data warehousing strategy. Therefore, business and IT leaders need a solution that can store, acquire, and process large data workloads at scale, and that has cognitive workload capabilities to deliver the advanced insights needed to run your business most effectively. It is also important that the technology is tailor-made for advancing businesses' analytical capabilities and leverages familiar big data and analytics open source tools. Oracle Big Data Appliance provides that high-performance, cloud-ready, secure platform for running diverse workloads using Hadoop, Spark, and NoSQL systems.


          A Response to the Times: Technology and Deep Learning
Deep learning involves the use of critical higher order thinking skills (like complex inductive reas
          How deep learning helps in healthcare

Deep learning plays a great role in healthcare. It has redefined cancer treatment and helped healthcare greatly. If you want to know more about personalized medicine for cancer with deep learning, then visit: https://medium.com/clusterone/personalized-medicine-redefining-cancer-tr...


          Deep Learning Market - Global Industry Analysis, Size, Share, Growth, Trends and Forecast 2018 – 2022
WiseGuyRerports.com Presents “Deep Learning Market in the US 2017-2021” New Document to its Studies
          (USA-CA-San Jose) Python Engineer Software 2
At Northrop Grumman, our work with **cutting-edge technology** is driven by something **human** : **the lives our technologies protects** . It's the value of innovation that makes a difference today and tomorrow. Here you'll have the opportunity to connect with coworkers in an environment that's uniquely caring, diverse, and respectful; where employees share experience, insights, perspectives and creative solutions through integrated product & cross-functional teams, and employee resource groups. Don't just build a career, build a life at Northrop Grumman. The Cyber Intelligence Mission Solutions team is seeking an Engineer Software 2 to join our team in San Jose as we kick off a new 10 year program to protect our nation's security. You will be using your Python skills to perform advanced data analytics on a newly architected platform. Hadoop, Spark, Storm, and other big data technologies will be used as the basic framework for the program's enterprise. **Roles and Responsibilities:** + Python development of new functionality and automation tools using Agile methodologies + Build new framework using Hadoop, Spark, Storm, and other big data technologies + Migrate legacy enterprise to new platform + Test and troubleshoot using Python and some Java on Linux + Function well as a team player with great communication skills **Basic Qualifications:** + Bachelor Degree in a STEM discipline (Science, Technology, Engineering or Math) from an accredited institution with 2+ years of relevant work experience, or Masters in a STEM discipline with 0+ years of experience + 1+ years of Python experience in a work setting + Active SCI clearance **Preferred Qualifications:** + Machine learning / AI / Deep Learning / Neural Networks + Familiar with Hadoop, Spark or other Big Data technologies + Familiar with Agile Scrum methodology + Familiar with Rally, GitHub, Jenkins, Selenium applications Northrop Grumman is committed to hiring and retaining a diverse workforce. We are proud to be an Equal Opportunity/Affirmative Action-Employer, making decisions without regard to race, color, religion, creed, sex, sexual orientation, gender identity, marital status, national origin, age, veteran status, disability, or any other protected class. For our complete EEO/AA statement, please visit www.northropgrumman.com/EEO . U.S. Citizenship is required for most positions.
          Deep Learning Engineer - Devoteam - Randstad
Unique culture where people speak frankly, have great mutual respect and are united by a genuine interest in technology. Is Deep Learning your competency realm?...
From Devoteam - Mon, 06 Aug 2018 13:27:24 GMT - View all jobs in Randstad
          Machine Learning SW Engineer
MD-Rockville, Our client now has an open Machine Learning SW Engineer opening. It is a SW Engineering role with Machine Learning. If the right candidate had Deep Learning concepts (Algorithm using Neural Networks) that would be a plus but at least Machine Learning. Mission: There is a NEW Data Strategy for our client to enable to solve DATA problems more efficiently. DAY to DAY: Will be working with Product Tea
          [Experiences] BTO Notebooks
Replies: 6994 Last poster: laptopplus at 10-08-2018 10:28 Topic is Open quote: Edit: Hmm.. conspiracy theory or deep learning? Just saw this laptop appear on the frontpage in a BTO banner. They call it 'ReTargeting': based on your visit a cookie is placed and you get targeted with the product.
          Il labirinto di luce (The labyrinth of light)

RESEARCH – A team of electrical and electronic engineers at UCLA (University of California, Los Angeles) has created a physical artificial neural network, that is, a device that implements neural network models and deep learning algorithms and can analyze large amounts of data at the speed of light. What is deep learning? Deep learning, […]

The article Il labirinto di luce originally appeared on OggiScienza.


          [Osim Tochna (Making Software)] Saving Human Lives with Deep Learning

If you move around in the software world, and probably even if you don't, you hear the words Machine Learning several times a day.
Here and there you have probably also heard the words of the new cool kid on the block: Deep Learning.

But... have you bothered to dig deep (funny!) and understand what it means? Have you dared to experiment with the field yourselves?

The ratio between how often these words are spoken and how often they are actually used in a real and correct way is entirely coincidental.
So, to help you understand a bit more, in the new episode of "Osim Tochna" we brought Guy Reiner, one of the founders of the company aidoc.

Guy, together with his wonderful partners Elad Walach and Michael Braginsky, developed a system that helps radiologists analyze CT and X-ray scans, which streamlines processes and may even save lives.
Without the use of Deep Learning it is not certain they would have succeeded, and rest assured that the road they have traveled in recent years was not easy at all.

The post [Osim Tochna] Saving Human Lives with Deep Learning appeared first on Osim Historia (Making History).


          Software System Engineer - Software Development Kit, Neural netw
MA-Boston, If you are a Software System Engineer with experience, please read on! What You Will Be Doing - Assist in build of a deep learning Software Development Kit for Optical Processing Unit accelerators - Create deep learning framework compiler for Optical Processing Unit - Develop runtime engine including core API and driver - Implement deep neural network models on Optical Processing Unit - Assist in
          Embedded ML Developer - Erwin Hymer Group North America - Virginia Beach, VA
NVIDIA VisionWorks, OpenCV. Game Development, Accelerated Computing, Machine Learning/Deep Learning, Virtual Reality, Professional Visualization, Autonomous...
From Indeed - Fri, 22 Jun 2018 17:57:58 GMT - View all Virginia Beach, VA jobs
          Data Scientist - ZF - Northville, MI
Deep Learning, NVIDIA, NLP). You will run cost-effective data dive-ins on complex high volume data from a variety of sources and develop data solutions in close...
From ZF - Thu, 21 Jun 2018 21:14:15 GMT - View all Northville, MI jobs
          Artificial Intelligence Deep Learning Engineer - Advantest - San Jose, CA
Work with Advantest business units to incorporate developed AI/Deep Learning technologies into products and/or internal processes....
From Advantest - Mon, 06 Aug 2018 17:45:44 GMT - View all San Jose, CA jobs
          Everything you need to know about AI: our t3n podcasts on the topic
Artificial intelligence will change the world more profoundly than the internet, believes Salesforce chief scientist Richard Socher. No other topic moves the IT world as much. It is also a focus topic on the t3n Podcast, so here are all of our episodes on AI so far.

How are companies using AI?

When experts, scientists, and journalists discuss artificial intelligence, the conversation often revolves around fundamental questions: What distinguishes humans from machines, and will there one day be AIs that are more intelligent than humans? What often falls by the wayside is the fact that intelligent algorithms are used in business above all where efficiency gains are possible. This episode therefore takes a pragmatic look at artificial intelligence. Print editor-in-chief Luca Caracciolo talks with the head of SAP's Machine Learning Foundation, Sebastian Wieczorek, about concrete use cases of machine learning in enterprise software and how it changes day-to-day work.

Salesforce chief scientist: "AI will have an even greater impact on humanity than the internet"

Richard Socher, born and raised in Germany, is a professor of artificial intelligence at Stanford University and chief scientist at Salesforce. In the t3n Podcast he explains what AI can really do today, and which alternatives to machine learning exist.

USA versus China versus Europe: the great race for supremacy in artificial intelligence

Artificial intelligence is considered a key technology of the coming years. So it is no surprise that companies and governments around the world are vying for the brightest minds.

FDP leader Christian Lindner calls for a CERN for artificial intelligence in Europe

In the t3n Podcast, FDP leader Christian Lindner calls for a joint European effort to advance basic research in artificial intelligence. CERN could serve as the model.

Ranga Yogeshwar explains how artificial intelligence will change your life too

How will artificial intelligence change our lives? And what actually distinguishes us humans from machines? Ranga Yogeshwar provides the answers in the t3n Podcast.

Does artificial intelligence threaten our democracy?

Artificial intelligence is increasingly changing how we work and live. But what if it threatens the foundations of how we live together?

Work less, achieve more: new freedom through artificial intelligence and automation

Work less, achieve more: that could become possible with the increasing use of artificial intelligence. But how do companies put the newly gained freedom to good use?

Sebastian Thrun: "AI will replace all repetitive human tasks"

Sebastian Thrun researched self-driving cars for Google and wants to revolutionize education with Udacity. In the podcast he talks about the opportunities of AI and why anyone with deep learning skills gets a job in Silicon Valley.

Artificial intelligence: Has the development already peaked?

When digitalization is discussed today, AI is usually part of the conversation. In the podcast we discuss what to expect from the technology in the future.

Microsoft's technology chief for Germany: Where is the explosion of artificial intelligence heading?

The boom in machine learning, and deep learning in particular, remains one of the biggest hype topics in the tech industry. In the t3n Filterblase podcast, Microsoft's technology chief for Germany explains the background.

Does superintelligence threaten the end of humanity?

Strong and weak AI, machine learning, neural networks: what is actually behind these terms? t3n.de editor-in-chief Stephan Dörner and Luca Caracciolo, editor-in-chief of the print magazine, explain the most important concepts and discuss application examples and deployment scenarios of the individual AI disciplines in business. Finally, they turn to superintelligence: what actually happens if AI systems one day exceed the cognitive abilities of humans?

Subscribe to the t3n Podcast

You can conveniently subscribe to the t3n Podcast in the podcast app of your choice. You will usually find the podcast simply by searching for it there. Otherwise, you can also enter the RSS feed manually in the app.
          Has Artificial Intelligence Hit a Wall?
The biggest breakthrough in AI, deep learning, has hit a wall, and a debate is raging about how to get to the next level. The Wall Street Journal's Christopher Mims explains.

Learn more about your ad choices. Visit megaphone.fm/adchoices
          First Class GPUs support in Apache Hadoop 3.1, YARN & HDP 3.0

This blog is also co-authored by Zian Chen and Sunil Govindan from Hortonworks.

Introduction
Without speed up with GPUs, some computations take forever! (Image from Movie “Howl’s Moving Castle”)

GPUs are increasingly becoming a key tool for many big data applications. Deep learning / machine learning, data analytics, genome sequencing, etc. all have applications that rely on GPUs for tractable performance. In many cases, GPUs can get up to 10x speedups. And in some reported cases (like this), GPUs can get up to 300x speedups! Many modern deep-learning applications directly build on top of GPU libraries like cuDNN (CUDA Deep Neural Network library). It's not a stretch to say that many applications like deep learning cannot live without GPU support.

Starting with Apache Hadoop 3.1 and HDP 3.0, we have first-class support that lets operators and admins configure YARN clusters to schedule and use GPU resources.

Previously, without first-class GPU support, YARN had a not-so-comprehensive story around GPUs. Without this new feature, users had to use node-labels (YARN-796) to partition clusters to make use of GPUs, which simply puts machines equipped with GPUs into a different partition and requires jobs that need GPUs to be submitted to that specific partition. For a detailed example of this pattern of GPU usage, see Yahoo!'s blog post about Large Scale Distributed deep-learning on Hadoop Clusters.

Without native and more comprehensive GPU support, there is also no isolation of GPU resources! For example, multiple tasks may compete for a GPU resource simultaneously, which could cause task failures, GPU memory exhaustion, and so on.

To this end, the YARN community looked for a comprehensive solution to natively support GPU resources on YARN.

First class GPU support on YARN
GPU scheduling using "extensible resource-types" in YARN

We need to recognize GPU as a resource type when doing scheduling. YARN-3926 extends the YARN resource model to a more flexible model which makes it easier to add new countable resource-types. It also considers the related aspect of "resource profiles" which allow users to easily specify the resources they need for containers. Once the GPU type is added to YARN, YARN can schedule applications on GPU machines. By specifying the number of GPUs requested for containers, YARN can find machines with available GPUs to satisfy container requests.


GPU isolation

With GPU scheduling support, containers with GPU requests can be placed on machines with enough available GPU resources. We still need to solve the isolation problem: when multiple applications use GPU resources on the same machine, they should not affect each other.

Even if a GPU has many cores, there's no easy isolation story for processes sharing the same GPU. For instance, Nvidia Multi-Process Service (MPS) provides isolation for multiple processes accessing the same GPU; however, it only works on the Volta architecture, and MPS is not widely supported by deep learning platforms yet. So our isolation, for now, is per GPU device: each container can ask for an integer number of GPU devices along with memory and vcores (for example 4G memory, 4 vcores and 2 GPUs). With this, each application uses its assigned GPUs exclusively.
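
To make that resource ask concrete, here is a rough sketch using the YARN distributed shell example application to request containers with memory, vcores and GPUs; the jar path and the -container_resources option are assumptions based on the upstream Hadoop 3.1 documentation, so verify them against your own distribution before relying on them:

    # Hypothetical example: launch 2 containers, each with 4 GB of memory, 4 vcores and 2 GPUs,
    # and run nvidia-smi inside them to confirm the GPUs are visible to the container.
    yarn jar /usr/hdp/current/hadoop-yarn-client/hadoop-yarn-applications-distributedshell.jar \
      -jar /usr/hdp/current/hadoop-yarn-client/hadoop-yarn-applications-distributedshell.jar \
      -shell_command /usr/bin/nvidia-smi \
      -container_resources memory-mb=4096,vcores=4,yarn.io/gpu=2 \
      -num_containers 2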

We use cgroups to enforce the isolation. This works by putting a YARN container's process tree into a cgroup that allows access to only the prescribed GPU devices. When Docker containers are used on YARN, nvidia-docker-plugin, an optional plugin that admins have to configure, is used to enforce GPU resource isolation.

GPU discovery

To do scheduling and isolation properly, we need to know how many GPU devices are available in the system. Admins can configure this manually on a YARN cluster, but it may also be desirable to discover GPU resources through the framework automatically. Currently, we're using the Nvidia system management interface (nvidia-smi) to get the number of GPUs in each machine and the usage of these GPU devices. An example output of nvidia-smi looks like below:


[Screenshot: example nvidia-smi output]
Web UI

We also added GPU information to the new YARN web UI. On the ResourceManager page, we show total used and available GPU resources across the cluster along with other resources like memory and CPU.


[Screenshot: ResourceManager web UI showing cluster-wide GPU usage]

On the NodeManager page, YARN shows per-GPU device usage and metrics:


[Screenshot: NodeManager web UI showing per-GPU device usage and metrics]
Configurations

To enable GPU support in YARN, administrators need to set configs for GPU Scheduling and GPU isolation.

GPU Scheduling

(1) yarn.resource-types in resource-types.xml

This gives YARN the list of additional resource types available for users to use. We need to add "yarn.io/gpu" here if we want to support GPU as a resource type.
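
As a minimal sketch (assuming the property name above and the standard Hadoop XML configuration format), the resource-types.xml entry could look like this:

    <configuration>
      <property>
        <!-- Declare GPU as an additional countable resource type -->
        <name>yarn.resource-types</name>
        <value>yarn.io/gpu</value>
      </property>
    </configuration>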

(2) yarn.scheduler.capacity.resource-calculator in capacity-scheduler.xml

DominantResourceCalculator MUST be configured to enable GPU scheduling. It has to be set to org.apache.hadoop.yarn.util.resource.DominantResourceCalculator.
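
A corresponding capacity-scheduler.xml sketch, again assuming the property name given above:

    <configuration>
      <property>
        <!-- DominantResourceCalculator is needed so the scheduler accounts for GPU and other resource types -->
        <name>yarn.scheduler.capacity.resource-calculator</name>
        <value>org.apache.hadoop.yarn.util.resource.DominantResourceCalculator</value>
      </property>
    </configuration>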

GPU Isolation

(1) yarn.nodemanager.resource-plugins in yarn-site.xml

This enables the GPU isolation module on the NodeManager side. By default, YARN will automatically detect and configure GPUs when the above config is set. The value should also include "yarn.io/gpu".

(2) yarn.nodemanager.resource-plugins.gpu.allowed-gpu-devices in yarn-site.xml

Specifies the GPU devices that can be managed by the YARN NodeManager (comma-separated). The number of GPU devices will be reported to the RM to make scheduling decisions. Set it to auto (the default) to let YARN automatically discover GPU resources from the system.

Manually specify GPU devices if auto detect GPU device failed or admin only wants a s
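
Putting the two NodeManager settings together, a minimal yarn-site.xml sketch with auto-discovery might look like the following; treat it as illustrative rather than a complete configuration:

    <configuration>
      <property>
        <!-- Enable the GPU resource plugin on the NodeManager -->
        <name>yarn.nodemanager.resource-plugins</name>
        <value>yarn.io/gpu</value>
      </property>
      <property>
        <!-- Let YARN discover GPU devices automatically (or list the devices explicitly) -->
        <name>yarn.nodemanager.resource-plugins.gpu.allowed-gpu-devices</name>
        <value>auto</value>
      </property>
    </configuration>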
          Chinese Fintech professionals expect up to 30% pay rise

The EY-DBS report, The Rise of FinTech in China, attributes the growth of the Fintech sector in the mainland to multiple factors: "the scale of unmet needs being addressed by dominant technology leaders, combined with regulatory facilitation and easy access to capital. Underserved by China's incumbent banking system, consumers and small-to-medium-sized enterprises (SMEs) are increasingly turning to alternative providers for access to payments, credit, investments, insurance, and even other non-financial service offerings."

A consequence of this growth is a rising shortage of Fintech skills.

The China Fintech Employment 2018 Report by specialist recruitment firm, Michael Page China, revealed that 92% of Fintech companies surveyed agreed that China is facing an acute shortage of professional Fintech talent right now. Also, 38% of respondents view the quality of talent as a critical factor to the sustained success in the industry.

Rupert Forster, Managing Director of Michael Page North and East China, says, “Within Fintech, we are observing a growing demand for talent with skills relating to Artificial Intelligence, machine learning and deep learning. These skills are also sought after in sectors outside of Fintech, such as other Chinese Internet companies, creating a wider talent gap in the market.”

For those looking to hire Fintech talent, 85% of surveyed employers expressed difficulty in finding the right people.

Forty-five percent cited shortage of necessary skills as the biggest hurdle. Fintech professionals know they are in demand with 47% stating they had changed jobs in the last 12 months.

What attracts Fintech professionals to switch? Top motivations include strong career path (29%), right company culture fit (24%) and salary (17%). Forty-four percent of those in it for the money say they expect salary increments ranging from 21 – 30% when securing a new job.

Figure 1: What Fintech employees want


Source: Michael Page China Fintech Employment 2018

“The gap between employer demand for skills and the available talent is not a problem exclusive to Fintech. We see this across many sectors which is purely a reflection of the fast-growing, innovative nature of modern China. The most successful companies are those who are able to implement in-house talent development programs,” Forster explained.

Caption: 
Image from iStockPhoto/baona

          #8: Deep Learning with Python
Deep Learning
Deep Learning with Python
Francois Chollet
(2)

Buy new: CDN$ 66.00 CDN$ 52.75
31 used & new from CDN$ 44.40

(Visit the Bestsellers in Programming list for authoritative information on this product's current rank.)
          NetApp and NVIDIA accelerate Deep Learning with a new Artificial Intelligence architecture

The NetApp ONTAP AI architecture accelerates access to data to meet the needs of Artificial Intelligence. NetApp, a leading company in hybrid cloud data management, today launches the NetApp® ONTAP® AI architecture, which combines the power of NVIDIA DGX™ supercomputers with NetApp AFF A800 cloud-connected all-flash storage systems. The goal is to simplify, accelerate and […]

The article "NetApp and NVIDIA accelerate Deep Learning with a new Artificial Intelligence architecture" first appeared on DOCaufutur.


          Making AI easier to use in the enterprise: Dell EMC releases "Dell EMC Ready Solutions for AI"
Dell EMC has started offering "Dell EMC Ready Solutions for AI", a new set of solutions to help enterprises put AI to work. Two packages are available: "Machine Learning with Hadoop" and "Deep Learning with NVIDIA".
          Comment on Regression Tutorial with the Keras Deep Learning Library in Python by Shooter
I meant which variable should I use. I got my answer in one of your comments. Perhaps it is estimator.model.evaluate. Thanks.
          Comment on Multi-Class Classification Tutorial with the Keras Deep Learning Library by Shooter
Hi, I wanted to ask again: whether we use K-fold validation like this, kfold = KFold(n_splits=10, shuffle=True, random_state=seed); results = cross_val_score(estimator, X, dummy_y, cv=kfold), or a train/test split with validation data like this, x_train, x_test, y_train, y_test = train_test_split(X, dummy_y, test_size=0.33, random_state=seed); estimator.fit(x_train, y_train, validation_data=(x_test, y_test)), these are just sampling techniques, and we can use either one according to the availability and size of the data, right?
          Comment on Multi-Class Classification Tutorial with the Keras Deep Learning Library by Shooter
Hi Jason, this code gives an accuracy of 98%. But when I add the k-fold cross-validation code, accuracy decreases to 75%.
          Comment on Multi-Class Classification Tutorial with the Keras Deep Learning Library by Shooter
Hi Jason, it seems you have already answered my question in one of the comments. I need to convert the categorical values into a one-hot encoding, then create dummy variables, and then input them. Thanks.
          Comment on Multi-Class Classification Tutorial with the Keras Deep Learning Library by Shooter
I mean, what if X contains categorical labels like "High" and "Low"? Do we need to use one-hot encoding on that X data too and continue the other steps in the same way?
          Deep phenotyping: deep learning for temporal phenotype/genotype classification.

Deep phenotyping: deep learning for temporal phenotype/genotype classification.

Plant Methods. 2018;14:66

Authors: Taghavi Namin S, Esmaeilzadeh M, Najafi M, Brown TB, Borevitz JO

Abstract
Background: High resolution and high throughput genotype to phenotype studies in plants are underway to accelerate breeding of climate ready crops. In recent years, deep learning techniques and in particular Convolutional Neural Networks (CNNs), Recurrent Neural Networks and Long Short-Term Memory networks (LSTMs), have shown great success in visual data recognition, classification, and sequence learning tasks. More recently, CNNs have been used for plant classification and phenotyping, using individual static images of the plants. On the other hand, the dynamic behavior of plants as well as their growth has been an important phenotype for plant biologists, and this motivated us to study the potential of LSTMs in encoding this temporal information for the accession classification task, which is useful in automation of plant production and care.
Methods: In this paper, we propose a CNN-LSTM framework for plant classification of various genotypes. Here, we exploit the power of deep CNNs for automatic joint feature and classifier learning, compared to using hand-crafted features. In addition, we leverage the potential of LSTMs to study the growth of the plants and their dynamic behaviors as important discriminative phenotypes for accession classification. Moreover, we collected a dataset of time-series image sequences of four accessions of Arabidopsis, captured in similar imaging conditions, which could be used as a standard benchmark by researchers in the field. We made this dataset publicly available.
Conclusion: The results provide evidence of the benefits of our accession classification approach over using traditional hand-crafted image analysis features and other accession classification frameworks. We also demonstrate that utilizing temporal information using LSTMs can further improve the performance of the system. The proposed framework can be used in other applications such as in plant classification given the environment conditions or in distinguishing diseased plants from healthy ones.

PMID: 30087695 [PubMed]


          Embedded Computing on the Edge
Embedded Computing on the Edge


Embedded computing has passed—more or less unscathed—through many technology shifts and marketing fashions. But the most recent—the rise of edge computing—could mean important new possibilities and challenges.

So what is edge computing (Figure 1)? The cynic might say it is just a grab for market share by giant cloud companies that have in the past struggled in the fragmented embedded market, but now see their chance. That theory goes something like this.

Figure 1. Computing at the network edge puts embedded systems in a whole new world.

With the concept of the Internet of Things came a rather naïve new notion of embedded architecture: all the embedded system’s sensors and actuators would be connected directly to the Internet—think smart wall switch and smart lightbulb—and all the computing would be done in the cloud. Naturally, this proved wildly impractical for a number of reasons, so the gurus of the IoT retreated to a more tenable position: some computing had to be local, even though the embedded system was still very much connected to the Internet.

Since the local processing would be done at the extreme periphery of the Internet, where IP connectivity ended and private industrial networks or dedicated connections began, the cloud- and network-centric folks called it edge computing. They saw the opportunity to lever their command of the cloud and network resources to redefine embedded computing as a networking application, with edge computing as its natural extension.

A less cynical and more useful view looks at edge computing as one facet of a new partitioning problem that the concurrence of cloud computing, widespread broadband access, and some innovations in LTE cellular networks has created. Today, embedded systems designers must, from requirements definition on through the design process, remember that there are several very different processing sites available to them (Figure 2). There is the cloud. There is the so-called fog. And there is the edge. Partitioning tasks and data among these sites has become a necessary skill to the success of an embedded design project. If you don't use the new computing resources wisely, you will be vulnerable to a competitor who does—not only in terms of features, performance, and cost advantages to be gained, but in consideration of the growing value of data that can be collected from embedded systems in operation.

Figure 2. Edge computing offers the choice of three different kinds of processing sites.

The Joy of Partitioning

Unfortunately, partitioning is not often a skill embedded-system designers cultivate. Traditional embedded designs employ a single processor, or at worst a multi-core SoC with an obvious division of labor amongst the cores.

But edge computing creates a new scale of difficulty. There are several different kinds of processing sites, each with quite distinct characteristics. And the connections between processors are far more complicated than the nearly transparent inter-task communications of shared-memory multicore systems. So, doing edge computing well requires a rather formal partitioning process. It begins with defining the tasks and identifying their computing, storage, bandwidth, and latency requirements. Then the process continues by characterizing the compute resources you have available, and the links between them. Finally, partitioning must map tasks onto processors and inter-task communications onto links so that the system requirements are met. This is often an iterative process that at best refines the architecture and at worst turns into a protracted, multi-party game of Whack-a-Mole. It is helpful, perhaps, to look at each of these issues: tasks, processing and storage sites, and communications links, in more detail.

The Tasks

There are several categories of tasks in a traditional embedded system, and a couple of categories that have recently become important for many designs. Each category has its own characteristic needs in computing, storage, I/O bandwidth, and task latency.

In any embedded design there are supervisory and housekeeping tasks that are necessary, but are not particularly compute- or I/O- intensive, and that have no hard deadlines. This category includes most operating-system services, user interfaces, utilities, system maintenance and update, and data logging.

A second category of tasks with very different characteristics is present in most embedded designs. These tasks directly influence the physical behavior of the system, and they do have hard real-time deadlines, often because they are implementing algorithms within feedback control loops responsible for motion control or dynamic process control. Or they may be signal-processing or signal interpretation tasks that lie on a critical path to a system response, such as object recognition routines behind a camera input.

Often these tasks don’t have complex I/O needs: just a stream or two of data in and one or two out. But today these data rates can be extremely high, as in the case of multiple HD cameras on a robot or digitized radar signals coming off a target-acquisition and tracking radar. Algorithm complexity has traditionally been low, held down by the history of budget-constrained embedded designs in which a microcontroller had to implement the digital transfer function in a control loop. But as control systems adopt more modern techniques, including stochastic state estimation, model-based control, and, recently, insertion of artificial intelligence into control loops, in some designs the complexity of algorithms inside time-critical loops has exploded. As we will see, this explosion scatters shrapnel over a wide area.

The most important issue for all these time-critical tasks is that the overall delay from sensor or control input to actuator response be below a set maximum latency, and often that it lies within a narrow jitter window. That makes partitioning of these tasks particularly interesting, because it forces designers to consider both execution time—fully laden with indeterminacies, memory access and storage access delays—and communications latencies together. The fastest place to execute a complex algorithm may be unacceptably far from the system.

We also need to recognize a third category of tasks. These have appeared fairly recently for many designers, and differ from both supervisory and real-time tasks. They arise from the intrusion of three new areas of concern: machine learning, functional safety, and cyber security. The distinguishing characteristic of these tasks is that, while each can be performed in miniature with very modest demands on the system, each can quickly develop an enormous appetite for computing and memory resources. And, most unfortunately, each can end up inside delay-sensitive control loops, posing very tricky challenges for the design team.

Machine learning is a good case in point. Relatively simple deep-learning programs are already being used as supervisory tasks to, for instance, examine sensor data to detect progressive wear on machinery or signs of impending failure. Such tasks normally run in the cloud without any real-time constraints, which is just as well, as they do best with access to huge volumes of data. At the other extreme, trained networks can be ported to quite compact blocks of code, especially with the use of small hardware accelerators, making it possible to use a neural network inside a smart phone. But a deep-learning inference engine trained to detect, say, excessive vibration in a cutting tool during a cut or the intrusion of an unidentified object into a robot's planned trajectory—either of which could require immediate intervention—could end up being both computationally intensive and on a time-critical path.

Similarly for functional safety and system security, simple rule-based safety checks or authentication/encryption tasks may present few problems for the system design. But simple often, in these areas, means weak. Systems that must operate in an unfamiliar environment or must actively repel novel intrusion attempts may require very complex algorithms, including machine learning, with very fast response times. Intrusion detection, for instance, is much less valuable as a forensic tool than as a prevention.

Resources

Traditionally, the computing and storage resources available to an embedded system designer were easy to list. There were microcontroller chips, single-board computers based on commercial microprocessors, and in some cases boards or boxes using digital signal processing hardware of one sort or another. Any of these could have external memory, and most could attach, with the aid of an operating system, mass storage ranging from a thumb drive to a RAID disk array. And these resources were all in one place: they were physically part of the system, directly connected to sensors, actuators, and maybe to an industrial network.

But add Internet connectivity, and this simple picture snaps out of focus. The original system is now just the network edge. And in addition to edge computing, there are two new locations where there may be important computing resources: the cloud, and what Cisco and some others are calling the fog.

The edge remains much as it has been, except of course that everything is growing in power. In the shadow of the massive market for smart-phone SoCs, microcontrollers have morphed into low-cost SoCs too, often with multiple 32-bit CPU cores, extensive caches, and dedicated functional IP suited to a particular range of applications. Board-level computers have exploited the monotonically growing power of personal computer CPU chips and the growth in solid-state storage. And the commoditization of servers for the world’s data centers has put even racks of data-center-class servers within the reach of well-funded edge computing sites, if the sites can provide the necessary space, power, and cooling.

Recently, with the advent of more demanding algorithms, hardware accelerators have become important options for edge computing as well. FPGAs have long been used to accelerate signal-processing and numerically intensive transfer functions. Today, with effective high-level design tools they have broadened their use beyond these applications into just about anything that can benefit from massively parallel or, more importantly, deeply pipelined execution. GPUs have applications in massively data-parallel tasks such as vision processing and neural network training. And as soon as an algorithm becomes stable and widely used enough to have good library support—machine vision, location and mapping, security, and deep learning are examples—someone will start work on an ASIC to accelerate it.

The cloud, of course, is a profoundly different environment: a world of essentially infinite numbers of big x86 servers and storage resources. Recently, hardware accelerators from all three races—FPGAs, GPUs, and ASICs—have begun appearing in the cloud as well. All these resources are available for the embedded system end-user to rent on an as-used basis.

The important questions in the cloud are not about how many resources are available—there are more than you need—but about terms and conditions. Will your workload run continuously, and if not, what is the activation latency? What guarantees of performance and availability are there? What will this cost the end user? And what happens if the cloud platform provider—who in specialized application areas is often not a giant data-center owner, but a small company that itself leases or rents the cloud resources—suffers a change in situation? These sorts of questions are generally not familiar to embedded-system developers, nor to their customers.

Recently there has been discussion of yet another possible processing site: the so-called fog. The fog is located somewhere between the edge and the cloud, both physically and in terms of its characteristics.

As network operators and wireless service providers turn from old dedicated switching hardware to software on servers, increasingly, Internet connections from the edge will run not through racks of networking hardware, but through data centers. For edge systems relying on cloud computing, this raises an important question: why send your inter-task communications through one data center just to get it to another one? It may be that the networking data center can provide all the resources your task needs without having to go all the way to a cloud service provider (CSP). Or it may be that a service provider can offer hardware or software packages to allow some processing in your edge-computing system, or in an aggregation node near your system, before having to make the jump to a central facility. At the very least you would have one less vendor to deal with. And you might also have less latency and uncertainly introduced by Internet connections. Thus, you can think of fog computing as a cloud computing service spread across the network and into the edge, with all the advantages and questions we have just discussed.

Connections

When all embedded computing is local, inter-task communications can almost be neglected. There are situations where multiple tasks share a critical resource, like a message-passing utility in an operating system, and on extremely critical timing paths you must be aware of the uncertainty in the delay in getting a message between tasks. But for most situations, how long it takes to trigger a task and get data to it is a secondary concern. Most designs confine real-time tasks to a subset of the system where they have a nearly deterministic environment, and focus their timing analyses there.

But when you partition a system between edge, fog, and cloud resources, the kinds of connections between those three environments, their delay characteristics, and their reliability all become important system issues. They may limit where you can place particular tasks. And they may require—by imposing timing uncertainty and the possibility of non-delivery on inter-task messages—the use of more complex control algorithms that can tolerate such surprises.

So what are the connections? We have to look at two different situations: when the edge hardware is connected to an internet service provider (ISP) through copper or fiber-optics (or a blend of the two), and when the connection is wireless (Figure 3).

Figure 3. Tasks can be categorized by computational complexity and latency needs.

The two situations have one thing in common. Unless your system will have a dedicated leased virtual channel to a cloud or fog service provider, part of the connection will be over the public Internet. That part could be from your ISP’s switch plant to the CSP’s data center, or it could be from a wireless operator’s central office to the CSP’s data center.

That Internet connection has two unfortunate characteristics, from this point of view. First, it is a packet-switching network in which different packets may take very different routes, with very different latencies. So, it is impossible to predict more than statistically what the transmission delay between two points will be. Second, Internet Protocol by itself offers only best-effort, not guaranteed, delivery. So, a system that relies on cloud tasks must tolerate some packets simply vanishing.

An additional point worth considering is that so-called data locality laws—which limit or prohibit transmission of data outside the country of origin—are spreading around the world. Inside the European Union, for instance, it is currently illegal to transmit data containing personal information across the borders of a number of member countries, even to other EU members. And in China, which uses locality rules for both privacy and industrial policy purposes, it is illegal to transmit virtually any sort of data to any destination outside the country. So, designers must ask whether their edge system will be able to exchange data with the cloud legally, given the rapidly evolving country-by-country legislation.

Avoiding these limitations is one of the potential advantages of the fog-computing concept. By not traversing the public network, systems relying on ISP or wireless-carrier computing resources or local edge resources can exploit additional provisions to reduce the uncertainty in connection delays.

But messages still have to get from your edge system to the service provider’s aggregation hardware or data center. For ISPs, that will mean a physical connection, typically using Internet Protocol over fiber or hybrid copper/fiber connections, often arranged in a tree structure. Such connections allow for provisioning of fog computing nodes at points where branches intersect. But as any cable TV viewer can attest, they also allow for congestion at nodes or on branches to create great uncertainties in available bandwidth and latency. Suspension of net neutrality in the US has added a further uncertainty, allowing carriers to offer different levels of service to traffic from different sources, and to charge for quality-of-service guarantees.

If the connection is wireless, as we are assured many will be once 5G is deployed, the uncertainties multiply. A 5G link will connect your edge system through multiple parallel RF channels and multiple antennas to one or more base stations. The base stations may be anything from a small cell with minimal hardware to a large local processing site with, again, the ability to offer fog-computing resources, to a remote radio transceiver that relies on a central data center for all its processing. In at least the first two cases, there will be a separate backhaul network, usually either fiber or microwave, connecting the base station to the service provider’s central data center.

The challenges include, first, that latency will depend on what kind of base stations you are working with—something often completely beyond your control. Second, changes in RF transmission characteristics along the mostly line-of-sight paths can be caused by obstacles, multipath shifts, vegetation, and even weather. If the channel deteriorates, retry rates will go up, and at some point the base station and your edge system will negotiate a new data rate, or roll the connection over to a different base station. So even for a fixed client system, the characteristics of the connection may change significantly over time, sometimes quite rapidly.

Partitioning

Connectivity opens a new world for the embedded-system designer, offering amounts of computing power and storage inconceivable in local platforms. But it creates a partitioning problem: an iterative process of locating tasks where they have the resources they need, but with the latencies, predictability, and reliability they require.

For many tasks, the location is obvious. Big-data analyses that comb terabytes of data to predict maintenance needs or extract valuable conclusions about the user can go in the cloud. So can compute-intensive real-time tasks when acceptable latency is long, and the occasional lost message is survivable or handled in a higher-level networking protocol. A smart speaker in your kitchen can always reply "Let me think on that a moment," or "Sorry, what?"

Critical, high-frequency control loops must stay at or very near the edge. Conventional control algorithms can’t tolerate the delay and uncertainty of any other choice.

But what if there is a conflict: a task too big for the edge resources, but too time-sensitive to be located across the Internet? Fog computing may solve some of these dilemmas. Others may require you to place more resources in your system.

Just how far today's technology has enriched the choices was illustrated recently by a series of Microsoft announcements. Primarily involved in edge computing as a CSP, Microsoft has for some time offered the Azure Stack—essentially, an instance of their Azure cloud platform—to run on servers on the customer premises. Just recently, the company enriched this offering with two new options: FPGA acceleration, including Microsoft's Project Brainwave machine-learning acceleration, for Azure Stack installations, and Azure Sphere, a way of encapsulating Azure's security provisions in an approved microcontroller, secure operating system, and coordinated cloud service for use at the edge. Similarly, Intel recently announced the OpenVINO™ toolkit, a platform for implementing vision-processing and machine intelligence algorithms at the edge, relying on CPUs with optional support from FPGAs or vision-processing ASICs. Such fog-oriented provisions could allow embedded-system designers to simply incorporate cloud-oriented tasks into hardware within the confines of their own systems, eliminating the communications considerations and making ideas like deep-learning networks within control loops far more feasible.

In other cases, designers may simply have to refactor critical tasks into time-critical and time-tolerant portions. Or they may have to replace tried and true control algorithms with far more complex approaches that can tolerate the delay and uncertainty of communications links. For example, a complex model-based control algorithm could be moved to the cloud, and used to monitor and adjust a much simpler control loop that is running locally at the edge.

Life at the edge, then, is full of opportunities and complexities. It offers a range of computing and storage resources, and hence of algorithms, never before available to most embedded systems. But it demands a new level of analysis and partitioning, and it beckons the system designer into realms of advanced system control that go far beyond traditional PID control loops. Competitive pressures will force many embedded systems into this new territory, so it is best to get ahead of the curve.

 

 

 

 



          Senior DevOps Engineer
CA-Santa Clara, A well-funded, exciting startup building a highly scalable advanced analytics product utilizing state-of-the-art research in Computer Vision, Artificial Intelligence and Deep Learning for the Defense intelligence marketplace is looking for a Senior DevOps Engineer. This is a direct hire FTE position with the company based in Santa Clara, CA. Responsibilities Expert understanding of running a large-sc
          conx 3.7.1
On-Ramp to Deep Learning. Built on Keras
          Intel hands first Optane DIMM to Google, where it'll collect dust until a supporting CPU arrives
Cascade Lake is a 14nm process CPU, using the Xeon Skylake's Purley micro-architecture, which features AI-focused Deep Learning Boost ...
          In The Age Of Relevancy, Will Impressions Matter?
With the increasing sophistication of deep learning algorithms and neural networks, this is becoming less of a problem. Deep learning can do a lot for ...
          Deep Learning Market by manufacturers, regions and SWOT Analysis
Deep Learning Market research report provides Emerging Market trends, Raw Materials Analysis, Manufacturing Process, regional outlook and ...
          Finally, Computers Can Learn To Count Better
With the onslaught of neural networks and deep learning, the breadth of tasks carried out by a computer has grown very fast. Neural networks have ...
          Global Deep Learning Chipset Market 2018 Opportunities and Share: Qualcomm, Intel, NVIDIA …
Global Deep Learning Chipset Market Report 2018 includes a total amalgamation of assessable trends and predicting analysis. This Deep Learning ...
          Deep Learning Market 2018 Set to Witness Growth by 2025 | Industry Analysis And Global Key …
Global Deep Learning Market Research Report 2018 peaks the detailed analysis of industry share, growth factors, development trends, size, majors ...
          Deep Learning: License Plate Recognition Technology (LPR)
Thanks to our OCR 5 technology created with advanced deep learning algorithms allowing the reading and recognition of different license plate ...
          UW artificial intelligence project receives $125 million
“While we have seen dramatic advancements with deep learning models for some of these tasks, they do not yet have the capabilities to abstract away ...
          Expanding the scope of Artificial Intelligence to healthcare
By focusing on modernity, flexibility and scalability, the company is able to provide AI tools such as Deep Learning (a network capable of learning ...
          Deep Learning Market Global: Determined by Market Concentration Rate, New Entrants, Products and Applications
Global Deep Learning market size, trends and forecasts report analyses the latest trends in industry with a perceptive to identify the future growth ...
          Dell EMC Targets AI Workloads With Integrated Systems
The company is rolling out two new Ready Solutions for machine learning with Hadoop and deep learning with GPU accelerators from Nvidia.
          Video: New Cascade Lake Xeons to Speed Ai with Intel Deep Learning Boost

This week at the Data-Centric Innovation Summit, Intel laid out their near-term Xeon roadmap and plans to augment their AVX-512 instruction set to boost machine learning performance. "This dramatic performance improvement and efficiency - up to twice as fast as the current generation - is delivered by using a single instruction to handle INT8 convolutions for deep learning inference workloads which required three separate AVX-512 instructions in previous generation processors."

The post Video: New Cascade Lake Xeons to Speed AI with Intel Deep Learning Boost appeared first on insideHPC.
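To make the quoted Deep Learning Boost claim concrete, below is a minimal C sketch (not taken from the insideHPC post) contrasting the legacy three-instruction AVX-512 INT8 multiply-accumulate sequence with the single AVX512-VNNI instruction (VPDPBUSD) that Cascade Lake adds. The intrinsic names follow Intel's published intrinsics guide; the file name, build line and placeholder data are assumptions for illustration only.

/* int8_dotprod_sketch.c -- illustrative sketch, not production code.
 * Assumed build line (needs a CPU or emulator with AVX512F/AVX512BW and AVX512-VNNI):
 *   gcc -O2 -mavx512f -mavx512bw -mavx512vnni int8_dotprod_sketch.c
 */
#include <immintrin.h>
#include <stdio.h>

/* Legacy path: three AVX-512 instructions per 64-element u8*s8 multiply-accumulate. */
static __m512i dot_accumulate_legacy(__m512i acc, __m512i a_u8, __m512i b_s8)
{
    /* 1) VPMADDUBSW: multiply unsigned*signed bytes, add adjacent pairs -> 16-bit lanes. */
    __m512i pairs16 = _mm512_maddubs_epi16(a_u8, b_s8);
    /* 2) VPMADDWD: multiply 16-bit lanes by 1 and add adjacent pairs -> 32-bit lanes. */
    __m512i quads32 = _mm512_madd_epi16(pairs16, _mm512_set1_epi16(1));
    /* 3) VPADDD: add the partial sums into the 32-bit accumulator. */
    return _mm512_add_epi32(acc, quads32);
}

/* Deep Learning Boost path: one AVX512-VNNI instruction (VPDPBUSD) does all three steps. */
static __m512i dot_accumulate_vnni(__m512i acc, __m512i a_u8, __m512i b_s8)
{
    return _mm512_dpbusd_epi32(acc, a_u8, b_s8);
}

int main(void)
{
    /* Placeholder data: every byte of a is 2 (unsigned), every byte of b is 3 (signed). */
    __m512i a = _mm512_set1_epi8(2);
    __m512i b = _mm512_set1_epi8(3);
    __m512i zero = _mm512_setzero_si512();

    int out_legacy[16], out_vnni[16];
    _mm512_storeu_si512(out_legacy, dot_accumulate_legacy(zero, a, b));
    _mm512_storeu_si512(out_vnni,   dot_accumulate_vnni(zero, a, b));

    /* Each 32-bit lane should hold 4 * (2*3) = 24 on both paths. */
    printf("legacy lane0=%d  vnni lane0=%d\n", out_legacy[0], out_vnni[0]);
    return 0;
}

Both helpers compute the same per-lane result; the difference the article highlights is that the VNNI path retires one instruction where the older Xeons needed three, which is where the claimed inference speedup comes from.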


Linux Foundation: Academy, Wall Street and Surveillance Giants
  • The Academy teams up with the Linux Foundation for open source tech
  • Academy Software Foundation will let filmmakers use open source creative software

    The Academy of Motion Picture Arts and Sciences and The Linux Foundation today launched the Academy Software Foundation (ASWF) to provide a neutral forum for open source software developers in the motion picture and broader media industries to share resources and collaborate on technologies for image creation, visual effects, animation, and sound.

  • Hollywood Goes Open Source: Academy Teams Up With Linux Foundation to Launch Academy Software Foundation

    Hollywood now has its very own open source organization: The Academy of Motion Picture Arts and Sciences has teamed up with the Linux Foundation to launch the Academy Software Foundation, which is dedicated to advancing the use of open source in filmmaking and beyond.

    The association’s founding members include Animal Logic, Autodesk, Blue Sky Studios, Cisco, DNEG, DreamWorks, Epic Games, Foundry, Google Cloud, Intel, SideFX, Walt Disney Studios and Weta Digital. Together, they want to promote open source, help studios and others in Hollywood with open source licensing issues, and manage open source projects under the umbrella of the new foundation.

    The cooperation between the Academy and the Linux Foundation began a little over two years ago, when the Academy’s Science and Technology Council began to look into Hollywood’s use of open source software. “It’s the culmination of a couple of years of work,” said Industrial Light & Magic (ILM) head Rob Bredow in an interview with Variety this week.

  • Hollywood gets its own open-source foundation

    Open source is everywhere now, so maybe it’s no surprise that the Academy of Motion Picture Arts and Sciences (yes, the organization behind the Oscars) today announced that it has partnered with the Linux Foundation to launch the Academy Software Foundation, a new open-source foundation for developers in the motion picture and media space.

    The founding members include a number of high-powered media and tech companies, including Animal Logic, Blue Sky Studios, Cisco, DreamWorks, Epic Games, Google, Intel, SideFX, Walt Disney Studios and Weta Digital.

  • The Linux Foundation Announces Keynote Speakers for All New Open FinTech Forum to Explore the Intersection of Financial Services and Open Source

    The Linux Foundation, the nonprofit organization enabling mass innovation through open source, today announced the keynote speakers for Open FinTech Forum, taking place October 10-11 in New York.

  • LF Deep Learning signs up 5 more members, names AT&T's Gilbert as governing board chair

    The Linux Foundation's LF Deep Learning Foundation announced it has added Ciena, DiDi, Intel, Orange and Red Hat to its membership roster.

    Open source communities truly thrive when there's an array of vendors and service providers adding to the collective brain trust. With the recent additions, the LF Deep Learning Foundation, formed earlier this year, now has 15 members.

    The addition of Orange was notable, but the foundation is still missing some key service provider players, such as Verizon, BT, CenturyLink, Deutsche Telekom and Telefónica, which seem content to pursue machine learning and artificial intelligence on their own.
