
          Principal Data Scientist | IT - G2 PLACEMENTS TI - Montréal, QC
Python (Sci-kit Learn, numpy, pandas, Tensorflow, Keras), R, Matlab, SQL. Principal Data Scientist *....
From Indeed - Sun, 07 Oct 2018 19:29:37 GMT - View all Montréal, QC jobs
          Principal Data Scientist - DMA Global - Montréal, QC
Python (Sci-kit Learn, numpy, pandas, Tensorflow, Keras) R, Matlab, SQL. You will ideally have a Master's or PhD in Statistics, Mathematics, Computer Science,...
From Indeed - Thu, 13 Sep 2018 17:03:53 GMT - View all Montréal, QC jobs
          Machine Learning/AI Engineer - Groom & Associates - Montréal, QC
Experience with tensorflow or other backends, keras or other frameworks, scikit-learn, OpenCV, Pandas...
From Groom & Associates - Thu, 06 Sep 2018 08:57:48 GMT - View all Montréal, QC jobs
          Machine Learning Engineer - Groom & Associates - Montréal, QC
Experience with tensorflow or other backends, keras or other frameworks, scikit-learn, OpenCV, Pandas...
From Groom & Associates - Thu, 06 Sep 2018 05:09:54 GMT - View all Montréal, QC jobs
          Art Vision Vjing contest 2018

Art Vision Vjing contest 2018 from eps on Vimeo.

Recent VJ performance at the Circle of Light festival (Moscow), 2nd prize.

All visuals are generated in real time (vvvv, HLSL shaders, TensorFlow) and performed from vvvv.

Sept 2018


          Comments on "Download the course Hands-on Deep Learning with TensorFlow" by dev-master
Hello, the problem with the links has been fixed.
          Comments on "Download the course Hands-on Deep Learning with TensorFlow" by hamedtalebpoorb
Hi, sorry, this download link is broken. Please fix it. Thanks a lot.
          A Look at CNTK v2.6 and the Iris Dataset

Version 2.6 of CNTK was released a few weeks ago, so I figured I'd update my system and give it a try. CNTK (the Microsoft Cognitive Toolkit) is Microsoft's neural network code library. Primary alternatives include Google's TensorFlow and Keras (a library that makes TF easier to use), and Facebook's PyTorch.

To cut to the chase, I deleted my existing CNTK and then installed v2.6 using the pip utility, and then . .

As I write this, I think back on all the effort that was required to figure out how to install CNTK (and TF and Keras and PyTorch). It's easy for me now, but if you're new to using neural network code libraries, trust me, there's a lot to learn, mostly about the many things that can go wrong with an installation, how to interpret the error messages, and how to resolve them.

OK, back to my post. I ran my favorite demo, classification on the Iris Dataset. My old (written for v2.5) CNTK code ran as expected. Excellent!



The real moral of the story is that deep learning with neural network libraries is new and still in a state of constant flux. This makes it tremendously difficult to stay abreast of changes. New releases of these libraries emerge not every few months, or even every few weeks, but often every few days. The pace of development is unlike anything I've ever seen in computer science.



Additionally, the NN libraries are just the tip of the technology pyramid. There are dozens and dozens of supporting systems, and they are being developed at blazing speed too. For example, I did an Internet search for "auto ML" and found many systems that are wrappers over CNTK or TF/Keras or PyTorch, intended to automate pipeline steps like hyperparameter tuning, data preprocessing, and so on.

The blistering pace of development of neural network code libraries and supporting software will eventually slow down (maybe 18 months as a wild guess), but for now it’s an incredibly exciting time to be working with deep learning systems.



I suspect that an artist’s style doesn’t change too quickly over time (well, after his/her formative years). Three paintings by an unknown (to me) artist with similar compositions but slightly different styles.


          ML.NET 0.6 Released: Microsoft's Cross-Platform Machine Learning Framework for .NET

ML.NET 0.6 has been released. ML.NET is a cross-platform, open-source machine learning framework designed to get .NET developers started with machine learning faster.

ML.NET lets .NET developers build their own models and inject custom ML into their applications. They need no expertise in developing or tuning machine learning models; everything can be done in .NET.

Highlights of ML.NET 0.6:

  • A new API for building and using machine learning models

    The ML.NET API gets its first iteration in this release, aiming to make machine learning easier and more powerful. Details

  • Ability to score pre-trained ONNX models. Details

  • Improved model prediction performance

  • Other improvements:

    • improvements to ML.NET TensorFlow scoring

    • more consistency with the .NET type-system

    • having a model deployment suitable for serverless workloads like Azure Functions

For more details, see the release announcement:

https://blogs.msdn.microsoft.com/dotnet/2018/10/08/announcing-ml-net-0-6-machine-learning-net/


          [Original] What to do if you're a programmer in the wrong field?


Which technical field should a programmer choose to earn the highest returns?

This article takes a close look at the five hottest fields of 2018, with in-depth analysis of the state of each industry, salary levels, and the specific skills required, in the hope of offering some guidance if you are worried about being "in the wrong line of work."

The seven-day National Day Golden Week flew by, and what has receded along with the travel enthusiasm is the wave of home buying and property speculation.

Word has it that starting October 15 China's central bank will cut the reserve requirement ratio for some financial institutions, releasing about 750 billion yuan in additional funds on top of the cut. For developers, the most direct takeaway from this big move in finance is that the much-criticized housing prices should stabilize.

"To be settled on the land and loath to move is the people's blessing." Since ancient times a home has been the foundation people build their lives on. For developers, buying a house is an unavoidable topic, yet under sky-high prices the calls to "flee Beijing, Shanghai, and Guangzhou" and "flee the first-tier cities" never stop. Matching the high housing prices are the labels developers cannot shake off: "high pay," "996," "high pressure," "unkempt."

So what is the real state of developers?

Every year a large number of developer survey reports are published, with subjects ranging from holistic portraits of developers to breakdowns of specific niches. Below, we combine the latest technology trends across five fields (big data, cloud computing, AI, blockchain, and IoT) to paint the most realistic picture of Chinese developers.

 

Demand and pay for big data developers are "rising with the tide"

 

In the era of big data, the value contained in data is beyond doubt, and it shows up in government, business, and scientific research. In fact, it has risen to the level of national strategy: China, the United States, the European Union and others have all put big data on their agendas, and tech giants such as Microsoft, Google, Baidu, and Amazon have followed close behind, treating big data technology as a major stake in their future development.

This is also evident in hackers' wave after wave of data theft.

In this year alone, several major data breaches have occurred: as many as 50 million Facebook users' information was leaked and used to manipulate voters; WiFi Master Key was exposed for harvesting the private data of 900 million users for marketing and profit; QQ Browser and Baidu Mobile Input Method were accused of covertly activating cameras and recording audio; nearly ten million records from AcFun were publicly leaked; 123 million records covering all of Huazhu's hotels were leaked and offered for sale...

All of which shows how valuable data is; the importance of big data technology goes without saying.

CSDN's 2017 survey shows that 78% of companies are doing big-data-related development and applications. About 57% of companies still apply big data mostly to statistical analysis, reporting, and data visualization; since the industry as a whole is not yet mature and companies' needs are still loosely defined, it is understandable that deeper applications have not yet spread. Still, the share is a big jump compared with 2015 and 2016.

Against this backdrop, demand and pay for big data developers are naturally rising with the tide.

According to statistics from the Data Analysis Professional Committee of the China General Chamber of Commerce, China will face a shortfall of 14 million foundational data-analysis professionals, and among positions advertised at companies like BAT, more than 60% are recruiting for big data talent. A LinkedIn report shows that among big data roles, the supply index for data-analysis talent is only 0.05, making it highly scarce; data analysts also change jobs the fastest, every 19.8 months on average... Taking Beijing's 2017 pay levels for big data developers as an example, more than half earn over 30K per month, with an average salary of 30,230 yuan.

For developers who want to plunge into big data, or who are already down in the trenches, the best advice is to pick a clear entry point, such as platform building, ETL, offline processing, or real-time data analysis, and then broaden your knowledge from there; that may make the road of data development steadier.

 

44% say database management is the highest-paying cloud computing skill

 

In the Gartner Hype Cycle published in 2017, cloud computing no longer appears among the "emerging technologies"; it has moved into the fast lane. When Amazon launched the first cloud computing service in March 2006, outside observers were unimpressed, but as cloud computing enters its second decade, the global market has settled into steady growth, moving beyond mere "virtualization or network services" to become an independent, mature, and widely adopted IT infrastructure service.

Containers, microservices, DevOps, and other technologies keep pushing cloud computing forward, and the tech giants have raised it to strategic priority: Amazon, Google, Microsoft, Alibaba Cloud, Tencent Cloud and others are building data centers at a frantic pace and "converging" around customers. Instagram migrated from Amazon AWS to Facebook's own platform, Zynga moved from its own platform to AWS, Apple spread part of its business from AWS to Google Cloud to hedge risk, and Verizon dropped Microsoft Office and returned to Google G Suite...

These moves all show that the boundaries within cloud computing are blurring by the day, and deep convergence of businesses looks inevitable. Meanwhile, China's cloud computing market is also growing rapidly. CSDN survey data show that 83% of companies are using cloud services and fewer than one in ten pay little attention to cloud computing. In terms of concrete applications, virtual machines, network storage, and load balancing are the most common, and Docker and OpenStack are the two mainstream frameworks for cloud platform deployment.

From the cloud developer's perspective, as companies migrate their infrastructure to the public cloud, demand for cloud specialists keeps growing. Last year Rackspace published a study on "the cost of cloud expertise," conducted with academics from the London School of Economics and with Vanson Bourne.

The survey found that nearly three-quarters of IT decision makers (71%) believe their organizations have lost revenue because of a lack of cloud expertise, amounting to 5% of global cloud revenue. The report notes that, given the huge talent gap, IT teams need five weeks to complete a hire.

So which cloud skills are most sought after? Rackspace's respondents singled out several urgently needed ones: database management, which 44% called the highest-paying cloud skill and 24% the hardest role to fill; cloud security, where a steady stream of breaches keeps pushing up demand for specialists; service management, covering provisioning, monitoring, and orchestrating an organization's use of cloud tools; migration project management, which 36% of respondents found extremely hard to hire for; and automation, since as more organizations adopt DevOps, more of them use automation tools to handle day-to-day configuration and management of cloud and on-premises data center infrastructure. Talent in cloud-native application development, Microsoft Azure, testing, DevOps, and related areas is also increasingly in demand.

Still, security remains the biggest reservation about cloud services. Among internet-based cloud providers, giants such as Alibaba and Tencent are investing heavily in security; whether the other players can keep up remains to be seen.

 

 

AI software engineers and algorithm engineers are the most sought-after roles

 

According to McKinsey's 2017 report "Artificial Intelligence: The Next Digital Frontier," robotics and speech recognition, the two most popular investment areas, have already absorbed some 20 to 30 billion US dollars of capital from the world's tech giants, and the AI hype has run especially hot over the past year.

Take the BAT internet companies. Baidu, the first tech company to proclaim itself "All in AI," has focused on its conversational AI system DuerOS and its Apollo autonomous driving platform. Alibaba is laying out a full AI ecosystem, investing heavily in AI startups while also building up its intelligent cloud, AI chips, and other technologies. Tencent, a later starter, is hardly conceding: it set up AI Lab, recruited a large roster of AI experts, and is actively turning speech recognition, face recognition, and other technologies into internal products...

Google, which has recently stirred up the domestic search market again, likewise threaded AI through last month's developer conference in Shanghai, from Android to smart wearables, from TensorFlow to AR applications, here building the underlying ecosystem, there setting the technology trend.

CSDN's survey shows that although AI adoption in China is still low, the growth potential is huge; only 25% of developers said nobody around them has used it.

A recent questionnaire by the Liepin Big Data Research Institute finds that educational requirements for core AI roles have clearly risen; AI talent is concentrated in the three first-tier cities of Beijing, Shanghai, and Shenzhen; and while AI talent sits mostly in the internet industry, it is gradually seeping into other industries. Among the top 10 core roles, AI software engineers and algorithm engineers lead by a wide margin as the most in-demand positions.

In addition, "Top AI Trends To Watch In 2018," recently published by the well-known US research firm CB Insights after an in-depth analysis of the state of the AI industry, shows that AI salary levels have clearly overtaken those of front-end/back-end development, mobile development, and similar roles.

And a report from PwC notes that as artificial intelligence expands into more specific domains, it will demand domain expertise and skills that data scientists and AI specialists usually lack. Going forward, AI developers will need a far more well-rounded technical foundation.

 

Blockchain developers' enthusiasm still runs high

 

The blockchain market's "twists and turns" of recent years have also brought blockchain applications before the public eye.

According to Morgan Stanley research, "bitcoin's price has been rising roughly 15 times as fast as the Nasdaq Composite." Bitcoin's 2017-2018 price trajectory looks a lot like the Nasdaq Composite in the run-up to the 1998 internet bubble, only much faster; Morgan Stanley's analysts believe this "suggests that Nasdaq history is repeating itself."

Yet under all the talk of a "bubble," developers' enthusiasm for learning blockchain technology remains high.

A CodeMentor study, the "State of the Blockchain Developer Ecosystem" survey, shows that although 46% of respondents said they had no plans to learn this new technology in the short term (the next three months), as many as nine in ten developers plan to start learning blockchain in the coming months.

On pay, BOSS Zhipin's data show that in the first quarter of 2018 average advertised salaries for blockchain roles rose 31%, beating every other role. "But the blockchain talent pool is too small, and poaching is hard. Poaching one blockchain person takes 200% effort." Companies have tried every trick to hire, yet the great majority of practitioners are unqualified: to become a blockchain technical elite you must understand not only computers and programming languages but also economics and game theory at a deep level. The severe talent shortage may itself be one cause of the bubble in the blockchain market.

Moreover, blockchain applications today are still relatively few. CSDN's survey shows that people who are using or preparing to use blockchain to solve technical problems make up only 10% of respondents, and 20% do not understand blockchain at all. Lack of development experience, of technical materials, and of deployed applications and scenarios are the main challenges in blockchain development, cited by 56%, 54%, and 50% of respondents respectively.

 

Supply of good IoT talent falls far short of demand

 

From smart homes to medical monitoring, from wearables to energy supply, the Internet of Things has become an inseparable part of our lives, and tech giants at home and abroad are racing to stake out the field.

Early this year Alibaba declared that IoT is the group's new main track after e-commerce, finance, logistics, and cloud computing, setting a goal of 10 billion connected devices within five years. Baidu launched its Baidu Cloud Tiangong intelligent IoT platform. Huawei has pushed the NB-IoT standard and released the LiteOS IoT operating system and end-to-end NB-IoT solutions. Tencent launched the "QQ IoT Smart Hardware Open Platform," opening core capabilities such as the QQ account system and relationship chains and QQ messaging channels to partners in wearables, smart homes, smart vehicles, and traditional hardware, enabling interconnection between users and devices and among devices themselves...

Yet according to the Eclipse IoT Developer Survey 2018, growth in enterprises developing IoT solutions is only 5.8%. Slow as that is, it shows IoT companies moving past theory and increasingly putting it into practice.

The sheer difficulty of building IoT systems has to be mentioned here. In IoT, networking, human-machine interaction, data, and security are all heavily fragmented, so it is not purely software development; embedded hardware skills are needed as well. Against this backdrop, IoT developers are naturally in hot demand: on one well-known domestic recruiting site alone, IoT engineers average 15K/month in pay, and the site carries more than 14,000 job postings.

Moreover, as an emerging strategic industry championed by the state, IoT draws attention from every quarter and has become a hot field with broad job prospects. Since 2011, universities across the country have been setting up IoT majors, offering courses such as Introduction to IoT Engineering, Embedded Systems and Microcontrollers, Wireless Sensor Networks and RFID, IoT Technology and Applications, Cloud Computing and IoT, IoT Security, IoT Architecture and Integrated Practice, Introduction to Signals and Systems, and Modern Sensor Technology, along with many electives.

For IoT developers themselves, the advice is to find the right angle on IoT while studying and go deep; mastering the knowledge and hands-on project skills is what matters most.

 

What do our real developers actually look like?

 

Code changes the world, and the technical world developers create is bringing revolutionary change to our lives. The portraits of developers in the five fields above are only snapshots of an era of technological change; amid today's rapid development, what trends will the developer portrait show?

Since 2004, CSDN has carried out in-depth surveys of developers, development technologies, tools, and platforms and their trends, providing important reference material on China's software developer community and the software development services market. To date, tens of thousands of developers have taken part, together painting a true portrait of Chinese developers.

And now the 2018 CSDN software developer survey has officially launched! As a member of the developer community, you are warmly invited to join.

Scan the QR code to participate:

We have also prepared gifts for you: a Huawei nova 3 smartphone, Xiao AI smart speakers, CSDN backpacks, CSDN custom T-shirts, and hundreds of technical books are waiting. Every participant has a chance to win, so come give it a try!

Click "Read the original" below, or copy the official link (https://www.csdn.net/2018dev/) into your browser, to take part right away.

 


 


Author: csdnnews. Published 2018/10/08 22:17:44. Original link: https://blog.csdn.net/csdnnews/article/details/82975989
Views: 1227

          [Repost] After countless hours of machine learning training, 16 tips learned from the pitfalls!


After thousands upon thousands of hours of machine learning training time, the computer is not the only one that learns a lot: we, the developers and trainers, have also made many mistakes, fixed many of them, and accumulated plenty of experience along the way.

In this article, the authors draw on that experience (mostly with TensorFlow) to offer advice on training neural networks, illustrated with a case study; call it practical wisdom from people who have been through it.

Produced by | AI 科技大本营

General Tips

Some of these tips may seem obvious to you, but at times they may not be, they may not apply, or they may even be bad advice for your particular task, so use them with care!

1. Use the ADAM optimizer

It really works. We prefer the ADAM optimizer over more traditional optimizers such as vanilla gradient descent. A TensorFlow note: if you save and restore model weights, remember to set up the Saver after setting up the AdamOptimizer, because ADAM keeps state (namely, per-weight learning rates) that also needs to be restored.

2. ReLU is the best nonlinearity (activation function)

Much as Sublime is the best text editor. ReLU is fast and simple, and, surprisingly, it just works, without the gradients shrinking away. Although the sigmoid is one of the common activation functions, it does not propagate gradients well through deep networks.

3. Don't use an activation function at the output layer

This should be obvious, but it's an easy mistake to make if you build every layer with a shared helper function: make sure the activation is switched off at the output layer.

4. Add a bias in every layer

This is ML 101: a bias essentially shifts the plane into the best-fitting position. In y = mx + b, b is the bias, letting the line move up or down into the position of "best fit."

5. Use variance-scaled initialization

In TensorFlow, this looks like tf.variance_scaling_initializer().

In our experience, this generalizes/scales better than the regular Gaussian, truncated normal, or Xavier initializers.

Roughly speaking, the variance-scaling initializer adjusts the variance of the initial random weights based on the number of inputs or outputs at each layer (the default in TensorFlow is the number of inputs), which helps the signal propagate deeper into the network without needing extra clipping or batch normalization. Xavier is similar, except that the variance is nearly the same in every layer; but a network whose layer shapes vary a lot (common in convolutional networks) may not cope as well with the same variance in every layer.
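A minimal sketch of wiring this into a layer (the layer sizes are illustrative assumptions):

import tensorflow as tf

x = tf.placeholder(tf.float32, [None, 32])
init = tf.variance_scaling_initializer()  # scales initial weight variance by fan-in
h = tf.layers.dense(x, 128, activation=tf.nn.relu, kernel_initializer=init)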

6. Normalize input data

For training, subtract the dataset's mean and then divide by its standard deviation. The less your weights are stretched in every direction, the easier your network learns. Keeping the input data mean-centered with constant variance helps with this. You must also apply the same normalization to every test input, so make sure your training set resembles real data.

Scale input data in a way that reasonably preserves its dynamic range. This is related to normalization but should happen before normalizing.

For example, real-world data x with a range of [0, 140000000] can often be tamed with tanh(x) or tanh(x/C), where C is some constant that stretches the curve to fit more of the input range into the dynamic range of the tanh function's gentle slope. Especially when the input data may be unbounded at one or both ends, the neural network will learn better with inputs in (0, 1).
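As a minimal numpy sketch of that order of operations (the constant C and the shapes are illustrative assumptions):

import numpy as np

def preprocess(train_x, test_x, C=1e7):
    # Squash the unbounded dynamic range first...
    train_s, test_s = np.tanh(train_x / C), np.tanh(test_x / C)
    # ...then center and scale, using training-set statistics only.
    mean, std = train_s.mean(axis=0), train_s.std(axis=0)
    return (train_s - mean) / std, (test_s - mean) / std

train, test = preprocess(np.random.rand(100, 3) * 1.4e8,
                         np.random.rand(10, 3) * 1.4e8)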

7. You generally don't need learning rate decay

Learning rate decay is more common with SGD, but ADAM handles it naturally. If you truly want to squeeze out every last bit of performance, briefly lower the learning rate at the end of training; you may see a sudden, very small drop in error, after which it flattens out again.

If your convolutional layers have 64 or 128 filters, that is already plenty, especially for a deep network; 128 really is a lot. If you already have a large number of filters, adding more is unlikely to improve performance further.

8. Pooling is for translation invariance

Pooling essentially lets the network learn the "general idea" of "that part" of an image. Max pooling, for example, can help a convolutional network become more robust to translation, rotation, and scaling of features in an image.


Debugging a Neural Network


If your network isn't learning well (meaning the loss/accuracy fails to converge during training, or you don't get the results you expect), try the following tips:

9. Overfit

If your network isn't learning, the first thing to do is overfit a single training point. Accuracy should be essentially 100% or 99.99%, or the error close to 0. If your neural network can't overfit a single data point, something is seriously (if perhaps subtly) wrong with the architecture. If you can overfit one data point but training on the larger set still won't converge, try the following:

10. Lower your learning rate

Your network will learn more slowly, but it may find minima it couldn't get into before because its step size was too large.

11. Raise your learning rate

This speeds up training and tightens the feedback loop, meaning you find out quickly whether your network works at all. The network should converge faster, though its results may not be great, and the "convergence" may actually bounce around a lot. (With the ADAM optimizer, we found that a learning rate of about 0.001 performed well in many runs.)

12. Shrink your batch size

Reducing the batch size to 1 gives you finer-grained feedback on the weight updates, which you should visualize with TensorBoard (or some other debugging/visualization tool).
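A minimal sketch of logging that per-update feedback to TensorBoard (the toy model and log directory are illustrative assumptions):

import numpy as np
import tensorflow as tf

x = tf.placeholder(tf.float32, [None, 4])
y = tf.placeholder(tf.float32, [None, 1])
loss = tf.reduce_mean(tf.square(tf.layers.dense(x, 1) - y))
train_op = tf.train.AdamOptimizer(1e-3).minimize(loss)
loss_summary = tf.summary.scalar("loss", loss)

writer = tf.summary.FileWriter("./logs")
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for step in range(100):
        xb, yb = np.random.randn(1, 4), np.random.randn(1, 1)  # batch size 1
        s, _ = sess.run([loss_summary, train_op], feed_dict={x: xb, y: yb})
        writer.add_summary(s, step)  # inspect with: tensorboard --logdir ./logs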

13. Remove batch normalization

Along with shrinking the batch size to 1, removing batch normalization can expose vanishing or exploding gradients. We had a network that wouldn't converge for weeks, and only when we removed batch normalization did we realize that the outputs were all NaN by the second iteration. Batch norm is like the absorbent pad on a band-aid: it has places where it works well, but only if you know the network has no bugs.

14. Increase your batch size

A larger batch size, the whole training set if you can manage it, reduces the variance of the gradient updates and makes each iteration more accurate. In other words, the weight updates head in the right direction. But! There is an effective ceiling on its usefulness, as well as physical memory limits. In general we find this tip less useful than the previous two (shrinking the batch size to 1 and removing batch normalization).

15. Check your reshaping

Drastic matrix reshaping (such as changing an image's X and Y dimensions) destroys spatial locality and makes the network harder to train, since it now also has to learn the reshape. (Natural features become fragmented; the fact that natural features are spatially local is part of why convolutional networks are so effective!) Be especially careful when reshaping with multiple images/channels; use numpy.stack() for proper alignment.
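For instance, a quick sketch of stacking along a fresh batch axis, which keeps each image's spatial layout intact (shapes are illustrative):

import numpy as np

imgs = [np.zeros((32, 32, 3)) for _ in range(4)]  # four H x W x C images
batch = np.stack(imgs, axis=0)                    # shape (4, 32, 32, 3)
# A reshape that mixed the X and Y dimensions instead would scramble
# spatial locality and force the network to learn the rearrangement.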

16. Double-check your loss function

If you use a complicated loss function, try simplifying it to L1 or L2 first. We find L1 less sensitive to outliers: it makes smaller adjustments when it hits a noisy batch or training point.

Also double-check your visualization, if you use one. Does your plotting library (Matplotlib, OpenCV, etc.) rescale the values, or does it clip them? Consider using a perceptually uniform color scheme as well.


A Case Study


To make the process described above easier to follow, here are a few loss plots (via TensorBoard) from a real regression experiment with a convolutional neural network we built.

At first, the network wasn't learning at all:

(loss plot)

We tried clipping the values to keep them from exceeding their bounds:

(loss plot)

Hmm. Look how crazy the unsmoothed values are! Is the learning rate too high? We tried decaying the learning rate and training on just a single input:

(loss plot)

You can see where the first few learning rate changes occurred (around steps 300 and 3000). Clearly we decayed too fast. So, given a longer decay schedule, it performs better:

(loss plot)

You can see that we decayed at steps 2000 and 5000. This is better, but still not good enough, because the loss does not head toward 0.

Then we disabled LR decay and tried squashing the values into a narrower range by feeding the inputs through a tanh instead. Although this obviously brought the error values below 1, we still couldn't overfit the training set:

(loss plot)

This is where we discovered, by deleting the batch normalization layers, that the network was quickly outputting NaN after one or two iterations. We left batch normalization off and changed the initialization to variance scaling.

These changes made all the difference! We were able to overfit a test set of just one or two inputs. While the chart at the bottom clips the Y axis, the initial error value was far above 5, showing a reduction of nearly four orders of magnitude:

(loss plot)

The chart above is heavily smoothed, but you can see that it fit the test inputs extremely quickly, and the loss over the whole training set fell below 0.01 over time.

That was without decaying the learning rate. We then continued training after lowering the learning rate by an order of magnitude, getting even better results:

(loss plot)

These results are much better! But what happens if we decay the learning rate geometrically rather than splitting the training into two parts?

Multiplying the learning rate by 0.9995 at every step, the results were not nearly as good:

(loss plot)

Presumably because the decay was too fast. A multiplier of 0.999995 did better, but the result was almost the same as no decay at all.

We concluded from this particular sequence of experiments that batch normalization was hiding exploding gradients caused by poor initialization, that learning rate decay did not help much with the ADAM optimizer, and that, like batch normalization, clipping the values merely masked the real problem. We also tamed our high-variance input values with a tanh.

We hope these basic tips help you as you build your own deep neural networks. Often it is precisely the simple things that change everything.

Original link: https://pcc.cs.byu.edu/2017/10/02/practical-advice-for-building-deep-neural-networks/

Authors: Matt H / Daniel R

Translator: Wanqing (婉清); Editor: Jane


Author: csdnnews. Published 2018/10/08 12:28:06. Original link: https://blog.csdn.net/csdnnews/article/details/82975995
Views: 87

          (USA-TX-Austin) Data Scientist
Job Description

Ticom Geomatics, a CACI Company, delivers industry leading Signals Intelligence and Electronic Warfare (SIGINT/EW) products that enable our nation's tactical war fighters to effectively utilize networked sensors, assets, and platforms to perform a variety of critical national security driven missions. We are looking for talented, passionate Engineers, Scientists, and Developers who are excited about using state of the art technologies to build user-centric products with a profound impact to the US defense and intelligence community. We are seeking to grow our highly capable engineering teams to build the best products in the world. The successful candidate is an individual who is never satisfied with continuing with the status quo just because "it's the way things have always been done".

What You'll Get to Do:
The prime responsibility of the Data Scientist position is to provide support for the design, development, integration, test and maintenance of CACI's Artificial Intelligence and Machine Learning product portfolio. This position is based in our Austin, TX office. For those outside of the Austin area, relocation assistance is considered on a case by case basis.

Duties and Responsibilities:
- Work within a cross-disciplinary team to develop new machine learning-based software applications. Position is responsible for implementing machine learning algorithms by leveraging open source and custom machine learning tools and techniques
- Use critical thinking to assess deficiencies in existing machine learning or expert system-based applications and provide recommendations for improvement
- Generate technical documentation to include software description documents, interface control documents (ICDs) and performance analysis reports
- Travel to other CONUS locations as required (up to 25%)

You'll Bring These Qualifications:
- Degree in Computer Science, Statistics, Mathematics or Electrical & Computer Engineering from an ABET accredited university with a B.S. degree and a minimum of 7 years of related experience, or a M.S. degree and 5 years of experience, or a PhD with a minimum of 2 years of academic or industry experience
- In-depth knowledge and practical experience using a variety of machine learning techniques including: linear regression, logistic regression, neural networks, support vector machines, anomaly detection, natural language processing and clustering techniques
- Expert level knowledge and practical experience with C++, Python, Keras, TensorFlow, PyTorch, Caffe, Docker
- Technical experience in the successful design, development, integration, test and deployment of machine learning based applications
- Strong written and verbal communication skills
- Self-starter that can work with minimum supervision and has good team interaction skills
- US citizenship is required along with the ability to obtain a TS/SCI security clearance

Desired Qualifications:
- Basic understanding and practical experience with digital signal processing techniques
- Experience working with big data systems such as Hadoop, Spark, NoSQL and Graph Databases
- Experience working within research and development (R&D) environments
- Experience working within Agile development teams leveraging DevOps methodology
- Experience working within cross-functional teams following a SCRUM/Sprint-based project execution
- Experience implementing software within a Continuous Integration, Continuous Deployment environment
- Experience delivering software systems for DoD customers

What We Can Offer You:
- We've been named a Best Place to Work by the Washington Post.
- Our employees value the flexibility at CACI that allows them to balance quality work and their personal lives.
- We offer competitive benefits and learning and development opportunities.
- We are mission-oriented and ever vigilant in aligning our solutions with the nation's highest priorities.
- For over 55 years, the principles of CACI's unique, character-based culture have been the driving force behind our success.

Ticom Geomatics (TGI) is a subsidiary of CACI International, Inc. in Austin, Texas with ~200 employees. We've recently been named by the Austin American-Statesman as one of the Top Places to Work in Austin. We are an industry leader in interoperable, mission-ready Time and Frequency Difference of Arrival (T/FDOA) Precision Geolocation systems and produce a diverse portfolio of Intelligence, Surveillance and Reconnaissance (ISR) products spanning small lightweight sensors, rack-mounted deployments, and cloud-based solutions which are deployed across the world. The commitment of our employees to "Engineering Results" is the catalyst that has propelled TGI to becoming a leader in software development, R&D, sensor development, and signal processing. Our engineering teams are highly adept at solving complex problems with the application of leading-edge technology solutions. Our work environment is highly focused yet casual, with flexible schedules that enable each of our team members to achieve the work-life balance that works for them. We provide a highly competitive benefits package including a generous 401(k) contribution and Paid Time Off (PTO) policy. See additional positions at: http://careers.caci.com/page/show/TGIJobs

Job Location: US-Austin-TX-AUSTIN

CACI employs a diverse range of talent to create an environment that fuels innovation and fosters continuous improvement and success. At CACI, you will have the opportunity to make an immediate impact by providing information solutions and services in support of national security missions and government transformation for Intelligence, Defense, and Federal Civilian customers.
CACI is proud to provide dynamic careers for employees worldwide. CACI is an Equal Opportunity Employer - Females/Minorities/Protected Veterans/Individuals with Disabilities.
          tensorflow keras installation

Hi, please give me step-by-step instructions to install the latest versions of tensorflow, keras, scikit-image, scikit-learn, open-cv, and SimpleITK.

 

Thanks


          Real-Time Code Generation Using AI Catches Designers And Developers By Surprise


teleportHQ, a platform dedicated to open-source tools for user interface (UI) professionals, recently left designers and developers reeling with excitement over its ‘#ThinkToCode’ ecosystem.

Over on Twitter and LinkedIn, one of its videos demonstrating real-time code generation using artificial intelligence (AI) has been making its rounds on the social networks.

Dan Saffer, who works as a product design leader at Twitter, shared the video on the platform seeking help with finding its creator. “Anyone know who did this? It’s blowing my mind. Occasionally you see the future and I feel like this is one of those times,” Saffer wrote.

It was also highlighted by Mark Kelly, the founder of AI Awards, on LinkedIn.

Check out teleportHQ’s ‘#ThinkToCode’ ecosystem and see more responses below.

It's from @TeleportHQio (HT @andyyang) https://t.co/jFx1hpDc9a

— Dan Saffer (@odannyboy) October 8, 2018


It’s circling on LinkedIn as well. https://t.co/HBDMK0AD56. So cool.

— Christopher Robin (@IamUserX) October 9, 2018


Holy shit https://t.co/IoCgYkLW5t

— Dinesh Dave (@appleidinesh) October 9, 2018


pic.twitter.com/fiRoTv7WSY

— Startup Cardi Boo (@StartupCardiB) October 9, 2018


pic.twitter.com/WOeht89bsm

— Sadiyah Ali (@5adiali) October 9, 2018


Wow. 🤯

— Vish Amin (@VishalRAmin) October 10, 2018


My God! 😱😱😱😱😱😱😱😱😱😱😱😱😱😱😱😱😱😱😱🔥🔥🔥🔥🔥🔥😱😱😱😱😱😱😱🔥🔥🔥🔥🔥🔥😱😱😱😱😱😱😱😱😱😱😱🔥🔥🔥🔥😱😱😱🔥🔥🔥🔥🔥😱🔥😱😱😱😱😱😱😱😱😱😱😱😱😱😱😱😱😱😱😱😱😱🔥😱🔥😱😱😱😱😱😱😱😱😱😱😱😱😱😱😱

— Emmanuel Adigwe (@Igbo_Chuck) October 9, 2018


Indeed interesting that so many are surprised by this!

— IntuitionMachine (@IntuitMachine) October 9, 2018


I was at @jsheroes when you talked about it and thought it was a very nice crazy idea. So good to see it working!

— Isaac Besora (@ibesora) September 27, 2018


Woah!! #ThinkToCode #TheFutureIsHere https://t.co/jnD3TodxBA

— Manny Colon (@_mannycolon) September 23, 2018


After a busy summer, we're about to announce exciting developments over the next few weeks.
In the meantime, a short preview of our #ThinkToCode ecosystem in the making #MachineLearning #TensorFlow pic.twitter.com/HOjM7LUK0s

— teleportHQ (@TeleportHQio) September 4, 2018


Airbnb has done something similar https://t.co/I24dSJLCHI

— benarent (@benarent) October 8, 2018


And what @UizardIO is doing as a commercial product.

Initial prototype https://t.co/qf0NEANmBP

— Naushad Shaikh (@snaushads) October 9, 2018


Thanks for sharing my old research demo video! 😊 For a more up-to-date sneek peak in our product, check this out https://t.co/grfnYZMmsq

— Tony Beltramelli (@Tbeltramelli) October 9, 2018




[via odannyboy, main image screenshot via teleportHQ]
          Microsoft replaces the ML.NET API in the 0.6 release; prediction performance improves more than a hundredfold
ML.NET, the machine learning framework developed by Microsoft Research, keeps up its monthly releases. The most important change this time is deprecating the LearningPipeline API in favor of an entirely new learning API. The list of supported machine learning frameworks also keeps growing: besides the TensorFlow support added in the previous release, ONNX models are now supported too. On overall performance, model prediction speed in this release has improved by more than 100x.
          Style Transfer Model Development
Implement something similar to (https://medium.com/tensorflow/neural-style-transfer-creating-art-with-deep-learning-using-tf-keras-and-eager-execution-7d541ac31398) for capturing style from multiple sources, storing it into embeddings, and then re-applying it at differing scales (i.e... (Budget: $500 USD, Jobs: Machine Learning, Python, Tensorflow)
          Can Cloud-based AI Boost the Economy?
Society of Internet Professionals: Can Cloud-based AI Boost the Economy?

This article, authored by Cory Popescu, was first published on the blog of the Society of Internet Professionals (SIP). SIP is a not-for-profit, Toronto (Canada) based international organization to connect, learn, and share. Its vision is to provide the opportunity to leverage technology for an inclusive future for everyone. Since 1997, SIP has spearheaded many initiatives, educational programs, and networking events.

Artificial intelligence (AI) is currently penetrating the tech industry fastest, since it has efficiently increased the volume of new products and services there, and the industry's structure of development, production, and delivery is well suited to implementing this cutting-edge technology. Plenty of other businesses and industries have yet to take advantage of advances in this relatively new field. In medicine, energy, or manufacturing, a more intensive and comprehensive application of AI could dramatically transform the sector while boosting economic productivity.

Not only has AI mainly entered a single sector, high tech; even within it, only a few major organizations use it to expand volume and efficiency at unbelievable speed. Companies like Google, Amazon, Microsoft, Baidu, and some startups can fold AI into their operations because the price is acceptable to them, while for most of the rest of the economy this novel technology remains difficult and extremely costly to implement.

Companies like Amazon, Microsoft, and Google therefore aim to create cloud-based AI that makes the technology cheaper to implement and easier to use. This leading-edge cloud-based AI is available now, and expanding it to many more organizations could trigger further economic development. The solution, then, is to bring AI and cloud-based machine-learning tools to large audiences.

In this pursuit, Microsoft has Azure, its own cloud platform, and in cooperation with Amazon it offers an open-source machine learning library called Gluon. Gluon is designed to make building neural nets, a core AI technology that loosely copies the learning processes of the human brain, far more approachable.

Although Amazon currently dominates cloud machine learning with AWS, Google is following suit with its open-source AI library, TensorFlow, which has proven powerful as a foundation for building further machine-learning software. Simpler use and implementation of AI is also a priority on designers' tables, and Google's recent suite of pre-trained systems, Cloud AutoML, promises to deliver just that. Both organizations are preparing consulting services to cover the shortage of cloud-based AI specialists who can spread knowledge of leading-edge AI.

The future will tell who spreads AI more widely, and of what quality. It certainly represents a huge business opportunity for those involved. Cloud-based AI has a good chance of expanding into and reshaping sectors untouched so far. We will only realize the true benefits… and the downsides of AI once cloud-based AI is ready to roll and sits at almost everyone's fingertips.

Cory Popescu

Your comments are welcome.

Click on the links below to read the other articles by Cory Popescu:
Is It Possible To Get 100% Privacy On The Internet?
Why Incorporate Blockchain in Your Business?
Blockchain: Unbelievable Transactions Blockchain Can Promote!


          Jelly Bean Identifier
Using a retrained TensorFlow MobileNet neural network to identify the flavors of multiple jelly beans.
          Renesas Embedded Artificial Intelligence

Check out Renesas's solution for embedded artificial intelligence, with a practical example of using TensorFlow with a development board.

The post Inteligência Artificial Embarcada da Renesas appeared first on Embarcados, your source of information on embedded systems.


          ShadowPlay: Using our hands to have some fun with AI

Editor’s note: TensorFlow, our open source machine learning platform, is just that—open to anyone. Companies, nonprofits, researchers and developers have used TensorFlow in some pretty cool ways and at Google, we're always looking to do the same. Here's one of those stories.

Chinese shadow puppetry—which uses silhouette figures and music to tell a story—is an ancient Chinese art form that’s been used by generations to charm communities and pass along cultural history. At Google, we’re always experimenting with how we can connect culture with AI and make it fun, which got us thinking: can AI help put on a shadow puppet show?

So we created ShadowPlay, an interactive installation that celebrates the shadow puppetry art form. The installation, built using TensorFlow and TPUs, uses AI to recognize a person's hand gestures and then magically transform the shadow figure into digital animations representing the 12 animals of the Chinese zodiac in an interactive show.


Attendees use their hands to make shadow figures, which transform into animated characters and create an interactive show.

We debuted ShadowPlay at the World AI Conference and Google Developers Day in Shanghai in September. To build the experience, we developed a custom machine learning model that was trained on a dataset made up of lots of examples of people’s hand shadows, which could eventually recognize the shadow and match it to the corresponding animal. “In order to bring this project to life, we asked Googlers to help us train the model by making a lot of fun hand gestures. Once we saw the reaction of users seeing their hand shadows morph into characters, it was impossible not to smile!”, says Miguel de Andres-Clavera, Project Lead at Google. To make sure the experience could guess what animal people were making with high accuracy, we trained the model using TPUs, our custom machine learning hardware accelerators.

We had so much fun building ShadowPlay (almost as much fun as practicing our shadow puppets … ) that we'll be bringing it to more events around the world soon!


          Comment on How to Setup a Python Environment for Machine Learning and Deep Learning with Anaconda by Sindhu
theano: 1.0.3
tensorflow: 1.11.0
Using TensorFlow backend.
keras: 2.2.4
          Incredible Video Shows Artificial Intelligence Creating A Website Just By Looking At The Wireframe

Artificial intelligence is now being used to create front-end designs, from wireframe to HTML code. teleportHQ, a platform of open-source tools for UI professionals, has released a video demonstrating real-time code generation using TensorFlow machine learning and computer-vision image recognition. Watch below. Titled #ThinkToCode, the ecosystem has sparked excitement and conversation among web/UI designers […]

The post Incredible Video Shows Artificial Intelligence Creating A Website Just By Looking At The Wireframe appeared first on Digital Synopsis.


          Exploring LSTMs      Cache   Translate Page      

It turns out LSTMs are a fairly simple extension to neural networks, and they're behind a lot of the amazing achievements deep learning has made in the past few years. So I'll try to present them as intuitively as possible – in such a way that you could have discovered them yourself.

But first, a picture:

LSTM

Aren't LSTMs beautiful? Let's go.

(Note: if you're already familiar with neural networks and LSTMs, skip to the middle – the first half of this post is a tutorial.)

Neural Networks

Imagine we have a sequence of images from a movie, and we want to label each image with an activity (is this a fight?, are the characters talking?, are the characters eating?).

How do we do this?

One way is to ignore the sequential nature of the images, and build a per-image classifier that considers each image in isolation. For example, given enough images and labels:

  • Our algorithm might first learn to detect low-level patterns like shapes and edges.
  • With more data, it might learn to combine these patterns into more complex ones, like faces (two circular things atop a triangular thing atop an oval thing) or cats.
  • And with even more data, it might learn to map these higher-level patterns into activities themselves (scenes with mouths, steaks, and forks are probably about eating).

This, then, is a deep neural network: it takes an image input, returns an activity output, and – just as we might learn to detect patterns in puppy behavior without knowing anything about dogs (after seeing enough corgis, we discover common characteristics like fluffy butts and drumstick legs; next, we learn advanced features like splooting) – in between it learns to represent images through hidden layers of representations.

Mathematically

I assume people are familiar with basic neural networks already, but let's quickly review them.

  • A neural network with a single hidden layer takes as input a vector x, which we can think of as a set of neurons.
  • Each input neuron is connected to a hidden layer of neurons via a set of learned weights.
  • The jth hidden neuron outputs \(h_j = \phi(\sum_i w_{ij} x_i)\), where \(\phi\) is an activation function.
  • The hidden layer is fully connected to an output layer, and the jth output neuron outputs \(y_j = \sum_i v_{ij} h_i\). If we need probabilities, we can transform the output layer via a softmax function.

In matrix notation:

$$h = \phi(Wx)$$
$$y = Vh$$

where

  • x is our input vector
  • W is a weight matrix connecting the input and hidden layers
  • V is a weight matrix connecting the hidden and output layers
  • Common activation functions for \(\phi\) are the sigmoid function, \(\sigma(x)\), which squashes numbers into the range (0, 1); the hyperbolic tangent, \(tanh(x)\), which squashes numbers into the range (-1, 1), and the rectified linear unit, \(ReLU(x) = max(0, x)\).

Here's a pictorial view:

Neural Network

(Note: to make the notation a little cleaner, I assume x and h each contain an extra bias neuron fixed at 1 for learning bias weights.)
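To make the matrix form concrete, here is a minimal numpy sketch of this forward pass; the layer sizes and the choice of sigmoid are illustrative assumptions, and the bias neurons are omitted for brevity:

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

n_in, n_hidden, n_out = 4, 8, 3
rng = np.random.RandomState(0)
W = rng.randn(n_hidden, n_in)   # input -> hidden weights
V = rng.randn(n_out, n_hidden)  # hidden -> output weights

x = rng.randn(n_in)
h = sigmoid(W @ x)              # h = phi(Wx)
y = V @ h                       # y = Vh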

Remembering Information with RNNs

Ignoring the sequential aspect of the movie images is pretty ML 101, though. If we see a scene of a beach, we should boost beach activities in future frames: an image of someone in the water should probably be labeled swimming, not bathing, and an image of someone lying with their eyes closed is probably suntanning. If we remember that Bob just arrived at a supermarket, then even without any distinctive supermarket features, an image of Bob holding a slab of bacon should probably be categorized as shopping instead of cooking.

So what we'd like is to let our model track the state of the world:

  1. After seeing each image, the model outputs a label and also updates the knowledge it's been learning. For example, the model might learn to automatically discover and track information like location (are scenes currently in a house or beach?), time of day (if a scene contains an image of the moon, the model should remember that it's nighttime), and within-movie progress (is this image the first frame or the 100th?). Importantly, just as a neural network automatically discovers hidden patterns like edges, shapes, and faces without being fed them, our model should automatically discover useful information by itself.
  2. When given a new image, the model should incorporate the knowledge it's gathered to do a better job.

This, then, is a recurrent neural network. Instead of simply taking an image and returning an activity, an RNN also maintains internal memories about the world (weights assigned to different pieces of information) to help perform its classifications.

Mathematically

So let's add the notion of internal knowledge to our equations, which we can think of as pieces of information that the network maintains over time.

But this is easy: we know that the hidden layers of neural networks already encode useful information about their inputs, so why not use these layers as the memory passed from one time step to the next? This gives us our RNN equations:

$$h_t = \phi(Wx_t + Uh_{t-1})$$
$$y_t = Vh_t$$

Note that the hidden state computed at time \(t\) (\(h_t\), our internal knowledge) is fed back at the next time step. (Also, I'll use concepts like hidden state, knowledge, memories, and beliefs to describe \(h_t\) interchangeably.)

RNN
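As a minimal numpy sketch of the recurrence (shapes illustrative), the only change from the feedforward pass above is the U term feeding the previous hidden state back in:

import numpy as np

n_in, n_hidden, n_out = 4, 8, 3
rng = np.random.RandomState(0)
W = rng.randn(n_hidden, n_in)      # input -> hidden
U = rng.randn(n_hidden, n_hidden)  # hidden -> hidden (the recurrence)
V = rng.randn(n_out, n_hidden)     # hidden -> output

h = np.zeros(n_hidden)             # initial memory
for x_t in rng.randn(5, n_in):     # a length-5 input sequence
    h = np.tanh(W @ x_t + U @ h)   # h_t = phi(W x_t + U h_{t-1})
    y_t = V @ h                    # y_t = V h_t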

Longer Memories through LSTMs

Let's think about how our model updates its knowledge of the world. So far, we've placed no constraints on this update, so its knowledge can change pretty chaotically: at one frame it thinks the characters are in the US, at the next frame it sees the characters eating sushi and thinks they're in Japan, and at the next frame it sees polar bears and thinks they're on Hydra Island. Or perhaps it has a wealth of information to suggest that Alice is an investment analyst, but decides she's a professional assassin after seeing her cook.

This chaos means information quickly transforms and vanishes, and it's difficult for the model to keep a long-term memory. So what we'd like is for the network to learn how to update its beliefs (scenes without Bob shouldn't change Bob-related information, scenes with Alice should focus on gathering details about her), in a way that its knowledge of the world evolves more gently.

This is how we do it.

  1. Adding a forgetting mechanism. If a scene ends, for example, the model should forget the current scene location, the time of day, and reset any scene-specific information; however, if a character dies in the scene, it should continue remembering that he's no longer alive. Thus, we want the model to learn a separate forgetting/remembering mechanism: when new inputs come in, it needs to know which beliefs to keep or throw away.
  2. Adding a saving mechanism. When the model sees a new image, it needs to learn whether any information about the image is worth using and saving. Maybe your mom sent you an article about the Kardashians, but who cares?
  3. So when a new input comes in, the model first forgets any long-term information it decides it no longer needs. Then it learns which parts of the new input are worth using, and saves them into its long-term memory.
  4. Focusing long-term memory into working memory. Finally, the model needs to learn which parts of its long-term memory are immediately useful. For example, Bob's age may be a useful piece of information to keep in the long term (children are more likely to be crawling, adults are more likely to be working), but is probably irrelevant if he's not in the current scene. So instead of using the full long-term memory all the time, it learns which parts to focus on instead.

This, then, is a long short-term memory network. Whereas an RNN can overwrite its memory at each time step in a fairly uncontrolled fashion, an LSTM transforms its memory in a very precise way: by using specific learning mechanisms for which pieces of information to remember, which to update, and which to pay attention to. This helps it keep track of information over longer periods of time.

Mathematically

Let's describe the LSTM additions mathematically.

At time \(t\), we receive a new input \(x_t\). We also have our long-term and working memories passed on from the previous time step, \(ltm_{t-1}\) and \(wm_{t-1}\) (both n-length vectors), which we want to update.

We'll start with our long-term memory. First, we need to know which pieces of long-term memory to continue remembering and which to discard, so we want to use the new input and our working memory to learn a remember gate of n numbers between 0 and 1, each of which determines how much of a long-term memory element to keep. (A 1 means to keep it, a 0 means to forget it entirely.)

Naturally, we can use a small neural network to learn this remember gate:

$$remember_t = \sigma(W_r x_t + U_r wm_{t-1}) $$

(Notice the similarity to our previous network equations; this is just a shallow neural network. Also, we use a sigmoid activation because we need numbers between 0 and 1.)

Next, we need to compute the information we can learn from \(x_t\), i.e., a candidate addition to our long-term memory:

$$ ltm'_t = \phi(W_l x_t + U_l wm_{t-1}) $$

\(\phi\) is an activation function, commonly chosen to be \(tanh\).

Before we add the candidate into our memory, though, we want to learn which parts of it are actually worth using and saving:

$$save_t = \sigma(W_s x_t + U_s wm_{t-1})$$

(Think of what happens when you read something on the web. While a news article might contain information about Hillary, you should ignore it if the source is Breitbart.)

Let's now combine all these steps. After forgetting memories we don't think we'll ever need again and saving useful pieces of incoming information, we have our updated long-term memory:

$$ltm_t = remember_t \circ ltm_{t-1} + save_t \circ ltm'_t$$

where \(\circ\) denotes element-wise multiplication.

Next, let's update our working memory. We want to learn how to focus our long-term memory into information that will be immediately useful. (Put differently, we want to learn what to move from an external hard drive onto our working laptop.) So we learn a focus/attention vector:

$$focus_t = \sigma(W_f x_t + U_f wm_{t-1})$$

Our working memory is then

$$wm_t = focus_t \circ \phi(ltm_t)$$

In other words, we pay full attention to elements where the focus is 1, and ignore elements where the focus is 0.

And we're done! Hopefully this made it into your long-term memory as well.


To summarize, whereas a vanilla RNN uses one equation to update its hidden state/memory:

$$h_t = \phi(Wx_t + Uh_{t-1})$$

An LSTM uses several:

$$ltm_t = remember_t \circ ltm_{t-1} + save_t \circ ltm'_t$$
$$wm_t = focus_t \circ tanh(ltm_t)$$

where each memory/attention sub-mechanism is just a mini brain of its own:

$$remember_t = \sigma(W_r x_t+ U_r wm_{t-1}) $$
$$save_t = \sigma(W_s x_t + U_s wm_{t-1})$$
$$focus_t = \sigma(W_f x_t + U_f wm_{t-1})$$
$$ ltm'_t = tanh(W_l x_t + U_l wm_{t-1}) $$

(Note: the terminology and variable names I've been using are different from the usual literature. Here are the standard names, which I'll use interchangeably from now on:

  • The long-term memory, \(ltm_t\), is usually called the cell state, denoted \(c_t\).
  • The working memory, \(wm_t\), is usually called the hidden state, denoted \(h_t\). This is analogous to the hidden state in vanilla RNNs.
  • The remember vector, \(remember_t\), is usually called the forget gate (despite the fact that a 1 in the forget gate still means to keep the memory and a 0 still means to forget it), denoted \(f_t\).
  • The save vector, \(save_t\), is usually called the input gate (as it determines how much of the input to let into the cell state), denoted \(i_t\).
  • The focus vector, \(focus_t\), is usually called the output gate, denoted \(o_t\). )

LSTM
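Here is a minimal numpy sketch of one LSTM step, wiring the gate equations together under the standard names above; the shapes are illustrative assumptions and biases are omitted:

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

n_in, n = 4, 8
rng = np.random.RandomState(0)
# One (input, recurrent) weight pair per gate, plus one for the candidate memory.
Wf, Uf = rng.randn(n, n_in), rng.randn(n, n)  # forget (remember) gate
Wi, Ui = rng.randn(n, n_in), rng.randn(n, n)  # input (save) gate
Wo, Uo = rng.randn(n, n_in), rng.randn(n, n)  # output (focus) gate
Wl, Ul = rng.randn(n, n_in), rng.randn(n, n)  # candidate cell state

def lstm_step(x_t, c_prev, h_prev):
    f = sigmoid(Wf @ x_t + Uf @ h_prev)       # which long-term memories to keep
    i = sigmoid(Wi @ x_t + Ui @ h_prev)       # which parts of the input to save
    o = sigmoid(Wo @ x_t + Uo @ h_prev)       # which memories to focus on now
    c_cand = np.tanh(Wl @ x_t + Ul @ h_prev)  # candidate addition to memory
    c = f * c_prev + i * c_cand               # updated cell state (long-term)
    h = o * np.tanh(c)                        # updated hidden state (working)
    return c, h

c, h = np.zeros(n), np.zeros(n)
for x_t in rng.randn(5, n_in):
    c, h = lstm_step(x_t, c, h)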

Snorlax

I could have caught a hundred Pidgeys in the time it took me to write this post, so here's a cartoon.

Neural Networks

Neural Network

Recurrent Neural Networks

RNN

LSTMs

LSTM

Learning to Code

Let's look at a few examples of what an LSTM can do. Following Andrej Karpathy's terrific post, I'll use character-level LSTM models that are fed sequences of characters and trained to predict the next character in the sequence.

While this may seem a bit toyish, character-level models can actually be very useful, even on top of word models. For example:

  • Imagine a code autocompleter smart enough to allow you to program on your phone. An LSTM could (in theory) track the return type of the method you're currently in, and better suggest which variable to return; it could also know without compiling whether you've made a bug by returning the wrong type.
  • NLP applications like machine translation often have trouble dealing with rare terms. How do you translate a word you've never seen before, or convert adjectives to adverbs? Even if you know what a tweet means, how do you generate a new hashtag to capture it? Character models can daydream new terms, so this is another area with interesting applications.

So to start, I spun up an EC2 p2.xlarge spot instance, and trained a 3-layer LSTM on the Apache Commons Lang codebase. Here's a program it generates after a few hours.

While the code certainly isn't perfect, it's better than a lot of data scientists I know. And we can see that the LSTM has learned a lot of interesting (and correct!) coding behavior:

  • It knows how to structure classes: a license up top, followed by packages and imports, followed by comments and a class definition, followed by variables and methods. Similarly, it knows how to create methods: comments follow the correct orders (description, then @param, then @return, etc.), decorators are properly placed, and non-void methods end with appropriate return statements. Crucially, this behavior spans long ranges of code – see how giant the blocks are!
  • It can also track subroutines and nesting levels: indentation is always correct, and if statements and for loops are always closed out.
  • It even knows how to create tests.

How does the model do this? Let's look at a few of the hidden states.

Here's a neuron that seems to track the code's outer level of indentation:

(As the LSTM moves through the sequence, its neurons fire at varying intensities. The picture represents one particular neuron, where each row is a sequence and characters are color-coded according to the neuron's intensity; dark blue shades indicate large, positive activations, and dark red shades indicate very negative activations.)

Outer Level of Indentation

And here's a neuron that counts down the spaces between tabs:

Tab Spaces

For kicks, here's the output of a different 3-layer LSTM trained on TensorFlow's codebase:

There are plenty of other fun examples floating around the web, so check them out if you want to see more.

Investigating LSTM Internals

Let's dig a little deeper. We looked in the last section at examples of hidden states, but I wanted to play with LSTM cell states and their other memory mechanisms too. Do they fire when we expect, or are there surprising patterns?

Counting

To investigate, let's start by teaching an LSTM to count. (Remember how the Java and Python LSTMs were able to generate proper indentation!) So I generated sequences of the form

aaaaaXbbbbb

(N "a" characters, followed by a delimiter X, followed by N "b" characters, where 1 <= N <= 10), and trained a single-layer LSTM with 10 hidden neurons.

As expected, the LSTM learns perfectly within its training range – and can even generalize a few steps beyond it. (Although it starts to fail once we try to get it to count to 19.)

aaaaaaaaaaaaaaaXbbbbbbbbbbbbbbb
aaaaaaaaaaaaaaaaXbbbbbbbbbbbbbbbb
aaaaaaaaaaaaaaaaaXbbbbbbbbbbbbbbbbb
aaaaaaaaaaaaaaaaaaXbbbbbbbbbbbbbbbbbb
aaaaaaaaaaaaaaaaaaaXbbbbbbbbbbbbbbbbbb # Here it begins to fail: the model is given 19 "a"s, but outputs only 18 "b"s.

We expect to find a hidden state neuron that counts the number of a's if we look at its internals. And we do:

Neuron #2 Hidden State

I built a small web app to play around with LSTMs, and Neuron #2 seems to be counting both the number of a's it's seen, as well as the number of b's. (Remember that cells are shaded according to the neuron's activation, from dark red [-1] to dark blue [+1].)

What about the cell state? It behaves similarly:

Neuron #2 Cell State

One interesting thing is that the working memory looks like a "sharpened" version of the long-term memory. Does this hold true in general?

It does. (This is exactly as we would expect, since the long-term memory gets squashed by the tanh activation function and the output gate limits what gets passed on.) For example, here is an overview of all 10 cell state nodes at once. We see plenty of light-colored cells, representing values close to 0.

Counting LSTM Cell States

In contrast, the 10 working memory neurons look much more focused. Neurons 1, 3, 5, and 7 are even zeroed out entirely over the first half of the sequence.

Counting LSTM Hidden States

Let's go back to Neuron #2. Here are the candidate memory and input gate. They're relatively constant over each half of the sequence – as if the neuron is calculating a += 1 or b += 1 at each step.

Counting LSTM Candidate Memory

Input Gate

Finally, here's an overview of all of Neuron 2's internals:

Neuron 2 Overview

If you want to investigate the different counting neurons yourself, you can play around with the visualizer here.

(Note: this is far from the only way an LSTM can learn to count, and I'm anthropomorphizing quite a bit here. But I think viewing the network's behavior is interesting and can help build better models – after all, many of the ideas in neural networks come from analogies to the human brain, and if we see unexpected behavior, we may be able to design more efficient learning mechanisms.)

Count von Count

Let's look at a slightly more complicated counter. This time, I generated sequences of the form

aaXaXaaYbbbbb

(N a's with X's randomly sprinkled in, followed by a delimiter Y, followed by N b's). The LSTM still has to count the number of a's, but this time needs to ignore the X's as well.

Here's the full LSTM. We expect to see a counting neuron, but one where the input gate is zero whenever it sees an X. And we do!

Counter 2 - Cell State

Above is the cell state of Neuron 20. It increases until it hits the delimiter Y, and then decreases to the end of the sequence – just like it's calculating a num_bs_left_to_print variable that increments on a's and decrements on b's.

If we look at its input gate, it is indeed ignoring the X's:

Counter 2 - Input Gate

Interestingly, though, the candidate memory fully activates on the irrelevant X's – which shows why the input gate is needed. (Although, if the input gate weren't part of the architecture, the network would presumably have learned to ignore the X's some other way, at least for this simple example.)

Counter 2 - Candidate Memory

Let's also look at Neuron 10.

Counter 2 - Neuron 10

This neuron is interesting as it only activates when reading the delimiter "Y" – and yet it still manages to encode the number of a's seen so far in the sequence. (It may be hard to tell from the picture, but when reading Y's belonging to sequences with the same number of a's, all the cell states have values either identical or within 0.1% of each other. You can see that Y's with fewer a's are lighter than those with more.) Perhaps some other neuron sees Neuron 10 slacking and helps a buddy out.

Remembering State

Next, I wanted to look at how LSTMs remember state. I generated sequences of the form

AxxxxxxYa
BxxxxxxYb

(i.e., an "A" or B", followed by 1-10 x's, then a delimiter "Y", ending with a lowercase version of the initial character). This way the network needs to remember whether it's in an "A" or "B" state.

We expect to find a neuron that fires when remembering that the sequence started with an "A", and another neuron that fires when remembering that it started with a "B". We do.

For example, here is an "A" neuron that activates when it reads an "A", and remembers until it needs to generate the final character. Notice that the input gate ignores all the "x" characters in between.

A Neuron - #8

Here is its "B" counterpart:

B Neuron - #17

One interesting point is that even though knowledge of the A vs. B state isn't needed until the network reads the "Y" delimiter, the hidden state fires throughout all the intermediate inputs anyways. This seems a bit "inefficient", but perhaps it's because the neurons are doing a bit of double-duty in counting the number of x's as well.

Copy Task

Finally, let's look at how an LSTM learns to copy information. (Recall that our Java LSTM was able to memorize and copy an Apache license.)

(Note: if you think about how LSTMs work, remembering lots of individual, detailed pieces of information isn't something they're very good at. For example, you may have noticed that one major flaw of the LSTM-generated code was that it often made use of undefined variables – the LSTMs couldn't remember which variables were in scope. This isn't surprising, since it's hard to use single cells to efficiently encode multi-valued information like characters, and LSTMs don't have a natural mechanism to chain adjacent memories to form words. Memory networks and neural Turing machines are two extensions to neural networks that help fix this, by augmenting with external memory components. So while copying isn't something LSTMs do very efficiently, it's fun to see how they try anyways.)

For this copy task, I trained a tiny 2-layer LSTM on sequences of the form

baaXbaa
abcXabc

(i.e., a 3-character subsequence composed of a's, b's, and c's, followed by a delimiter "X", followed by the same subsequence).

I wasn't sure what "copy neurons" would look like, so in order to find neurons that were memorizing parts of the initial subsequence, I looked at their hidden states when reading the delimiter X. Since the network needs to encode the initial subsequence, its states should exhibit different patterns depending on what they're learning.

The graph below, for example, plots Neuron 5's hidden state when reading the "X" delimiter. The neuron is clearly able to distinguish sequences beginning with a "c" from those that don't.

Neuron 5

For another example, here is Neuron 20's hidden state when reading the "X". It looks like it picks out sequences beginning with a "b".

Neuron 20 Hidden State

Interestingly, if we look at Neuron 20's cell state, it almost seems to capture the entire 3-character subsequence by itself (no small feat given its one-dimensionality!):

Neuron 20 Cell State

Here are Neuron 20's cell and hidden states, across the entire sequence. Notice that its hidden state is turned off over the entire initial subsequence (perhaps expected, since its memory only needs to be passively kept at that point).

Copy LSTM - Neuron 20 Hidden and Cell

However, if we look more closely, the neuron actually seems to be firing whenever the next character is a "b". So rather than being a "the sequence started with a b" neuron, it appears to be a "the next character is a b" neuron.

As far as I can tell, this pattern holds across the network – all the neurons seem to be predicting the next character, rather than memorizing characters at specific positions. For example, Neuron 5 seems to be a "next character is a c" predictor.

Copy LSTM - Neuron 5

I'm not sure if this is the default kind of behavior LSTMs learn when copying information, or what other copying mechanisms are available as well.

States and Gates

To really home in on and understand the purpose of the different states and gates in an LSTM, let's repeat the previous section with a small pivot.

Cell State and Hidden State (Memories)

We originally described the cell state as a long-term memory, and the hidden state as a way to pull out and focus these memories when needed.

So when a memory is currently irrelevant, we expect the hidden state to turn off – and that's exactly what happens for this sequence copying neuron.

Copy Machine

Forget Gate

The forget gate discards information from the cell state (0 means to completely forget, 1 means to completely remember), so we expect it to fully activate when it needs to remember something exactly, and to turn off when information is never going to be needed again.

That's what we see with this "A" memorizing neuron: the forget gate fires hard to remember that it's in an "A" state while it passes through the x's, and turns off once it's ready to generate the final "a".

Forget Gate

Input Gate (Save Gate)

We described the job of the input gate (what I originally called the save gate) as deciding whether or not to save information from a new input. Thus, it should turn off when it reads useless information.

And that's what this selective counting neuron does: it counts the a's and b's, but ignores the irrelevant x's.

Input Gate

What's amazing is that nowhere in our LSTM equations did we specify that this is how the input (save), forget (remember), and output (focus) gates should work. The network just learned what's best.
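For reference, here are the standard LSTM update equations in common textbook notation (the symbols may differ slightly from the post's earlier exposition). Each gate is just a sigmoid of a learned linear function; nothing in the math hard-codes the save/remember/focus roles the network discovers:

$$
\begin{aligned}
f_t &= \sigma(W_f x_t + U_f h_{t-1} + b_f) && \text{remember (forget) gate} \\
i_t &= \sigma(W_i x_t + U_i h_{t-1} + b_i) && \text{save (input) gate} \\
o_t &= \sigma(W_o x_t + U_o h_{t-1} + b_o) && \text{focus (output) gate} \\
\tilde{c}_t &= \tanh(W_c x_t + U_c h_{t-1} + b_c) && \text{candidate memory} \\
c_t &= f_t \odot c_{t-1} + i_t \odot \tilde{c}_t && \text{cell state (long-term memory)} \\
h_t &= o_t \odot \tanh(c_t) && \text{hidden state (working memory)}
\end{aligned}
$$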

Extensions

Now let's recap how you could have discovered LSTMs by yourself.

First, many of the problems we'd like to solve are sequential or temporal in nature, so we should incorporate past learnings into our models. But we already know that the hidden layers of neural networks encode useful information, so why not use these hidden layers as the memories we pass from one time step to the next? And so we get RNNs.

But we know from our own behavior that we don't keep track of knowledge willy-nilly; when we read a new article about politics, we don't immediately believe whatever it tells us and incorporate it into our beliefs of the world. We selectively decide what information to save, what information to discard, and what pieces of information to use to make decisions the next time we read the news. Thus, we want to learn how to gather, update, and apply information – and why not learn these things through their own mini neural networks? And so we get LSTMs.
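To make those mini neural networks concrete, here is a single LSTM step written out in NumPy. It's an illustrative sketch; the weight names and the init helper are my own, not code from the post:

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def init_weights(n_in, n_hidden, rng=np.random.default_rng(0)):
        # One weight pair and bias per gate: f(orget), i(nput), o(utput),
        # plus c for the candidate memory.
        W = {}
        for gate in "fioc":
            W["W" + gate] = rng.normal(0, 0.1, (n_hidden, n_in))
            W["U" + gate] = rng.normal(0, 0.1, (n_hidden, n_hidden))
            W["b" + gate] = np.zeros(n_hidden)
        return W

    def lstm_step(x, h_prev, c_prev, W):
        f = sigmoid(W["Wf"] @ x + W["Uf"] @ h_prev + W["bf"])       # remember gate
        i = sigmoid(W["Wi"] @ x + W["Ui"] @ h_prev + W["bi"])       # save gate
        o = sigmoid(W["Wo"] @ x + W["Uo"] @ h_prev + W["bo"])       # focus gate
        c_cand = np.tanh(W["Wc"] @ x + W["Uc"] @ h_prev + W["bc"])  # candidate memory
        c = f * c_prev + i * c_cand   # long-term memory (cell state)
        h = o * np.tanh(c)            # working memory (hidden state)
        return h, c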

And now that we've gone through this process, we can come up with our own modifications.

  • For example, maybe you think it's silly for LSTMs to distinguish between long-term and working memories – why not have just one? Or maybe you find separate remember gates and save gates kind of redundant – anything we forget should be replaced by new information, and vice-versa. And now you've come up with one popular LSTM variant, the GRU (sketched just after this list).
  • Or maybe you think that when deciding what information to remember, save, and focus on, we shouldn't rely on our working memory alone – why not use our long-term memory as well? And now you've discovered Peephole LSTMs.
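Here's the GRU as a single step, in the same NumPy style as before (again illustrative; notice how the update gate z plays both the remember and save roles, and there's no separate cell state):

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def gru_step(x, h_prev, W):
        z = sigmoid(W["Wz"] @ x + W["Uz"] @ h_prev + W["bz"])   # update gate
        r = sigmoid(W["Wr"] @ x + W["Ur"] @ h_prev + W["br"])   # reset gate
        h_cand = np.tanh(W["Wh"] @ x + W["Uh"] @ (r * h_prev) + W["bh"])
        return (1 - z) * h_prev + z * h_cand   # single merged memory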

Making Neural Nets Great Again

Let's look at one final example, using a 2-layer LSTM trained on Trump's tweets. Despite the tiny big dataset, it's enough to learn a lot of patterns.
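(The post doesn't show the training code, but a character-level, 2-layer LSTM language model is only a few lines in Keras. A minimal sketch, where the vocabulary and layer sizes are my guesses rather than the ones actually used:)

    import tensorflow as tf

    vocab_size = 96   # assumed character set size

    model = tf.keras.Sequential([
        tf.keras.layers.Embedding(vocab_size, 64),
        tf.keras.layers.LSTM(128, return_sequences=True),
        tf.keras.layers.LSTM(128, return_sequences=True),
        tf.keras.layers.Dense(vocab_size),  # logits over the next character
    ])
    model.compile(
        optimizer="adam",
        loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    )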

For example, here's a neuron that tracks its position within hashtags, URLs, and @mentions:

Hashtags, URLs, @mentions

Here's a proper noun detector (note that it's not simply firing at capitalized words):

Proper Nouns

Here's an auxiliary verb + "to be" detector ("will be", "I've always been", "has never been"):

Modal Verbs

Here's a quote attributor:

Quotes

There's even a MAGA and capitalization neuron:

MAGA

And here are some of the proclamations the LSTM generates (okay, one of these is a real tweet):

Tweets Tweet

Unfortunately, the LSTM merely learned to ramble like a madman.

Recap

That's it. To summarize, here's what you've learned:

Candidate Memory

Here's what you should save:

Save

And now it's time for that donut.

Thanks to Chen Liang for some of the TensorFlow code I used, Ben Hamner and Kaggle for the Trump dataset, and, of course, Schmidhuber and Hochreiter for their original paper. If you want to explore the LSTMs yourself, feel free to play around!


ShadowPlay: Using our hands to have some fun with AI

Editor’s note: TensorFlow, our open source machine learning platform, is just that—open to anyone. Companies, nonprofits, researchers and developers have used TensorFlow in some pretty cool ways and at Google, we're always looking to do the same. Here's one of those stories.

Chinese shadow puppetry—which uses silhouette figures and music to tell a story—is an ancient Chinese art form that’s been used by generations to charm communities and pass along cultural history. At Google, we’re always experimenting with how we can connect culture with AI and make it fun, which got us thinking: can AI help put on a shadow puppet show?

So we created ShadowPlay, an interactive installation that celebrates the shadow puppetry art form. The installation, built using TensorFlow and TPUs, uses AI to recognize a person’s hand gestures and then magically transform the shadow figure into digital animations representing the 12 animals of the Chinese zodiac, all in an interactive show.

Shadowplay.gif

Attendees use their hands to make shadow figures, which transform into animated characters.

We debuted ShadowPlay at the World AI Conference and Google Developers Day in Shanghai in September. To build the experience, we developed a custom machine learning model, trained on a dataset of many examples of people’s hand shadows, that learns to recognize a shadow and match it to the corresponding animal. “In order to bring this project to life, we asked Googlers to help us train the model by making a lot of fun hand gestures. Once we saw the reaction of users seeing their hand shadows morph into characters, it was impossible not to smile!” says Miguel de Andres-Clavera, Project Lead at Google. To make sure the experience could guess what animal people were making with high accuracy, we trained the model using TPUs, our custom machine learning hardware accelerators.
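(Google hasn’t published ShadowPlay’s actual model, but for a sense of scale, a hand-shadow classifier over the 12 zodiac classes could be as simple as this hypothetical Keras sketch; the input size and layer widths are pure guesses:)

    import tensorflow as tf

    model = tf.keras.Sequential([
        tf.keras.layers.Conv2D(32, 3, activation="relu", input_shape=(96, 96, 1)),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(64, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(12, activation="softmax"),  # 12 zodiac animals
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")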

We had so much fun building ShadowPlay (almost as much fun as practicing our shadow puppets…) that we’ll be bringing it to more events around the world soon!



