phasespace added to PyPI
A TensorFlow implementation of the Raubold and Lynch method for generating n-body events.
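
For flavor, here is a minimal usage sketch (mine, not from the release note) of generating weighted two-body decay events with the package; the `nbody_decay` helper and its return shape are assumptions taken from the project's documentation and may differ between versions:

```python
# Hypothetical usage sketch of the phasespace package; API assumed from its docs.
import phasespace

B0_MASS = 5279.65   # MeV
PION_MASS = 139.57
KAON_MASS = 493.68

# Build a B0 -> pi+ K- decay and sample events via the Raubold-Lynch method.
decay = phasespace.nbody_decay(B0_MASS, [PION_MASS, KAON_MASS])
weights, particles = decay.generate(n_events=1000)

# `particles` maps particle names to (n_events, 4) tensors of four-momenta.
print(weights.shape, {name: p.shape for name, p in particles.items()})
```
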
ML Solutions Architect - Alexa AI - Amazon Technologies, Inc. - Seattle, WA
Frameworks and Infrastructure with tools like Apache MXNet and TensorFlow, API-driven Services like Amazon Lex, Amazon Polly and Amazon Rekognition to quickly...
From Amazon.com - Tue, 08 Jan 2019 21:02:35 GMT - View all Seattle, WA jobs
[A Slightly Technical Advent Calendar] A quick look at Chainer Playground

This article is day 7 of the "Slightly Technical" Advent Calendar.
I say day 7, but I was late writing it and the date has already rolled over; please be lenient about that...

Lately, the IT industry is abuzz with machine-learning topics such as deep learning and AI.
The university lab I belong to is also active in machine learning, and some members publish papers in the field (my own research is in a different area, though...).
Along with this boom, frameworks for doing machine learning keep appearing.
Amid that trend, one piece of news caught my eye, so I'll pick it up today:
Free release of "Chainer Playground", which lets you learn deep learning in a web browser

So what is Chainer in the first place?

It is one of the machine-learning frameworks I mentioned above, created by the Japanese company Preferred Networks (known as PFN).
Chainer's distinguishing feature is described like this:

In contrast, Chainer adopts a “Define-by-Run” scheme, i.e., the network is defined on-the-fly via the actual forward computation. More precisely, Chainer stores the history of computation instead of programming logic. This strategy enables to fully leverage the power of programming logic in Python. For example, Chainer does not need any magic to introduce conditionals and loops into the network definitions. The Define-by-Run scheme is the core concept of Chainer. We will show in this tutorial how to define networks dynamically.

Quoted from http://docs.chainer.org/en/stable/tutorial/basic.html

Well, a technical explanation in English doesn't tell me much (if you can read it, please do).
Judging from blog posts and SlideShare decks by people at PFN, the gist is this: other frameworks use a "Define-and-Run" style, in which "building the network structure used for deep learning" and "actually pushing data through the built structure" must be written as separate implementations, whereas Chainer's "Define-by-Run" style lets you write construction and execution together, so it is comparatively easy to express a neural network's structure. (Apologies if I have this wrong.)
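
To make the Define-by-Run idea concrete, here is a small sketch of my own (not from PFN's materials): the forward pass is ordinary Python, so plain control flow such as an `if` simply becomes part of that run's computation.

```python
import numpy as np
import chainer
import chainer.functions as F
import chainer.links as L

class TinyNet(chainer.Chain):
    def __init__(self):
        super().__init__()
        with self.init_scope():
            self.l1 = L.Linear(None, 100)  # input size inferred on first call
            self.l2 = L.Linear(100, 10)

    def __call__(self, x):
        h = F.relu(self.l1(x))
        # Define-by-Run: plain Python control flow participates in this
        # particular forward computation (no separate graph-definition step).
        if chainer.config.train:
            h = F.dropout(h, ratio=0.5)
        return self.l2(h)

model = TinyNet()
y = model(np.zeros((1, 784), dtype=np.float32))  # the graph is recorded here, on the fly
```
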
Incidentally, other well-known libraries include Google's TensorFlow and Caffe, developed by a research team at UC Berkeley.

What is Chainer Playground?

Chainer Playground lets you learn the basics of deep learning and Chainer through actual programming. All you need is a web browser; there is no need to install Python or Chainer. Just open Chainer Playground in your browser and you can start learning right away.

Quoted from "Free release of 'Chainer Playground', which lets you learn deep learning in a web browser"

This quote was in Japanese, thankfully. The description above mostly says it all: it is a site where you can easily learn the basics of machine learning with Chainer in a web browser. You can apparently get a hands-on feel for machine learning without first having to set up Ubuntu and build an environment.
Note that it is still in beta, so the content is not yet complete.

Trying it out a bit

The screen looks like this: [screenshot]
Explanations appear on the left, and an editor plus execution results on the right, so you can run code while reading the text.

I gave it a try right away.
It starts from the question of what a "model to be learned" even means in machine learning... and mathematical material comes at you fast. You have to read quite seriously, and it takes a fair amount of time to understand. No wonder data scientists who can handle material this hard are in such demand...
By the way, if you are like me, weak on math but with plenty of programming experience, the word "dimension" will probably throw you for a moment (it threw me). Phrases like "100-dimensional" appear everywhere, but they mean a 100-dimensional "vector", not a 100-dimensional "array": a mathematical vector, like (x, y, z). Mix those up and the code will make no sense, so watch out.
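
A quick aside of my own in programmer terms: in NumPy, a "100-dimensional vector" is a single axis with 100 components, whereas `ndim` counts axes, which is the "100-dimensional array" reading to avoid.

```python
import numpy as np

v = np.zeros(100)        # a 100-dimensional vector: one axis, 100 components
print(v.shape, v.ndim)   # -> (100,) 1

a = np.zeros((2, 2, 2))  # ndim counts axes: this is a 3-dimensional array
print(a.shape, a.ndim)   # -> (2, 2, 2) 3
```
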
After that, you apparently learn the basics of machine learning by defining an objective function and optimizing it, but I am still studying that part myself...

If you are even a little curious about machine learning, do give it a try!


TensorFlow: Technical Development and Production Practice

This talk covers four topics: first, a brief introduction to deep learning; second, an introduction to TensorFlow; third, the opportunities and challenges for deep learning in fashion design; and finally, an in-depth look at putting deep learning into production.


1. Introduction to Deep Learning

1.1 The development of deep learning

AlphaGo was the first artificial-intelligence program to defeat a professional human Go player and the first to beat a Go world champion; with AlphaGo, AI formally entered the public eye. On October 18, DeepMind, the team behind AlphaGo, unveiled AlphaGo Zero, which trained from scratch for 40 days and then beat the reigning world-number-one AlphaGo 100:0. Through self-play alone, and without using any human game records as training data, it crushed the previous champion program. Whether AlphaGo or AlphaGo Zero, the underlying principle is deep learning.


1.2 The motivation for deep learning

Conventional programs complete tasks by following a fixed procedure, but for some tasks, such as playing Gomoku, autonomous driving, or face recognition, the procedure is very hard to spell out. This is where artificial intelligence comes in: it handles work whose procedure cannot be fixed in advance.

Read the original article >

TensorFlow.js 1.0

TensorFlow.js 1.0 has been released: the first major release of the JavaScript library for running machine learning models in the web browser.

Read: TensorFlow.js 1.0


Machine Learning Model
Need to build intelligent models that can predict form fields in editable PDF forms; more details will be provided. TensorFlow or any other ML technique can be used. (Budget: $2 - $8 USD, Jobs: Artificial Intelligence, Machine Learning, Python, Tensorflow)
Machine learning Project
Need to build intelligent models that can predict form fields in editable PDF forms; more details will be provided. TensorFlow or any other ML technique can be used. (Budget: $2 - $8 USD, Jobs: Algorithm, Machine Learning, Python, Software Architecture)
Book Memo: "Generative Adversarial Networks Projects"
Build next-generation generative models using TensorFlow and Keras. Book description: Generative Adversarial Networks (GANs) have the potential to build next-generation models, …

Continue reading


Data Science: Deep Learning in Python (Updated)
.MP4 | Video: 1280x720, 30 fps | Audio: AAC, 48000 Hz, 2ch | 1.43 GB
Duration: 9.5 hours | Genre: eLearning Video | Language: English

The MOST in-depth look at neural network theory, and how to code one with pure Python and TensorFlow.

Learn how Deep Learning REALLY works (not just some diagrams and magical black box code)
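
In that spirit, a bare-bones illustration (my own, not taken from the course) of the kind of "pure Python" computation such material builds up to: a single dense layer with a sigmoid, written with nothing but NumPy.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
W = rng.normal(size=(3, 1))      # weights of one dense layer: 3 inputs -> 1 output
b = np.zeros(1)                  # bias

X = rng.normal(size=(5, 3))      # 5 samples, 3 features
y_hat = sigmoid(X @ W + b)       # forward pass
print(y_hat.ravel())
```
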


Data scientist - TMC - People Drive Technology - Longueuil, QC
Python (Sci-kit Learn, numpy, pandas, Tensorflow, Keras), Matlab, SQL; | JOIN tmc North america:....
From Indeed - Wed, 06 Mar 2019 20:45:02 GMT - View all Longueuil, QC jobs
Machine Learning/AI Engineer - Groom & Associes - Montréal, QC
Experience with tensorflow or other backends, keras or other frameworks, scikit-learn, OpenCV, Pandas. An international company is looking for Machine Learning...
From Groom & Associes - Fri, 08 Mar 2019 21:29:13 GMT - View all Montréal, QC jobs
Senior Data Scientist - Visualization and Optimization - TMC - People Drive Technology - Montréal, QC
Python (Sci-kit Learn, numpy, pandas, Tensorflow, Keras), R, Matlab, SQL; | JOIN tmc North america:....
From Indeed - Wed, 20 Feb 2019 18:51:12 GMT - View all Montréal, QC jobs
Data Scientist / Scientifique des Données - Terminal - Montréal, QC
Development experience using numpy, pandas, scikit-learn, tensorflow...
From Terminal - Sat, 10 Nov 2018 01:01:14 GMT - View all Montréal, QC jobs
tf-estimator-nightly 1.14.0.dev2019031301
TensorFlow Estimator.
tf-nightly 1.14.1.dev20190313
TensorFlow is an open source machine learning framework for everyone.
tf-nightly-2.0-preview 2.0.0.dev20190313
TensorFlow is an open source machine learning framework for everyone.
TensorFlow 2.0 Alpha released: eager execution becomes the default

TensorFlow 2.0 is meant to be easier to use, and to that end eager execution becomes the default. The new version of the machine-learning framework cleans up the APIs and puts Keras at the center of ML model development.
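
As a quick illustration of what default eager execution means in practice (my example, not from the article): operations run immediately and return concrete values, with no session or graph-building step required in a TF 2.x install.

```python
import tensorflow as tf   # assumes TensorFlow 2.x, where eager mode is the default

print(tf.executing_eagerly())     # True: no tf.Session needed

x = tf.constant([[1.0, 2.0], [3.0, 4.0]])
y = tf.matmul(x, x)               # executes immediately
print(y.numpy())                  # concrete result, usable like a NumPy array
```
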

The post "TensorFlow 2.0 Alpha released: eager execution becomes the default" first appeared on entwickler.de.


Best Practices for TensorFlow* On Intel® Xeon® Processor-based HPC Infrastructures

AI is bringing new ways to use massive amounts of data to solve problems in business and industry—and in high performance computing (HPC). As AI applications increasingly take on day-to-day use cases, HPC practitioners—like their commercial counterparts—are looking to move deep learning training off specialized laboratory hardware and software onto the familiar Intel®-based infrastructure already in [...]
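
One widely documented knob from such guides is sizing TensorFlow's thread pools to the processor topology. A minimal sketch for the TF 1.x API (the thread counts are placeholders to tune per machine, not recommendations from the post):

```python
import tensorflow as tf  # TF 1.x-style API

# Typical CPU tuning pattern: match the intra-op pool to the physical cores
# and keep a small inter-op pool. The values below are placeholders.
config = tf.ConfigProto(
    intra_op_parallelism_threads=28,   # e.g. physical cores per socket
    inter_op_parallelism_threads=2,
)

with tf.Session(config=config) as sess:
    a = tf.random_normal((1024, 1024))
    b = tf.random_normal((1024, 1024))
    print(sess.run(tf.reduce_sum(tf.matmul(a, b))))
```
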

Read More...

The post Best Practices for TensorFlow* On Intel® Xeon® Processor-based HPC Infrastructures appeared first on Blogs@Intel.


(USA-MD-Laurel) Senior Software Engineer / Computer Scientist
## Position Description

**Are you looking for an opportunity that will keep you engaged, challenged, and growing year after year?** **Are you searching for meaningful work that prioritizes creative problem-solving over profits?** **Do you have a strong software engineering and mathematics background?**

If so, we're looking for someone like you to join our team at APL. The Tactical Intelligence Systems Group of the Asymmetric Operations Sector is seeking experienced engineers, scientists, and developers driven by curiosity, motivated to deliver solutions, and who have a real passion for learning! We are looking for developers to create powerful, cutting-edge solutions for challenges in immersive user interfaces, run-time simulation, machine learning, and artificial intelligence. This may involve:

* Surveying academic research and industry tools to solve problems related to game engine rendering, graphics optimization, and custom shaders
* Crafting simulations to generate datasets for machine learning algorithms
* Performing full-stack architecture and API design for integrating diverse systems
* Building and implementing artificial intelligence algorithms to drive characters in a variety of simulations
* Developing immersive user experiences in augmented and virtual reality
* Developing software frameworks to manage and analyze agent behavior
* Collaborating with Laboratory, for-profit contractor, and sponsor teams to address critical sponsor needs
* Effectively communicating results with non-expert audiences, and generating creative ideas to benefit the country
* Some limited travel (up to 10%) to customer sites, and occasional weekend and other after-hours work required to handle and/or complete critical project/work-related business needs

**As a Senior Software Engineer / Computer Scientist, you will...**

* Primarily be responsible for applying knowledge in game design, machine learning, full-stack design, and software development to data analysis problems for our sponsors.
* Work independently and on teams to engineer software solutions.
* Explore promising research, maintain and gain the technical edge required for projects, and share and develop new approaches and methods.
* Collaborate to document and support software analytics, and clearly present status and results to internal and external partners.

## Qualifications

**You meet our minimum qualifications for the job if you...**

* Possess a BS degree in Computer Science, Mathematics, or a related technical track.
* Have 5+ years of programming experience and a strong math background.
* Are willing and able to deliver operational solutions within business constraints.
* Have demonstrated experience in at least three of the following areas: software development, development using a 3D game engine, back-end development, machine learning, natural language processing and translation, knowledge representation and reasoning with evidence, synthetic data generation.
* Hold an active Secret or Top Secret security clearance, or are a U.S. citizen with the ability to obtain a Department of Defense security clearance. If selected, you will be subject to a government security clearance investigation and must meet the requirements for access to classified information. Eligibility requirements include U.S. citizenship.

**You'll go above and beyond our minimum requirements if you...**

* Have a Master's or PhD degree in Computer Science or a related field and 10+ years of relevant experience.
* Possess 2+ years of experience performing development using a graphics engine such as Unity3D, Unreal Engine, Blender, or Maya.
* Have 2+ years of experience applying machine learning to data science or artificial intelligence.
* Are experienced in developing Augmented / Virtual Reality solutions.
* Are experienced with team-based development of software products and are able to lead development and research projects.
* Have experience with machine learning libraries such as Tensorflow, Keras, Caffe, MXNet, CNTK, and scikit-learn, and are familiar with modern databases and parallel computation.
* Possess a current DoD security clearance.

**Why work at APL?**

The Johns Hopkins University Applied Physics Laboratory (APL) brings world-class expertise to our nation's most critical defense, security, space and science challenges. With a wide selection of challenging, impactful work and a robust education assistance program, APL promotes a culture of life-long learning. Our employees enjoy generous benefits and healthy work/life balance. APL's campus is located in the Baltimore-Washington metro area. Learn more about our career opportunities at www.jhuapl.edu/careers.

APL is an Equal Opportunity/Affirmative Action employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, gender identity, sexual orientation, national origin, disability status, veteran status, or any other characteristic protected by applicable law.

*Primary Location:* United States-Maryland-Laurel
*Schedule:* Full-time
*Req ID:* 18517
Technical Deep Dive | FreeFlow: Software-Based Virtual RDMA Networking for Container Clouds

This article was compiled by 星云 Cluster and published on InfoQ with authorization. Original link: https://mp.weixin.qq.com/s/jHSaBHJdikJCZZiMjM4m5Q

Abstract

Many popular large-scale cloud applications are increasingly adopting containerization for efficient resource use and lightweight isolation. At the same time, many data-intensive applications (e.g., data analytics and deep learning frameworks) are adopting, or hoping to adopt, RDMA to improve network performance. Industry trends show these two scenarios will inevitably collide. In this paper we present FreeFlow, a software-based RDMA virtualization framework designed for container clouds. FreeFlow implements virtual RDMA networks purely in software on commodity RDMA NICs. Unlike existing RDMA virtualization solutions, FreeFlow fully satisfies the requirements of cloud environments, such as isolation for multi-tenancy, portability for container migration, and controllability of control- and data-plane policies. FreeFlow is also transparent to applications and delivers near bare-metal RDMA performance with low CPU overhead. In our evaluations with TensorFlow and Spark, FreeFlow provides almost the same application performance as bare-metal RDMA.

1. Introduction

Developers of large cloud applications constantly pursue high performance, low operating cost, and high resource utilization, which has driven growing adoption of containerization and Remote Direct Memory Access (RDMA) networking.

Containers [7, 11, 6] offer lightweight isolation and portability, lowering the complexity (and hence the cost) of deploying and managing cloud applications. Containers are now the de facto way to manage and deploy large cloud applications.

Because RDMA networks deliver higher throughput, lower latency, and lower CPU utilization than standard TCP/IP-based networks, many data-intensive applications (e.g., deep learning and data analytics frameworks) adopt RDMA [24, 5, 18, 17].

Read the original article >

The TensorFlow ecosystem for beginner and expert Machine Learning programmers: courses, languages, and Edge Computing

TensorFlow is Google's key bet for building the Machine Learning ecosystem of the future, one that can run in the cloud, in applications, or on hardware devices of all kinds.

Fittingly, the efforts at its latest TensorFlow Dev Summit 2019 were focused on making the framework easier and simpler to use, adding more APIs for both beginner and expert programmers. This way, everyone can take advantage of the new improvements to build learning models more easily for a greater number of use cases and deploy them on any device.

They have pushed the deployment of algorithms locally on hardware devices with the final release of TensorFlow Lite 1.0, with no need to fall back on the cloud or another centralized system for processing. A clear example that Edge Computing is part of Google's key strategy to give any device, whether IoT or mobile, all the advantages of machine learning.

In the three years since its launch, TensorFlow has laid the foundations of an end-to-end Machine Learning ecosystem, helping to power the Deep Learning revolution. More and more developers use its algorithms to bring new features to users or to speed up tasks that used to be tedious, such as image classification, document capture and recognition, or the speech recognition and natural-language synthesis in virtual assistants (Google Assistant or Alexa).

It is no surprise that TensorFlow is the project with the most contributions on GitHub year after year, with more than 1,800 contributors, over 41 million downloads in its three-year history, and dozens of usage examples across different platforms.


The road to TensorFlow 2.0

TensorFlow has laid the foundations of an end-to-end Machine Learning ecosystem, helping to power the Deep Learning revolution

TensorFlow 2.0 Alpha has set itself the goal of simplifying use, broadening its possibilities to be a more open ML platform that can be used by researchers who want to run experiments and studies, by developers ready to automate all kinds of tasks, and by companies that want to smooth their users' experience through artificial intelligence.

One of the pillars of TensorFlow 2.0 is tighter integration with Keras as the high-level API for building and training Deep Learning models (see the sketch after this list). Keras has several advantages:

  • User-focused. Keras has a simpler, more consistent interface, tailored to the most common use cases, and it gives clearer feedback for understanding implementation errors.
  • More modular and reusable. Keras models can compose more complex structures out of layers and optimizers, with no need for a model-specific training setup.
  • Designed for beginners and experts alike. Its core idea draws on the backgrounds of the many kinds of programmers who have been getting involved in Deep Learning from the start. Keras provides a much clearer API that does not require years of expert experience.
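
A minimal sketch of that Keras-centered workflow (my own illustration; the layer sizes and dataset are arbitrary): define, compile, and train a model entirely through tf.keras.

```python
import tensorflow as tf

(x_train, y_train), _ = tf.keras.datasets.mnist.load_data()
x_train = x_train.reshape(-1, 784).astype("float32") / 255.0

model = tf.keras.Sequential([
    tf.keras.layers.Dense(128, activation="relu", input_shape=(784,)),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=1, batch_size=32)
```
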

A broad collection of public datasets, ready to be used with TensorFlow, has also been added. As any developer who has ventured into Machine Learning knows, data is the main ingredient for building models and training the algorithms we will later use, and having that huge amount of data readily available helps a great deal.
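
That catalog ships as the tensorflow_datasets package; a short sketch of pulling one of the public datasets (assuming the package is installed alongside TF 2.x):

```python
import tensorflow_datasets as tfds

# Load a ready-made public dataset as a tf.data pipeline.
ds = tfds.load("mnist", split="train", as_supervised=True)
for image, label in ds.take(1):
    print(image.shape, label.numpy())
```
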

For migrating from TensorFlow 1.x to 2.0, several tools have been created to convert and migrate models, since updates were needed to make them more efficient and deployable on more platforms.

The ecosystem keeps growing, with numerous libraries for creating a more secure and private working environment. One example is the launch of the TensorFlow Federated library, which aims to decentralize the training process, sharing it among multiple participants who can run it locally and send in their results without necessarily exposing the captured data, sharing only the learning obtained for building the algorithms. A clear example of this is the on-device learning of virtual keyboards, such as Google's Gboard, which exposes no sensitive data because it learns locally on the device itself.

[Image: federated learning in Google's Gboard]

In the same vein, balancing Machine Learning and privacy is a complex task, which is why the TensorFlow Privacy library has been released: it lets you define different scenarios and degrees of protection to safeguard the most sensitive data and to anonymize the training information of models.

Python is not alone in TensorFlow: more languages such as Swift or JavaScript join the platform

Python remains a cornerstone of the Machine Learning ecosystem and has itself received a major boost as one of the field's main languages

Obviously, Python remains a cornerstone of the Machine Learning ecosystem and has received a major boost as one of its main languages, with dozens of libraries among the most widely used, quite apart from its great maturity; and not only in TensorFlow but also in other platforms such as PyTorch.

But the TensorFlow ecosystem has opened its doors, adding libraries such as TensorFlow.js, which finally reaches version 1.0 with more than 300,000 downloads and 100 contributors. It lets you run ML projects in the browser or on the backend with Node.js, both with pre-trained models and by building your own training runs.

Companies such as Uber and Airbnb are already using it in production environments, and there is a wide gallery of examples and use cases combining JavaScript with TensorFlow.

[Image: Swift for TensorFlow]

Another big piece of news is the progress of the TensorFlow implementation in Swift, now at version 0.2. It brings a new general-purpose language, Swift, into the ML paradigm, with all the functionality developers need to access every TensorFlow operator, all built on the foundations of Jupyter and LLDB.

Running our models locally with TensorFlow Lite: the bet on Edge Computing

Edge computing means running models and making inferences directly on the device, without depending on sending the data to the cloud to be analyzed.

The goal of TensorFlow Lite is to give Edge Computing a definitive push on the millions of devices that can already run TensorFlow. It is the solution for running models and making inferences directly, without depending on sending the data to the cloud to be analyzed.

It was first shown as a preview at the Google I/O developer conference in May 2017. This week it reached its final version, TensorFlow Lite 1.0, which will help enable use cases such as predictive text generation, image classification, object detection, audio recognition, and speech synthesis, among the many other scenarios that can be implemented.

This brings a considerable performance improvement, both from the conversion to the TensorFlow Lite model format that the tooling performs and from the speedup of running on each device's GPU, including on Android, for example.
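
The conversion step mentioned above looks roughly like this with the TF 2.x converter API (a sketch; the tiny model stands in for any trained tf.keras model):

```python
import tensorflow as tf

# Convert a trained tf.keras model into the TensorFlow Lite flat-buffer format.
model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()

with open("model.tflite", "wb") as f:
    f.write(tflite_model)  # ready to ship to a device and run with the TF Lite interpreter
```
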

With this, TensorFlow Mobile starts to be deprecated, unless you really want to run training directly on the device itself. The team has confirmed that on-device training is on this Lite version's roadmap, and revealed some interesting capabilities, such as accelerated learning with weight assignment to improve inference and fold that learning back into later runs.

Rounding out the news, Google Coral was presented: a hardware board for deploying models using TensorFlow Lite and the full power of Google's Edge TPU.

[Image: Google Coral board]

Learning TensorFlow and Machine Learning keeps getting easier with these courses

In 2016, Udacity launched the first TensorFlow course in collaboration with Google. Since then, more than 400,000 students have enrolled. Alongside the launch of TensorFlow 2.0 Alpha, the course has been completely revamped to make it accessible to any developer, with no deep mathematical background required. As they put it: "If you can code, you can build AI applications with TensorFlow."

[Image: Udacity deep learning courses on TensorFlow]

The Udacity course is guided by Google's own development team. As of today the first two months of the syllabus are available, with more content to be added over the coming weeks. In the first part you learn the fundamental concepts behind machine learning and how to build your first neural network using TensorFlow, with numerous exercises and codelabs written by the TensorFlow team itself.

New material has also been added on deeplearning.ai, with an introductory course on AI, ML, and DL, part of the "TensorFlow: from Basics to Mastery" career path on Coursera. Its instructors include Andrew Ng, one of the most important champions of Machine Learning since its beginnings.

And Fast.ai, another platform focused on AI training, has added two courses: one on using TensorFlow Lite for mobile developers, and another on using Swift with TensorFlow.

In short, we have plenty of opportunities to start learning more about the Machine Learning revolution with TensorFlow, one of the most complete end-to-end platforms for the purpose.

Originally published by Txema Rodríguez.


ML conference: early-bird discount still available for Minds Mastering Machines
Alongside fundamentals, the developer conference offers talks on TensorFlow 2 and the ONNX format, plus field reports ranging from aircraft construction to logistics.
Introducing Android Q Beta

Posted by Dave Burke, VP of Engineering

In 2019, mobile innovation is stronger than ever, with new technologies from 5G to edge-to-edge displays and even foldable screens. Android is right at the center of this innovation cycle, and thanks to the broad ecosystem of partners across billions of devices, Android is helping push the boundaries of hardware and software, bringing new experiences and capabilities to users.

As the mobile ecosystem evolves, Android is focused on helping users take advantage of the latest innovations, while making sure users' security and privacy are always a top priority. Building on top of efforts like Google Play Protect and runtime permissions, Android Q brings a number of additional privacy and security features for users, as well as enhancements for foldables, new APIs for connectivity, new media codecs and camera capabilities, NNAPI extensions, Vulkan 1.1 support, faster app startup, and more.

Today we're releasing Beta 1 of Android Q for early adopters and a preview SDK for developers. You can get started with Beta 1 today by enrolling any Pixel device (including the original Pixel and Pixel XL, which we've extended support for by popular demand!). Please let us know what you think! Read on for a taste of what's in Android Q, and we'll see you at Google I/O in May when we'll have even more to share.

Building on top of privacy protections in Android

Android was designed with security and privacy at the center. As Android has matured, we've added a wide range of features to protect users, like file-based encryption, OS controls requiring apps to request permission before accessing sensitive resources, locking down camera/mic background access, lockdown mode, encrypted backups, Google Play Protect (which scans over 50 billion apps a day to identify potentially harmful apps and remove them), and much more. In Android Q, we've made even more enhancements to protect our users. Many of these enhancements are part of our work in Project Strobe.

Giving users more control over location

With Android Q, the OS helps users have more control over when apps can get location. As in prior versions of the OS, apps can only get location once the app has asked you for permission, and you have granted it.

One thing that's particularly sensitive is apps' access to location while the app is not in use (in the background). Android Q enables users to give apps permission to see their location never, only when the app is in use (running), or all the time (when in the background).

For example, an app asking for a user's location for food delivery makes sense and the user may want to grant it the ability to do that. But since the app may not need location outside of when it's currently in use, the user may not want to grant that access. Android Q now offers this greater level of control. Read the developer guide for details on how to adapt your app for this new control. Look for more user-centric improvements to come in upcoming Betas. At the same time, our goal is to be very sensitive to always give developers as much notice and support as possible with these changes.

More privacy protections in Android Q

Beyond changes to location, we're making further updates to ensure transparency, give users control, and secure personal data.

In Android Q, the OS gives users even more control over apps, controlling access to shared files. Users will be able to control apps' access to the Photos and Videos or the Audio collections via new runtime permissions. For Downloads, apps must use the system file picker, which allows the user to decide which Download files the app can access. For developers, there are changes to how your apps can use shared areas on external storage. Make sure to read the Scoped Storage changes for details.

We've also seen that users (and developers!) get upset when an app unexpectedly jumps into the foreground and takes over focus. To reduce these interruptions, Android Q will prevent apps from launching an Activity while in the background. If your app is in the background and needs to get the user's attention quickly -- such as for incoming calls or alarms -- you can use a high-priority notification and provide a full-screen intent. See the documentation for more information.

We're limiting access to non-resettable device identifiers, including device IMEI, serial number, and similar identifiers. Read the best practices to help you choose the right identifiers for your use case, and see the details here. We're also randomizing the device's MAC address when connected to different Wi-Fi networks by default -- a setting that was optional in Android 9 Pie.

We are bringing these changes to you early, so you can have as much time as possible to prepare. We've also worked hard to provide developers with detailed information up front; we recommend reviewing the detailed docs on the privacy changes and getting started with testing right away.

New ways to engage users

In Android Q, we're enabling new ways to bring users into your apps and streamlining the experience as they transition from other apps.

Foldables and innovative new screens

Foldable devices have opened up some innovative experiences and use-cases. To help your apps take advantage of these and other large-screen devices, we've made a number of improvements in Android Q, including changes to onResume and onPause to support multi-resume and notify your app when it has focus. We've also changed how the resizeableActivity manifest attribute works, to help you manage how your app is displayed on foldable and large screens. To get you started building and testing on these new devices, we've been hard at work updating the Android Emulator to support multiple-display type switching -- more details coming soon!

Sharing shortcuts

When a user wants to share content like a photo with someone in another app, the process should be fast. In Android Q we're making this quicker and easier with Sharing Shortcuts, which let users jump directly into another app to share content. Developers can publish share targets that launch a specific activity in their apps with content attached, and these are shown to users in the share UI. Because they're published in advance, the share UI can load instantly when launched.

The Sharing Shortcuts mechanism is similar to how App Shortcuts works, so we've expanded the ShortcutInfo API to make the integration of both features easier. This new API is also supported in the new ShareTarget AndroidX library. This allows apps to use the new functionality, while allowing pre-Q devices to work using Direct Share. You can find an early sample app with source code here.

Settings Panels

You can now also show key system settings directly in the context of your app, through a new Settings Panel API, which takes advantage of the Slices feature that we introduced in Android 9 Pie.

A settings panel is a floating UI that you invoke from your app to show system settings that users might need, such as internet connectivity, NFC, and audio volume. For example, a browser could display a panel with connectivity settings like Airplane Mode, Wi-Fi (including nearby networks), and Mobile Data. There's no need to leave the app; users can manage settings as needed from the panel. To display a settings panel, just fire an intent with one of the new Settings.Panel actions.

Connectivity

In Android Q, we've extended what your apps can do with Android's connectivity stack and added new connectivity APIs.

Connectivity permissions, privacy, and security

Most of our APIs for scanning networks already require COARSE location permission, but in Android Q, for Bluetooth, Cellular and Wi-Fi, we're increasing the protection around those APIs by requiring the FINE location permission instead. If your app only needs to make peer-to-peer connections or suggest networks, check out the improved Wi-Fi APIs below -- they simplify connections and do not require location permission.

In addition to the randomized MAC addresses that Android Q provides when connected to different Wi-Fi networks, we're adding support for new Wi-Fi standards, WPA3 and OWE, to improve security for home and work networks as well as open/public networks.

Improved peer-to-peer and internet connectivity

In Android Q we refactored the Wi-Fi stack to improve privacy and performance, but also to improve common use-cases like managing IoT devices and suggesting internet connections -- without requiring the location permission.

The network connection APIs make it easier to manage IoT devices over local Wi-Fi, for peer-to-peer functions like configuring, downloading, or printing. Apps initiate connection requests indirectly by specifying preferred SSIDs & BSSIDs as WiFiNetworkSpecifiers. The platform handles the Wi-Fi scanning itself and displays matching networks in a Wi-Fi Picker. When the user chooses, the platform sets up the connection automatically.

The network suggestion APIs let apps surface preferred Wi-Fi networks to the user for internet connectivity. Apps initiate connections indirectly by providing a ranked list of networks and credentials as WifiNetworkSuggestions. The platform will seamlessly connect based on past performance when in range of those networks.

Wi-Fi performance mode

You can now request adaptive Wi-Fi in Android Q by enabling high performance and low latency modes. These will be of great benefit where low latency is important to the user experience, such as real-time gaming, active voice calls, and similar use-cases.

To use the new performance modes, call WifiManager.WifiLock.createWifiLock() with WIFI_MODE_FULL_LOW_LATENCY or WIFI_MODE_FULL_HIGH_PERF. In these modes, the platform works with the device firmware to meet the requirement with lowest power consumption.

Camera, media, graphics

Dynamic depth format for photos

Many cameras on mobile devices can simulate narrow depth of field by blurring the foreground or background relative to the subject. They capture depth metadata for various points in the image and apply a static blur to the image, after which they discard the depth metadata.

Starting in Android Q, apps can request a Dynamic Depth image which consists of a JPEG, XMP metadata related to depth related elements, and a depth and confidence map embedded in the same file on devices that advertise support.

Requesting a JPEG + Dynamic Depth image makes it possible for you to offer specialized blurs and bokeh options in your app. You can even use the data to create 3D images or support AR photography use-cases in the future. We're making Dynamic Depth an open format for the ecosystem, and we're working with our device-maker partners to make it available across devices running Android Q and later.

With Dynamic Depth image you can offer specialized blurs and bokeh options in your app.

New audio and video codecs

Android Q introduces support for the open source video codec AV1. This allows media providers to stream high quality video content to Android devices using less bandwidth. In addition, Android Q supports audio encoding using Opus - a codec optimized for speech and music streaming, and HDR10+ for high dynamic range video on devices that support it.

The MediaCodecInfo API introduces an easier way to determine the video rendering capabilities of an Android device. For any given codec, you can obtain a list of supported sizes and frame rates using VideoCodecCapabilities.getSupportedPerformancePoints(). This allows you to pick the best quality video content to render on any given device.

Native MIDI API

For apps that perform their audio processing in C++, Android Q introduces a native MIDI API to communicate with MIDI devices through the NDK. This API allows MIDI data to be retrieved inside an audio callback using a non-blocking read, enabling low latency processing of MIDI messages. Give it a try with the sample app and source code here.

ANGLE on Vulkan

To enable more consistency for game and graphics developers, we are working towards a standard, updateable OpenGL driver for all devices built on Vulkan. In Android Q we're adding experimental support for ANGLE on top of Vulkan on Android devices. ANGLE is a graphics abstraction layer designed for high-performance OpenGL compatibility across implementations. Through ANGLE, the many apps and games using OpenGL ES can take advantage of the performance and stability of Vulkan and benefit from a consistent, vendor-independent implementation of ES on Android devices. In Android Q, we're planning to support OpenGL ES 2.0, with ES 3.0 next on our roadmap.

We'll expand the implementation with more OpenGL functionality, bug fixes, and performance optimizations. See the docs for details on the current ANGLE support in Android, how to use it, and our plans moving forward. You can start testing with our initial support by opting-in through developer options in Settings. Give it a try today!

Vulkan everywhere

We're continuing to expand the impact of Vulkan on Android, our implementation of the low-overhead, cross-platform API for high-performance 3D graphics. Our goal is to make Vulkan on Android a broadly supported and consistent developer API for graphics. We're working together with our device manufacturer partners to make Vulkan 1.1 a requirement on all 64-bit devices running Android Q and higher, and a recommendation for all 32-bit devices. Going forward, this will help provide a uniform high-performance graphics API for apps and games to use.

Neural Networks API 1.2

Since introducing the Neural Networks API (NNAPI) in 2017, we've continued to expand the number of operations supported and improve existing functionality. In Android Q, we've added 60 new ops including ARGMAX, ARGMIN, quantized LSTM, alongside a range of performance optimisations. This lays the foundation for accelerating a much greater range of models -- such as those for object detection and image segmentation. We are working with hardware vendors and popular machine learning frameworks such as TensorFlow to optimize and roll out support for NNAPI 1.2.

Strengthening Android's Foundations

ART performance

Android Q introduces several new improvements to the ART runtime which help apps start faster and consume less memory, without requiring any work from developers.

Since Android Nougat, ART has offered Profile Guided Optimization (PGO), which speeds app startup over time by identifying and precompiling frequently executed parts of your code. To help with initial app startup, Google Play is now delivering cloud-based profiles along with APKs. These are anonymized, aggregate ART profiles that let ART pre-compile parts of your app even before it's run, giving a significant jump-start to the overall optimization process. Cloud-based profiles benefit all apps and they're already available to devices running Android P and higher.

We're also continuing to make improvements in ART itself. For example, in Android Q we've optimized the Zygote process by starting your app's process earlier and moving it to a security container, so it's ready to launch immediately. We're storing more information in the app's heap image, such as classes, and using threading to load the image faster. We're also adding Generational Garbage Collection to ART's Concurrent Copying (CC) Garbage Collector. Generational CC is more efficient as it collects young-generation objects separately, incurring much lower cost as compared to full-heap GC, while still reclaiming a good amount of space. This makes garbage collection overall more efficient in terms of time and CPU, reducing jank and helping apps run better on lower-end devices.

Security for apps

BiometricPrompt is our unified authentication framework to support biometrics at a system level. In Android Q we're extending support for passive authentication methods such as face, and adding implicit and explicit authentication flows. In the explicit flow, the user must explicitly confirm the transaction in the TEE during the authentication. The implicit flow is designed for a lighter-weight alternative for transactions with passive authentication. We've also improved the fallback for device credentials when needed.

Android Q adds support for TLS 1.3, a major revision to the TLS standard that includes performance benefits and enhanced security. Our benchmarks indicate that secure connections can be established as much as 40% faster with TLS 1.3 compared to TLS 1.2. TLS 1.3 is enabled by default for all TLS connections. See the docs for details.

Compatibility through public APIs

Another thing we all care about is ensuring that apps run smoothly as the OS changes and evolves. Apps using non-SDK APIs risk crashes for users and emergency rollouts for developers. In Android Q we're continuing our long-term effort begun in Android P to move apps toward only using public APIs. We know that moving your app away from non-SDK APIs will take time, so we're giving you advance notice.

In Android Q we're restricting access to more non-SDK interfaces and asking you to use the public equivalents instead. To help you make the transition and prevent your apps from breaking, we're enabling the restrictions only when your app is targeting Android Q. We'll continue adding public alternative APIs based on your requests; in cases where there is no public API that meets your use case, please let us know.

It's important to test your apps for uses of non-SDK interfaces. We recommend using the StrictMode method detectNonSdkApiUsage() to warn when your app accesses non-SDK APIs via reflection or JNI. Even if the APIs are exempted (grey-listed) at this time, it's best to plan for the future and eliminate their use to reduce compatibility issues. For more details on the restrictions in Android Q, see the developer guide.

Modern Android

We're expanding our efforts to have all apps take full advantage of the security and performance features in the latest version of Android. Later this year, Google Play will require you to set your app's targetSdkVersion to 28 (Android 9 Pie) in new apps and updates. In line with these changes, Android Q will warn users with a dialog when they first run an app that targets a platform earlier than API level 23 (Android Marshmallow). Here's a checklist of resources to help you migrate your app.

We're also moving the ecosystem toward readiness for 64-bit devices. Later this year, Google Play will require 64-bit support in all apps. If your app uses native SDKs or libraries, keep in mind that you'll need to provide 64-bit compliant versions of those SDKs or libraries. See the developer guide for details on how to get ready.

Get started with Android Q Beta

With important privacy features that are likely to affect your apps, we recommend getting started with testing right away. In particular, you'll want to enable and test with Android Q storage changes, new location permission states, restrictions on background app launch, and restrictions on device identifiers. See the privacy documentation for details.

To get started, just install your current app from Google Play onto a device or Android Virtual Device running Android Q Beta and work through the user flows. The app should run and look great, and handle the Android Q behavior changes for all apps properly. If you find issues, we recommend fixing them in the current app, without changing your targeting level. Take a look at the migration guide for steps and a recommended timeline.

Next, update your app's targetSdkVersion to 'Q' as soon as possible. This lets you test your app with all of the privacy and security features in Android Q, as well as any other behavior changes for apps targeting Q.

Explore the new features and APIs

When you're ready, dive into Android Q and learn about the new features and APIs you can use in your apps. Take a look at the API diff report, the Android Q Beta API reference, and developer guides as a starting point. Also, on the Android Q Beta developer site, you'll find release notes and support resources for reporting issues.

To build with Android Q, download the Android Q Beta SDK and tools into Android Studio 3.3 or higher, and follow these instructions to configure your environment. If you want the latest fixes for Android Q related changes, we recommend you use Android Studio 3.5 or higher.

How do I get Android Q Beta?

It's easy - you can enroll here to get Android Q Beta updates over-the-air, on any Pixel device (and this year we're supporting all three generations of Pixel -- Pixel 3, Pixel 2, and even the original Pixel!). Downloadable system images for those devices are also available. If you don't have a Pixel device, you can use the Android Emulator, and download the latest emulator system images via the SDK Manager in Android Studio.

We plan to update the preview system images and SDK regularly throughout the preview. We'll have more features to share as the Beta program moves forward.

As always, your feedback is critical, so please let us know what you think — the sooner we hear from you, the more of your feedback we can integrate. When you find issues, please report them here. We have separate hotlists for filing platform issues, app compatibility issues, and third-party SDK issues.


Développeur Fullstack (C++, TensorFlow ou PyTorch) / Fullstack Developer (C++, TensorFlow or PyTorch) - Huawei Canada - Montréal, QC
Located in Hong Kong, Shenzhen, Beijing, London, Paris, Montreal, Toronto and Edmonton, Noah’s Ark Lab is Huawei Technologies’ flagship AI lab....
From Huawei Canada - Wed, 06 Feb 2019 17:46:37 GMT - View all Montréal, QC jobs
Développeur Full Stack (Python/JavaScript avec Tensorflow/Pytorch) / Full Stack Developer (Python/JavaScript with Tensorflow/Pytorch) - Huawei Canada - Montréal, QC
Located in Hong Kong, Shenzhen, Beijing, London, Paris, Montreal, Toronto and Edmonton, Noah’s Ark Lab is Huawei Technologies’ flagship AI lab....
From Huawei Canada - Thu, 24 Jan 2019 17:46:38 GMT - View all Montréal, QC jobs

