
          Principal Business Development Manager for Mobile Compute Infrastructure - Amazon.com - Seattle, WA
AWS customers are looking for ways to change their business models and solve complex business challenges with machine learning (ML) and deep learning (DL)...
From Amazon.com - Thu, 16 Aug 2018 01:20:19 GMT - View all Seattle, WA jobs
          Intel Solidifies Deep Learning Portfolio with Varied Architectures and Packages
Gadi Singer is the vice president and general manager of the Artificial Intelligence Products Group at Intel. In an interview with Ed Sperling of Semiconductor Engineering, he discusses how Intel is evolving to meet the ever-changing requirements for deep learning. He believes that Xeon processors are well suited for deep learning, but that other solutions are needed, ranging from sub-1 watt to 400 watts. Intel's ability to leverage the data center, the edge and system integration will be key to its future in creating a solid portfolio of products for the deep learning field. As Singer puts it: "There are three elements. One is that we need a portfolio, because our customers are asking for it. You need solutions that go from the end device, whether that's a security camera or a drone or a car, to a gateway, which is the aggregation point, and up to the cloud or on-premise servers. You need a set of solutions that are very efficient at each of those points. One element of our hardware strategy is to provide a portfolio with complementary architectures and solutions. Another element is to further make Xeon a strong foundation for AI."
          CNN-Based Signal Detection for Banded Linear Systems. (arXiv:1809.03682v1 [cs.IT])

Authors: Congmin Fan, Xiaojun Yuan, Ying-Jun Angela Zhang

Banded linear systems arise in many communication scenarios, e.g., those involving inter-carrier interference and inter-symbol interference. Motivated by recent advances in deep learning, we propose to design a high-accuracy low-complexity signal detector for banded linear systems based on convolutional neural networks (CNNs). We develop a novel CNN-based detector by utilizing the banded structure of the channel matrix. Specifically, the proposed CNN-based detector consists of three modules: the input preprocessing module, the CNN module, and the output postprocessing module. With such an architecture, the proposed CNN-based detector is adaptive to different system sizes, and can overcome the curse of dimensionality, which is a ubiquitous challenge in deep learning. Through extensive numerical experiments, we demonstrate that the proposed CNN-based detector outperforms conventional deep neural networks and existing model-based detectors in both accuracy and computational time. Moreover, we show that CNN is flexible for systems with large sizes or wide bands. We also show that the proposed CNN-based detector can be easily extended to near-banded systems such as doubly selective orthogonal frequency division multiplexing (OFDM) systems and 2-D magnetic recording (TDMR) systems, in which the channel matrices do not have a strictly banded structure.
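The modular structure described in the abstract lends itself to a compact illustration. The following is a minimal, hypothetical Keras sketch of such a detector; the system size, band width, feature layout and layer sizes are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch of a CNN-style detector for a banded linear system (illustrative only;
# shapes and module contents are assumptions, not the paper's code).
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

N = 64                      # number of transmitted symbols (system size)
BAND = 5                    # assumed bandwidth of the banded channel matrix
CHANNELS = 2 + 2 * BAND     # preprocessing: real/imag of y plus real/imag of in-band channel taps

def build_detector():
    inputs = layers.Input(shape=(N, CHANNELS))                              # input preprocessing module output
    x = layers.Conv1D(64, 3, padding="same", activation="relu")(inputs)     # CNN module
    x = layers.Conv1D(64, 3, padding="same", activation="relu")(x)
    x = layers.Conv1D(32, 3, padding="same", activation="relu")(x)
    outputs = layers.Conv1D(1, 1, activation="tanh")(x)                     # output postprocessing: per-symbol soft estimate
    return models.Model(inputs, outputs)

model = build_detector()
model.compile(optimizer="adam", loss="mse")

# Toy training data: random BPSK symbols and random stand-in features for (y, H).
def toy_batch(batch=1024):
    symbols = np.random.choice([-1.0, 1.0], size=(batch, N, 1))
    feats = np.random.randn(batch, N, CHANNELS).astype("float32")
    return feats, symbols

x_train, y_train = toy_batch()
model.fit(x_train, y_train, epochs=2, batch_size=128, verbose=0)
```

Because the filters slide along the symbol dimension, the same trained weights can be applied to different system sizes N, which is the adaptivity property the abstract highlights.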


          Comment on Open Thread, September 12, 2018 by Doug Leighton
Some food for thought, MAKE YOUR DAUGHTER PRACTICE MATH. SHE’LL THANK YOU LATER. A large body of research has revealed that boys and girls have, on average, similar abilities in math. But girls have a consistent advantage in reading and writing and are often relatively better at these than they are at math, even though their math skills are as good as the boys’. The consequence? A typical little boy can think he’s better at math than language arts. But a typical little girl can think she’s better at language arts than math. As a result, when she sits down to do math, she might be more likely to say, “I’m not that good at this!” She actually is just as good (on average) as a boy at the math — it’s just that she’s even better at language arts.

In the international PISA test, the United States ranks near the bottom among the 35 industrialized nations in math. But girls especially could benefit from some extra required practice, which would not only break the cycle of dislike-avoidance-further dislike, but build confidence and that sense of, “Yes, I can do this!” Practice with math can help close the gap between girls’ reading and math skills, making math seem like an equally good long-term study option. Even if she ultimately chooses a non-STEM career, today’s high-tech world will mean her quantitative skills will still come in handy.

All learning isn’t — and shouldn’t be — “fun.” Mastering the fundamentals is why we have children practice scales and chords when they’re learning to play a musical instrument, instead of just playing air guitar. It’s why we have them practice moves in dance and soccer, memorize vocabulary while learning a new language and internalize the multiplication tables. In fact, the more we try to make all learning fun, the more we do a disservice to children’s abilities to grapple with and learn difficult topics. As Robert Bjork, a leading psychologist, has shown, deep learning involves “desirable difficulties.” Some learning just plain requires effortful practice, especially in the initial stages. Practice and, yes, even some memorization are what allow the neural patterns of learning to take form.

https://www.nytimes.com/2018/08/07/opinion/stem-girls-math-practice.html?rref=collection%2Ftimestopic%2FMathematics&action=click&contentCollection=science&region=stream&module=stream_unit&version=latest&contentPlacement=5&pgtype=collection
          Qualcomm Snapdragon Wear 3100 For Smartwatches Announced

On Monday, Qualcomm Technologies launched the Qualcomm Snapdragon Wear 3100 for smartwatches. Before this, Qualcomm had launched the Snapdragon Wear 2100 in 2016. During the official launch event in San Francisco, Qualcomm revealed that companies like the Fossil Group, Louis Vuitton, and Montblanc are the first customers to use the platform. According to the company, the Snapdragon Wear 3100 is its next-generation smartwatch platform based on a new ultra-low-power hierarchical system architecture, which results in better battery life and more personalised experiences.

The Snapdragon Wear 3100 features a 32-bit quad-core Cortex-A7 processor, an integrated DSP, and an ultra-low-power co-processor. The company claims that these components work in tandem to extend battery life: the new platform is said to add from 4 to 12 hours of battery life, and when put in traditional watch mode the battery can last up to a week.

Louis Vuitton will use the Snapdragon Wear 3100 for its smartwatch.

The new co-processor, the QCC1110, is on the smaller side; it is responsible for providing better audio, display, and sensor experiences. The co-processor includes a deep learning engine which paves the way for features like keyword detection.

Compared to premier brands like Apple, this upgrade sounds like a modest step. In fact, both the Qualcomm Snapdragon Wear 3100 and its predecessor, the Qualcomm Snapdragon Wear 2100, have the same main processor, which means we cannot expect much difference when it comes to the speed of the smartwatch.

The Qualcomm Snapdragon Wear 3100 platform has three variants: a Bluetooth and Wi-Fi tethered version, a GPS-based tethered version, and a 4G LTE connected version. The company has revealed that the chip is in mass production and will soon be ready for shipment. There is no word on pricing yet; however, Montblanc has announced that a smartwatch featuring the Snapdragon Wear 3100 should be available in October and is expected to be priced at $900.



          Deep Learning Expert (m/f) - Bosch Gruppe - Vaihingen an der Enz
Location: Stuttgart-Vaihingen. Field of work: Information technology. Entry level: Experienced professionals. Start date: By arrangement. Working hours: Full-time and...
Found at Bosch Gruppe - Fri, 07 Sep 2018 12:02:46 GMT - View all Vaihingen an der Enz jobs
          Deep Learning Expert - Bosch Gruppe - Vaihingen an der Enz
Location: Stuttgart-Vaihingen. Field of work: Information technology. Entry level: Experienced professionals. Start date: By arrangement. Working hours: Full-time and...
Found at Bosch Gruppe - Fri, 07 Sep 2018 12:02:45 GMT - View all Vaihingen an der Enz jobs
          Integration of Machine Learning and Deep Learning with GIS - the new paradigm
          Google AI with Jeff Dean

Jeff Dean, the lead of Google AI, is on the podcast this week to talk with Melanie and Mark about AI and machine learning research, his upcoming talk at Deep Learning Indaba, and how his educational pursuit of parallel processing and computer systems led him into AI. We covered topics from his team’s work with TPUs and TensorFlow, the impact computer vision and speech recognition are having on AI advancements, and how simulations are being used to help advance science in areas like quantum chemistry. We also discussed his passion for the development of AI talent on the continent of Africa and the opening of Google AI Ghana. It’s a full episode where we cover a lot of ground. One piece of advice he left us with: “the way to do interesting things is to partner with people who know things you don’t.”

Listen for the end of the podcast where our colleague, Gabe Weiss, helps us answer the question of the week about how to get data from IoT core to display in real time on a web front end.

Jeff Dean

Jeff Dean joined Google in 1999 and is currently a Google Senior Fellow, leading Google AI and related research efforts. His teams are working on systems for speech recognition, computer vision, language understanding, and various other machine learning tasks. He has co-designed/implemented many generations of Google’s crawling, indexing, and query serving systems, and co-designed/implemented major pieces of Google’s initial advertising and AdSense for Content systems. He is also a co-designer and co-implementor of Google’s distributed computing infrastructure, including the MapReduce, BigTable and Spanner systems, protocol buffers, the open-source TensorFlow system for machine learning, and a variety of internal and external libraries and developer tools.

Jeff received a Ph.D. in Computer Science from the University of Washington in 1996, working with Craig Chambers on whole-program optimization techniques for object-oriented languages. He received a B.S. in computer science & economics from the University of Minnesota in 1990. He is a member of the National Academy of Engineering, and of the American Academy of Arts and Sciences, a Fellow of the Association for Computing Machinery (ACM), a Fellow of the American Association for the Advancement of Science (AAAS), and a winner of the ACM Prize in Computing.

Cool things of the week
  • Google Dataset Search is in beta site
  • Expanding our Public Datasets for geospatial and ML-based analytics blog
    • Zip Code Tabulation Area (ZCTA) site
  • Google AI and Kaggle Inclusive Images Challenge site
  • We are rated in the top 100 technology podcasts on iTunes site
  • What makes TPUs fine-tuned for deep learning? blog
Interview
  • Jeff Dean on Google AI profile
  • Deep Learning Indaba site
  • Google AI site
  • Google AI in Ghana blog
  • Google Brain site
  • Google Cloud site
  • DeepMind site
  • Cloud TPU site
  • Google I/O Effective ML with Cloud TPUs video
  • Liquid cooling system article
  • DAWNBench Results site
  • Waymo (Alphabet’s Autonomous Car) site
  • DeepMind AlphaGo site
  • Open AI Dota 2 blog
  • Moustapha Cisse profile
  • Sanjay Ghemawat profile
  • Neural Information Processing Systems Conference site
  • Previous Podcasts
    • GCP Podcast Episode 117: Cloud AI with Dr. Fei-Fei Li podcast
    • GCP Podcast Episode 136: Robotics, Navigation, and Reinforcement Learning with Raia Hadsell podcast
    • TWiML & AI Systems and Software for ML at Scale with Jeff Dean podcast
  • Additional Resources
    • arXiv.org site
    • Chris Olah blog
    • Distill Journal site
    • Google’s Machine Learning Crash Course site
    • Deep Learning by Ian Goodfellow, Yoshua Bengio and Aaron Courville book and site
    • NAE Grand Challenges for Engineering site
    • Senior Thesis Parallel Implementations of Neural Network Training: Two Back-Propagation Approaches by Jeff Dean paper and tweet
    • Machine Learning for Systems and Systems for Machine Learning slides
Question of the week

How do I get data from IoT Core to display in real time on a web front end? (A minimal code sketch follows the resource links below.)

  • Building IoT Applications on Google Cloud video
  • MQTT site
  • Cloud Pub/Sub site
  • Cloud Functions site
  • Cloud Firestore site
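As a rough companion to Gabe's answer, here is a minimal sketch of the middle hop: a background Cloud Function (Python) subscribed to the Pub/Sub topic that IoT Core publishes telemetry to, writing each reading into Cloud Firestore so a web front end can pick it up with a real-time listener. The topic wiring, document layout and field names are assumptions, not the exact setup discussed in the episode.

```python
# Hypothetical background Cloud Function triggered by the Pub/Sub topic that IoT Core
# publishes device telemetry to; it stores the latest reading per device in Firestore.
import base64
import json
from google.cloud import firestore

db = firestore.Client()

def iot_telemetry(event, context):
    """Pub/Sub-triggered function: `event` carries the base64-encoded message payload."""
    payload = json.loads(base64.b64decode(event["data"]).decode("utf-8"))
    device_id = event.get("attributes", {}).get("deviceId", "unknown-device")
    # One document per device, overwritten with the most recent reading; a Firestore
    # real-time listener on the "telemetry" collection keeps the web UI up to date.
    db.collection("telemetry").document(device_id).set({
        "reading": payload,
        "received_at": firestore.SERVER_TIMESTAMP,
    })
```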
Where can you find us next?

Melanie is at Deep Learning Indaba and Mark is at Tokyo NEXT. We’ll both be at Strangeloop end of the month.

Gabe will be at Cloud Next London and the IoT World Congress.


          KDnuggets™ News 18:n34, Sep 12: Essential Math for Data Science; 100 Days of Machine Learning Code; Drop Dropout
Also: Neural Networks and Deep Learning: A Textbook; Don't Use Dropout in Convolutional Networks; Ultimate Guide to Getting Started with TensorFlow.
          Topaz A.I. Gigapixel 1.1.1
Topaz A.I. Gigapixel 1.1.1
Windows x64 | Languages: English | File Size: 919.2 MB
A.I.Gigapixel™ is the first and only desktop application to use the power of artificial intelligence to enlarge your images while adding natural details for an amazing result. Using deep learning technology, A.I.Gigapixel™ can enlarge images and fill in details that other resizing products leave out. These traditional methods produce images that are blurry, unrealistically painterly, and lack the details that are present in real high resolution images.


          Topaz A.I. Gigapixel 1.0.2 (x64) Portable
Topaz A.I. Gigapixel 1.0.2 (x64) Portable | 314 Mb


A.I.Gigapixel™ is the first and only desktop application to use the power of artificial intelligence to enlarge your images while adding natural details for an amazing result. Using deep learning technology, A.I.Gigapixel™ can enlarge images and fill in details that other resizing products leave out. These traditional methods produce images that are blurry, unrealistically painterly, and lack the details that are present in real high resolution images.
          Luca Massaron, Alberto Boschetti - TensorFlow Deep Learning Projects
Leverage the power of TensorFlow to design deep learning systems for a variety of real-world scenarios. TensorFlow is one of the most popular frameworks used for machine learning and, more recently, deep learning. It provides a fast and efficient framework for training different kinds of deep learning models with very high accuracy.
          Nvidia unveils Tesla T4 chip for faster AI inference in datacenters - VentureBeat

Nvidia today debuted the Tesla T4 graphics processing unit (GPU) chip to speed up inference from deep learning systems in datacenters. The T4 GPU is packed with 2,560 CUDA cores and 320 Tensor cores, with the power to process queries nearly 40 times ...

          Nvidia unveils Tesla T4 chip for faster AI inference in datacenters

Nvidia today debuted the Tesla T4 graphics processing unit (GPU) chip to speed up inference from deep learning systems in datacenters. The T4 GPU is packed with 2,560 CUDA cores and 320 Tensor cores, with the power to process queries nearly 40 times faster than a CPU. Inference is the process of deploying trained AI models to […]


          Scientists Reveal "Lensless" Camera
Cameras have come a long way since the days of photographers hiding under a cloak amidst a startling puff of smoke. At any camera store, you can find video cameras that can record underwater, devices that takes photographs with breathtaking clarity at incredible distances—or even ones that do both. However, regardless of how advanced they get, every digital camera currently in existence is constrained by one thing: the need for a focusing lens.

Dr. Rajesh Menon tinkers with a prototype "lensless" camera, which uses light scattered by an ordinary pane of glass.
Image Credit: University of Utah.
You may remember from high school science class that a focusing lens is typically a clear piece of plastic or glass that guides light rays passing through it toward a focal point. Generally speaking, in digital cameras this focal point will be the image sensor, the device that registers and records the light hitting it. When the lens isn’t set up right for objects at a given distance, you can tell by the blurry nature of the image—but when there isn’t a lens at all, the result is an unintelligible mess.

This schematic shows how a simple lens can refract light rays, focusing them to a single point.
Image Credit: Panther, via Wikimedia Commons.
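For readers who want the quantitative version behind the schematic (the article itself stays qualitative), the relationship is the thin-lens equation, where $f$ is the focal length, $d_o$ the distance to the object and $d_i$ the distance at which the rays converge:

$$\frac{1}{f} = \frac{1}{d_o} + \frac{1}{d_i}$$

For example, a 50 mm lens imaging an object 5 m away brings it to focus at $d_i = 1/(1/50 - 1/5000) \approx 50.5$ mm behind the lens, which is where the image sensor has to sit for the picture to be sharp.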
Although they perform an essential function, lenses also add bulk and cost to cameras. It’s not easy to grind down glass to the perfect shape, and the precision involved can add a lot to the price tag. And no matter how small your phone’s circuitry gets, the bulk of a lens will always be there. But there just doesn’t seem to be a way around it if you want your photographs to represent objects that you can recognize.

But, in the age of digital cameras and advanced computing, does the image that a camera takes in really need to be intelligible to humans? What if a machine could make sense of the garbled mess that is the result of a completely unfocused camera, and then “translate” it into an image that people could actually understand? That’s exactly what Dr. Rajesh Menon of the University of Utah wondered.

“All the lens does is rearrange this information [the light coming from the object of interest] for a human being to perceive the object or scene,” he says. “Our question was, what if this rearrangement doesn’t happen? Could we still make sense of the object or scene?”

To satiate his curiosity, Menon set up a miniature glass window—uncurved, so as to let light through without distorting it—in his lab, which he surrounded with reflective tape. On one edge of the window he attached a simple off-the-shelf image sensor, and in front of the window he placed a display that showed various simple images like a stick figure, a square, and the University of Utah “U”.

This set of diagrams illustrates Menon's experimental setup. In (a) you can see the general concept of the window/image sensor, while (b) shows how the light rays travel through the window; most pass directly through, but a few are scattered into the image sensor by the rough edge of the window and the reflective tape. Images (c) and (d) are photographs of the actual equipment.
Image Credit: R. Menon, via Optics Express.
As light from the display (representing an object being photographed) passed through the window, a very small fraction of it—about 1%—was scattered by the glass and redirected towards the image sensor by the reflective tape. It’s important to note that, although the light’s path was modified by the presence of this tape, it differs from the behavior of a true lens in that the light rays don't converge toward a single point. Next, Menon developed an algorithm using deep learning to “unscramble” the blurry images, reconstructing the original object.
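To picture what "unscrambling" with deep learning can look like in code, here is an illustrative sketch (not Menon's actual algorithm): a small convolutional network trained on pairs of scattered sensor readings and the original displayed patterns. The image sizes and the synthetic scattering operator are stand-ins invented for the example.

```python
# Illustrative sketch: learn to invert a fixed scattering process with a small
# convolutional network, given (sensor reading, displayed pattern) training pairs.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

SIZE = 32   # assumed height/width of both the sensor image and the reconstructed scene

def build_unscrambler():
    inp = layers.Input(shape=(SIZE, SIZE, 1))
    x = layers.Conv2D(32, 3, padding="same", activation="relu")(inp)
    x = layers.Conv2D(32, 3, padding="same", activation="relu")(x)
    x = layers.Conv2D(64, 3, padding="same", activation="relu")(x)
    out = layers.Conv2D(1, 1, activation="sigmoid")(x)   # reconstructed image in [0, 1]
    return models.Model(inp, out)

model = build_unscrambler()
model.compile(optimizer="adam", loss="mse")

# Toy stand-in data: random "scenes" passed through a fixed random linear scattering operator.
rng = np.random.default_rng(0)
A = rng.normal(size=(SIZE * SIZE, SIZE * SIZE)) * 0.01    # fixed scattering operator (assumption)
scenes = rng.random((256, SIZE, SIZE, 1)).astype("float32")
sensor = (scenes.reshape(256, -1) @ A.T).reshape(256, SIZE, SIZE, 1).astype("float32")
model.fit(sensor, scenes, epochs=2, batch_size=32, verbose=0)
```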

And, to a large extent, it worked! Menon was able to take recognizable photographs using this “lens-less” camera, a first in the history of optics (unless you count the "pinhole camera" effect). Granted, the photos aren’t the sort of high resolution we’ve come to expect from cutting-edge camera technology, but they’re certainly usable. The applications for this technology are far-reaching; autonomous cars could have windows that double as sensors. Future construction projects could incorporate “security glass” that monitors the surrounding area, and augmented reality glasses could be drastically reduced in bulk. And these cameras could be quite cheap as well—Menon says that the biggest cost is in the image sensor itself, which is already quite low. The technology is actually agnostic to the type of image sensor used, so companies (and consumers) could conceivably shop around for the lowest prices. While he acknowledges that there could be an additional cost factor in the software package required to decode the image, Menon is optimistic. “I’d say the cost to the consumer will be much less than what cameras (in your phone, for example) cost today.”

Some of Menon's experimental results. On the left, you can see the pattern produced by the LED array or LCD, and in the center the unmodified image recorded by the image sensor. The rightmost column shows the "unscrambled" photographs after passing through the algorithm.
Image Credit: R. Menon, via Optics Express.

Even so, there are still some kinks to work out. To begin with, Menon’s research was conducted with a bright, high-contrast object, and it’s unclear how the technology will fare under less ideal conditions—outdoors at dusk, for example. Menon says, “My intuition is that with appropriate sensors and more sophisticated algorithms, it should work fine under normal daylight or room lighting.” He does point out that for low-light conditions, a flash or infrared light could help. Nevertheless, he considers the issue of lighting to be one of the biggest limitations in his technology.

The other big question mark is the camera’s range. Menon found that for the optimal photo, the object should be about 150mm—that’s about 6 inches—from the window. When we consider that many applications, like security cameras, require a much greater range, this is a fairly serious limitation. However, by including the use of additional sensors or adjusting their position, this optimal distance can be lengthened or shortened.

While the technology is far from perfect, Menon sees this project as an exciting implementation of what he calls “non-anthropomorphic cameras”. He explains, “Cameras have been designed for over 100 years based on human perception. It is arguably true that more images and videos are seen by machines today rather than by humans. This will inevitably be true in the future.”

So, what if we start designing cameras for machines rather than humans, and only have them translate when we need them to? Menon concludes, “Our paper is one small step in that direction.”

—Eleanor Hook
          Tesla T4: Turing architecture comes to deep learning
At the GPU Technology Conference taking place in Japan this week, Nvidia unveiled the new Tesla T4, designed for the deep learning world.

          Episode 122: #122: You’d Better Recognize

This week Dave and Gunnar talk about recognition: facial recognition, keystroke recognition, Dothraki recognition.


Cutting Room Floor

          Create enhanced animated 3D models with realistic features
I want you to build me 3D models with animation. I want you to use Deep Learning and Tensorflow to enhance the realistic features. The model should be as detailed as possible (realistic features). Inbox... (Budget: $30 - $250 USD, Jobs: 3D Animation, 3D Modelling, 3D Rendering, Machine Learning, Tensorflow)
          Topaz A.I. Gigapixel 1.1.1 Win x64

A.I.Gigapixel™ is the first and only desktop application to use the power of artificial intelligence to enlarge your images while adding natural details for an amazing result. Using deep learning technology, A.I.Gigapixel™ can enlarge images and fill in details that other resizing products leave out. These traditional methods produce images that are blurry, unrealistically painterly, […]



          Nvidia presents Tesla T4 accelerator with Turing GPU
Nvidia has announced the Tesla T4 accelerator, which features a Turing GPU with Tensor cores and 16 GB of GDDR6 memory. The card is intended for datacenters that work with deep learning.
          Machine Learning Engineer
• Development of software applications and quality inspection procedures based on neural networks (vision, sensor data) • Implementation of the latest concepts from the field of deep learning in industrial and research projects • Working in ESA/space...
          How to extract building footprints from satellite images using deep learning
As part of the AI for Earth team, I work with our partners and other researchers inside Microsoft to develop new ways to use machine learning and other AI approaches to solve global environmental challenges. In this post, we highlight a sample project of using Azure infrastructure for training a deep learning model to gain insight from geospatial data.
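The full post covers the Azure infrastructure; as a sketch of just the modeling step, the snippet below builds a tiny U-Net-style segmentation network that maps an aerial tile to a per-pixel building mask. Tile size, channel count and architecture depth are illustrative assumptions, not the pipeline described in the post.

```python
# Tiny U-Net-style segmenter: aerial image tile in, per-pixel building probability out.
import tensorflow as tf
from tensorflow.keras import layers, models

def small_unet(size=256):
    inp = layers.Input(shape=(size, size, 3))
    c1 = layers.Conv2D(16, 3, padding="same", activation="relu")(inp)
    p1 = layers.MaxPooling2D()(c1)
    c2 = layers.Conv2D(32, 3, padding="same", activation="relu")(p1)
    u1 = layers.UpSampling2D()(c2)
    m = layers.Concatenate()([u1, c1])                     # skip connection
    c3 = layers.Conv2D(16, 3, padding="same", activation="relu")(m)
    out = layers.Conv2D(1, 1, activation="sigmoid")(c3)    # building probability per pixel
    model = models.Model(inp, out)
    model.compile(optimizer="adam", loss="binary_crossentropy")
    return model

model = small_unet()
model.summary()
```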
          GPUs vs CPUs for deployment of deep learning models
Choosing the right type of hardware for deep learning tasks is a widely discussed topic. An obvious conclusion is that the decision should be dependent on the task at hand and based on factors such as throughput requirements and cost.
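One concrete way to ground that decision is to measure it. The toy harness below times batched inference for a placeholder model and reports throughput on whatever device TensorFlow finds; the model and batch size are arbitrary stand-ins for your own workload.

```python
# Toy latency/throughput check: run the same model on a CPU-only and a GPU machine to compare.
import time
import numpy as np
import tensorflow as tf

model = tf.keras.applications.MobileNetV2(weights=None)   # random weights: timing only
x = np.random.rand(32, 224, 224, 3).astype("float32")

model.predict(x, verbose=0)            # warm-up (graph build, memory allocation)
start = time.time()
for _ in range(10):
    model.predict(x, verbose=0)
elapsed = time.time() - start
device = "GPU" if tf.config.list_physical_devices("GPU") else "CPU"
print(f"{10 * len(x) / elapsed:.1f} images/sec on {device}")
```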
          TYAN Exhibits Artificial Intelligence and Deep Learning Optimized Server Platforms at GTC Japan 2018
...per-unit cost computation for the AI revolution," said Danny Hsu , Vice President of MiTAC Computing Technology Corporation's TYAN Business Unit. Featuring maximum performance and system density, TYAN Thunder HX TA88-B7107 takes full advantage of NVIDIA NVLink™ technology and ...

          Q Technology announced its sales volume figures of August 2018

Camera module sales volume and fingerprint recognition sales volume grew 92.5% and 70.6% year-on-year respectively 

KUNSHAN, China, Sept. 12, 2018 /PRNewswire/ -- Q Technology (Group) Company Limited (Stock code: 1478.HK) ("Q Technology") released a voluntary announcement regarding the sales volume of its major products for August 2018 on HKEX on 7 September 2018. The year-on-year growth rates of camera module ("CCM") sales volume and fingerprint recognition module ("FPM") sales volume both accelerated, making August the fastest-growing month of the year so far.

  1. Monthly sales volume of CCM hit a record high: riding on the strong growth momentum since July, the CCM sales volume of the Company amounted to 28.93 million units, an all-time record high for the Company, representing sequential growth of approx. 25.5% and year-on-year growth of approx. 92.5%. The changes are mainly attributable to the customer product cycle and the Group's market share gains in camera modules. The product portfolio has been enhanced and has maintained a better level of high-end mix thanks to the mass production of dual camera modules, 3D structured light modules and automotive modules. Total sales volume of 10-megapixel-and-above modules amounted to approx. 12.4 million units, an increase of 82.2% year-on-year and a record high for the category as well, which is expected to have a positive impact on the trend of average selling price in the second half of 2018.
  2. Sales volume growth of FPM reached a new height: the sales volume of FPM in August amounted to approx. 11.96 million units, a record-breaking figure for the fourth consecutive month since May, representing an increase of approx. 25.4% month-on-month and approx. 70.6% year-on-year. The changes are mainly attributable to the Group's market share gains in capacitive-type FPM and the customer product cycle of under-glass FPM and other products. This is expected to have a positive impact on the trend of average selling price of FPM in the second half of 2018.

Both the two core business segments of Q Technology have achieved great performance, which demonstrates the Group's determination to build its long-term core competencies in intelligent vision industry. Looking forward, Q Technology will commit itself to work on four major development directions: 1) Scaling up to optimize marginal cost; 2) Optimizing product portfolio to improve gross profit margin; 3) Promoting the abilities in intelligent production automation to reduce labor headcount and improve production efficiency; and 4) Deepening vertical integration to enhance competitive strength. Q Technology will continue to provide high quality service to our customers to promote user-centric experience and embrace the vision of "enabling computer to understand and see the real world clearer."

The relevant figures are not equivalent to the final revenue or profit of the Company, and the figures have not been reviewed or audited by the independent auditors and/or the audit committee of the Company and are subject to possible adjustments. Shareholders and potential investors of the Company are advised to exercise caution when dealing in the shares of the Company.

About Q Technology (Group) Company Limited (Stock Code: 1478.HK)

Q Technology is a China leading mid to high-end mobile terminal camera module and fingerprint recognition module manufacturer. Q Technology is dedicated to provide machine vision and human vision to mobile terminals, automobile and robots by the persistent pursuit of capabilities in optics, computing vision and deep learning.

For enquiries, please contact the investor relations team of the Company at:

Mr. Richard Fan             E-mail: richard.fan@ck-telecom.com
Mr. Louis So                  E-mail: louis.so@qtechglobal.com
Ms. Yanqing Cai             E-mail: yanqing.cai@qtechglobal.com


          (DEU-Munich) Data Engineer – Innovation Scaling for Web & Mobile Applications
Role Title: Data Engineer – Innovation Scaling for Web & Mobile Applications - 10A

The Role: You live to break down and solve complex problems by creating practical, maintainable, and scalable solutions. You're a great person who willingly collaborates, listens and cares about your peers. If this is you, then you have the best premises to join our team. In your role as the Data Engineer you will be responsible for end-to-end data migration development, ownership and management. Our department is mainly responsible for the transition and scaling of the prototypes generated by the innovation department towards a fully integrated solution which our customers can rely on. Besides that, we are also responsible for the enhancement and maintenance of existing products.

Your responsibilities will include but are not limited to:
  • Build the infrastructure required for optimal extraction, transformation and loading of data from a wide variety of data sources, incl. using SQL, Hadoop and AWS data sources. Document and consolidate data sources if required.
  • Collaborate with local development and data teams and the central data management group.
  • Identify, design and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
  • Enable cutting-edge customer solutions by retrieving and aggregating data from multiple sources and compiling it into digestible and actionable forms.
  • Act as a trusted technical advisor for the teams and stakeholders.
  • Work with managers, software developers, and scientists to design and develop data infrastructure and cutting-edge market solutions.
  • Create data tools for analytics and data science team members that assist them in building and optimizing our products into innovative business leaders in their segment.
  • Derive unsupervised and supervised insights from data, with the specializations below.
  • Provide machine learning competences:
    • Working on various kinds of data such as continuous numerical, discrete, textual, image, speech, baskets, etc.
    • Experience in data visualization, predictive analytics, machine learning, deep learning, optimization, etc.
    • Derive and drive business metrics and measurement systems to enable AI readiness.
    • Handle large datasets using big data technologies.

The Impact: You have the opportunity to shape one of the oldest existing industries in one of the largest enterprises in the market. Through active participation in shaping and improving our ways to achieve technical excellence you will drive and improve our business.

The Career Opportunity: You will be working within flat hierarchies in a young and dynamic team with flexible working hours. You will benefit from a bandwidth of career-enhancing opportunities. You have very good opportunities to shape your own working environment, in combination with very good compensation as well as benefits, and will experience the advantages of both a big enterprise and a small start-up at the same time. Since the team is fairly small you will benefit from high trust and responsibility given to you. You will also be a key person to grow our team. You should also be motivated to introduce new, innovative processes and tools into an existing global enterprise structure.

The Team - The Business: We are a small, highly motivated team in a newly set up division to scale innovation. We use agile methodologies to drive performance and we share and transfer knowledge, embracing methods such as pairing or lightning talks to do so. We are always trying to stay ahead of things and try to be state-of-the-art and cutting-edge.

Knowledge & Skills:
  • Proven experience in a data engineering, business analytics, business intelligence or comparable data engineering role, including data warehousing and business intelligence tools, techniques and technology
  • B.S. degree in math, statistics, computer science or equivalent technical field
  • Experience transforming raw data into information. Implemented data quality rules to ensure accurate, complete, timely data that is consistent across databases.
  • Demonstrated ability to think strategically about business, product, and technical challenges
  • Experience in data migrations and transformational projects
  • Fluent English written and verbal communication skills
  • Effective problem-solving and analytical capabilities
  • Ability to handle a high-pressure environment
  • Programming and tool skills: Python, Spark, Tableau, XLMiner, Linear Regression, Logistic Regression, Unsupervised Machine Learning, Supervised Machine Learning, Forecasting, Marketing, Pricing, SCM, SMAC Analytics

Beneficial experience:
  • Experience in NoSQL databases (e.g. Dynamo DB, Mongo DB)
  • Experience in RDBMS databases (e.g. Oracle DB)

About Platts and S&P Global: Platts is a premier source of benchmark price assessments and commodities intelligence. At Platts, the content you generate and the relationships you build are essential to the energy, petrochemicals, metals and agricultural markets. Learn more at www.platts.com. S&P Global includes Ratings, Market Intelligence, S&P Dow Jones Indices and Platts. Together, we're the foremost providers of essential intelligence for the capital and commodities markets. S&P Global is an equal opportunity employer committed to making all employment decisions without regard to race/ethnicity, gender, pregnancy, gender identity or expression, colour, creed, religion, national origin, age, disability, marital status (including domestic partnerships and civil unions), sexual orientation, military veteran status, unemployment status, or other legally protected categories, subject to applicable law. To all recruitment agencies: S&P Global does not accept unsolicited agency resumes. Please do not forward such resumes to any S&P Global employee, office location or website. S&P Global will not be responsible for any fees related to such resumes.
          Learning Robust Features and Latent Representations for Single View 3D Pose Estimation of Humans and Objects
Estimating the 3D poses of rigid and articulated bodies is one of the fundamental problems of Computer Vision. It has a broad range of applications including augmented reality, surveillance, animation and human-computer interaction. Despite the ever-growing demand driven by the applications, predicting 3D pose from a 2D image is a challenging and ill-posed problem due to the loss of depth information during projection from 3D to 2D. Although there have been years of research on the 3D pose estimation problem, it still remains unsolved. In this thesis, we propose a variety of ways to tackle the 3D pose estimation problem both for articulated human bodies and rigid object bodies by learning robust features and latent representations.

First, we present a novel video-based approach that exploits spatiotemporal features for 3D human pose estimation in a discriminative regression scheme. While early approaches typically account for motion information by temporally regularizing noisy pose estimates in individual frames, we demonstrate that taking into account motion information very early in the modeling process with spatiotemporal features yields significant performance improvements. We further propose a CNN-based motion compensation approach that stabilizes and centralizes the human body in the bounding boxes of consecutive frames to increase the reliability of spatiotemporal features. This then allows us to effectively overcome ambiguities and improve pose estimation accuracy.

Second, we develop a novel Deep Learning framework for structured prediction of 3D human pose. Our approach relies on an auto-encoder to learn a high-dimensional latent pose representation that accounts for joint dependencies. We combine traditional CNNs for supervised learning with auto-encoders for structured learning and demonstrate that our approach outperforms the existing ones both in terms of structure preservation and prediction accuracy.

Third, we propose a 3D human pose estimation approach that relies on a two-stream neural network architecture to simultaneously exploit 2D joint location heatmaps and image features. We show that the 2D pose of a person, predicted in terms of heatmaps by a fully convolutional network, provides valuable cues to disambiguate challenging poses and results in increased pose estimation accuracy. We further introduce a novel and generic trainable fusion scheme, which automatically learns where and how to fuse the features extracted from the two different input modalities that a two-stream neural network operates on. Our trainable fusion framework selects the optimal network architecture on-the-fly and improves upon standard hard-coded network architectures.

Fourth, we propose an efficient approach to estimate the 3D pose of objects from a single RGB image. Existing methods typically detect 2D bounding boxes and then predict the object pose using a pipelined approach. The redundancy in different parts of the architecture makes such methods computationally expensive. Moreover, the final pose estimation accuracy depends on the accuracy of the intermediate 2D object detection step. In our method, the object is classified and its pose is regressed in a single shot from the full image using a single, compact fully convolutional neural network. Our approach achieves state-of-the-art accuracy without requiring any costly pose refinement step and runs in real-time at 50 fps on a modern GPU, which is at least 5X faster than the state of the art.
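To make the second contribution (structured prediction via a learned latent pose space) concrete, here is a heavily simplified sketch with made-up dimensions and random stand-in data: an autoencoder learns a latent representation of 3D poses, and a separate regressor maps image features into that latent space, so that decoding the regressed code yields a pose consistent with the learned joint dependencies. This illustrates the idea only; it is not the thesis code.

```python
# Sketch: (1) pose autoencoder captures joint dependencies, (2) regressor maps image
# features to the latent code, (3) decoding the code gives a structured pose estimate.
import numpy as np
from tensorflow.keras import layers, models

N_JOINTS, LATENT, FEAT_DIM = 17, 32, 2048
POSE_DIM = N_JOINTS * 3

encoder = models.Sequential([layers.Input(shape=(POSE_DIM,)),
                             layers.Dense(128, activation="relu"),
                             layers.Dense(LATENT)])
decoder = models.Sequential([layers.Input(shape=(LATENT,)),
                             layers.Dense(128, activation="relu"),
                             layers.Dense(POSE_DIM)])
autoencoder = models.Sequential([encoder, decoder])
autoencoder.compile(optimizer="adam", loss="mse")

poses = np.random.randn(1024, POSE_DIM).astype("float32")      # stand-in for training poses
autoencoder.fit(poses, poses, epochs=2, verbose=0)

# Regressor from image features (stand-in random vectors) to the learned latent code.
regressor = models.Sequential([layers.Input(shape=(FEAT_DIM,)),
                               layers.Dense(256, activation="relu"),
                               layers.Dense(LATENT)])
regressor.compile(optimizer="adam", loss="mse")
feats = np.random.randn(1024, FEAT_DIM).astype("float32")
latent_targets = encoder.predict(poses, verbose=0)
regressor.fit(feats, latent_targets, epochs=2, verbose=0)

predicted_pose = decoder.predict(regressor.predict(feats[:1], verbose=0), verbose=0)
print(predicted_pose.shape)   # (1, POSE_DIM)
```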
          The Business Aspect of Artificial Intelligence (AI) in Data Science

How can our businesses best benefit from Artificial Intelligence (AI)? We cover the relevant technologies necessary to make the most out of AI, such as GPUs to enable deep learning networks, quantum computing, and the cloud. We explore which industries and applications benefit most from AI, such as finance and retail. We share the resources required for AI projects, including computing, data, and educational resources. Know the investment required to incorporate AI into data science projects so that you can best leverage this technology.


          Comment on How to Develop a Deep Learning Bag-of-Words Model for Predicting Movie Review Sentiment by Emmanuel
Thank you. My goal is to improve the performance of my existing 'classical' bag-of-words method using Multinomial Naive Bayes, for both sentiment analysis and document classification. It works well with document classification. However, I am looking for a model with better performance, especially for my sentiment analysis, given that the comments are in multiple languages. Would you consider/think that using a multi-channel, n-gram CNN would improve the performance, in general? Many thanks for the response :).
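For readers following the thread, a multi-channel n-gram CNN of the kind Emmanuel asks about can be sketched in a few lines of Keras. The vocabulary size, sequence length and kernel sizes below are placeholders, and whether it beats a Multinomial Naive Bayes baseline will depend on the data.

```python
# Multi-channel n-gram CNN: one convolutional branch per n-gram size over a shared embedding.
from tensorflow.keras import layers, models

VOCAB, MAXLEN, EMB = 20000, 200, 100

inp = layers.Input(shape=(MAXLEN,))
emb = layers.Embedding(VOCAB, EMB)(inp)
branches = []
for n in (2, 3, 4):                       # one channel per n-gram size
    x = layers.Conv1D(64, n, activation="relu")(emb)
    x = layers.GlobalMaxPooling1D()(x)
    branches.append(x)
merged = layers.Concatenate()(branches)
out = layers.Dense(1, activation="sigmoid")(merged)   # binary sentiment

model = models.Model(inp, out)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```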
          Comment on How to Check-Point Deep Learning Models in Keras by Mohammed
I have my own pretrained model (.ckpt and .meta files). How can I use such files to extract features from my dataset in the form of a matrix whose rows represent samples and columns represent features?
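One common way to do this with TF1-style checkpoints is sketched below; the tensor names ("input:0", "fc7/Relu:0") and file names are placeholders that must be replaced with the names from your own graph.

```python
# Restore a TF1-style checkpoint (.meta + .ckpt) and evaluate an intermediate tensor
# for every sample, producing a (num_samples, num_features) matrix.
import numpy as np
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

with tf.Session() as sess:
    saver = tf.train.import_meta_graph("model.ckpt.meta")   # placeholder file names
    saver.restore(sess, "model.ckpt")
    graph = tf.get_default_graph()
    x = graph.get_tensor_by_name("input:0")         # network input placeholder (assumed name)
    feat = graph.get_tensor_by_name("fc7/Relu:0")   # layer used as the feature vector (assumed name)

    data = np.load("my_dataset.npy")                # your samples, shaped to match the input
    features = sess.run(feat, feed_dict={x: data})  # rows = samples, columns = features
    np.save("features.npy", features)
```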
          Comment on How to Develop a Deep Learning Bag-of-Words Model for Predicting Movie Review Sentiment by Jason Brownlee
Thanks Emmanuel! Yes, I have many such tutorials, type embedding into the blog search.
          Comment on How to Setup a Python Environment for Machine Learning and Deep Learning with Anaconda by Jason Brownlee
Well done!
          Comment on How to Develop a Deep Learning Bag-of-Words Model for Predicting Movie Review Sentiment by Emmanuel
Hi Jason, your blog, and posts like this one, help me a lot with my work. I would like to see how 'Embedding' would perform compared to Bag-of-Words. Do you have a tutorial using embedding for sentiment analysis? Kind Regards, Emmanuel
          Smartphone generation big growth opportunity for us: Canon

Tokyo: More and more people, especially millennials, are taking photos with smartphones, and for the Japanese camera giant Canon this has created a huge market to tap, as these users now want something bigger and better, which the camera and imaging pioneer can readily provide.

In 2017, nearly 1.3 trillion photos were taken globally -- up from 660 billion in 2013 -- and most of the images were taken via smartphones.

"Today, more and more people are buying high-end cameras to rev up their smartphone experience. There will soon be more digital natives in the 10-50 age bracket than ever before.

"Our aim is to acquire the new generation in order to create new businesses and enhance our EOS camera ecosystem," emphasised Go Tokura, Executive Officer and Chief Executive, Image Communication Business Operations at Canon.

Addressing a select gathering at the Canon headquarters here, Tokura said the company is aiming to build a brand new imaging world where high-end smartphones are deciding the future of camera experience.

In India, over 400 million people are smartphone users, and more than 700 million people have feature phones and will eventually shift to smartphones for a better experience.

"Although the compact and entry-level camera market is shrinking owing to smartphones, professional and premium camera market is actually growing and our EOS series has been a phenomenal success," Tokura told the audience.

According to the Japan-based Camera Imaging Products Association (CIPA), the shipment number of digital cameras dropped a massive 23 per cent in July this year compared to the same period last year.

On the other hand, the professional camera market is growing.

"We have sold 90 million EOS cameras and 130 million EF lens so far. We have been building EOS cameras for the past 30 years and today, we have achieved high speed, ease of use and high-image quality for end users," informed the Canon executive.

Entering the high-end full-frame mirrorless camera market, Canon on September 5 launched the EOS R -- along with four RF lenses and four types of mount adapters -- that ensures higher image quality and enhanced usability.

The EOS R, which will be launched in India on September 21, employs the newly-developed RF Mount. A large (54 mm) mount internal diameter and short back focus allows for an enhanced communication between the lens and camera body.

The Canon EOS R has a 30.3MP Full-frame CMOS sensor and an ISO range of 100 to 40,000 (expandable up to 50-102,400).

"This is a low-light marvel. The Dual Pixel CMOS AF ensures high operability and precision. The camera is built for an advanced video/movie recording in 4K UHD," said Yoshiyuki Mizoguchi, Group Executive, ICB Products Group, Imaging Communications Business Operations, Canon.

According to a company survey, in 2017 unit sales of interchangeable-lens cameras in the global camera market reached approximately 11,400,000 units. In 2018, the sales are expected to again reach approximately 11,000,000 units.

"For the young millennials, we have launched three concept models this year: MF telephoto camera, intelligent company camera and an outdoor activity camera.

"Then there are wearable cameras, AWS DeepLens (a deep learning enabled video camera), Google Clips, Galaxy Gear 360 and camera-equipped drones where we are present. Canon has already taken a giant leap for the future," Tokura noted.

(Nishant Arora is in Tokyo at the invitation of Canon Inc. He can be contacted at nishant.a@ians.in)


          Deep learning to predict the lab-of-origin of engineered DNA
Deep learning to predict the lab-of-origin of engineered DNA
Nielsen, Alec Andrew; Voigt, Christopher A.
Genetic engineering projects are rapidly growing in scale and complexity, driven by new tools to design and construct DNA. There is increasing concern that widened access to these technologies could lead to attempts to construct cells for malicious intent, illegal drug production, or to steal intellectual property. Determining the origin of a DNA sequence is difficult and time-consuming. Here deep learning is applied to predict the lab-of-origin of a DNA sequence. A convolutional neural network was trained on the Addgene plasmid dataset that contained 42,364 engineered DNA sequences from 2,230 labs as of February 2016. The network correctly identifies the source lab 48% of the time, and 70% of the time the source lab appears in the top 10 predicted labs. Often, there is not a single “smoking gun” that affiliates a DNA sequence with a lab. Rather, it is a combination of design choices that are individually common but collectively reveal the designer.
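As a rough illustration of the kind of model described (not the authors' released architecture), a 1D convolutional network over one-hot encoded plasmid sequences might look like the sketch below; the sequence length, filter sizes and layer widths are assumptions.

```python
# Illustrative 1D CNN over one-hot encoded DNA that predicts a lab-of-origin label.
from tensorflow.keras import layers, models

SEQ_LEN, N_LABS = 8000, 2230          # plasmids truncated/padded to a fixed length

model = models.Sequential([
    layers.Input(shape=(SEQ_LEN, 4)),                     # one-hot A/C/G/T
    layers.Conv1D(128, 12, activation="relu"),            # motif-like filters
    layers.MaxPooling1D(pool_size=SEQ_LEN - 12 + 1),      # max over the whole sequence
    layers.Flatten(),
    layers.Dense(256, activation="relu"),
    layers.Dense(N_LABS, activation="softmax"),           # one class per lab
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["sparse_top_k_categorical_accuracy"])   # top-k accuracy, in the spirit of the top-10 metric
model.summary()
```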
          What's new on arXiv
Out-of-Distribution Detection Using an Ensemble of Self Supervised Leave-out Classifiers: As deep learning methods form a critical part in commercially …

Continue reading


          RELOCATION OFFERED - Computer Vision Engineer
IL-Chicago (position located in St. Louis, MO). If you are a RELOCATION OFFERED - Computer Vision Engineer with Deep Learning and Object Recognition experience, please read on! We are a start-up that enhances digital media experiences through proprietary computer vision technology. We are at the forefront of computer vision technology and currently in explosive growth mode! We need YOU to continue developing
          Offer - Pandas deep learning Pune - INDIA
Pandas is the Python Data Analysis Library; pandas is an open source, BSD-licensed library providing high-performance, easy-to-use data structures and data analysis tools.
For more:
Address: 3rd Floor, Plot No 7, Common Wealth Society, Opposite Aundh Telephone Exchange, Landmark: Gaikward Petrol Pump, Aundh, Pune
Phone: 8600998107
Email: contact@technogeekscs.co.in
Twitter: https://twitter.com/InfoTechnogeeks
Facebook: https://www.facebook.com/TechnogeeksConsulting/
LinkedIn: https://www.linkedin.com/in/paras-arora01/
Website: https://technogeekscs.com/
          Deep Learning Super Sampling may be the real revolution of GeForce RTX

Deep Learning Super Sampling is one of the most promising technologies in Nvidia's new Turing graphics architecture, promising high performance at a very high level of graphical quality. Deep Learning Super Sampling comes to more games: Deep Learning Super Sampling lets GeForce RTX graphics cards run games …

          Mycronic to set up deep learning centre in California
Together with NuFlare Technology and D2S – with support from NVIDIA – Mycronic announces that it is establishing a Centre for Deep Learning in Electronics Manufacturing (CDLe) in San Jose, California.
          Cycles - Interesting Numbers
Lottery Discussion forum
Reply #47
Thanks Dr San, Only gave it a cursory glance over but very very interested in this.

One thing I did note immediately is that they have used traditional input structures of the actual numbers, as well as some variables that may or may not have an impact on the evaluation process of the deep learning AI.

And this is the paramount point, the AI can only try to ID what is put in front of it as the evaluation criteria.

My current evaluation criteria is not the numbers or factors and variables
          Internship: Image Processing with Neural Networks and Deep Learning (M/F)
As the first fully integrated European defence company, MBDA is a world industrial leader and a global player in the field of missile systems. MBDA provides innovative solutions that meet the present and future operational needs of its customers, the armed forces. A fundamentally multicultural company with a very strong culture of innovation and technical expertise, MBDA today offers the prospect of exciting career paths. Every day, nearly 10,000 employees devote themselves to the design, development and production of our systems and are ready to share their experience with you. Numerous internships and apprenticeships are offered with the aim of training young people and easing their entry into MBDA or the wider job market. Convinced that diversity is a strength, MBDA is committed to hiring and retaining people with disabilities and ensures professional equality between men and women.

If you are passionate about aeronautics and cutting-edge technology, want to build your skills and take on challenges, and wish to join dynamic, friendly and innovative teams...

INTERNSHIP: IMAGE PROCESSING WITH NEURAL NETWORKS AND DEEP LEARNING (M/F)

YOUR WORK ENVIRONMENT: MBDA, at the heart of our defence... Join our group, the European leader in the design, manufacture and marketing of missiles and weapon systems that meet the present and future needs of European and allied armed forces! Alongside our 10,000 employees, come take part in our projects, whether in operational service or in development, in a multicultural environment that favours innovation and technical excellence! MBDA is committed to supporting you: an onboarding programme, a personalised training plan, support for your career development... Come share and develop your skills with our 3,000 employees at our Le Plessis-Robinson site.

Within the Technical Directorate, you will join the "Image Processing" department. Your internship addresses the problem of recognising and identifying objects in images or video sequences. It will focus on the most recent techniques based on neural networks and deep learning. The objective of your internship is to carry out a review of image processing techniques based on learning methods using neural networks or deep learning, and then to evaluate one or more methods to improve the performance of MBDA's existing algorithms.

YOUR TASKS: Drawing on your skills, you will: Produce a state of the art of techniques for recognising and identifying objects in images using neural networks or deep learning. Implement and quantitatively evaluate, on the databases available in-house, the most relevant techniques identified, the objective being to adapt existing code to improve the performance of deep-learning algorithms. Document the impact of these techniques on performance and the constraints they impose.
          Offer - Deep Learning course with Pandas in Pune - INDIA      Cache   Translate Page      
Deep learning (also known as deep structured learning or hierarchical learning) is part of a broader family of machine learning methods based on learning data representations, as opposed to task-specific algorithms.

For more:
Address: 3rd Floor, Plot No 7, Common Wealth Society, Opposite Aundh Telephone Exchange, Landmark: Gaikward Petrol Pump, Aundh, Pune
Phone: 8600998107
Email: contact@technogeekscs.co.in
Twitter: https://twitter.com/InfoTechnogeeks
Facebook: https://www.facebook.com/TechnogeeksConsulting/
LinkedIn: https://www.linkedin.com/in/paras-arora01/
Website: https://technogeekscs.com/
          Zoom: SSD-based Vector Search for Optimizing Accuracy, Latency and Memory. (arXiv:1809.04067v1 [cs.CV])      Cache   Translate Page      

Authors: Minjia Zhang, Yuxiong He

With the advancement of machine learning and deep learning, vector search becomes instrumental to many information retrieval systems, to search and find best matches to user queries based on their semantic similarities. These online services require the search architecture to be both effective with high accuracy and efficient with low latency and memory footprint, which existing work fails to offer. We develop Zoom, a new vector search solution that collaboratively optimizes accuracy, latency and memory based on a multiview approach. (1) A "preview" step generates a small set of good candidates, leveraging compressed vectors in memory for reduced footprint and fast lookup. (2) A "fullview" step on SSDs reranks those candidates with their full-length vectors, achieving high accuracy. Our evaluation shows that Zoom achieves an order-of-magnitude improvement in efficiency while attaining equal or higher accuracy, compared with the state-of-the-art.
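
To make the preview/fullview idea concrete, here is a minimal NumPy sketch of a two-stage search of the kind the abstract describes; the 8-bit quantization, candidate count, and random data are illustrative assumptions, not Zoom's actual implementation.

```python
import numpy as np

# toy corpus of d-dimensional vectors
n, d, k = 100_000, 128, 10
full_vectors = np.random.randn(n, d).astype(np.float32)   # "fullview" data (kept on SSD in Zoom)
query = np.random.randn(d).astype(np.float32)

# "preview": a small in-memory representation, here a crude 8-bit quantization
scale = np.abs(full_vectors).max()
compressed = np.round(full_vectors / scale * 127).astype(np.int8)

# step 1: score all compressed vectors cheaply and keep a small candidate set
approx_scores = compressed.astype(np.float32) @ (query / scale * 127)
candidates = np.argpartition(-approx_scores, 100)[:100]

# step 2: re-rank only the candidates with their full-precision vectors
exact_scores = full_vectors[candidates] @ query
top_k = candidates[np.argsort(-exact_scores)[:k]]
print(top_k)
```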


          Parallel Separable 3D Convolution for Video and Volumetric Data Understanding. (arXiv:1809.04096v1 [cs.CV])      Cache   Translate Page      

Authors: Felix Gonda, Donglai Wei, Toufiq Parag, Hanspeter Pfister

For video and volumetric data understanding, 3D convolution layers are widely used in deep learning, but at the cost of increased computation and training time. Recent works seek to replace the 3D convolution layer with convolution blocks, e.g. structured combinations of 2D and 1D convolution layers. In this paper, we propose a novel convolution block, Parallel Separable 3D Convolution (PmSCn), which applies m parallel streams of n 2D and one 1D convolution layers along different dimensions. We first mathematically justify the need for parallel streams (Pm) to replace a single 3D convolution layer through tensor decomposition. Then we jointly replace consecutive 3D convolution layers, common in modern network architectures, with the multiple 2D convolution layers (Cn). Lastly, we empirically show that PmSCn is applicable to different backbone architectures, such as ResNet, DenseNet, and UNet, for different applications, such as video action recognition, MRI brain segmentation, and electron microscopy segmentation. In all three applications, we replace the 3D convolution layers in state-of-the-art models with PmSCn and achieve around 14% improvement in test performance and 40% reduction in model size on average.
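
As a rough illustration of the idea (not the authors' reference code), the following PyTorch sketch contrasts a plain 3D convolution with a block of two parallel streams, each made of a "2D" convolution over one pair of axes followed by a "1D" convolution along the remaining axis; the exact stream layout in PmSCn may differ.

```python
import torch
import torch.nn as nn

# a single 3D convolution layer ...
conv3d = nn.Conv3d(16, 32, kernel_size=3, padding=1)

# ... versus one parallel-separable block: two streams of 2D + 1D convolutions
class ParallelSeparable3d(nn.Module):
    def __init__(self, cin, cout):
        super().__init__()
        self.stream_a = nn.Sequential(
            nn.Conv3d(cin, cout, kernel_size=(1, 3, 3), padding=(0, 1, 1)),  # H x W plane
            nn.Conv3d(cout, cout, kernel_size=(3, 1, 1), padding=(1, 0, 0)), # depth/time axis
        )
        self.stream_b = nn.Sequential(
            nn.Conv3d(cin, cout, kernel_size=(3, 1, 3), padding=(1, 0, 1)),  # D x W plane
            nn.Conv3d(cout, cout, kernel_size=(1, 3, 1), padding=(0, 1, 0)), # height axis
        )
    def forward(self, x):
        return self.stream_a(x) + self.stream_b(x)   # merge the parallel streams

x = torch.randn(2, 16, 8, 32, 32)    # (batch, channels, depth, height, width)
print(conv3d(x).shape, ParallelSeparable3d(16, 32)(x).shape)
```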


          Nvidia: the Turing "coach" speeds up AI training
The company introduced the Tesla T4 GPUs based on the Turing architecture. Compared with the previous Pascal generation, the cards improve deep learning inference by as much as five times.
          Open Sourcing TonY: Native Support of TensorFlow on Hadoop      Cache   Translate Page      
Co-authors: Jonathan Hung, Keqiu Hu, and Anthony Hsu

LinkedIn heavily relies on artificial intelligence to deliver content and create economic opportunities for its 575+ million members. Following recent rapid advances of deep learning technologies, our AI engineers have started adopting deep neural networks in LinkedIn’s relevance-driven products, including feeds and smart-replies. Many of these use cases are built on TensorFlow, a popular deep learning framework written by Google. In the beginning, our internal TensorFlow users ran the framework on small and unmanaged “bare metal” […]
          deepmachine 0.6      Cache   Translate Page      
Deep Learning Framework
          How ZipRecruiter Is Helping People Adjust to the Future of Work      Cache   Translate Page      
The company is investing in data research and deep learning to power the economy of tomorrow.
          [Essay] I need help for grammar check in my essay      Cache   Translate Page      
the spread of artificial intelligence reveals more and more new possibilities. In a few decades ago, the deep learning was the game for only the big Tech Companies, but for today it has been appearing on lots of new places for example inside on a...
          [Resume] Grammar check /spelling my resume      Cache   Translate Page      
the spread of artificial intelligence reveals more and more new possibilities. In a few decades ago, the deep learning was the game for only the big Tech Companies, but for today it has been appearing on lots of new places for example inside on a...
          AI Bias Could Kill Liberalism, But Might Keep Capitalism Alive      Cache   Translate Page      
In addition, the AI algorithms' opaque 'black box' characteristics (i.e. the complexity of these algorithms, especially when it comes to Deep Learning ...
          Be future ready! 9 tips that’ll come handy if you fear losing your job      Cache   Translate Page      
The advent of machine learning, artificial intelligence and deep learning has led to this situation. So, it's important to be financially prepared for a future ...
          Machine learning and AI are changing the world – here’s how to do it better      Cache   Translate Page      
Getting even more practical, we have Isabel Sargent leading a trio of speakers from the ONS talking about their use of deep learning in landscape ...
          “Deep meta reinforcement learning will be the future of AI where we will be so close to achieving …      Cache   Translate Page      
Mckinsey report predicts that artificial intelligence techniques including deep learning and reinforcement learning have the potential to create between ...
          Atos signs strategic partnership with Technical University of Denmark to deliver its latest Atos …      Cache   Translate Page      
... quantum computing should also foster developments in deep learning, algorithms and Artificial Intelligence for domains as varied as pharmaceutical ...
          Klas Telecom unveils deep learning computer vision solution for trains      Cache   Translate Page      
Engineering and design company Klas Telecom is to launch a new deep learning computer vision solution for the railway industry. The solution is due ...
          Deep Learning System Market Trends and Forecast to 2023- Industry Analysis by Manufacturers …      Cache   Translate Page      
Deep Learning System Market is a professional and in-depth study on the current state of the global Deep Learning System industry with a focus on ...
          Global Deep Learning Software Market 2018 Product Scope, Key Players,Trends, Growth Rate …      Cache   Translate Page      
The Deep Learning Software Market report provides a clear view of the market along with the growth rate and the future market prospect. Further the ...
          Intel, Marvell, Qualcomm, and others will support Facebook’s Glow compiler      Cache   Translate Page      

At Facebook’s 2018 @Scale conference in San Jose, California today, the company announced broad industry backing for Glow, its machine learning compiler designed to accelerate the performance of deep learning frameworks. Cadence, Esperanto, Intel, Marvell, and Qualcomm committed to supporting Glow in future silicon products. “We created Glow, an open source framework, to be community […]


          Microsoft acquires AI startup Lobe to help people make deep learning models without code      Cache   Translate Page      

Microsoft today announced it has acquired Lobe, creator of a platform for building custom deep learning models using a visual interface that requires no code or technical understanding of AI. Lobe, a platform that can understand hand gestures, read handwriting, and hear music, will continue to develop as a standalone service, according to the company’s […]


          Nvidia unveils Tesla T4 chip for faster AI inference in datacenters      Cache   Translate Page      

Nvidia today debuted the Tesla T4 graphics processing unit (GPU) chip to speed up inference from deep learning systems in datacenters. The T4 GPU is packed with 2,560 CUDA cores and 320 Tensor cores with the power to process queries nearly 40 times faster than a CPU. Inference is the process of deploying trained AI models […]


           Nvidia RTX's DLSS Roster Gets Nine New Games       Cache   Translate Page      
Nvidia revealed nine more games that will support the Deep Learning Super Sampling (DLSS) feature arriving with its Turing GPUs.
          R Deep Learning Essentials      Cache   Translate Page      

Implement neural network models in R 3.5 using TensorFlow, Keras, and MXNet

Key Features
  • Use R 3.5 for building deep learning models for computer vision and text
  • Apply deep learning techniques in the cloud for large-scale processing
  • Build, train, and optimize neural network models on a range of datasets

Book Description
Deep learning is a powerful subset of machine learning that is very successful in domains such as computer vision and natural language processing (NLP). This second edition of R Deep Learning Essentials will open the gates for you to enter the world of neural networks by building powerful deep learning models using the R ecosystem. This book will introduce you to the basic principles of deep learning and teach you to build a neural network model from scratch. As you make your way through the book, you will explore deep learning libraries, such as Keras, MXNet, and TensorFlow, and create interesting deep learning models for a variety of tasks and problems, including structured data, computer vision, text data, anomaly detection, and recommendation systems. You'll cover advanced topics, such as generative adversarial networks (GANs), transfer learning, and large-scale deep learning in the cloud. In the concluding chapters, you will learn about the theoretical concepts of deep learning projects, such as model optimization, overfitting, and data augmentation, together with other advanced topics. By the end of this book, you will be fully prepared and able to implement deep learning concepts in your research work or projects.

What you will learn
  • Build shallow neural network prediction models
  • Prevent models from overfitting the data to improve generalizability
  • Explore techniques for finding the best hyperparameters for deep learning models
  • Create NLP models using Keras and TensorFlow in R
  • Use deep learning for computer vision tasks
  • Implement deep learning tasks, such as NLP, recommendation systems, and autoencoders

Who this book is for
This second edition of R Deep Learning Essentials is for aspiring data scientists, data analysts, machine learning developers, and deep learning enthusiasts who are well versed in machine learning concepts and are looking to explore the deep learning paradigm using R. A fundamental understanding of the R language is necessary to get the most out of this book.

Downloading the example code for this book
You can download the example code files for all Packt books you have purchased from your account at http://www.PacktPub.com. If you purchased this book elsewhere, you can visit http://www.PacktPub.com/support and register to have the files e-mailed directly to you.


          Global Deep Learning Market to 2025 – ResearchAndMarkets.com      Cache   Translate Page      
DUBLIN--(BUSINESS WIRE)--Sep 13, 2018--The "2018 Future Of Global Deep Learning Market to 2025 - Growth Opportunities, Competition, And ...
          Best of arXiv.org for AI, Machine Learning, and Deep Learning – August 2018      Cache   Translate Page      
In this recurring monthly feature, we filter recent research papers appearing on the arXiv.org preprint server for compelling subjects relating to AI, ...
          Cisco Readies New Hardware Optimized for Deep Learning      Cache   Translate Page      
A new server designed to support deep learning projects is coming from Cisco Systems Inc. Cisco defines deep learning as "a compute-intensive form ...
          What Is Deep Learning Toolbox?      Cache   Translate Page      
Create, analyze, and train deep learning networks using Deep Learning Toolbox.
          An AI Deep Learning System That Smoothly Removes Visual Noise From Digital Images by Sight      Cache   Translate Page      

Two Minute Papers took a comprehensive look at an amazing collaborative project between NVIDIA, Aalto University, and MIT that solves the problem of grainy digital photos. They developed a really intelligent AI system that uses deep learning to smoothly remove visual noise from digital images by looking at pixelated photos. Recent deep learning work in the...

The post An AI Deep Learning System That Smoothly Removes Visual Noise From Digital Images by Sight appeared first on Laughing Squid.


          “And we’re back for Season 6” Paris Machine Learning Newsletter, September 2018 (in French)      Cache   Translate Page      

“And we’re back for Season 6” the Paris Machine Learning Meetup Newsletter, September 2018

Contents
  1. The editorial from Franck, Jacqueline and Igor, “And we’re back for Season 6”
  2. Things We Really Like!
  3. Last season.

1. The editorial from Franck, Jacqueline and Igor, “And we’re back for Season 6”

Jacqueline Forien joins us as a meetup organiser.

Season 5 amounted to 8 special-edition and 9 regular meetups and more than 7,200 members, which makes this one of the largest meetups in the world on this topic. We saw plenty happen last year, on the policy front as well as at the meetups; we will come back to that later in another newsletter. What you should know is that NIPS, the reference conference in AI, sold out its tickets in 11 minutes and 38 seconds. From experience, that is faster than ticket sales for BTS when they come to Bercy in October. What is certain is that these experiences, these gatherings around Machine Learning, must live on, and that is why all the presentations and videos from our meetups are in our archives and are listed further down in this newsletter.

This past season could not have happened without the following companies and associations:
Meritis, Xebia, Deep Algo, Artefact, Cap Digital, Invivoo, Fortia, Zenika, Urban Linker, LightOn, DotAI, SwissLife, Dataiku

A big thank you for their involvement in a dynamic AI community here in Paris and in Europe.

Our first meetup will be held in coordination with Women in Machine Learning and Data Science; to register, it is here: #Hors-série — Paris WiMLDS & Paris ML Meetup

The dates of our meetups for season 6:
  • Special edition #1 19/09
  • #2 10/10
  • #3 14/11
  • #4 12/12
  • #5 09/01
  • #6 13/02
  • #7 13/03
  • #8 10/04
  • #9 15/05
  • #10 12/06

If you would like to host or sponsor us, feel free to contact us through this form or via our website.

You can follow us on Twitter @ParisMLgroup.



2. Things We Really Like!

Chloé Azencott, one of the meetup’s speakers, has just published a book on Machine Learning in French. It is Introduction au Machine Learning, and it is full of code examples.

Conferences and meetups we like!

++++Important: France is AI conference: 3rd edition of our annual conference on 17 and 18 October 2018 at Station F.++++ The Eventbrite registration link with the promo code MEETUPS100 offers 100 free places. Beyond the first 100, places can be obtained at a 50% discount with the code MEETUPS50.

The brand-new meetups:

Those starting up again:

3. Last season


Last season (Season 5) had 8 special-edition and 9 regular meetups, for a total of 95 meetups over 5 seasons. Here are the links to the presentations and videos given at these meetups:

Regular meetups

Special editions

That's all for today!

Franck, Jacqueline, and Igor.

PS: Don't forget that you can also follow the Paris Machine Learning Meetup on Twitter, LinkedIn, Facebook and Google+.

You can browse the archives of previous meetups.

We are also working on a new website: MLParis.org

The Paris Machine Learning Meetup has 7,200 members, which makes it one of the largest in the world, with more than 95 gatherings already held and 10 dates scheduled for this season 6.
  • If you are a student, postdoc or researcher, the meetup is a great platform for talking about your work before presenting it at the NIPS/ICML/ICLR/COLT/UAI/ACL/KDD conferences;
  • For startups, it is a good way to talk about your projects or to recruit the future superstars of your AI/Data Science team;
  • And for everyone, it is a simple way to stay informed about the latest developments in the field and to have unique exchanges with the speakers and other participants.

As always, first come, first served. The number of seats in the venues is limited; beyond their capacity, we will not be able to let you in. You can track how full we are by following #MLParis on Twitter.



Join the CompressiveSensing subreddit or the Google+ Community or the Facebook page and post there!

          Microsoft says it acquired Lobe, a San Francisco-based startup that lets users build machine learning models using a visual interface with no coding required (Khari Johnson/VentureBeat)      Cache   Translate Page      

Khari Johnson / VentureBeat:
Microsoft says it acquired Lobe, a San Francisco-based startup that lets users build machine learning models using a visual interface with no coding required  —  Microsoft today announced it has acquired Lobe, creator of a platform for building custom deep learning models using a visual interface …


          Embedded ML Developer - Erwin Hymer Group North America - Virginia Beach, VA      Cache   Translate Page      
NVIDIA VisionWorks, OpenCV. Game Development, Accelerated Computing, Machine Learning/Deep Learning, Virtual Reality, Professional Visualization, Autonomous...
From Indeed - Fri, 22 Jun 2018 17:57:58 GMT - View all Virginia Beach, VA jobs
          Data Scientist - ZF - Northville, MI      Cache   Translate Page      
Deep Learning, NVIDIA, NLP). You will run cost-effective data dive-ins on complex high volume data from a variety of sources and develop data solutions in close...
From ZF - Thu, 21 Jun 2018 21:14:15 GMT - View all Northville, MI jobs
          NVIDIA expands the list of games with DLSS support

At the unveiling of the GeForce RTX 20 series, NVIDIA also published a few details on Deep Learning Super Sampling, or DLSS for short. DLSS is a technique for improving image quality without accepting the drawbacks of temporal anti-aliasing. To do this, NVIDIA uses a deep learning network and the Tensor cores of the Turing GPUs. Further details can be found in the news item already mentioned.

NVIDIA has now published a list of additional games that will support DLSS.

  • Darksiders III
  • Deliver Us The Moon: Fortuna
  • Fear the Wolves
  • Hellblade: Senua's Sacrifice
  • KINETIK
  • Outpost Zero
  • Overkill's The Walking Dead
  • SCUM 
  • Stormdivers

Support for the following games was already known:


          What can linguistics and deep learning contribute to each other?. (arXiv:1809.04179v1 [cs.CL])      Cache   Translate Page      

Authors: Tal Linzen

Joe Pater's target article calls for greater interaction between neural network research and linguistics. I expand on this call and show how such interaction can benefit both fields. Linguists can contribute to research on neural networks for language technologies by clearly delineating the linguistic capabilities that can be expected of such systems, and by constructing controlled experimental paradigms that can determine whether those desiderata have been met. In the other direction, neural networks can benefit the scientific study of language by providing infrastructure for modeling human sentence processing and for evaluating the necessity of particular innate constraints on language acquisition.


          Deep Micro-Dictionary Learning and Coding Network. (arXiv:1809.04185v1 [cs.CV])      Cache   Translate Page      

Authors: Hao Tang, Heng Wei, Wei Xiao, Wei Wang, Dan Xu, Yan Yan, Nicu Sebe

In this paper, we propose a novel Deep Micro-Dictionary Learning and Coding Network (DDLCN). DDLCN has most of the standard deep learning layers (pooling, fully connected, input/output, etc.), but the main difference is that the fundamental convolutional layers are replaced by novel compound dictionary learning and coding layers. The dictionary learning layer learns an over-complete dictionary for the input training data. At the deep coding layer, a locality constraint is added to guarantee that the activated dictionary bases are close to each other. Next, the activated dictionary atoms are assembled together and passed to the next compound dictionary learning and coding layers. In this way, the activated atoms in the first layer can be represented by the deeper atoms in the second dictionary. Intuitively, the second dictionary is designed to learn the fine-grained components which are shared among the input dictionary atoms. In this way, a more informative and discriminative low-level representation of the dictionary atoms can be obtained. We empirically compare the proposed DDLCN with several dictionary learning methods and deep learning architectures. The experimental results on four popular benchmark datasets demonstrate that the proposed DDLCN achieves competitive results compared with state-of-the-art approaches.
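
A shallow, hedged analogue of one "dictionary learning and coding" step can be sketched with scikit-learn; this only mirrors the idea of learning an over-complete dictionary and sparse codes, not the deep, layered formulation in the paper, and the patch data below is made up.

```python
import numpy as np
from sklearn.decomposition import DictionaryLearning

# learn an over-complete dictionary for small image patches and sparse-code them
patches = np.random.rand(200, 64)            # e.g. 200 flattened 8x8 patches
dl = DictionaryLearning(n_components=128,    # over-complete: 128 atoms for 64-dim data
                        transform_algorithm="lasso_lars",
                        transform_alpha=0.1,
                        random_state=0)
codes = dl.fit_transform(patches)            # sparse activations over the atoms
print(dl.components_.shape, codes.shape)     # (128, 64) dictionary, (200, 128) codes
```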


          Convolutional Neural Network Approach for EEG-based Emotion Recognition using Brain Connectivity and its Spatial Information. (arXiv:1809.04208v1 [cs.HC])      Cache   Translate Page      

Authors: Seong-Eun Moon, Soobeom Jang, Jong-Seok Lee

Emotion recognition based on electroencephalography (EEG) has received attention as a way to implement human-centric services. However, there is still much room for improvement, particularly in terms of the recognition accuracy. In this paper, we propose a novel deep learning approach using convolutional neural networks (CNNs) for EEG-based emotion recognition. In particular, we employ brain connectivity features that have not been used with deep learning models in previous studies, which can account for synchronous activations of different brain regions. In addition, we develop a method to effectively capture asymmetric brain activity patterns that are important for emotion recognition. Experimental results confirm the effectiveness of our approach.
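
For illustration only, a connectivity-plus-CNN pipeline can be sketched as follows; the correlation-based connectivity, channel count, and tiny network are placeholder assumptions standing in for the features and architecture actually used in the paper.

```python
import numpy as np
import torch
import torch.nn as nn

# EEG trial: C channels x T samples; connectivity = channel-by-channel correlation
C, T = 32, 512
eeg = np.random.randn(C, T)
connectivity = np.corrcoef(eeg)               # (32, 32) "image" of brain connectivity

# feed the connectivity matrix to a small CNN emotion classifier (sketch only)
cnn = nn.Sequential(
    nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
    nn.Flatten(), nn.Linear(16, 3),           # e.g. 3 emotion classes
)
x = torch.tensor(connectivity, dtype=torch.float32).unsqueeze(0).unsqueeze(0)
print(cnn(x).shape)                           # torch.Size([1, 3])
```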


          Automatic, Personalized, and Flexible Playlist Generation using Reinforcement Learning. (arXiv:1809.04214v1 [cs.CL])      Cache   Translate Page      

Authors: Shun-Yao Shih, Heng-Yu Chi

Songs can be well arranged by professional music curators to form a riveting playlist that creates engaging listening experiences. However, it is time-consuming for curators to rearrange these playlists in a timely manner to fit future trends. By exploiting the techniques of deep learning and reinforcement learning, in this paper, we consider music playlist generation as a language modeling problem and solve it by the proposed attention language model with policy gradient. We develop a systematic and interactive approach so that the resulting playlists can be tuned flexibly according to user preferences. Considering a playlist as a sequence of words, we first train our attention RNN language model on baseline recommended playlists. By optimizing suitable imposed reward functions, the model is thus refined for corresponding preferences. The experimental results demonstrate that our approach not only generates coherent playlists automatically but is also able to flexibly recommend personalized playlists for diversity, novelty and freshness.
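
The core of "language model plus policy gradient" can be sketched in a few lines of PyTorch; the GRU model, the vocabulary of track ids, the playlist length, and the constant reward below are placeholders, not the attention model or reward functions from the paper.

```python
import torch
import torch.nn as nn

vocab_size, embed_dim, hidden_dim = 5000, 64, 128   # hypothetical track vocabulary

class PlaylistLM(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.rnn = nn.GRU(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)
    def forward(self, tokens, state=None):
        h, state = self.rnn(self.embed(tokens), state)
        return self.out(h), state

model = PlaylistLM()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)

# sample a playlist, then apply a REINFORCE update with a scalar reward
tokens = torch.zeros(1, 1, dtype=torch.long)   # hypothetical "start" token id 0
log_probs, state = [], None
for _ in range(20):                            # 20-track playlist
    logits, state = model(tokens[:, -1:], state)
    dist = torch.distributions.Categorical(logits=logits[:, -1])
    nxt = dist.sample()
    log_probs.append(dist.log_prob(nxt))
    tokens = torch.cat([tokens, nxt.unsqueeze(1)], dim=1)

reward = 1.0  # placeholder: e.g. a diversity/novelty score of the sampled playlist
loss = -(reward * torch.stack(log_probs).sum())
opt.zero_grad(); loss.backward(); opt.step()
```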


          Deep Co-investment Network Learning for Financial Assets. (arXiv:1809.04227v1 [cs.CE])      Cache   Translate Page      

Authors: Yue Wang, Chenwei Zhang, Shen Wang, Philip S. Yu, Lu Bai, Lixin Cui

Most recent works model the market structure of the stock market as a correlation network of the stocks. They apply pre-defined patterns to extract correlation information from the time series of stocks. Without considering the influences of the evolving market structure to the market index trends, these methods hardly obtain the market structure models which are compatible with the market principles. Advancements in deep learning have shown their incredible modeling capacity on various finance-related tasks. However, the learned inner parameters, which capture the essence of the finance time series, are not further exploited about their representation in the financial fields. In this work, we model the financial market structure as a deep co-investment network and propose a Deep Co-investment Network Learning (DeepCNL) method. DeepCNL automatically learns deep co-investment patterns between any pairwise stocks, where the rise-fall trends of the market index are used for distance supervision. The learned inner parameters of the trained DeepCNL, which encodes the temporal dynamics of deep co-investment patterns, are used to build the co-investment network between the stocks as the investment structure of the corresponding market. We verify the effectiveness of DeepCNL on the real-world stock data and compare it with the existing methods on several financial tasks. The experimental results show that DeepCNL not only has the ability to better reflect the stock market structure that is consistent with widely-acknowledged financial principles but also is more capable to approximate the investment activities which lead to the stock performance reported in the real news or research reports than other alternatives.


          EEG-based video identification using graph signal modeling and graph convolutional neural network. (arXiv:1809.04229v1 [eess.SP])      Cache   Translate Page      

Authors: Soobeom Jang, Seong-Eun Moon, Jong-Seok Lee

This paper proposes a novel graph signal-based deep learning method for electroencephalography (EEG) and its application to EEG-based video identification. We present new methods to effectively represent EEG data as signals on graphs, and learn them using graph convolutional neural networks. Experimental results for video identification using EEG responses obtained while watching videos show the effectiveness of the proposed approach in comparison to existing methods. Effective schemes for graph signal representation of EEG are also discussed.
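
A minimal sketch of one graph-convolution step over EEG channels, assuming a correlation-based adjacency and Kipf-and-Welling-style normalization; the paper's actual graph construction and network are richer than this.

```python
import numpy as np

# toy EEG graph signal: C channels, F per-channel features (e.g. band powers)
C, F = 32, 5
X = np.random.randn(C, F)

# adjacency from functional connectivity (placeholder assumption)
A = np.abs(np.corrcoef(np.random.randn(C, 200)))
A_hat = A + np.eye(C)                               # add self-loops
D_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt            # symmetric normalization

# one graph-convolution layer: ReLU(A_norm X W)
W = np.random.randn(F, 16) * 0.1
H = np.maximum(A_norm @ X @ W, 0)
print(H.shape)  # (32, 16)
```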


          Joint Segmentation and Uncertainty Visualization of Retinal Layers in Optical Coherence Tomography Images using Bayesian Deep Learning. (arXiv:1809.04282v1 [cs.CV])      Cache   Translate Page      

Authors: Suman Sedai, Bhavna Antony, Dwarikanath Mahapatra, Rahil Garnavi

Optical coherence tomography (OCT) is commonly used to analyze retinal layers for assessment of ocular diseases. In this paper, we propose a method for retinal layer segmentation and quantification of uncertainty based on Bayesian deep learning. Our method not only performs end-to-end segmentation of retinal layers, but also gives the pixel-wise uncertainty measure of the segmentation output. The generated uncertainty map can be used to identify erroneously segmented image regions, which is useful in downstream analysis. We have validated our method on a dataset of 1487 images obtained from 15 subjects (OCT volumes) and compared it against state-of-the-art segmentation algorithms that do not take uncertainty into account. The proposed uncertainty-based segmentation method results in comparable or improved performance, and most importantly is more robust against noise.
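
One common way to obtain such pixel-wise uncertainty is Monte Carlo dropout, sketched below on a toy model; this illustrates the general Bayesian-deep-learning recipe rather than the authors' exact network or approximation.

```python
import torch
import torch.nn as nn

# tiny segmentation head with dropout; stands in for the full OCT model
model = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Dropout2d(p=0.5),
    nn.Conv2d(16, 4, 1),             # 4 hypothetical retinal-layer classes
)

def mc_dropout_predict(model, x, n_samples=20):
    model.train()                    # keep dropout active at test time
    with torch.no_grad():
        probs = torch.stack([torch.softmax(model(x), dim=1) for _ in range(n_samples)])
    mean = probs.mean(dim=0)                      # per-pixel class probabilities
    uncertainty = probs.var(dim=0).sum(dim=1)     # simple per-pixel uncertainty map
    return mean, uncertainty

x = torch.randn(1, 1, 64, 64)        # dummy B-scan patch
mean, unc = mc_dropout_predict(model, x)
print(mean.shape, unc.shape)
```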


          Deep Learning Based Multi-modal Addressee Recognition in Visual Scenes with Utterances. (arXiv:1809.04288v1 [cs.AI])      Cache   Translate Page      

Authors: Thao Minh Le, Nobuyuki Shimizu, Takashi Miyazaki, Koichi Shinoda

With the widespread use of intelligent systems, such as smart speakers, addressee recognition has become a concern in human-computer interaction, as more and more people expect such systems to understand complicated social scenes, including those outdoors, in cafeterias, and hospitals. Because previous studies typically focused only on pre-specified tasks with limited conversational situations such as controlling smart homes, we created a mock dataset called Addressee Recognition in Visual Scenes with Utterances (ARVSU) that contains a vast body of image variations in visual scenes with an annotated utterance and a corresponding addressee for each scenario. We also propose a multi-modal deep-learning-based model that takes different human cues, specifically eye gazes and transcripts of an utterance corpus, into account to predict the conversational addressee from a specific speaker's view in various real-life conversational scenarios. To the best of our knowledge, we are the first to introduce an end-to-end deep learning model that combines vision and transcripts of utterance for addressee recognition. As a result, our study suggests that future addressee recognition can reach the ability to understand human intention in many social situations previously unexplored, and our modality dataset is a first step in promoting research in this field.


          Deep Learning in Information Security. (arXiv:1809.04332v1 [cs.CR])      Cache   Translate Page      

Authors: Stefan Thaler, Vlado Menkovski, Milan Petkovic

Machine learning has a long tradition of helping to solve complex information security problems that are difficult to solve manually. Machine learning techniques learn models from data representations to solve a task. These data representations are hand-crafted by domain experts. Deep Learning is a sub-field of machine learning, which uses models that are composed of multiple layers. Consequently, representations that are used to solve a task are learned from the data instead of being manually designed.

In this survey, we study the use of DL techniques within the domain of information security. We systematically reviewed 77 papers and presented them from a data-centric perspective. This data-centric perspective reflects one of the most crucial advantages of DL techniques -- domain independence. If DL-methods succeed to solve problems on a data type in one domain, they most likely will also succeed on similar data from another domain. Other advantages of DL methods are unrivaled scalability and efficiency, both regarding the number of examples that can be analyzed as well as with respect of dimensionality of the input data. DL methods generally are capable of achieving high-performance and generalize well.

However, information security is a domain with unique requirements and challenges. Based on an analysis of our reviewed papers, we point out shortcomings of DL-methods to those requirements and discuss further research opportunities.


          Deep learning for time series classification: a review. (arXiv:1809.04356v1 [cs.LG])      Cache   Translate Page      

Authors: Hassan Ismail Fawaz, Germain Forestier, Jonathan Weber, Lhassane Idoumghar, Pierre-Alain Muller

Time Series Classification (TSC) is an important and challenging problem in data mining. With the increase of time series data availability, hundreds of TSC algorithms have been proposed. Among these methods, only a few have considered Deep Neural Networks (DNNs) to perform this task. This is surprising as deep learning has seen very successful applications in the last years. DNNs have indeed revolutionized the field of computer vision especially with the advent of novel deeper architectures such as Residual and Convolutional Neural Networks. Apart from images, sequential data such as text and audio can also be processed with DNNs to reach state of the art performance for document classification and speech recognition. In this article, we study the current state of the art performance of deep learning algorithms for TSC by presenting an empirical study of the most recent DNN architectures for TSC. We give an overview of the most successful deep learning applications in various time series domains under a unified taxonomy of DNNs for TSC. We also provide an open source deep learning framework to the TSC community where we implemented each of the compared approaches and evaluated them on a univariate TSC benchmark (the UCR archive) and 12 multivariate time series datasets. By training 8,730 deep learning models on 97 time series datasets, we propose the most exhaustive study of DNNs for TSC to date.
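
As a concrete reference point, one of the simplest DNN baselines covered by such studies is a fully convolutional network (FCN) with global average pooling; a minimal PyTorch sketch (dataset loading and the training loop omitted) looks like this.

```python
import torch
import torch.nn as nn

# minimal fully-convolutional network for univariate time series classification
class FCN(nn.Module):
    def __init__(self, n_classes):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 64, kernel_size=8, padding=4), nn.BatchNorm1d(64), nn.ReLU(),
            nn.Conv1d(64, 128, kernel_size=5, padding=2), nn.BatchNorm1d(128), nn.ReLU(),
            nn.Conv1d(128, 128, kernel_size=3, padding=1), nn.BatchNorm1d(128), nn.ReLU(),
        )
        self.head = nn.Linear(128, n_classes)
    def forward(self, x):                    # x: (batch, 1, length)
        h = self.features(x).mean(dim=-1)    # global average pooling over time
        return self.head(h)

model = FCN(n_classes=5)
logits = model(torch.randn(8, 1, 152))       # e.g. 8 series of length 152
print(logits.shape)                          # (8, 5)
```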


          Deep learning to achieve clinically applicable segmentation of head and neck anatomy for radiotherapy. (arXiv:1809.04430v1 [cs.CV])      Cache   Translate Page      

Authors: Stanislav Nikolov, Sam Blackwell, Ruheena Mendes, Jeffrey De Fauw, Clemens Meyer, Cían Hughes, Harry Askham, Bernardino Romera-Paredes, Alan Karthikesalingam, Carlton Chu, Dawn Carnell, Cheng Boon, Derek D'Souza, Syed Ali Moinuddin, Kevin Sullivan, DeepMind Radiographer Consortium, Hugh Montgomery, Geraint Rees, Ricky Sharma, Mustafa Suleyman, Trevor Back, Joseph R. Ledsam, Olaf Ronneberger

Over half a million individuals are diagnosed with head and neck cancer each year worldwide. Radiotherapy is an important curative treatment for this disease, but it requires manually intensive delineation of radiosensitive organs at risk (OARs). This planning process can delay treatment commencement. While auto-segmentation algorithms offer a potentially time-saving solution, the challenges in defining, quantifying and achieving expert performance remain. Adopting a deep learning approach, we demonstrate a 3D U-Net architecture that achieves performance similar to experts in delineating a wide range of head and neck OARs. The model was trained on a dataset of 663 deidentified computed tomography (CT) scans acquired in routine clinical practice and segmented according to consensus OAR definitions. We demonstrate its generalisability through application to an independent test set of 24 CT scans available from The Cancer Imaging Archive collected at multiple international sites previously unseen to the model, each segmented by two independent experts and consisting of 21 OARs commonly segmented in clinical practice. With appropriate validation studies and regulatory approvals, this system could improve the effectiveness of radiotherapy pathways.


          Using the Tsetlin Machine to Learn Human-Interpretable Rules for High-Accuracy Text Categorization with Medical Applications. (arXiv:1809.04547v1 [cs.LG])      Cache   Translate Page      

Authors: Geir Thore Berge, Ole-Christoffer Granmo, Tor Oddbjørn Tveit, Morten Goodwin, Lei Jiao, Bernt Viggo Matheussen

Medical applications challenge today's text categorization techniques by demanding both high accuracy and ease-of-interpretation. Although deep learning has provided a leap ahead in accuracy, this leap comes at the sacrifice of interpretability. To address this accuracy-interpretability challenge, we here introduce, for the first time, a text categorization approach that leverages the recently introduced Tsetlin Machine. In all brevity, we represent the terms of a text as propositional variables. From these, we capture categories using simple propositional formulae, such as: if "rash" and "reaction" and "penicillin" then Allergy. The Tsetlin Machine learns these formulae from a labelled text, utilizing conjunctive clauses to represent the particular facets of each category. Indeed, even the absence of terms (negated features) can be used for categorization purposes. Our empirical results are quite conclusive. The Tsetlin Machine either performs on par with or outperforms all of the evaluated methods on both the 20 Newsgroups and IMDb datasets, as well as on a non-public clinical dataset. On average, the Tsetlin Machine delivers the best recall and precision scores across the datasets. The GPU implementation of the Tsetlin Machine is further 8 times faster than the GPU implementation of the neural network. We thus believe that our novel approach can have a significant impact on a wide range of text analysis applications, forming a promising starting point for deeper natural language understanding with the Tsetlin Machine.
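
The input encoding described above, where each term becomes a propositional (0/1) variable, can be produced with a binary bag-of-words; the snippet below only shows that encoding step, not the Tsetlin Machine's clause learning, and the example sentences are made up.

```python
from sklearn.feature_extraction.text import CountVectorizer

docs = [
    "patient developed a rash and reaction after penicillin",
    "routine follow up, no adverse reaction reported",
]
vectorizer = CountVectorizer(binary=True)   # presence/absence, not counts
X = vectorizer.fit_transform(docs).toarray()
print(vectorizer.get_feature_names_out())
print(X)   # rows of 0/1 propositional variables, one column per term
```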


          End-to-end Audiovisual Speech Activity Detection with Bimodal Recurrent Neural Models. (arXiv:1809.04553v1 [cs.CL])      Cache   Translate Page      

Authors: Fei Tao, Carlos Busso

Speech activity detection (SAD) plays an important role in current speech processing systems, including automatic speech recognition (ASR). SAD is particularly difficult in environments with acoustic noise. A practical solution is to incorporate visual information, increasing the robustness of the SAD approach. An audiovisual system has the advantage of being robust to different speech modes (e.g., whisper speech) or background noise. Recent advances in audiovisual speech processing using deep learning have opened opportunities to capture in a principled way the temporal relationships between acoustic and visual features. This study explores this idea proposing a \emph{bimodal recurrent neural network} (BRNN) framework for SAD. The approach models the temporal dynamic of the sequential audiovisual data, improving the accuracy and robustness of the proposed SAD system. Instead of estimating hand-crafted features, the study investigates an end-to-end training approach, where acoustic and visual features are directly learned from the raw data during training. The experimental evaluation considers a large audiovisual corpus with over 60.8 hours of recordings, collected from 105 speakers. The results demonstrate that the proposed framework leads to absolute improvements up to 1.2% under practical scenarios over a VAD baseline using only audio implemented with deep neural network (DNN). The proposed approach achieves 92.7% F1-score when it is evaluated using the sensors from a portable tablet under noisy acoustic environment, which is only 1.0% lower than the performance obtained under ideal conditions (e.g., clean speech obtained with a high definition camera and a close-talking microphone).


          The History Began from AlexNet: A Comprehensive Survey on Deep Learning Approaches. (arXiv:1803.01164v2 [cs.CV] UPDATED)      Cache   Translate Page      

Authors: Md Zahangir Alom, Tarek M. Taha, Christopher Yakopcic, Stefan Westberg, Paheding Sidike, Mst Shamima Nasrin, Brian C Van Esesn, Abdul A S. Awwal, Vijayan K. Asari

Deep learning has demonstrated tremendous success in a variety of application domains in the past few years. This new field of machine learning has been growing rapidly and has been applied in most application domains with some new modalities of applications, which helps to open new opportunities. Different methods have been proposed for different categories of learning approaches, which include supervised, semi-supervised and unsupervised learning. The experimental results show state-of-the-art performance of deep learning over traditional machine learning approaches in the fields of Image Processing, Computer Vision, Speech Recognition, Machine Translation, Art, Medical imaging, Medical information processing, Robotics and control, Bio-informatics, Natural Language Processing (NLP), Cyber security, and many more. This report presents a brief survey of the development of DL approaches, including Deep Neural Network (DNN), Convolutional Neural Network (CNN), Recurrent Neural Network (RNN) including Long Short Term Memory (LSTM) and Gated Recurrent Units (GRU), Auto-Encoder (AE), Deep Belief Network (DBN), Generative Adversarial Network (GAN), and Deep Reinforcement Learning (DRL). In addition, we have included recent developments of proposed advanced variant DL techniques based on the mentioned DL approaches. Furthermore, DL approaches that have been explored and evaluated in different application domains are also included in this survey. We have also included recently developed frameworks, SDKs, and benchmark datasets that are used for implementing and evaluating deep learning approaches. Some surveys have been published on Deep Learning in Neural Networks [1, 38], along with a survey on RL [234]. However, those papers have not discussed the individual advanced techniques for training large-scale deep learning models or the recently developed methods for generative models [1].


          Stability of Scattering Decoder For Nonlinear Diffractive Imaging. (arXiv:1806.08015v3 [cs.CV] UPDATED)      Cache   Translate Page      

Authors: Yu Sun, Ulugbek S. Kamilov

The problem of image reconstruction under multiple light scattering is usually formulated as a regularized non-convex optimization. A deep learning architecture, Scattering Decoder (ScaDec), was recently proposed to solve this problem in a purely data-driven fashion. The proposed method was shown to substantially outperform optimization-based baselines and achieve state-of-the-art results. In this paper, we thoroughly test the robustness of ScaDec to different permittivity contrasts, number of transmissions, and input signal-to-noise ratios. The results on high-fidelity simulated datasets show that the performance of ScaDec is stable in different settings.


          Pseudo-Feature Generation for Imbalanced Data Analysis in Deep Learning. (arXiv:1807.06538v3 [cs.LG] UPDATED)      Cache   Translate Page      

Authors: Tomohiko Konno, Michiaki Iwazume

We generate pseudo-features from multivariate probability distributions obtained from feature maps in a low layer of trained deep neural networks. Then, we virtually augment the data of the minority classes with the pseudo-features in order to overcome imbalanced data problems. Because all data in the wild are imbalanced, the proposed method has the potential to improve the ability of DNNs on a broad range of problems.
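
A stripped-down sketch of the idea, assuming the minority-class feature maps have already been extracted and flattened; fitting a single multivariate Gaussian here is a simplification of whatever distributions the paper actually fits.

```python
import numpy as np

# feature maps of the minority class, flattened: (n_minority, d)
minority_feats = np.random.randn(40, 128)

# fit a multivariate Gaussian to the minority-class features
mu = minority_feats.mean(axis=0)
cov = np.cov(minority_feats, rowvar=False) + 1e-6 * np.eye(128)  # regularize

# draw pseudo-features to rebalance the class
n_needed = 200
pseudo = np.random.multivariate_normal(mu, cov, size=n_needed)
augmented = np.vstack([minority_feats, pseudo])
print(augmented.shape)  # (240, 128)
```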


          Predicting Solution Summaries to Integer Linear Programs under Imperfect Information with Machine Learning. (arXiv:1807.11876v2 [cs.LG] UPDATED)      Cache   Translate Page      

Authors: Eric Larsen, Sébastien Lachapelle, Yoshua Bengio, Emma Frejinger, Simon Lacoste-Julien, Andrea Lodi

The paper provides a methodological contribution at the intersection of machine learning and operations research. Namely, we propose a methodology to quickly predict solution summaries (i.e., solution descriptions at a given level of detail) to discrete stochastic optimization problems. We approximate the solutions based on supervised learning and the training dataset consists of a large number of deterministic problems that have been solved independently and offline. Uncertainty regarding a missing subset of the inputs is addressed through sampling and aggregation methods.

Our motivating application concerns booking decisions of intermodal containers on double-stack trains. Under perfect information, this is the so-called load planning problem and it can be formulated by means of integer linear programming. However, the formulation cannot be used for the application at hand because of the restricted computational budget and unknown container weights. The results show that standard deep learning algorithms allow one to predict descriptions of solutions with high accuracy in very short time (milliseconds or less).


          How to Make Artificial Intelligence Explainable      Cache   Translate Page      

How to Make Artificial Intelligence Explainable

How to Make Artificial Intelligence Explainable - A New Analytic Workbench

FICO today announced the latest version of FICO® Analytics Workbench™, a cloud-based advanced analytics development environment that empowers business users and data scientists with sophisticated, yet easy-to-use, data exploration, visual data wrangling, decision strategy design and machine learning.

As new data privacy regulations shine a spotlight on AI and machine learning, the FICO Analytics Workbench xAI Toolkit helps data scientists better understand the machine learning models behind AI-derived decisions.

“As businesses depend on machine learning models more and more, explanation is critical, particularly in the way that AI-derived decisions impact consumers,” said Jari Koister, vice president of product management at FICO. “Leveraging our more than 60 years of experience in analytics and more than 100 patents filed in machine learning, we are excited at opening up the machine learning black box and making AI explainable. With Analytics Workbench, our customers can gain the insights and transparency needed to support their AI-based decisions.”

How to Make Artificial Intelligence Explainable - Avoiding the Common Pitfalls

“Computers are increasingly a more important part of our lives, and automation is just going to improve over time, so it’s increasingly important to know why these complicated AI and ML systems are making the decisions that they are,” said AI expert and assistant professor of computer science at the University of California Irvine, Sameer Singh. “The more accurate the algorithm, the harder it is to interpret, especially with deep learning. Explanations are important, they can help non-experts to understand the reasons behind the AI decisions, and help avoid common pitfalls of machine learning.”
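
FICO's xAI Toolkit itself is proprietary, but the flavor of model-agnostic explanation it alludes to can be illustrated with permutation importance on any fitted classifier; the model and synthetic data below are placeholders, not FICO's tooling.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# stand-in model and data; any fitted classifier works the same way
X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = GradientBoostingClassifier().fit(X_tr, y_tr)

# permutation importance: how much does shuffling each feature hurt accuracy?
result = permutation_importance(clf, X_te, y_te, n_repeats=10, random_state=0)
for i in np.argsort(result.importances_mean)[::-1]:
    print(f"feature {i}: {result.importances_mean[i]:.3f} +/- {result.importances_std[i]:.3f}")
```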

Built for both business users and data scientists, the FICO® Analytics Workbench™ combines the best elements of FICO’s existing data science tools with several open source technologies, into a single, cloud-ready, machine learning and decision science toolkit, powered by scalable Apache Spark technologies. The Analytics Workbench provides seamless and automated regulatory audit compliance support, producing the necessary documentation for internal review and external regulators.

The Analytics Workbench has been designed for users with a variety of skill sets, from credit risk officers looking for a consistent tool to data scientists and business analysts collaborating and working together to inform and enrich strategic decision making. With an intuitive interface, users can expect faster time-to-value, higher levels of productivity, and significant business improvements through analytically powered decisions.

How to Make Artificial Intelligence Explainable - See Our Demo

For more information, click here.

For a demonstration of analytics workbench, click here.

The post How to Make Artificial Intelligence Explainable appeared first on FICO.


          NEW course! Recommender Systems and Deep Learning in Python      Cache   Translate Page      

Recommender Systems and Deep Learning in Python

So excited to tell you about my new course! [if you don’t want to read my little spiel just click here to get your coupon: https://www.udemy.com/recommender-systems/?couponCode=LAUNCHDAY] Believe it or not, almost all online businesses today make use of recommender systems in some way or another. What do I mean by “recommender systems”, and […]

The post NEW course! Recommender Systems and Deep Learning in Python appeared first on Lazy Programmer.


          NVIDIA Rolls out TensorRT Hyperscale Platform and New T4 GPU for Ai Datacenters      Cache   Translate Page      

This morning at GTC Japan, NVIDIA CEO Jensen Huang announced a set of new products centered around AI and accelerated computing. Targeting hyperscale datacenters looking to run AI workloads, NVIDIA continues to innovate machine learning technologies at an unprecedented pace. "There is no question that deep learning-powered AI is being deployed around the world, and we’re seeing incredible growth here,” Huang told an audience of more than 4,000 press, partners, academics and technologists gathered on the latest stop in a GTC world tour.

The post NVIDIA Rolls out TensorRT Hyperscale Platform and New T4 GPU for Ai Datacenters appeared first on insideHPC.


          Microsoft acquires Lobe, a drag-and-drop AI tool - TechCrunch      Cache   Translate Page      

TechCrunch

Microsoft acquires Lobe, a drag-and-drop AI tool
TechCrunch
Microsoft today announced that is has acquired Lobe, a startup that lets you build machine learning models with the help of a simple drag-and-drop interface. Microsoft plans to use Lobe, which only launched into beta earlier this year, to build upon ...
Microsoft acquires AI startup Lobe to help people make deep learning models without codeVentureBeat
Microsoft buys Lobe, a small start-up that makes it easier to build AI appsCNBC
Microsoft moves to make AI development more accessible with Lobe acquisitionWindows Central
GeekWire -The Official Microsoft Blog - Microsoft

          See NVIDIA Deep Learning In Action [Webinar Series]      Cache   Translate Page      
Hear how three companies benefitted from the performance, simplicity and convenience of NVIDIA DGX Station to supercharge their deep learning development, infusing their products and services with the power of AI.
          NVIDIA, NetApp Help Businesses Accelerate AI with NetApp ONTAP AI | NVIDIA Blog      Cache   Translate Page      
Nvidia GPUs are available in Cloud Bare Metal GPU. Contact us to find out our latest offers! For all the focus these days on AI, it’s largely just the world’s largest hyperscalers that have the chops to roll out predictable, scalable deep learning across their organizations. Their vast budgets and in-house … Continue Reading
          mxnet-cu92mkl 1.3.0      Cache   Translate Page      
MXNet is an ultra-scalable deep learning framework. This version uses CUDA-9.2 and MKLDNN.
           Nvidia builds DLSS momentum, reveals 9 new RTX enabled games       Cache   Translate Page      
25 games will support Nvidia's Deep Learning Super Sampling tech
          IBM Event: Scaling AI for the Enterprise      Cache   Translate Page      
Copyright © 2018 http://jtonedm.com James Taylor. Shadi Copty discussed one IDE and one runtime for AI and data science across the enterprise as part of IBM’s AI approach. Shadi identified three major trends that are impacting data science and ML: diversity of skillsets and workflows, with demand remaining high and new approaches like deep learning posing additional [...]
          Lobe, a startup aiming to make deep learning more accessible is joining Microsoft today      Cache   Translate Page      
Lobe, a San Francisco-based company developing a visual tool for creating custom deep learning models is the latest AI startup to join Microsoft, the company announced today.
          NVIDIA Shares General Performance Graphs for Turing GPUs      Cache   Translate Page      

Next week marks the launch of NVIDIA's Turing-based GeForce RTX 20 series graphics cards, but while it is only a week away, there is still a lot of speculation about how these cards will perform. NVIDIA was keen to show off the architecture's ray tracing performance when they were announced, but there was little information about performance for traditional rendering methods, or the information was clouded by the use of DLSS, Deep Learning Super Sampling. From what I have seen, DLSS works by using a pre-trained neural network to better upscale images for a game, so it can actually be rendered at a lower resolution, significantly improving performance. Anyway, now NVIDIA has shown off another graph with some generalized performance information on it at GTC Japan 2018.
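
To see what "render low, upscale with a network" means in practice, here is a generic super-resolution sketch in PyTorch; it is emphatically not NVIDIA's DLSS model, just the same broad idea with assumed resolutions.

```python
import torch
import torch.nn as nn

# toy "render at low resolution, upscale with a learned network" pipeline
class Upscaler(nn.Module):
    def __init__(self, scale=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 3 * scale * scale, 3, padding=1),
            nn.PixelShuffle(scale),          # rearranges channels into a 2x larger image
        )
    def forward(self, x):
        return self.net(x)

low_res_frame = torch.rand(1, 3, 540, 960)    # e.g. rendered at 960x540
high_res_frame = Upscaler()(low_res_frame)    # upscaled toward 1920x1080
print(high_res_frame.shape)                   # torch.Size([1, 3, 1080, 1920])
```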

According to one graph, the RTX 2080 is capable of 4K at 60 FPS, without DLSS, and is faster than the GTX 1080 Ti. With DLSS, the performance is even greater. Just how much greater though is unknown because, like many marketing graphs, there is no y-scale, or really even much of a y-axis (the only mark is for 4K 60 FPS). While WCCFtech does have the graphs, there is no information on what games or benchmarks were used to arrive at these performance values, or what settings were used.

According to VideoCardz last week, the embargo on RTX 2080 and RTX 2080 Ti reviews ends on September 19, so we should learn more then.

Source: WCCFtech


          Heroes of Deep Learning: Top Takeaways for Aspiring Data Scientists from Andrew Ng’s Interview Series      Cache   Translate Page      
Introduction: Andrew Ng is the most recognizable personality of the modern deep learning world. His machine learning course is cited as the starting point ... The post Heroes of Deep Learning: Top...


          Freelancers wanted for an image processing and deep learning development project
Image processing system, integration

