
          Data Scientist (Scientifique de données) - Big Data - belairdirect - Montréal, QC
Mastery of applied analytical techniques (clustering, decision trees, neural networks, SVM (support vector machines), collaborative filtering, k-nearest...
From belairdirect - Thu, 13 Sep 2018 14:41:28 GMT - View all Montréal, QC jobs
          Data Scientist - Big Data - belairdirect - Montréal, QC
Fluency in applied analytical techniques including regression analysis, clustering, decision trees, neural networks, SVM (support vector machines),...
From belairdirect - Thu, 13 Sep 2018 00:51:55 GMT - View all Montréal, QC jobs
          Data Scientist - Big Data - Intact - Montréal, QC
Fluency in applied analytical techniques including regression analysis, clustering, decision trees, neural networks, SVM (support vector machines),...
From Intact - Wed, 12 Sep 2018 22:51:36 GMT - View all Montréal, QC jobs
          SpiNNaker: a supercomputer that emulates the human brain

Built by the University of Manchester's School of Computer Science, SpiNNaker is the world's largest neuromorphic supercomputer. It is designed and built to work in the same way as the human brain. It has just been fitted with its landmark one-millionth processor core and was switched on for the first time on Friday, 2 November 2018.

The new "Spiking Neural Network Architecture", or "SpiNNaker", with its one million processors, can perform more than 200 million million actions per second, and each of its chips contains 100 million transistors.

The project took £15 million in funding, 20 years of design work and more than 10 years of construction. SpiNNaker can model more biological neurons in real time than any other machine on the planet.

Biological neurons are basic brain cells present in the nervous system that communicate mainly by emitting "spikes" of pure electrochemical energy. Neuromorphic computing uses large-scale computer systems containing electronic circuits to mimic these spikes in a machine.

Unlike traditional computers, SpiNNaker does not communicate by sending large amounts of information from point A to point B over a standard network. Instead, it mimics the brain's massively parallel communication architecture, sending billions of small amounts of information simultaneously to thousands of different destinations.

Steve Furber, Professor of Computer Engineering and the originator of the idea for such a computer, explains: "SpiNNaker completely rethinks the way conventional computers work. We have essentially created a machine that works more like a brain than a traditional computer, which is extremely exciting.

The ultimate goal of the project has always been to put a million cores in a single computer for real-time brain-modelling applications, and we have now achieved that, which is fantastic."

The end goal is to model up to a billion biological neurons in real time, and the researchers are now closer to it. To give an idea of the scale, a mouse brain consists of about 100 million neurons, and the human brain is 1,000 times bigger than that.

A billion neurons is 1% of the scale of the human brain, which comprises just under 100 billion brain cells, or neurons, all of them heavily interconnected via roughly 1 quadrillion synapses.

One of SpiNNaker's fundamental uses is to help neuroscientists better understand how our brain works. It does this by running very large-scale real-time simulations, which is simply not possible on other machines.

For example, SpiNNaker has been used to simulate high-level real-time processing in a range of isolated brain networks. This includes an 80,000-neuron model of a segment of the cortex, the outer layer of the brain that receives and processes information from the senses.

It has also simulated a region of the brain called the basal ganglia - an area affected by Parkinson's disease - which means it has enormous potential for neurological breakthroughs in science, such as pharmaceutical testing.

SpiNNaker's power has recently been harnessed to control a robot, the SpOmnibot. This robot uses the SpiNNaker system to interpret visual information in real time and navigate towards certain objects while ignoring others.

Professor Furber adds: "Neuroscientists can now use SpiNNaker to unlock some of the secrets of how the human brain works by running unprecedented large-scale simulations. It also works as a real-time neural simulator that lets roboticists embed large-scale neural networks into mobile robots so they can walk, talk and move with flexibility and low power consumption."


          A Million Processor Supercomputer
Largest neuromorphic supercomputer
The world’s largest neuromorphic supercomputer has just been switched on. Called the Spiking Neural Network Architecture or SpiNNaker, it’s built to work like the human brain and can complete more than 200 million million actions per second, making it the fastest of its kind in the world.

First AI medical app in Swahili
Ada, an AI-powered health platform, is launching in Swahili, making its health assessment technology available to more than 100 million people in Sub-Saharan Africa. The app uses data from real medical cases as well as knowledge from doctors and scientists. But how useful will it be if access to the internet or a decent smartphone is limited?

Coding with the Flying Scotsman
The UK has the lowest percentage of female engineers in Europe. To increase these figures the UK government has embarked on a “Year of Engineering” campaign. Our reporter Jack Meegan has travelled to the National Railway Museum in York – the home of the world-famous Flying Scotsman locomotive - to find out more about the Future Engineers event designed to get girls into technology and engineering.

Is Uber in the US?
That is the question Yinka Adegoke was asked once when he hailed an Uber in Nairobi. Yinka is the Africa Editor for the Quartz news website and he's just published a piece about how the gig economy, pushed on by technology like Uber, AirBnB and other apps, is becoming increasingly vital to many African economies.

(Photo: The world’s largest neuromorphic supercomputer. Credit: The University of Manchester)
Producer: Ania Lichtarowicz
          Application of artificial neural networks to investigate the energy performance of household refrigerator-freezers
Saidur, Rahman and Masjuki, Haji Hassan (2008) Application of artificial neural networks to investigate the energy performance of household refrigerator-freezers. Journal of Applied Sciences, 8 (11). pp. 2142-2149. ISSN 1812-5654
          A global k-means approach for autonomous cluster initialization of probabilistic neural network
Chang, R.K.Y. and Loo, C.K. and Rao, M.V.C. (2008) A global k-means approach for autonomous cluster initialization of probabilistic neural network. Informatica, 32. pp. 219-225. ISSN 0350-5596
          Estimation of vegetable oil-based Ethyl esters biodiesel densities using artificial neural networks
Baroutian, S. and Aroua, M.K. and Abdul Raman, A.A. and Nik Sulaiman, N.M. (2008) Estimation of vegetable oil-based Ethyl esters biodiesel densities using artificial neural networks. Journal of Applied Sciences, 8 (17). pp. 3005-3011. ISSN 1812-5654
          Prediction of palm oil-based methyl ester biodiesel density using artificial neural networks
Baroutian, S. and Aroua, M.K. and Abdul Raman, A.A. and Sulaiman, N.M.N. (2008) Prediction of palm oil-based methyl ester biodiesel density using artificial neural networks. Journal of Applied Sciences, 8 (10). pp. 1938-1943. ISSN 1812-5654
          ‘How do neural nets learn?’ A step by step explanation using the H2O Deep Learning algorithm.
In my last blogpost about Random Forests I introduced the codecentric.ai Bootcamp. The next part I published was about Neural Networks and Deep Learning. Every video of our bootcamp will have example code and tasks to promote hands-on learning. While the practical parts of the bootcamp will be using Python, below you will find the English R version of this Neural Nets Practical Example, where I explain how neural nets learn and how the concepts and techniques translate to training neural nets in R with the H2O Deep Learning function. You can find the video on YouTube but, as before, it is only available in German. Same goes for the slides, which are also currently German only. See the end of this article for the embedded video and slides.

Neural Nets and Deep Learning

Just like Random Forests, neural nets are a method for machine learning and can be used for supervised, unsupervised and reinforcement learning. The idea behind neural nets was already developed back in the 1940s as a way to mimic how our human brain learns. That’s why neural nets in machine learning are also called ANNs (Artificial Neural Networks). When we say Deep Learning, we talk about big and complex neural nets, which are able to solve complex tasks, like image or language understanding. Deep Learning has gained traction and success particularly with the recent developments in GPUs and TPUs (Tensor Processing Units), the increase in computing power and data in general, as well as the development of easy-to-use frameworks, like Keras and TensorFlow. We find Deep Learning in our everyday lives, e.g. in voice recognition, computer vision, recommender systems, reinforcement learning and many more.

The easiest type of ANN has only one node (also called neuron) and is called perceptron. Incoming data flows into this neuron, where a result is calculated, e.g. by summing up all incoming data. Each of the incoming data points is multiplied with a weight; weights can basically be any number and are used to modify the results that are calculated by a neuron: if we change the weight, the result will also change. Optionally, we can add a so-called bias to the data points to modify the results even further. But how do neural nets learn? Below, I will show with an example that uses common techniques and principles.

Libraries

First, we will load all the packages we need: tidyverse for data wrangling and plotting, readr for reading in a csv, and h2o for Deep Learning (h2o.init initializes the cluster).

library(tidyverse)
library(readr)
library(h2o)
h2o.init(nthreads = -1)

## Connection successful!
##
## R is connected to the H2O cluster:
## H2O cluster uptime: 3 hours 46 minutes
## H2O cluster timezone: Europe/Berlin
## H2O data parsing timezone: UTC
## H2O cluster version: 3.20.0.8
## H2O cluster version age: 1 month and 16 days
## H2O cluster name: H2O_started_from_R_shiringlander_jpa775
## H2O cluster total nodes: 1
## H2O cluster total memory: 3.16 GB
## H2O cluster total cores: 8
## H2O cluster allowed cores: 8
## H2O cluster healthy: TRUE
## H2O Connection ip: localhost
## H2O Connection port: 54321
## H2O Connection proxy: NA
## H2O Internal Security: FALSE
## H2O API Extensions: XGBoost, Algos, AutoML, Core V3, Core V4
## R Version: R version 3.5.1 (2018-07-02)

Data

The dataset used in this example is a customer churn dataset from Kaggle.
Each row represents a customer, each column contains customer attributes. We will load the data from a csv file into telco_data and examine it with density plots for the numeric variables:

telco_data %>% select_if(is.numeric) %>% gather() %>% ggplot(aes(x = value)) + facet_wrap(~ key, scales = "free", ncol = 4) + geom_density()

## Warning: Removed 11 rows containing non-finite values (stat_density).

… and barcharts for categorical variables.

telco_data %>% select_if(is.character) %>% select(-customerID) %>% gather() %>% ggplot(aes(x = value)) + facet_wrap(~ key, scales = "free", ncol = 3) + geom_bar()

Before we can work with h2o, we need to convert our data into an h2o frame object. Note that I am also converting character columns to categorical columns, otherwise h2o will ignore them. Moreover, we will need our response variable to be in categorical format in order to perform classification on this data.

hf <- telco_data %>% mutate_if(is.character, as.factor) %>% as.h2o

Next, I’ll create a vector of the feature names I want to use for modeling (I am leaving out the customer ID because it doesn’t add useful information about customer churn). hf_X
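To make the perceptron description earlier in the excerpt concrete, here is a minimal R sketch of my own (the function name and the numeric values are made up for illustration; this is not code from the blog post or the h2o package): a single neuron sums its weighted inputs, adds a bias, and applies a threshold activation.

perceptron <- function(x, w, b) {
  z <- sum(w * x) + b    # weighted sum of the incoming data points plus the bias
  as.numeric(z > 0)      # simple step activation: fire (1) or not (0)
}

x <- c(0.5, -1.2, 3.0)   # one input with three features (made-up values)
w <- c(0.8, 0.1, -0.4)   # weights modify how strongly each input counts
b <- 0.2                 # optional bias shifts the activation threshold
perceptron(x, w, b)      # returns 0 here, since the weighted sum is -0.72

Changing any weight (or the bias) changes the output, which is exactly the knob that training adjusts.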
          SpiNNaker, the world's largest supercomputer emulating a human brain, has been switched on, a first since its construction began
SpiNNaker, the world's largest supercomputer emulating a human brain, has been switched on,
a first since its construction began

SpiNNaker (Spiking Neural Network Architecture) is the name of the supercomputer designed to work like the human brain, or at least to come as close to it as possible. The machine is capable of performing 200 million million operations per second. It was designed and built by the University of Manchester's School of Computer Science and has...
          Microsoft develops flexible AI system that can summarize the news
Microsoft researchers have developed a novel method of summarizing natural language with artificial intelligence -- specifically neural networks.
          ISL Colloquium presents Estimating the Information Flow in Deep Neural Networks

This talk will discuss the flow of information and the evolution of internal representations during deep neural network (DNN) training, aiming to demystify the compression aspect of the information bottleneck theory. The theory suggests that DNN training comprises a rapid fitting phase followed by a slower compression phase, in which the mutual information I(X;T) between the input X and internal representations T decreases. Several papers observe compression of estimated mutual information on different DNN models, but the true I(X;T) over these networks is provably either constant (discrete X) or infinite (continuous X). We will explain this discrepancy between theory and experiments, and explain what was actually measured by these past works.

To this end, an auxiliary (noisy) DNN framework will be introduced, in which I(X;T) is a meaningful quantity that depends on the network's parameters. We will show that this noisy framework is a good proxy for the original (deterministic) system both in terms of performance and the learned representations. To accurately track I(X;T) over noisy DNNs, a differential entropy estimator tailored to exploit the DNN's layered structure will be developed and theoretical guarantees on the associated minimax risk will be provided. Using this estimator along with a certain analogy to an information-theoretic communication problem, we will elucidate the geometric mechanism that drives compression of I(X;T) in noisy DNNs. Based on these findings, we will circle back to deterministic networks and explain what the past observations of compression were in fact showing. Future research directions inspired by this study aiming to facilitate a comprehensive information-theoretic understanding of deep learning will also be discussed.
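For reference, the quantity being tracked is the mutual information between the input and an internal representation. The definition below is standard and added here for convenience; the additive-noise form is only a sketch of the kind of auxiliary framework described above (the exact construction and noise distribution used in the talk are not stated here):

\[
I(X;T) \;=\; H(T) \;-\; H(T \mid X), \qquad T \;=\; f(X) + Z,\; Z \text{ independent of } X,
\]

where H denotes (differential) entropy. Injecting independent noise Z makes H(T | X) finite, so I(X;T) becomes a well-defined, parameter-dependent quantity rather than one that is provably constant or infinite.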


          The Spiking Neural Network Architecture will attempt to mimic the human brain; it consists of a million cores and 1,200 motherboards
SpiNNaker, the world's most powerful neuromorphic computer - a machine designed to mimic the workings of the brain - has been switched on at the University of Manchester. The Spiking Neural Network Architecture consists of a million compute cores and 1,200 interconnected motherboards. SpiNNaker is not only meant to "think" like a human brain; its task is also to build models of neurons and simulate their activity in real time. Its main […]
          networkunit added to PyPI
A SciUnit library for validation testing of neural network models.
          Microcontroller Runs Neural Networks That Train Themselves
Before machine learning algorithms can be used in factories to detect equipment malfunctions or cars to autonomously tell the difference between left and right turn arrows, they need training. That currently takes place in data centers, where neural
          Optimizing fMRI experimental design for MVPA-based BCI control: Combining the strengths of block and event-related designs.

Optimizing fMRI experimental design for MVPA-based BCI control: Combining the strengths of block and event-related designs.

Neuroimage. 2018 Oct 31;:

Authors: Valente G, Kaas A, Formisano E, Goebel R

Abstract
Functional Magnetic Resonance Imaging (fMRI) has been successfully used for Brain Computer Interfacing (BCI) to classify (imagined) movements of different limbs. However, reliable classification of more subtle signals originating from co-localized neural networks in the sensorimotor cortex, e.g. individual movements of fingers of the same hand, has proved to be more challenging, especially when taking into account the requirement for high single trial reliability in the BCI context. In recent years, Multi Voxel Pattern Analysis (MVPA) has gained momentum as a suitable method to disclose such weak, distributed activation patterns. Much attention has been devoted to developing and validating data analysis strategies, but relatively little guidance is available on the choice of experimental design, even less so in the context of BCI-MVPA. When applicable, block designs are considered the safest choice, but the expectations, strategies and adaptation induced by blocking of similar trials can make it a sub-optimal strategy. Fast event-related designs, in contrast, require a more complicated analysis and show stronger dependence on linearity assumptions but allow for randomly alternating trials. However, they lack resting intervals that enable the BCI participant to process feedback. In this proof-of-concept paper a hybrid blocked fast-event related design is introduced that is novel in the context of MVPA and BCI experiments, and that might overcome these issues by combining the rest periods of the block design with the shorter and randomly alternating trial characteristics of a rapid event-related design. A well-established button-press experiment was used to perform a within-subject comparison of the proposed design with a block and a slow event-related design. The proposed hybrid blocked fast-event related design showed a decoding accuracy that was close to that of the block design, which showed highest accuracy. It allowed for across-design decoding, i.e. reliable prediction of examples obtained with another design. Finally, it also showed the most stable incremental decoding results, obtaining good performance with relatively few blocks. Our findings suggest that the blocked fast event-related design could be a viable alternative to block designs in the context of BCI-MVPA, when expectations, strategies and adaptation make blocking of trials of the same type a sub-optimal strategy. Additionally, the blocked fast event-related design is also suitable for applications in which fast incremental decoding is desired, and enables the use of a slow or block design during the test phase.

PMID: 30391345 [PubMed - as supplied by publisher]


          Intel Architecture Event Announced: December 11th

On the back of a series of recent announcements regarding Intel’s future product line and portfolio, Intel has disclosed to us that it will be holding a forward-looking Architecture Summit/Event in a few weeks. The event will be an exclusively small affair, with only a few press invited, but an opportunity for Intel to discuss its future vision for the next few months with engineers and technical fellows set to give some detailed presentations.

One of the most frequent requests we have put to Intel over the recent months is for a return to an Intel that offers more information. In previous years, Intel would dive deep into its product portfolio and its architecture designs in order to showcase its engineering talent and prowess. This often happened at the awesome annual Intel Developer Forum, a yearly event held in the heart of San Francisco, but since it was disbanded a couple of years ago, the level of detail in each subsequent launch has been agonizingly minimal. For an engineering company that used to proudly present its technical genius on a stage, in detail, to suddenly become so very insular about its R&D raised a lot of questions. It would appear our persistence is paying off, and Intel is going to do something about it.

Details on the content of the Intel Architecture Summit/Event are slim at this point, as invites are slowly being handed out, and we are not immediately aware whether Intel intends to have an embargo. In the past at these sorts of events, some of the information became almost immediately available, while some of the meatier details had longer embargo times to allow for the press to get to grips with the information and ask questions and write articles. When Intel discussed the Skylake design in detail before it hit the shelves, there was a short lead time. This event is likely to be along the same lines.

At this point we do not know exactly what Intel will be discussing – the only thing we’ve been told is that it will be an ‘update’ with Intel’s architects and technical fellows focusing on architecture. This could extend into CPU, GPU, AI, and everything in-between, and if we’re lucky, manufacturing. Given that Cascade Lake is a known part at this point, it would be difficult to see Intel discussing more on the CPU side unless they have an ace in the design we don’t already know about. A far more interesting topic would be on the GPU side, assuming that Raja Koduri and his team have something to say. We already know that the Nervana Neural Network Processor is due out in 2019, so there could be some detail to discuss there as well. An outside possibility is Intel talking 10nm. One can hope.

I’ll be attending for AnandTech. I hope this ends up being a good event, so that there are more like it in the future.


          Data Scientist (Scientifique de données) - Big Data - Intact - Montréal, QC
Mastery of applied analytical techniques (clustering, decision trees, neural networks, SVM (support vector machines), collaborative filtering, k-nearest...
From Intact - Thu, 13 Sep 2018 00:55:20 GMT - View all Montréal, QC jobs
          Neural Reconstruction Integrity: A Metric for Assessing the Connectivity Accuracy of Reconstructed Neural Networks
Elizabeth P. Reilly, Jeffrey S. Garretson, William R. Gray Roncal, Dean M. Kleissas, Brock A. Wester, Mark A. Chevillet, Matthew J. Roos
          IORN: An Effective Remote Sensing Image Scene Classification Framework
In recent times, many efforts have been made to improve remote sensing image scene classification, especially using popular deep convolutional neural networks. However, most of these methods do not consider the specific scene orientation of the remote sensing images. In this letter, we propose the improved oriented response network (IORN), which is based on the ORN, to handle the orientation problem in remote sensing image scene classification. We propose average active rotating filters (A-ARFs) in the IORN. While IORNs are being trained, A-ARFs are updated by a method that is different from the ARFs of the ORN, without additional computations. This change helps IORN improve its ability to encode orientation information and speeds up optimization during training. We also propose Squeeze-ORAlign (S-ORAlign) by adding a squeeze layer to ORAlign of ORN. With the squeeze layer, S-ORAlign can address large-scale images, unlike ORAlign. An ablation study and comparison experiments are designed on a public remote sensing image scene classification data set. The experimental results demonstrate the effectiveness and better performance of the proposed model over that of other state-of-the-art models.
          Deep Self-Paced Residual Network for Multispectral Images Classification Based on Feature-Level Fusion
The classification methods based on fusion techniques of multisource multispectral (MS) images have been studied for a long time. However, it may be difficult to classify these data based on a feature level while avoiding the inconsistency of data caused by multisource and multiple regions or cities. In this letter, we propose a deep learning structure called 2-branch SPL-ResNet which combines the self-paced learning with deep residual network to classify multisource MS data based on the feature-level fusion. First, a 2-D discrete wavelet is used to obtain the multiscale features and sparse representation of MS data. Then, a 2-branch SPL-ResNet is established to extract respective characteristics of the two satellites. Finally, we implement the feature-level fusion by cascading the two feature vectors and then classify the integrated feature vector. We conduct the experiments on Landsat_8 and Sentinel_2 MS images. Compared with the commonly used classification methods such as support vector machine and convolutional neural networks, our proposed 2-branch SPL-ResNet framework has higher accuracy and more robustness.
          Toward Arbitrary-Oriented Ship Detection With Rotated Region Proposal and Discrimination Networks
Ship detection from remote sensing images can provide important information for maritime reconnaissance and surveillance and is also a challenging task. Although previous detection methods, including some advanced ones based on deep convolutional neural networks, excel at detecting horizontal or nearly horizontal targets, they cannot give satisfying detection results for arbitrary-oriented ship detection. In this letter, we introduce a novel ship detection system that can detect arbitrary-oriented ships. In this method, a rotated region proposal network (R2PN) is proposed to generate multioriented proposals with ship orientation angle information. In R2PN, the orientation angles of bounding boxes are also regressed to make the inclined ship region proposals more accurate. For ship discrimination, a rotated region of interest pooling layer is adopted in the following classification subnetwork to extract discriminative features from such inclined candidate regions. The proposed whole ship detection system can be trained end to end. Experimental results conducted on our rotated ship data set and the HRSD2016 benchmark demonstrate that our proposed method outperforms state-of-the-art approaches for the arbitrary-oriented ship detection task.
          Hyperspectral Unmixing via Deep Convolutional Neural Networks
Hyperspectral unmixing (HU) is a method used to estimate the fractional abundances corresponding to endmembers in each of the mixed pixels in the hyperspectral remote sensing image. In recent times, deep learning has been recognized as an effective technique for hyperspectral image classification. In this letter, an end-to-end HU method is proposed based on the convolutional neural network (CNN). The proposed method uses a CNN architecture that consists of two stages: the first stage extracts features and the second stage performs the mapping from the extracted features to obtain the abundance percentages. Furthermore, a pixel-based CNN and cube-based CNN, which can improve the accuracy of HU, are presented in this letter. More importantly, we also use dropout to avoid overfitting. The evaluation of the complete performance is carried out on two hyperspectral data sets: Jasper Ridge and Urban. Compared with that of the existing method, our results show significantly higher accuracy.
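For context, abundance estimation in hyperspectral unmixing is usually framed around the standard linear mixing model; the notation below is generic and added here for reference, not taken from the letter:

\[
\mathbf{y} \;=\; \mathbf{M}\,\mathbf{a} + \mathbf{n}, \qquad a_i \ge 0, \quad \sum_{i=1}^{p} a_i = 1,
\]

where y is the observed pixel spectrum, M holds the p endmember spectra as columns, a is the vector of fractional abundances that the second CNN stage must predict, and n is noise. The non-negativity and sum-to-one constraints are what make the outputs interpretable as abundance percentages.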
          Million-core neuromorphic supercomputer could simulate an entire mouse brain

A newly built supercomputer is able to simulate up to a billion neurons in real time, ...

After 12 years of work, researchers at the University of Manchester in England have completed construction of a "SpiNNaker" (Spiking Neural Network Architecture) supercomputer. It can simulate the internal workings of up to a billion neurons through a whopping one million processing units.


Category: Computers

          People recognition and pose estimation in image sequences
Nakajima, C; Pontil, M; Poggio, T; (2000) People recognition and pose estimation in image sequences. In: Amari, SI and Giles, CL and Gori, M and Piuri, V, (eds.) IJCNN 2000: PROCEEDINGS OF THE IEEE-INNS-ENNS INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS, VOL IV. (pp. 189 - 194). IEEE COMPUTER SOC
          EECS presents annual awards for outstanding PhD and SM theses

Anne Stuart | EECS

The faculty and leadership of the Department of Electrical Engineering and Computer Science (EECS) recently presented 13 awards for outstanding student work on recent master’s and PhD theses. Awards and recipients included:

Jin-Au Kong Award for Best PhD Theses in Electrical Engineering

  • Yu-Hsin Chen, now Research Scientist, NVIDIA Research, for “Architecture Design for Highly Flexible and Energy-Efficient Deep Neural Network Accelerators.” Professors Vivienne Sze and Joel Emer, supervisors.
  • Chiraag Juvekar, now Research Scientist, Analog Garage, Analog Devices, for “Hardware and Protocols for Authentication and Secure Computation.” Professor Anantha Chandrakasan, Supervisor.  

George M. Sprowls Awards for Best PhD Theses in Computer Science

  • Arturs Backurs, now Research Assistant Professor, Toyota Technological Institute at Chicago (TTIC), for “Below P vs NP: Fine-Grained Hardness for Big Data Problems.” Professor Piotr Indyk, supervisor.
  • Gregory Bodwin, now Postdoctoral Researcher, Georgia Institute of Technology, for “Sketching Distances in Graphs.” Professor Virginia Williams, supervisor.
  • Zoya Bylinskii, now Research Scientist, Adobe Research, for “Computational Perception for Multi-Modal Document Understanding.” Professor Fredo Durand and Dr. Aude Oliva, supervisors.
  • David Harwath, now Research Scientist, Spoken Languages Systems Group, Computer Science and Artificial Intelligence Laboratory (CSAIL), for “Learning Spoken Language Through Vision.” Dr. James R. Glass, supervisor.
  • Jerry Li, now VM Research Fellow, Simons Institute, University of California Berkeley, for “Principled Approaches to Robust Machine Learning Beyond.” Professor Ankur Moitra, supervisor.
  • Ludwig Schmidt, now Postdoctoral Researcher in Computer Science, University of California Berkeley, for “Algorithms Above the Noise Floor.” Professor Piotr Indyk, supervisor.
  • “Mechanism Design: From Optimal Transport Theory to Revenue Maximization.” Professor Constantinos Daskalakis, supervisor.
  • Adriana Schulz, now Assistant Professor, University of Washington, for “Computational Design for the Next Manufacturing Revolution.” Professor Wojciech Matusik, supervisor.

Ernst A. Guillemin Award for Best SM Thesis in Electrical Engineering

  • Matthew Brennan, now a PhD student in EECS at MIT, for “Reducibility and Computational Lower Bounds for Problems with Planted Sparse Structure.” Professor Guy Bresler, supervisor.
  • Syed Muhammad Imaduddin, now a PhD student in EECS at MIT, for “A Pseudo-Bayesian Model-Based Approach for Noninvasive Intracranial Pressure Estimation.” Professor Thomas Heldt, supervisor.

William A. Martin Award for Best SM Thesis in Computer Science

  • Favyen Bastani, now a PhD student in EECS at MIT, for “Robust Road Topology Extraction from Aerial Imagery.” Professors Sam Madden, Hari Balakrishnan, and Mohammad Alizadeh, supervisors.
  • Wengong Jin, now a PhD student in EECS at MIT, for “Neural Graph Representation Learning with Application to Chemistry.” Professor Regina Barzilay, supervisor.

EECS Professor Martin Rinard and Professor Asu Ozdaglar, EECS department head, presented the awards during a luncheon ceremony. The PhD award winners were selected by Professor Dirk Englund (for electrical engineering) and Professor Vinod Vaikuntanathan (for computer science). The Sprowls Awards Committee, consisting of Professors Mohammad Alizadeh, Michael Carbin, and Julian Shun, assisted with selection of the PhD awards in computer science.

The SM awards were selected by Professor Elfar Adalsteinsson (for electrical engineering) and Professor Antonio Torralba (for computer science).

 

 

          Researchers train AI to spot Alzheimer’s disease ahead of diagnosis

While Alzheimer's disease affects tens of millions of people worldwide, it remains difficult to detect early on. But researchers exploring whether AI can play a role in detecting Alzheimer's in patients are finding that it may be a valuable tool for helping spot the disease. Researchers in California recently published a study in the journal Radiology, and they demonstrated that, once trained, a neural network was able to accurately diagnose Alzheimer's disease in a small number of patients, and it did so based on brain scans taken years before those patients were actually diagnosed by physicians.

Via: Medical Xpress, VentureBeat

Source: Radiology


          Deep Hybrid Similarity Learning for Person Re-Identification
Person re-identification (Re-ID) aims to match person images captured from two non-overlapping cameras. In this paper, a deep hybrid similarity learning (DHSL) method for person Re-ID based on a convolution neural network (CNN) is proposed. In our approach, a light CNN learning feature pair for the input image pair is simultaneously extracted. Then, both the elementwise absolute difference and multiplication of the CNN learning feature pair are calculated. Finally, a hybrid similarity function is designed to measure the similarity between the feature pair, which is realized by learning a group of weight coefficients to project the elementwise absolute difference and multiplication into a similarity score. Consequently, the proposed DHSL method is able to reasonably assign complexities of feature learning and metric learning in a CNN, so that the performance of person Re-ID is improved. Experiments on three challenging person Re-ID databases, QMUL GRID, VIPeR, and CUHK03, illustrate that the proposed DHSL method is superior to multiple state-of-the-art person Re-ID methods.
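As a rough illustration of the hybrid similarity idea in this abstract, here is a hedged R sketch (the function name, feature sizes, and random values are stand-ins of my own, not the authors' implementation): the elementwise absolute difference and elementwise product of the two CNN feature vectors are projected to a single score by a group of weight coefficients.

hybrid_similarity <- function(f1, f2, w_diff, w_prod, bias = 0) {
  d <- abs(f1 - f2)                          # elementwise absolute difference of the feature pair
  p <- f1 * f2                               # elementwise multiplication of the feature pair
  sum(w_diff * d) + sum(w_prod * p) + bias   # weighted projection to a similarity score
}

set.seed(42)
f1 <- runif(128); f2 <- runif(128)           # stand-ins for the CNN learning feature pair
w_diff <- rnorm(128); w_prod <- rnorm(128)   # in DHSL these coefficients would be learned, not random
hybrid_similarity(f1, f2, w_diff, w_prod)

In the paper these weights are trained jointly with the CNN, which is what lets the method balance feature learning against metric learning.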
          Fast Landmark Localization With 3D Component Reconstruction and CNN for Cross-Pose Recognition
Two approaches are proposed for cross-pose face recognition, one is built on the handcrafted features extracted from the 3D reconstruction of facial components and the other is built on the learned features from a deep convolutional neural network (CNN). As both approaches rely on facial landmarks for alignment across large poses, we propose the Fast Hierarchical Model (FHM) for locating cross-pose facial landmarks in real time. Unlike most 3D approaches that consider holistic faces, the first proposed approach considers 3D facial components. It segments each 2D face in the gallery into components, reconstructs the 3D surface for each component, and recognizes a query face by component features. The core part of the CNN-based approach is a modified VGG network. We study the performance with different settings on the training set, including the synthesized data from 3D reconstruction, the real-life data from an in-the-wild database, and both types of data combined. The two recognition approaches and the FHM are evaluated in extensive experiments and compared with state-of-the-art methods to demonstrate their efficacy.
          VoiceBase Extends Deep Learning Neural Network Compute to Verne Global

A recent press release reports, “Verne Global, a provider of advanced data center solutions for high performance computing (HPC), today announced that VoiceBase, the leading provider of speech analytics for the cloud, is utilizing its HPC-optimized bare-metal infrastructure – hpcDIRECT – to accelerate the development of new artificial intelligence (AI) powered voice analytics services. California-based […]



          Matrox Imaging: Flowchart software
Matrox® Imaging announces Matrox Design Assistant X flowchart-based vision application software. This integrated development environment (IDE) allows developers to build intuitive flowcharts instead of writing traditional program code. It enables the development of a graphical web-based operator interface for modifying the vision application.

This update integrates a host of new features and functionality, including image classification using deep learning, a photometric stereo tool that highlights surface imperfections, and the ability to interface directly with third-party 3D sensors.

Deep learning for image classification

The classification tool leverages deep learning—specifically, convolutional neural network (CNN) technology—to categorize images of highly textured, naturally varying, and acceptably deformed goods. All inference is performed on a mainstream CPU, eliminating the dependence on third-party neural network libraries and the need for specialized GPU hardware. Matrox Imaging handles the intricate design and training of the neural network, utilizing the deep technical experience, knowledge, and skill of its machine learning and machine vision experts.

A Q&A video offers more insight into deep learning technology.

Photometric stereo for emphasizing surface irregularities

A new registration tool features photometric stereo technology, which creates a composite image from a series of images taken with light coming in from different directions. Creation of these images utilizes directional illumination light controllers, such as the Light Sequence Switch (LSS) from CCS, the LED Light Manager (LLM) from Smart Vision Lights, or other similar devices. This composite image emphasizes surface irregularities, such as embossed or engraved features, scratches, or indentations.

A primer on photometric stereo techniques was outlined in a Q&A video.
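As background, the classical Lambertian photometric-stereo computation that tools like this build on can be sketched in a few lines of R. This is my own illustration under standard assumptions (a matte surface and at least three known, non-coplanar light directions), not Matrox's implementation: each pixel's intensities under the different lights are solved, in the least-squares sense, for an albedo-scaled surface normal.

# I_mat: n_pixels x k matrix of intensities, one column per light direction
# L:     k x 3 matrix of unit light-direction vectors (k >= 3, non-coplanar)
photometric_stereo <- function(I_mat, L, eps = 1e-8) {
  G <- I_mat %*% L %*% solve(t(L) %*% L)   # least-squares solve: G = I L (L^T L)^(-1)
  albedo  <- sqrt(rowSums(G^2))            # per-pixel albedo is the length of each row of G
  normals <- G / pmax(albedo, eps)         # unit surface normals, one row per pixel
  list(albedo = albedo, normals = normals)
}

Surface irregularities such as scratches or engraving show up as abrupt changes in the recovered normals, which is why the composite image emphasizes them.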

Third-party 3D sensor interfacing

Matrox Design Assistant X makes it possible to capture and process depth-map data by interfacing with third-party 3D sensors. Initially, the software will support LMI Gocator® line profilers and snapshot sensors and Photoneo® PhoXi® scanners, with other scanner options to be added in the future.

Other updates and additions include multiple run-times for running multiple independent projects simultaneously on the same platform; dedicated shape-finding tools for locating circles, ellipses, rectangles, and line segments; and addition of a code-grading step.

Field-proven Matrox Design Assistant X software is a perfect match for the Matrox 4Sight EV6 vision controller or the Matrox Iris GTR smart camera.

"This new version delivers on the three cornerstones of our development methodology," said Fabio Perelli, product manager, Matrox Imaging. "These are to extend Matrox Design Assistant’s capabilities while incorporating recent evolutions to the underlying vision library and also striving to simplify the overall user experience."

Availability
Matrox Design Assistant X will be officially released in Q2 2019.


          (USA-PA-Pittsburgh) Computer Vision / Deep Learning Engineer
Computer Vision / Deep Learning Engineer Computer Vision / Deep Learning Engineer - Skills Required - Computer Vision, C++, Caffe, Tensorflow, Lidar, Geometry-Based Vision, Deep Learning, Multi-view stereo I am currently working with several companies in the area who are actively hiring in the field of Computer Vision and Deep Learning. AI, and specifically Computer Vision and Deep Learning are my niche market specialty and I only work with companies in this space. I am actively recruiting for multiple levels of seniority and responsibility, from experienced Individual Contributor roles, to Team Lead positions, to Principal Level Scientists and Engineers. I offer my candidates the unique proposition of representing them to multiple companies, rather than having to work with multiple different recruiters at an agency, or applying directly to many different companies without someone to manage the process with each of those opportunities. In one example, I am working with a candidate who is currently interviewing with 10 different clients of mine for similar roles across the country with companies applying Computer Vision and Deep Learning to various different applications from Robotics, Autonomous Vehicles, AR/VR/MR, Medical Imaging, Manufacturing Automation, Gaming, AI surveillance, AI Security, Facial ID, 3D Sensors and 3D Reconstruction software, Autonomous Drones, etc. I would love to work with you and introduce you to any of my clients you see as a great fit for your career! Please send me a resume and tell me a bit about yourself and I will reach out and offer some times to connect on the phone! **Top Reasons to Work with Us** Some of the current openings are for the following brief company overviews: Company 1 - company is founded by 3x Unicorn (multi-billion dollar companies) founders and are breaking into a new market with advanced technology, customers, and exciting applications including AI surveillance, robotics, AR/VR. Company 2 - Autonomous Drones! Actually, multiple different companies working on Autonomous Drones for different applications - including Air-to-Air Drone Security, Industrial Inspection, Consumer Drones, Wind Turbine and Structure Inspection. Company 3 - 3D Sensors and 3D Reconstruction Software - make 3D maps of interior spaces using our current products on the market. We work with builders, designers, Consumers and Business-to-Business solutions. Profitable company with strong leadership team currently in growth mode! Company 4 - Industrial/Manufacturing/Logistics automation using our 3D and Depth Sensors built in house and 3D Reconstruction software to automate processes for Fortune 500 clients. Solid funding and revenue approaching profitability in 2018! Company 5 - Hand Gesture Recognition technology for controlling AR/VR environments. We have a product on the market as of 2017 and are continuing to develop products for consumers and business applications that are used in the real and virtual world. We have recently brought on a renowned leader in Deep Learning and it's intersection with neuroscience and are doing groundbreaking R&D in this field! Company 6 - Full facial tracking and reconstruction for interactive AR/VR environments. Company 7 - massively scalable retail automation using Computer Vision and Deep Learning, currently partnered with one of the largest retailers in the world. Company 8 - Products in the market including 3D Sensors, and currently bringing 3D reconstruction capabilities to mobile devices everywhere. 
Recently closed on a $50M round of funding and expanding US operations. Company 9 - Mobile AI company using Computer Vision for sports tracking and real time analytics for players at all levels from beginner to professional athletes to track, practice and improve at their craft. Company 10 - Digitizing human actions to create a massive new dataset in manufacturing - augmenting the human/robot working relationship and giving manufacturers the necessary info to improve that relationship. We believe that AI and robotics will always need to work side by side with humans, and we are the only company providing a solution to this previously untapped dataset! Company 11 - 3D facial identification and authentication for security purposes. No more key-fobs and swipe cards, our clients use our sensors and software to identify and permit employees. **What You Will Be Doing** If you are interested in discussing any of these opportunities, I would love to speak with you! I am interested in learning about the work you are currently doing and what you would be interested in for your next step. If the above opportunities are not quite what you're looking for but would still like to discuss future opportunities and potential to work together, I would love to meet you! I provide a free service to my candidates and work diligently to help manage the stressful process of finding the right next step in your career. The companies that I work with are always evolving so I can keep you up to date on new opportunities I come across. Please apply to this job, or shoot me an email at richard.marion@cybercoders.com and let's arrange a time to talk on the phone. **What You Need for this Position** Generally, I am looking for Scientists/Engineers in the fields of Computer Vision, Deep Learning and Machine Learning. I find that a lot of my clients are looking for folks who have experience with 3D Reconstruction, SLAM / Visual Odometry, Object Detection/Recognition/Tracking, autonomy, Point Cloud Processing, Software and Algorithm development in C++ (and C++11 and C++14), GPU programming using CUDA or other GPGPU related stuff, Neural Network training, Sensor Fusion, Multi-view stereo, camera calibration or sensor calibration, Image Segmentation, Image Processing, Video Processing, and plenty more! - Computer Vision - C++ - Python - Linux - UNIX **What's In It for You** A dedicated and experienced Computer Vision placement specialist! If you want to trust your job search in the hands of a professional who takes care and pride in their work, and will bring many relevant opportunities your way - I would love to work with you! So, if you are a Computer Vision Scientist or Engineer and are interested in having a conversation about the market and some of the companies I am working with, please apply or shoot me an email with resume today! Applicants must be authorized to work in the U.S. **CyberCoders, Inc is proud to be an Equal Opportunity Employer** All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, disability, protected veteran status, or any other characteristic protected by law. **Your Right to Work** – In compliance with federal law, all persons hired will be required to verify identity and eligibility to work in the United States and to complete the required employment eligibility verification document form upon hire. *Computer Vision / Deep Learning Engineer* *PA-Pittsburgh* *RM2-1492758*
          neurosciencestuff: Machine-learning system processes sounds like...


neurosciencestuff:

Machine-learning system processes sounds like humans do

Using a machine-learning system known as a deep neural network, MIT researchers have created the first model that can replicate human performance on auditory tasks such as identifying a musical genre.

This model, which consists of many layers of information-processing units that can be trained on huge volumes of data to perform specific tasks, was used by the researchers to shed light on how the human brain may be performing the same tasks.

“What these models give us, for the first time, is machine systems that can perform sensory tasks that matter to humans and that do so at human levels,” says Josh McDermott, the Frederick A. and Carole J. Middleton Assistant Professor of Neuroscience in the Department of Brain and Cognitive Sciences at MIT and the senior author of the study. “Historically, this type of sensory processing has been difficult to understand, in part because we haven’t really had a very clear theoretical foundation and a good way to develop models of what might be going on.”

The study, which appeared in the April 19 issue of Neuron, also offers evidence that the human auditory cortex is arranged in a hierarchical organization, much like the visual cortex. In this type of arrangement, sensory information passes through successive stages of processing, with basic information processed earlier and more advanced features such as word meaning extracted in later stages.

MIT graduate student Alexander Kell and Stanford University Assistant Professor Daniel Yamins are the paper’s lead authors. Other authors are former MIT visiting student Erica Shook and former MIT postdoc Sam Norman-Haignere.

Modeling the brain

When deep neural networks were first developed in the 1980s, neuroscientists hoped that such systems could be used to model the human brain. However, computers from that era were not powerful enough to build models large enough to perform real-world tasks such as object recognition or speech recognition.

Over the past five years, advances in computing power and neural network technology have made it possible to use neural networks to perform difficult real-world tasks, and they have become the standard approach in many engineering applications. In parallel, some neuroscientists have revisited the possibility that these systems might be used to model the human brain.

“That’s been an exciting opportunity for neuroscience, in that we can actually create systems that can do some of the things people can do, and we can then interrogate the models and compare them to the brain,” Kell says.

The MIT researchers trained their neural network to perform two auditory tasks, one involving speech and the other involving music. For the speech task, the researchers gave the model thousands of two-second recordings of a person talking. The task was to identify the word in the middle of the clip. For the music task, the model was asked to identify the genre of a two-second clip of music. Each clip also included background noise to make the task more realistic (and more difficult).

After many thousands of examples, the model learned to perform the task just as accurately as a human listener.

“The idea is over time the model gets better and better at the task,” Kell says. “The hope is that it’s learning something general, so if you present a new sound that the model has never heard before, it will do well, and in practice that is often the case.”

The model also tended to make mistakes on the same clips that humans made the most mistakes on.

The processing units that make up a neural network can be combined in a variety of ways, forming different architectures that affect the performance of the model.

The MIT team discovered that the best model for these two tasks was one that divided the processing into two sets of stages. The first set of stages was shared between tasks, but after that, it split into two branches for further analysis — one branch for the speech task, and one for the musical genre task.
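To picture that branching, here is a toy R sketch (my own simplification with random weights and invented layer sizes, not the MIT model): a shared stack of layers feeds two task-specific heads, one for the word task and one for the genre task.

relu  <- function(x) pmax(x, 0)
layer <- function(x, W, b) relu(W %*% x + b)

set.seed(1)
x <- matrix(rnorm(64), ncol = 1)                     # stand-in for an audio feature vector
W_shared <- matrix(rnorm(32 * 64), 32, 64); b_shared <- rnorm(32)
W_word   <- matrix(rnorm(10 * 32), 10, 32); b_word   <- rnorm(10)
W_genre  <- matrix(rnorm(4 * 32),  4, 32);  b_genre  <- rnorm(4)

h <- layer(x, W_shared, b_shared)                    # early stages shared between both tasks
word_scores  <- W_word  %*% h + b_word               # branch 1: which word was spoken
genre_scores <- W_genre %*% h + b_genre              # branch 2: which musical genre

The shared stages learn features useful for both tasks, while each branch specializes, which is the division of labor the researchers found worked best.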

Evidence for hierarchy

The researchers then used their model to explore a longstanding question about the structure of the auditory cortex: whether it is organized hierarchically.

In a hierarchical system, a series of brain regions performs different types of computation on sensory information as it flows through the system. It has been well documented that the visual cortex has this type of organization. Earlier regions, known as the primary visual cortex, respond to simple features such as color or orientation. Later stages enable more complex tasks such as object recognition.

However, it has been difficult to test whether this type of organization also exists in the auditory cortex, in part because there haven’t been good models that can replicate human auditory behavior.

“We thought that if we could construct a model that could do some of the same things that people do, we might then be able to compare different stages of the model to different parts of the brain and get some evidence for whether those parts of the brain might be hierarchically organized,” McDermott says.

The researchers found that in their model, basic features of sound such as frequency are easier to extract in the early stages. As information is processed and moves farther along the network, it becomes harder to extract frequency but easier to extract higher-level information such as words.

To see if the model stages might replicate how the human auditory cortex processes sound information, the researchers used functional magnetic resonance imaging (fMRI) to measure different regions of auditory cortex as the brain processes real-world sounds. They then compared the brain responses to the responses in the model when it processed the same sounds.

They found that the middle stages of the model corresponded best to activity in the primary auditory cortex, and later stages corresponded best to activity outside of the primary cortex. This provides evidence that the auditory cortex might be arranged in a hierarchical fashion, similar to the visual cortex, the researchers say.

“What we see very clearly is a distinction between primary auditory cortex and everything else,” McDermott says.

Alex Huth, an assistant professor of neuroscience and computer science at the University of Texas at Austin, says the paper is exciting in part because it offers convincing evidence that the early part of the auditory cortex performs generic sound processing while the higher auditory cortex performs more specialized tasks.

“This is one of the ongoing mysteries in auditory neuroscience: What distinguishes the early auditory cortex from the higher auditory cortex? This is the first paper I’ve seen that has a computational hypothesis for that,” says Huth, who was not involved in the research.

The authors now plan to develop models that can perform other types of auditory tasks, such as determining the location from which a particular sound came. They will use these models to explore whether such tasks can be done by the pathways identified in this model or whether they require separate pathways, which could then be investigated in the brain.


          Data Architect/Data Science
CA-SAN JOSE, Role: Data Architect/Data Science. Location: San Jose, California. Duration: 6+ months. Expert programming skills in Python and R. Experience in writing code for various machine learning algorithms for classification, clustering, forecasting, regression, neural networks and deep learning. Hands-on experience with modern enterprise data architectures and data toolsets (ex: data warehouse, data marts, dat
          Data Scientist - Big Data - belairdirect - Montréal, QC
Proficiency in applied analytical techniques (clustering, decision trees, neural networks, SVM (support vector machines), collaborative filtering, k-nearest...
From belairdirect - Thu, 13 Sep 2018 14:41:28 GMT - View all Montréal, QC jobs
          Data Scientist - Big Data - Intact - Montréal, QC
Proficiency in applied analytical techniques (clustering, decision trees, neural networks, SVM (support vector machines), collaborative filtering, k-nearest...
From Intact - Thu, 13 Sep 2018 00:55:20 GMT - View all Montréal, QC jobs
          Data Scientist - Big Data - belairdirect - Montréal, QC
Fluency in applied analytical techniques including regression analysis, clustering, decision trees, neural networks, SVM (support vector machines),...
From belairdirect - Thu, 13 Sep 2018 00:51:55 GMT - View all Montréal, QC jobs
          pytorch-argus 0.0.7
Easy high-level library for training neural networks in PyTorch.
          AMD Unveils World's First 7nm Datacenter GPUs with PCIe 4.0 Interconnect
AMD unveiled the world's first lineup of 7nm GPUs for the datacenter that will utilize an all-new version of the ROCm open software platform for accelerated computing. "The AMD Radeon Instinct MI60 and MI50 accelerators feature flexible mixed-precision capabilities, powered by high-performance compute units that expand the types of workloads these accelerators can address, including a range of HPC and deep learning applications." They are specifically designed to tackle datacenter workloads such as rapidly training complex neural networks, delivering higher levels of floating-point performance, while exhibiting greater efficiencies. The new "Vega 7nm" GPUs are also the world's first GPUs to support the PCIe 4.0 interconnect, which is twice as fast as other x86 CPU-to-GPU interconnect technologies, and feature AMD Infinity Fabric Link GPU interconnect technology that enables GPU-to-GPU communication that is six times faster than PCIe Gen 3. The AMD Radeon Instinct MI60 Accelerator is also the world's fastest double precision PCIe accelerator with 7.4 TFLOPs of peak double precision (FP64) performance. "Google believes that open source is good for everyone," said Rajat Monga, engineering director, TensorFlow, Google. "We've seen how helpful it can be to open source machine learning technology, and we're glad to see AMD embracing it. With the ROCm open software platform, TensorFlow users will benefit from GPU acceleration and a more robust open source machine learning ecosystem." ROCm software version 2.0 provides updated math libraries for the new DLOPS; support for 64-bit Linux operating systems including CentOS, RHEL and Ubuntu; optimizations of existing components; and support for the latest versions of the most popular deep learning frameworks, including TensorFlow 1.11, PyTorch (Caffe2) and others.
           Cognitive Pattern Analysis Employing Neural Networks: Evidence from the Australian Capital Markets
Wong, E.S.K. (2009) Cognitive Pattern Analysis Employing Neural Networks: Evidence from the Australian Capital Markets. International Journal of Economics and Finance, 1 (1). pp. 76-80. ISSN 1916-971X
          Linear Regression – Machine Learning with TensorFlow and Oracle JET UI Explained by Andrejus ...

Machine learning is definitely a popular topic these days. Some people have the wrong assumptions about it: they think a machine can learn by itself and that it is a kind of magic. The truth is that there is no magic, but there is math behind it. A machine will learn according to how the math model for the learning process is defined. In my opinion, the best solution is a combination of machine learning math and algorithms. Here I can relate to chatbots keeping conversational context: language processing can be done by machine learning with a neural network, while intent and context processing can be executed by programmable algorithms.
If you are starting to learn machine learning - there are two essential concepts to start with:
1. Regression
2. Classification
This post is focused around regression, in the next posts I will talk about classification.
Regression is a method that calculates the best-fitting curve to summarize data. It's up to you which type of curve to choose; you should decide which type will be most suitable for the given data set (this can also be found by trial and error). The goal of regression is to understand data points by discovering the curve that might have generated them. Read the complete article here.
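As a rough companion to the article, the following is a minimal TensorFlow 2.x sketch of linear regression fitted by gradient descent; the synthetic data and hyperparameters are made up for illustration and are not taken from the original post.

    # Fit y = w*x + b to noisy synthetic data by minimizing mean squared error.
    import numpy as np
    import tensorflow as tf

    x = np.random.rand(100).astype(np.float32)
    y = 2.0 * x + 1.0 + 0.1 * np.random.randn(100).astype(np.float32)  # data around y = 2x + 1

    w = tf.Variable(0.0)   # slope to learn
    b = tf.Variable(0.0)   # intercept to learn
    opt = tf.keras.optimizers.SGD(learning_rate=0.1)

    for step in range(500):
        with tf.GradientTape() as tape:
            loss = tf.reduce_mean(tf.square(w * x + b - y))   # mean squared error
        grads = tape.gradient(loss, [w, b])
        opt.apply_gradients(zip(grads, [w, b]))

    print(w.numpy(), b.numpy())   # should approach 2.0 and 1.0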

 



           Hybrid Modeling Of Well-Mixed Model For Fluidized Bed Reactors Using Artificial Neural Networks
Ibrehem, A.S. (2009) Hybrid Modeling Of Well-Mixed Model For Fluidized Bed Reactors Using Artificial Neural Networks. In: International Engineering Convention, 11-14 May 2009, Damascus, Syria.. (Submitted)
           Inverse kinematics of an equal length links planar hyper redundant manipulator using neural networks
Yahya, S. and Mohamed, H.A.F. and Moghavvemi, M. and Yang, S.S. (2009) Inverse kinematics of an equal length links planar hyper redundant manipulator using neural networks. In: ICROS-SICE International Joint Conference 2009, ICCAS-SICE 2009, 18-21 August 2009, Fukuoka, Japan.
           Neural network based model predictive control for a steel pickling process
Kittisupakorn, P. and Thitiyasook, P. and Hussain, M.A. and Daosud, W. (2009) Neural network based model predictive control for a steel pickling process. Journal of Process Control, 19 (4). pp. 579-590. ISSN 0959-1524
           Prediction of bubble size in bubble columns using artificial neural network
Ibrehem, A.S. and Hussain, M.A. (2009) Prediction of bubble size in bubble columns using artificial neural network. Journal of Applied Sciences, 9 (17). pp. 3196-3198. ISSN 1812-5654
           Application of artificial neural network to predict brake specific fuel consumption of retrofitted cng engine
Jahirul, M.I. and Saidur, Rahman and Masjuki, Haji Hassan (2009) Application of artificial neural network to predict brake specific fuel consumption of retrofitted cng engine. International Journal of Mechanical and Materials Engineering, 4 (3). pp. 249-255. ISSN 1823-0334
          'Human brain' supercomputer with 1 million processors switched on for the first time -ENG-

The world's largest neuromorphic supercomputer, designed and built to work in the same way a human brain does, has been fitted with a million processor cores and has been switched on for the first time. The new million-processor 'Spiking Neural Network Architecture', or 'SpiNNaker', machine is capable of completing more than 200 million million actions per second, with each of its chips containing 100 million transistors.

tags: spinnaker, neuromorphic, supercomputer

» original news item (www.manchester.ac.uk)


           Enhanced probabilistic neural network with data imputation capabilities for machine-fault classification
Chang, R.K.Y. and Loo, C.K. and Rao, M.V.C. (2009) Enhanced probabilistic neural network with data imputation capabilities for machine-fault classification. Neural Computing & Applications, 18 (7). pp. 791-800. ISSN 0941-0643
           Jawi character speech-to-text engine using linear predictive and neural network for effective reading
Othman, Z.A. and Razak, Z. and Abdullah, N.A. and Yusoff, M.Y.Z.B. (2009) Jawi character speech-to-text engine using linear predictive and neural network for effective reading. In: 3rd Asia International Conference on Modelling and Simulation , MAY 25-29, 2009, Bundang, INDONESIA.
           Prediction of population dynamics of bacillariophyta in the Tropical Putrajaya Lake and Wetlands (Malaysia) by a recurrent artificial neural networks.
Malek, S. and Salleh, A. and Baba, M.S. (2009) Prediction of population dynamics of bacillariophyta in the Tropical Putrajaya Lake and Wetlands (Malaysia) by a recurrent artificial neural networks. In: 2nd International Conference on Environmental and Computer Science, DEC 28-30, 2009 , Dubai.
          The Rise of Artificial Intelligence – Part 2.2: From Augmenting BI to Aiming AGI

In the last Part 2.1, we dove a little deeper into the meaning of AI, and explained in a simple manner how the most advanced techniques of AI (deep neural networks) were originally inspired by how the human brain works (biological neural networks). In this part, we will move one step forward to highlight …

The post The Rise of Artificial Intelligence – Part 2.2: From Augmenting BI to Aiming AGI appeared first on Daily News Egypt.


          Neural Networks and Optical Illusions
In particular the use of the GAN architecture is of interest.

Neural Networks Don't Understand What Optical Illusions Are
By Technology Review

Researchers at the University of Louisville have found that machine vision systems cannot process optical illusions in the same way humans can.

The researchers compiled a database of more than 6,000 images of optical illusions and trained a neural network to recognize them, then built a generative adversarial network to create optical illusions for itself. After seven hours of training, however, they found nothing of value was created.

The researchers believe generative adversarial networks are unlikely to be able to learn to trick human vision without being able to understand the principles behind such illusions, a consequence of crucial differences between machine vision systems and the human visual system.


          A General Theory of Equivariant CNNs on Homogeneous Spaces. (arXiv:1811.02017v1 [cs.LG])

Authors: Taco Cohen, Mario Geiger, Maurice Weiler

Group equivariant convolutional neural networks (G-CNNs) have recently emerged as a very effective model class for learning from signals in the context of known symmetries. A wide variety of equivariant layers has been proposed for signals on 2D and 3D Euclidean space, graphs, and the sphere, and it has become difficult to see how all of these methods are related, and how they may be generalized.

In this paper, we present a fairly general theory of equivariant convolutional networks. Convolutional feature spaces are described as fields over a homogeneous base space, such as the plane $\mathbb{R}^2$, sphere $S^2$ or a graph $\mathcal{G}$. The theory enables a systematic classification of all existing G-CNNs in terms of their group of symmetry, base space, and field type (e.g. scalar, vector, or tensor field, etc.).

In addition to this classification, we use Mackey theory to show that convolutions with equivariant kernels are the most general class of equivariant maps between such fields, thus establishing G-CNNs as a universal class of equivariant networks. The theory also explains how the space of equivariant kernels can be parameterized for learning, thereby simplifying the development of G-CNNs for new spaces and symmetries. Finally, the theory introduces a rich geometric semantics to learned feature spaces, thus improving interpretability of deep networks, and establishing a connection to central ideas in mathematics and physics.
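For readers new to the topic, the (discrete) group cross-correlation that such layers generalize is usually written as below; the notation is the standard one and not necessarily the paper's.

    % Group cross-correlation of a k-channel feature map f with a filter \psi over a group G
    [f \star \psi](g) \;=\; \sum_{h \in G} \sum_{k} f_k(h)\, \psi_k\!\left(g^{-1}h\right), \qquad g \in G

Equivariance then follows because transforming f by a group element simply shifts the arguments g at which the correlation is evaluated.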


          Physics-Informed Generative Adversarial Networks for Stochastic Differential Equations. (arXiv:1811.02033v1 [stat.ML])

Authors: Liu Yang, Dongkun Zhang, George Em Karniadakis

We developed a new class of physics-informed generative adversarial networks (PI-GANs) to solve in a unified manner forward, inverse and mixed stochastic problems based on a limited number of scattered measurements. Unlike standard GANs relying only on data for training, here we encoded into the architecture of GANs the governing physical laws in the form of stochastic differential equations (SDEs) using automatic differentiation. In particular, we applied Wasserstein GANs with gradient penalty (WGAN-GP) for its enhanced stability compared to vanilla GANs. We first tested WGAN-GP in approximating Gaussian processes of different correlation lengths based on data realizations collected from simultaneous reads at sparsely placed sensors. We obtained good approximation of the generated stochastic processes to the target ones even for a mismatch between the input noise dimensionality and the effective dimensionality of the target stochastic processes. We also studied the overfitting issue for both the discriminator and generator, and we found that overfitting occurs also in the generator in addition to the discriminator as previously reported. Subsequently, we considered the solution of elliptic SDEs requiring approximations of three stochastic processes, namely the solution, the forcing, and the diffusion coefficient. We used three generators for the PI-GANs, two of them were feed forward deep neural networks (DNNs) while the other one was the neural network induced by the SDE. Depending on the data, we employed one or multiple feed forward DNNs as the discriminators in PI-GANs. Here, we have demonstrated the accuracy and effectiveness of PI-GANs in solving SDEs for up to 30 dimensions, but in principle, PI-GANs could tackle very high dimensional problems given more sensor data with low-polynomial growth in computational cost.


          A Recurrent Graph Neural Network for Multi-Relational Data. (arXiv:1811.02061v1 [cs.LG])

Authors: Vassilis N. Ioannidis, Antonio G. Marques, Georgios B. Giannakis

The era of data deluge has sparked the interest in graph-based learning methods in a number of disciplines such as sociology, biology, neuroscience, or engineering. In this paper, we introduce a graph recurrent neural network (GRNN) for scalable semi-supervised learning from multi-relational data. Key aspects of the novel GRNN architecture are the use of multi-relational graphs, the dynamic adaptation to the different relations via learnable weights, and the consideration of graph-based regularizers to promote smoothness and alleviate over-parametrization. Our ultimate goal is to design a powerful learning architecture able to: discover complex and highly non-linear data associations, combine (and select) multiple types of relations, and scale gracefully with respect to the size of the graph. Numerical tests with real data sets corroborate the design goals and illustrate the performance gains relative to competing alternatives.


          How to Improve Your Speaker Embeddings Extractor in Generic Toolkits. (arXiv:1811.02066v1 [cs.SD])

Authors: Hossein Zeinali, Lukas Burget, Johan Rohdin, Themos Stafylakis, Jan Cernocky

Recently, speaker embeddings extracted with deep neural networks became the state-of-the-art method for speaker verification. In this paper we aim to facilitate its implementation on a more generic toolkit than Kaldi, which we anticipate to enable further improvements on the method. We examine several tricks in training, such as the effects of normalizing input features and pooled statistics, different methods for preventing overfitting as well as alternative non-linearities that can be used instead of Rectifier Linear Units. In addition, we investigate the difference in performance between TDNN and CNN, and between two types of attention mechanism. Experimental results on Speaker in the Wild, SRE 2016 and SRE 2018 datasets demonstrate the effectiveness of the proposed implementation.


          Generalization Bounds for Neural Networks: Kernels, Symmetry, and Sample Compression. (arXiv:1811.02067v1 [cs.LG])

Authors: Christopher Snyder, Sriram Vishwanath

Though Deep Neural Networks (DNNs) are widely celebrated for their practical performance, they demonstrate many intriguing phenomena related to depth that are difficult to explain both theoretically and intuitively. Understanding how weights in deep networks coordinate together across layers to form useful learners has proven somewhat intractable, in part because of the repeated composition of nonlinearities induced by depth. We present a reparameterization of DNNs as a linear function of a particular feature map that is locally independent of the weights. This feature map transforms depth-dependencies into simple tensor products and maps each input to a discrete subset of the feature space. Then, in analogy with logistic regression, we propose a max-margin assumption that enables us to present a so-called sample compression representation of the neural network in terms of the discrete activation state of neurons induced by $s$ "support vectors". We show how the number of support vectors relates to learning guarantees for neural networks through sample compression bounds, yielding a sample complexity of $O(ns/\epsilon)$ for networks with $n$ neurons. Additionally, this number of support vectors has monotonic dependence on width, depth, and label noise for simple networks trained on the MNIST dataset.


          Classification of 12-Lead ECG Signals with Bi-directional LSTM Network. (arXiv:1811.02090v1 [cs.CV])

Authors: Ahmed Mostayed, Junye Luo, Xingliang Shu, William Wee

We propose a recurrent neural network classifier to detect pathologies in 12-lead ECG signals and train and validate the classifier with the Chinese physiological signal challenge dataset (this http URL). The recurrent neural network consists of two bi-directional LSTM layers and can train on arbitrary-length ECG signals. Our best trained model achieved an average F1 score of 74.15% on the validation set.

Keywords: ECG classification, Deep learning, RNN, Bi-directional LSTM, QRS detection.
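A minimal sketch of a two-layer bidirectional LSTM classifier of this kind in Keras could look as follows; the layer widths, the masking of zero-padded timesteps and the 9 output classes are illustrative assumptions, not the authors' released code.

    # Variable-length 12-lead ECG in, class probabilities out.
    import tensorflow as tf
    from tensorflow.keras import layers, Model

    ecg = layers.Input(shape=(None, 12))      # arbitrary-length sequence of 12-lead samples
    x = layers.Masking()(ecg)                 # ignore zero-padded timesteps in a batch
    x = layers.Bidirectional(layers.LSTM(128, return_sequences=True))(x)
    x = layers.Bidirectional(layers.LSTM(128))(x)
    out = layers.Dense(9, activation="softmax")(x)   # 9 pathology classes (assumed)

    model = Model(ecg, out)
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])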


          Kernel Machines Beat Deep Neural Networks on Mask-based Single-channel Speech Enhancement. (arXiv:1811.02095v1 [cs.LG])

Authors: Like Hui, Siyuan Ma, Mikhail Belkin

We apply a fast kernel method for mask-based single-channel speech enhancement. Specifically, our method solves a kernel regression problem associated to a non-smooth kernel function (exponential power kernel) with a highly efficient iterative method (EigenPro). Due to the simplicity of this method, its hyper-parameters such as kernel bandwidth can be automatically and efficiently selected using line search with subsamples of training data. We observe an empirical correlation between the regression loss (mean square error) and regular metrics for speech enhancement. This observation justifies our training target and motivates us to achieve lower regression loss by training separate kernel model per frequency subband. We compare our method with the state-of-the-art deep neural networks on mask-based HINT and TIMIT. Experimental results show that our kernel method consistently outperforms deep neural networks while requiring less training time.


          Image-Based Reconstruction for a 3D-PFHS Heat Transfer Problem by ReConNN. (arXiv:1811.02102v1 [cs.CE])

Authors: Yu Li, Hu Wang, Xinjian Deng

The heat transfer performance of the Plate Fin Heat Sink (PFHS) has been investigated experimentally and extensively. Commonly, the objective function of PFHS design is based on the responses of simulations. Compared with existing studies, the purpose of this work is to transfer from an image-based model to an analysis-based model for heat sink design, meaning that the sequential optimization should be based on images instead of responses. Therefore, an image-based reconstruction model of the heat transfer process for a 3D-PFHS is established. Unlike image recognition, such a procedure cannot be implemented directly by existing recognition algorithms (e.g. a Convolutional Neural Network). Therefore, a Reconstructive Neural Network (ReConNN), which integrates supervised and unsupervised learning techniques, is suggested. According to the experimental results, the heat transfer process can be observed in more detail and more clearly, and the reconstructed results are meaningful for further optimization.


          On the role of neurogenesis in overcoming catastrophic forgetting. (arXiv:1811.02113v1 [cs.NE])

Authors: German I. Parisi, Xu Ji, Stefan Wermter

Lifelong learning capabilities are crucial for artificial autonomous agents operating on real-world data, which is typically non-stationary and temporally correlated. In this work, we demonstrate that dynamically grown networks outperform static networks in incremental learning scenarios, even when bounded by the same amount of memory in both cases. Learning is unsupervised in our models, a condition that additionally makes training more challenging whilst increasing the realism of the study, since humans are able to learn without dense manual annotation. Our results on artificial neural networks reinforce that structural plasticity constitutes effective prevention against catastrophic forgetting in non-stationary environments, as well as empirically supporting the importance of neurogenesis in the mammalian brain.


          DeepConv-DTI: Prediction of drug-target interactions via deep learning with convolution on protein sequences. (arXiv:1811.02114v1 [q-bio.QM])

Authors: Ingoo Lee, Jongsoo Keum, Hojung Nam

Identification of drug-target interactions (DTIs) plays a key role in drug discovery. The high cost and labor-intensive nature of in vitro and in vivo experiments have highlighted the importance of in silico-based DTI prediction approaches. In several computational models, conventional protein descriptors are shown to be not informative enough to predict accurate DTIs. Thus, in this study, we employ a convolutional neural network (CNN) on raw protein sequences to capture local residue patterns participating in DTIs. With CNN on protein sequences, our model performs better than previous protein descriptor-based models. In addition, our model performs better than the previous deep learning model for massive prediction of DTIs. By examining the pooled convolution results, we found that our model can detect binding sites of proteins for DTIs. In conclusion, our prediction model for detecting local residue patterns of target proteins successfully enriches the protein features of a raw protein sequence, yielding better prediction results than previous approaches.


          Modeling and Predicting Citation Count via Recurrent Neural Network with Long Short-Term Memory. (arXiv:1811.02129v1 [cs.DL])

Authors: Sha Yuan, Jie Tang, Yu Zhang, Yifan Wang, Tong Xiao

The rapid evolution of scientific research has been creating a huge volume of publications every year. Among the many quantification measures of scientific impact, citation count stands out for its frequent use in the research community. Although peer review process is the mainly reliable way of predicting a paper's future impact, the ability to foresee lasting impact on the basis of citation records is increasingly important in the scientific impact analysis in the era of big data. This paper focuses on the long-term citation count prediction for individual publications, which has become an emerging and challenging applied research topic. Based on the four key phenomena confirmed independently in previous studies of long-term scientific impact quantification, including the intrinsic quality of publications, the aging effect and the Matthew effect and the recency effect, we unify the formulations of all these observations in this paper. Building on a foundation of the above formulations, we propose a long-term citation count prediction model for individual papers via recurrent neural network with long short-term memory units. Extensive experiments on a real-large citation data set demonstrate that the proposed model consistently outperforms existing methods, and achieves a significant performance improvement.


          DIAG-NRE: A Deep Pattern Diagnosis Framework for Distant Supervision Neural Relation Extraction. (arXiv:1811.02166v1 [cs.CL])

Authors: Shun Zheng, Peilin Yu, Lu Chen, Ling Huang, Wei Xu

Modern neural network models have achieved the state-of-the-art performance on relation extraction (RE) tasks. Although distant supervision (DS) can automatically generate training labels for RE, the effectiveness of DS highly depends on datasets and relation types, and sometimes it may introduce large labeling noises. In this paper, we propose a deep pattern diagnosis framework, DIAG-NRE, that aims to diagnose and improve neural relation extraction (NRE) models trained on DS-generated data. DIAG-NRE includes three stages: (1) The deep pattern extraction stage employs reinforcement learning to extract regular-expression-style patterns from NRE models. (2) The pattern refinement stage builds a pattern hierarchy to find the most representative patterns and lets human reviewers evaluate them quantitatively by annotating a certain number of pattern-matched examples. In this way, we minimize both the number of labels to annotate and the difficulty of writing heuristic patterns. (3) The weak label fusion stage fuses multiple weak label sources, including DS and refined patterns, to produce noise-reduced labels that can train a better NRE model. To demonstrate the broad applicability of DIAG-NRE, we use it to diagnose 14 relation types of two public datasets with one simple hyper-parameter configuration. We observe different noise behaviors and obtain significant F1 improvements on all relation types suffering from large labeling noises.


          Fast OBDD Reordering using Neural Message Passing on Hypergraph. (arXiv:1811.02178v1 [cs.AI])

Authors: Feifan Xu, Fei He, Enze Xie, Liang Li

Ordered binary decision diagrams (OBDDs) are an efficient data structure for representing and manipulating Boolean formulas. Depending on the variable order, an OBDD's size may vary from linear to exponential in the number of Boolean variables. Finding the optimal variable order has been proved to be an NP-complete problem, and many heuristics have been proposed to find near-optimal solutions. In this paper, we propose a neural network-based method to predict near-optimal variable orders for unknown formulas. Viewing these formulas as hypergraphs, and lifting the message passing neural network to 3-hypergraphs (MPNN3), we are able to learn the patterns of Boolean formulas. Compared to traditional methods, our method can find a near-optimal solution in dramatically less time, even for some hard examples. To the best of our knowledge, this is the first work applying neural networks to OBDD reordering.


          Neural Network-Hardware Co-design for Scalable RRAM-based BNN Accelerators. (arXiv:1811.02187v1 [cs.NE])

Authors: Yulhwa Kim, Hyungjun Kim, Jae-Joon Kim

Recently, RRAM-based Binary Neural Network (BNN) hardware has been gaining interests as it requires 1-bit sense-amp only and eliminates the need for high-resolution ADC and DAC. However, RRAM-based BNN hardware still requires high-resolution ADC for partial sum calculation to implement large-scale neural network using multiple memory arrays. We propose a neural network-hardware co-design approach to split input to fit each split network on a RRAM array so that the reconstructed BNNs calculate 1-bit output neuron in each array. As a result, ADC can be completely eliminated from the design even for large-scale neural network. Simulation results show that the proposed network reconstruction and retraining recovers the inference accuracy of the original BNN. The accuracy loss of the proposed scheme in the CIFAR-10 testcase was less than 1.1% compared to the original network.


          CIS at TAC Cold Start 2015: Neural Networks and Coreference Resolution for Slot Filling. (arXiv:1811.02230v1 [cs.CL])

Authors: Heike Adel, Hinrich Schütze

This paper describes the CIS slot filling system for the TAC Cold Start evaluations 2015. It extends and improves the system we have built for the evaluation last year. This paper mainly describes the changes to our last year's system. Especially, it focuses on the coreference and classification component. For coreference, we have performed several analysis and prepared a resource to simplify our end-to-end system and improve its runtime. For classification, we propose to use neural networks. We have trained convolutional and recurrent neural networks and combined them with traditional evaluation methods, namely patterns and support vector machines. Our runs for the 2015 evaluation have been designed to directly assess the effect of each network on the end-to-end performance of the system. The CIS system achieved rank 3 of all slot filling systems participating in the task.


          SparseFool: a few pixels make a big difference. (arXiv:1811.02248v1 [cs.CV])

Authors: Apostolos Modas, Seyed-Mohsen Moosavi-Dezfooli, Pascal Frossard

Deep Neural Networks have achieved extraordinary results on image classification tasks, but have been shown to be vulnerable to attacks with carefully crafted perturbations of the input data. Although most attacks usually change values of many image's pixels, it has been shown that deep networks are also vulnerable to sparse alterations of the input. However, no \textit{efficient} method has been proposed to compute sparse perturbations. In this paper, we exploit the low mean curvature of the decision boundary, and propose SparseFool, a geometry inspired sparse attack that controls the sparsity of the perturbations. Extensive evaluations show that our approach outperforms related methods, and scales to high dimensional data. We further analyze the transferability and the visual effects of the perturbations, and show the existence of shared semantic information across the images and the networks. Finally, we show that adversarial training using $\ell_\infty$ perturbations can slightly improve the robustness against sparse additive perturbations.


          Comparison of Discrete Choice Models and Artificial Neural Networks in Presence of Missing Variables. (arXiv:1811.02284v1 [stat.ML])

Authors: Johan Barthélemy, Morgane Dumont, Timoteo Carletti

Classification, the process of assigning a label (or class) to an observation given its features, is a common task in many applications. Nonetheless in most real-life applications, the labels can not be fully explained by the observed features. Indeed there can be many factors hidden to the modellers. The unexplained variation is then treated as some random noise which is handled differently depending on the method retained by the practitioner. This work focuses on two simple and widely used supervised classification algorithms: discrete choice models and artificial neural networks in the context of binary classification.

Through various numerical experiments involving continuous or discrete explanatory features, we present a comparison of the retained methods' performance in the presence of missing variables. The impact of the distribution of the two classes in the training data is also investigated. The outcomes of those experiments highlight the fact that artificial neural networks outperform the discrete choice models, except when the distribution of the classes in the training data is highly unbalanced.

Finally, this work provides some guidelines for choosing the right classifier with respect to the training data.


          Revealing Fine Structures of the Retinal Receptive Field by Deep Learning Networks. (arXiv:1811.02290v1 [q-bio.NC])

Authors: Qi Yan, Yajing Zheng, Shanshan Jia, Yichen Zhang, Zhaofei Yu, Feng Chen, Yonghong Tian, Tiejun Huang, Jian K. Liu

Deep convolutional neural networks (CNNs) have demonstrated impressive performance on many visual tasks. Recently, they became useful models for the visual system in neuroscience. However, it is still not clear what are learned by CNNs in terms of neuronal circuits. When a deep CNN with many layers is used for the visual system, it is not easy to compare the structure components of CNN with possible neuroscience underpinnings due to highly complex circuits from the retina to higher visual cortex. Here we address this issue by focusing on single retinal ganglion cells with biophysical models and recording data from animals. By training CNNs with white noise images to predict neuronal responses, we found that fine structures of the retinal receptive field can be revealed. Specifically, convolutional filters learned are resembling biological components of the retinal circuit. This suggests that a CNN learning from one single retinal cell reveals a minimal neural network carried out in this cell. Furthermore, when CNNs learned from different cells are transferred between cells, there is a diversity of transfer learning performance, which indicates that CNNs are cell-specific. Moreover, when CNNs are transferred between different types of input images, here white noise v.s. natural images, transfer learning shows a good performance, which implies that CNN indeed captures the full computational ability of a single retinal cell for different inputs. Taken together, these results suggest that CNN could be used to reveal structure components of neuronal circuits, and provide a powerful model for neural system identification.


          Recurrent Skipping Networks for Entity Alignment. (arXiv:1811.02318v1 [cs.CL])

Authors: Lingbing Guo, Zequn Sun, Ermei Cao, Wei Hu

We consider the problem of learning knowledge graph (KG) embeddings for entity alignment (EA). Current methods use the embedding models mainly focusing on triple-level learning, which lacks the ability of capturing long-term dependencies existing in KGs. Consequently, the embedding-based EA methods heavily rely on the amount of prior (known) alignment, due to the identity information in the prior alignment cannot be efficiently propagated from one KG to another. In this paper, we propose RSN4EA (recurrent skipping networks for EA), which leverages biased random walk sampling for generating long paths across KGs and models the paths with a novel recurrent skipping network (RSN). RSN integrates the conventional recurrent neural network (RNN) with residual learning and can largely improve the convergence speed and performance with only a few more parameters. We evaluated RSN4EA on a series of datasets constructed from real-world KGs. Our experimental results showed that it outperformed a number of state-of-the-art embedding-based EA methods and also achieved comparable performance for KG completion.


          Fast Hyperparameter Optimization of Deep Neural Networks via Ensembling Multiple Surrogates. (arXiv:1811.02319v1 [cs.LG])

Authors: Yang Li, Jiawei Jiang, Yingxia Shao, Bin Cui

The performance of deep neural networks crucially depends on good hyperparameter configurations. Bayesian optimization is a powerful framework for optimizing the hyperparameters of DNNs. These methods need sufficient evaluation data to approximate and minimize the validation error function of hyperparameters. However, the expensive evaluation cost of DNNs leads to very few evaluation data within a limited time, which greatly reduces the efficiency of Bayesian optimization. Besides, the previous researches focus on using the complete evaluation data to conduct Bayesian optimization, and ignore the intermediate evaluation data generated by early stopping methods. To alleviate the insufficient evaluation data problem, we propose a fast hyperparameter optimization method, HOIST, that utilizes both the complete and intermediate evaluation data to accelerate the hyperparameter optimization of DNNs. Specifically, we train multiple basic surrogates to gather information from the mixed evaluation data, and then combine all basic surrogates using weighted bagging to provide an accurate ensemble surrogate. Our empirical studies show that HOIST outperforms the state-of-the-art approaches on a wide range of DNNs, including feed forward neural networks, convolutional neural networks, recurrent neural networks, and variational autoencoder.


          Hierarchical Neural Network Architecture In Keyword Spotting. (arXiv:1811.02320v1 [cs.CL])

Authors: Yixiao Qu, Sihao Xue, Zhenyi Ying, Hang Zhou, Jue Sun

Keyword Spotting (KWS) provides the start signal for the ASR problem, and thus it is essential to ensure a high recall rate. However, its real-time nature requires low computational complexity. This tension motivates the search for a suitable model that is small enough yet still performs well in many environments. To deal with it, we implement the Hierarchical Neural Network (HNN), which has proved effective in many speech recognition problems. HNN outperforms traditional DNNs and CNNs even though its model size and computational complexity are slightly lower. Also, its simple topology makes it easy to deploy on any device.


          Super-Identity Convolutional Neural Network for Face Hallucination. (arXiv:1811.02328v1 [cs.CV])

Authors: Kaipeng Zhang, Zhanpeng Zhang, Chia-Wen Cheng, Winston H. Hsu, Yu Qiao, Wei Liu, Tong Zhang

Face hallucination is a generative task to super-resolve a low-resolution facial image, while human perception of faces heavily relies on identity information. However, previous face hallucination approaches largely ignore facial identity recovery. This paper proposes the Super-Identity Convolutional Neural Network (SICNN) to recover identity information for generating faces close to the real identity. Specifically, we define a super-identity loss to measure the identity difference between a hallucinated face and its corresponding high-resolution face within the hypersphere identity metric space. However, directly using this loss will lead to a Dynamic Domain Divergence problem, which is caused by the large margin between the high-resolution domain and the hallucination domain. To overcome this challenge, we present a domain-integrated training approach by constructing a robust identity metric for faces from these two domains. Extensive experimental evaluations demonstrate that the proposed SICNN achieves superior visual quality over the state-of-the-art methods on a challenging task to super-resolve 12$\times$14 faces with an 8$\times$ upscaling factor. In addition, SICNN significantly improves the recognizability of ultra-low-resolution faces.


          An amplitudes-perturbation data augmentation method in convolutional neural networks for EEG decoding. (arXiv:1811.02353v1 [eess.SP])

Authors: Xian-Rui Zhang, Meng-Ying Lei, Yang Li

A Brain-Computer Interface (BCI) system provides a pathway between humans and the outside world by analyzing brain signals, which contain potential neural information. Electroencephalography (EEG) is one of the most commonly used brain signals, and EEG recognition is an important part of a BCI system. Recently, convolutional neural networks (ConvNets) in deep learning have become the new cutting-edge tools for tackling the problem of EEG recognition. However, training an effective deep learning model requires a large amount of data, which limits the application to EEG datasets with a small number of samples. In order to solve the issue of data insufficiency in deep learning for EEG decoding, we propose a novel data augmentation method that adds perturbations to the amplitudes of EEG signals after transforming them to the frequency domain. In experiments, we explore the performance of signal recognition with state-of-the-art models before and after data augmentation on BCI Competition IV dataset 2a and our local dataset. The results show that our data augmentation technique can effectively improve the accuracy of EEG recognition.
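The general idea can be sketched in a few lines of NumPy (an illustrative stand-in, not the authors' exact procedure): transform each trial to the frequency domain, perturb the amplitude spectrum while leaving the phases untouched, and transform back.

    import numpy as np

    def augment_amplitudes(eeg, noise_std=0.1, rng=np.random):
        """eeg: array of shape (channels, samples). Returns a perturbed copy."""
        spectrum = np.fft.rfft(eeg, axis=-1)
        amplitude = np.abs(spectrum)
        phase = np.angle(spectrum)
        amplitude *= 1.0 + noise_std * rng.randn(*amplitude.shape)  # perturb amplitudes only
        perturbed = amplitude * np.exp(1j * phase)                  # keep phases unchanged
        return np.fft.irfft(perturbed, n=eeg.shape[-1], axis=-1)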


          Kalman Filter Modifier for Neural Networks in Non-stationary Environments. (arXiv:1811.02361v1 [cs.LG])

Authors: Honglin Li, Frieder Ganz, Shirin Enshaeifar, Payam Barnaghi

Learning in a non-stationary environment is an unavoidable problem when applying machine learning algorithms to real-world environments. Learning new tasks without forgetting previous knowledge is a challenging issue in machine learning. We propose a Kalman Filter based modifier to maintain the performance of neural network models in non-stationary environments. The results show that our proposed model can preserve the key information and adapt better to the changes. In our experiments, the accuracy of the proposed model decreases by 0.4%, while the accuracy of the conventional model decreases by 90% in the drifting environment.
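For readers unfamiliar with the underlying tool, a generic one-dimensional Kalman-filter update is sketched below; it only illustrates the idea of tracking a drifting quantity from noisy observations and is not the paper's specific modifier.

    def kalman_update(x_est, p_est, z, q=1e-4, r=1e-2):
        """x_est, p_est: previous state estimate and its variance; z: new observation.
        q: process-noise variance, r: observation-noise variance."""
        p_pred = p_est + q                 # predict: state assumed roughly constant, uncertainty grows
        k = p_pred / (p_pred + r)          # Kalman gain
        x_new = x_est + k * (z - x_est)    # correct the estimate with the new observation
        p_new = (1.0 - k) * p_pred
        return x_new, p_new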


          DeepChannel: Salience Estimation by Contrastive Learning for Extractive Document Summarization. (arXiv:1811.02394v1 [cs.CL])

Authors: Jiaxin Shi, Chen Liang, Lei Hou, Juanzi Li, Zhiyuan Liu, Hanwang Zhang

We propose DeepChannel, a robust, data-efficient, and interpretable neural model for extractive document summarization. Given any document-summary pair, we estimate a salience score, which is modeled using an attention-based deep neural network, to represent the salience degree of the summary for yielding the document. We devise a contrastive training strategy to learn the salience estimation network, and then use the learned salience score as a guide and iteratively extract the most salient sentences from the document as our generated summary. In experiments, our model not only achieves state-of-the-art ROUGE scores on CNN/Daily Mail dataset, but also shows strong robustness in the out-of-domain test on DUC2007 test set. Moreover, our model reaches a ROUGE-1 F-1 score of 39.41 on CNN/Daily Mail test set with merely $1 / 100$ training set, demonstrating a tremendous data efficiency.


          Multi-Level Sensor Fusion with Deep Learning. (arXiv:1811.02447v1 [cs.CV])

Authors: Valentin Vielzeuf, Alexis Lechervy, Stéphane Pateux, Frédéric Jurie

In the context of deep learning, this article presents an original deep network, namely CentralNet, for the fusion of information coming from different sensors. This approach is designed to efficiently and automatically balance the trade-off between early and late fusion (i.e. between the fusion of low-level vs high-level information). More specifically, at each level of abstraction (the different levels of the deep networks), unimodal representations of the data are fed to a central neural network which combines them into a common embedding. In addition, a multi-objective regularization is also introduced, helping to optimize both the central network and the unimodal networks. Experiments on four multimodal datasets not only show state-of-the-art performance, but also demonstrate that CentralNet can actually choose the best possible fusion strategy for a given problem.
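A rough Keras sketch of the fusion idea is given below; the sizes are arbitrary and the plain summation is a simplification of the learned weighted combination described in the paper, so this is not the exact CentralNet.

    import tensorflow as tf
    from tensorflow.keras import layers, Model

    mod_a = layers.Input(shape=(128,))    # features from sensor/modality A
    mod_b = layers.Input(shape=(128,))    # features from sensor/modality B

    ha, hb = mod_a, mod_b
    central = layers.Add()([ha, hb])      # initial central representation
    for _ in range(2):                    # two fusion levels (assumed)
        ha = layers.Dense(128, activation="relu")(ha)
        hb = layers.Dense(128, activation="relu")(hb)
        hc = layers.Dense(128, activation="relu")(central)
        central = layers.Add()([hc, ha, hb])   # combine central and unimodal branches

    out = layers.Dense(10, activation="softmax")(central)   # 10 classes (assumed)
    model = Model([mod_a, mod_b], out)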


          Synaptic Strength For Convolutional Neural Network. (arXiv:1811.02454v1 [cs.LG])

Authors: Chen Lin, Zhao Zhong, Wei Wu, Junjie Yan

Convolutional Neural Networks (CNNs) are both computation- and memory-intensive, which hinders their deployment on mobile devices. Inspired by the relevant concept in the neuroscience literature, we propose Synaptic Pruning: a data-driven method to prune connections between input and output feature maps using a newly proposed class of parameters called Synaptic Strength. Synaptic Strength is designed to capture the importance of a connection based on the amount of information it transports. Experimental results show the effectiveness of our approach. On CIFAR-10, we prune up to 96% of the connections for various CNN models, which results in significant size reduction and computation savings. Further evaluation on ImageNet demonstrates that synaptic pruning is able to discover efficient models that are competitive with state-of-the-art compact CNNs such as MobileNet-V2 and NasNet-Mobile. Our contributions are summarized as follows: (1) We introduce Synaptic Strength, a new class of parameters for CNNs that indicates the importance of each connection. (2) Our approach can prune various CNNs with high compression without compromising accuracy. (3) Further investigation shows that the proposed Synaptic Strength is a better indicator for kernel pruning than the previous approach, in both empirical results and theoretical analysis.
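As a generic stand-in (not the paper's actual Synaptic Strength definition), connection pruning between input and output feature maps can be sketched as follows: rank each input-to-output channel connection of a convolution kernel by a simple L1 importance and zero out the weakest fraction.

    import numpy as np

    def prune_connections(kernel, fraction=0.5):
        """kernel: conv weights of shape (kh, kw, in_ch, out_ch). Returns pruned copy and mask."""
        strength = np.abs(kernel).sum(axis=(0, 1))            # (in_ch, out_ch) importance scores
        threshold = np.quantile(strength, fraction)
        mask = (strength >= threshold).astype(kernel.dtype)   # keep only the strong connections
        return kernel * mask[None, None, :, :], mask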


          Towards continual learning in medical imaging. (arXiv:1811.02496v1 [cs.CV])

Authors: Chaitanya Baweja, Ben Glocker, Konstantinos Kamnitsas

This work investigates continual learning of two segmentation tasks in brain MRI with neural networks. To explore in this context the capabilities of current methods for countering catastrophic forgetting of the first task when a new one is learned, we investigate elastic weight consolidation, a recently proposed method based on Fisher information, originally evaluated on reinforcement learning of Atari games. We use it to sequentially learn segmentation of normal brain structures and then segmentation of white matter lesions. Our findings show this recent method reduces catastrophic forgetting, while large room for improvement exists in these challenging settings for continual learning.


          UAlacant machine translation quality estimation at WMT 2018: a simple approach using phrase tables and feed-forward neural networks. (arXiv:1811.02510v1 [cs.CL])

Authors: Miquel Esplà-Gomis, Felipe Sánchez-Martínez, Mikel L. Forcada

We describe the Universitat d'Alacant submissions to the word- and sentence-level machine translation (MT) quality estimation (QE) shared task at WMT 2018. Our approach to word-level MT QE builds on previous work to mark the words in the machine-translated sentence as \textit{OK} or \textit{BAD}, and is extended to determine if a word or sequence of words need to be inserted in the gap after each word. Our sentence-level submission simply uses the edit operations predicted by the word-level approach to approximate TER. The method presented ranked first in the sub-task of identifying insertions in gaps for three out of the six datasets, and second in the rest of them.


          NeuralDrop: DNN-based Simulation of Small-Scale Liquid Flows on Solids. (arXiv:1811.02517v1 [cs.GR])

Authors: Rajaditya Mukherjee, Qingyang Li, Zhili Chen, Shicheng Chu, Huamin Wang

Small-scale liquid flows on solid surfaces provide convincing details in liquid animation, but they are difficult to be simulated with efficiency and fidelity, mostly due to the complex nature of the surface tension at the contact front where liquid, air, and solid meet. In this paper, we propose to simulate the dynamics of new liquid drops from captured real-world liquid flow data, using deep neural networks. To achieve this goal, we develop a data capture system that acquires liquid flow patterns from hundreds of real-world water drops. We then convert raw data into compact data for training neural networks, in which liquid drops are represented by their contact fronts in a Lagrangian form. Using the LSTM units based on recurrent neural networks, our neural networks serve three purposes in our simulator: predicting the contour of a contact front, predicting the color field gradient of a contact front, and finally predicting whether a contact front is going to break or not. Using these predictions, our simulator recovers the overall shape of a liquid drop at every time step, and handles merging and splitting events by simple operations. The experiment shows that our trained neural networks are able to perform predictions well. The whole simulator is robust, convenient to use, and capable of generating realistic small-scale liquid effects in animation.


          From Perception to Decision: A Data-driven Approach to End-to-end Motion Planning for Autonomous Ground Robots. (arXiv:1609.07910v3 [cs.RO] UPDATED)

Authors: Mark Pfeiffer, Michael Schaeuble, Juan Nieto, Roland Siegwart, Cesar Cadena

Learning from demonstration for motion planning is an ongoing research topic. In this paper we present a model that is able to learn the complex mapping from raw 2D-laser range findings and a target position to the required steering commands for the robot. To our best knowledge, this work presents the first approach that learns a target-oriented end-to-end navigation model for a robotic platform. The supervised model training is based on expert demonstrations generated in simulation with an existing motion planner. We demonstrate that the learned navigation model is directly transferable to previously unseen virtual and, more interestingly, real-world environments. It can safely navigate the robot through obstacle-cluttered environments to reach the provided targets. We present an extensive qualitative and quantitative evaluation of the neural network-based motion planner, and compare it to a grid-based global approach, both in simulation and in real-world experiments.


          Interpretation of Neural Networks is Fragile. (arXiv:1710.10547v2 [stat.ML] UPDATED)

Authors: Amirata Ghorbani, Abubakar Abid, James Zou

In order for machine learning to be deployed and trusted in many applications, it is crucial to be able to reliably explain why the machine learning algorithm makes certain predictions. For example, if an algorithm classifies a given pathology image to be a malignant tumor, then the doctor may need to know which parts of the image led the algorithm to this classification. How to interpret black-box predictors is thus an important and active area of research. A fundamental question is: how much can we trust the interpretation itself? In this paper, we show that interpretation of deep learning predictions is extremely fragile in the following sense: two perceptively indistinguishable inputs with the same predicted label can be assigned very different interpretations. We systematically characterize the fragility of several widely-used feature-importance interpretation methods (saliency maps, relevance propagation, and DeepLIFT) on ImageNet and CIFAR-10. Our experiments show that even small random perturbation can change the feature importance and new systematic perturbations can lead to dramatically different interpretations without changing the label. We extend these results to show that interpretations based on exemplars (e.g. influence functions) are similarly fragile. Our analysis of the geometry of the Hessian matrix gives insight on why fragility could be a fundamental challenge to the current interpretation approaches.


          A Likelihood-Free Inference Framework for Population Genetic Data using Exchangeable Neural Networks. (arXiv:1802.06153v2 [cs.LG] UPDATED)

Authors: Jeffrey Chan, Valerio Perrone, Jeffrey P. Spence, Paul A. Jenkins, Sara Mathieson, Yun S. Song

An explosion of high-throughput DNA sequencing in the past decade has led to a surge of interest in population-scale inference with whole-genome data. Recent work in population genetics has centered on designing inference methods for relatively simple model classes, and few scalable general-purpose inference techniques exist for more realistic, complex models. To achieve this, two inferential challenges need to be addressed: (1) population data are exchangeable, calling for methods that efficiently exploit the symmetries of the data, and (2) computing likelihoods is intractable as it requires integrating over a set of correlated, extremely high-dimensional latent variables. These challenges are traditionally tackled by likelihood-free methods that use scientific simulators to generate datasets and reduce them to hand-designed, permutation-invariant summary statistics, often leading to inaccurate inference. In this work, we develop an exchangeable neural network that performs summary statistic-free, likelihood-free inference. Our framework can be applied in a black-box fashion across a variety of simulation-based tasks, both within and outside biology. We demonstrate the power of our approach on the recombination hotspot testing problem, outperforming the state-of-the-art.


          Reservoir computing approaches for representation and classification of multivariate time series. (arXiv:1803.07870v2 [cs.NE] UPDATED)

Authors: Filippo Maria Bianchi, Simone Scardapane, Sigurd Løkse, Robert Jenssen

Classification of multivariate time series (MTS) has been tackled with a large variety of methodologies and applied to a wide range of scenarios. Among the existing approaches, reservoir computing (RC) techniques, which implement a fixed and high-dimensional recurrent network to process sequential data, are computationally efficient tools to generate a vectorial, fixed-size representation of the MTS that can be further processed by standard classifiers. Despite their unrivaled training speed, MTS classifiers based on a standard RC architecture fail to achieve the same accuracy of other classifiers, such as those exploiting fully trainable recurrent networks. In this paper we introduce the reservoir model space, an RC approach to learn vectorial representations of MTS in an unsupervised fashion. Each MTS is encoded within the parameters of a linear model trained to predict a low-dimensional embedding of the reservoir dynamics. Our model space yields a powerful representation of the MTS and, thanks to an intermediate dimensionality reduction procedure, attains computational performance comparable to other RC methods. As a second contribution we propose a modular RC framework for MTS classification, with an associated open source Python library. By combining the different modules it is possible to seamlessly implement advanced RC architectures, including our proposed unsupervised representation, bidirectional reservoirs, and non-linear readouts, such as deep neural networks with both fixed and flexible activation functions. Results obtained on benchmark and real-world MTS datasets show that RC classifiers are dramatically faster and, when implemented using our proposed representation, also achieve superior classification accuracy.


          ENG: End-to-end Neural Geometry for Robust Depth and Pose Estimation using CNNs. (arXiv:1807.05705v2 [cs.CV] UPDATED)

Authors: Thanuja Dharmasiri, Andrew Spek, Tom Drummond

Recovering structure and motion parameters given a image pair or a sequence of images is a well studied problem in computer vision. This is often achieved by employing Structure from Motion (SfM) or Simultaneous Localization and Mapping (SLAM) algorithms based on the real-time requirements. Recently, with the advent of Convolutional Neural Networks (CNNs) researchers have explored the possibility of using machine learning techniques to reconstruct the 3D structure of a scene and jointly predict the camera pose. In this work, we present a framework that achieves state-of-the-art performance on single image depth prediction for both indoor and outdoor scenes. The depth prediction system is then extended to predict optical flow and ultimately the camera pose and trained end-to-end. Our motion estimation framework outperforms the previous motion prediction systems and we also demonstrate that the state-of-the-art metric depths can be further improved using the knowledge of pose.


          Fast Spectrogram Inversion using Multi-head Convolutional Neural Networks. (arXiv:1808.06719v2 [cs.SD] UPDATED)

Authors: Sercan O. Arik, Heewoo Jun, Gregory Diamos

We propose the multi-head convolutional neural network (MCNN) architecture for waveform synthesis from spectrograms. Nonlinear interpolation in MCNN is employed with transposed convolution layers in parallel heads. MCNN achieves more than an order of magnitude higher compute intensity than commonly-used iterative algorithms like Griffin-Lim, yielding efficient utilization for modern multi-core processors, and very fast (more than 300x real-time) waveform synthesis. For training of MCNN, we use a large-scale speech recognition dataset and losses defined on waveforms that are related to perceptual audio quality. We demonstrate that MCNN constitutes a very promising approach for high-quality speech synthesis, without any iterative algorithms or autoregression in computations.
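
Read literally, the architecture amounts to several parallel heads of transposed convolutions whose outputs are combined into a waveform. The PyTorch snippet below is only a toy version of that idea, with illustrative sizes (4x temporal upsampling instead of a full STFT hop length, and none of the paper's scaling layers or waveform-domain losses).

    import torch
    import torch.nn as nn

    class TinyMultiHead(nn.Module):
        # Parallel heads of transposed 1-D convolutions; head outputs are summed.
        def __init__(self, n_freq=80, n_heads=4):
            super().__init__()
            self.heads = nn.ModuleList([
                nn.Sequential(
                    nn.ConvTranspose1d(n_freq, 32, kernel_size=4, stride=2, padding=1),
                    nn.ReLU(),
                    nn.ConvTranspose1d(32, 1, kernel_size=4, stride=2, padding=1),
                )
                for _ in range(n_heads)
            ])

        def forward(self, spec):                 # spec: (batch, n_freq, n_frames)
            return torch.stack([h(spec) for h in self.heads]).sum(dim=0)

    spec = torch.randn(2, 80, 100)               # fake spectrogram batch
    print(TinyMultiHead()(spec).shape)           # (2, 1, 400): time axis upsampled 4x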


          A Cross-Modal Distillation Network for Person Re-identification in RGB-Depth. (arXiv:1810.11641v2 [cs.CV] UPDATED)

Authors: Frank Hafner, Amran Bhuiyan, Julian F. P. Kooij, Eric Granger

Person re-identification involves the recognition over time of individuals captured using multiple distributed sensors. With the advent of powerful deep learning methods able to learn discriminant representations for visual recognition, cross-modal person re-identification based on different sensor modalities has become viable in many challenging applications in, e.g., autonomous driving, robotics and video surveillance. Although some methods have been proposed for re-identification between infrared and RGB images, few address depth and RGB images. In addition to the challenges for each modality associated with occlusion, clutter, misalignment, and variations in pose and illumination, there is a considerable shift across modalities since data from RGB and depth images are heterogeneous. In this paper, a new cross-modal distillation network is proposed for robust person re-identification between RGB and depth sensors. Using a two-step optimization process, the proposed method transfers supervision between modalities such that similar structural features are extracted from both RGB and depth modalities, yielding a discriminative mapping to a common feature space. Our experiments investigate the influence of the dimensionality of the embedding space, compares transfer learning from depth to RGB and vice versa, and compares against other state-of-the-art cross-modal re-identification methods. Results obtained with BIWI and RobotPKU datasets indicate that the proposed method can successfully transfer descriptive structural features from the depth modality to the RGB modality. It can significantly outperform state-of-the-art conventional methods and deep neural networks for cross-modal sensing between RGB and depth, with no impact on computational complexity.
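
The two-step optimization described above can be illustrated as follows: an RGB encoder trained for re-identification is frozen, and a depth encoder is then trained so that its embeddings match the RGB embeddings on paired images. The PyTorch sketch below uses hypothetical tiny encoders and a plain MSE matching loss; it is a simplification, not the paper's exact backbones or objectives.

    import torch
    import torch.nn as nn

    def make_encoder(in_ch):
        # Hypothetical tiny backbone standing in for the paper's person re-id CNNs.
        return nn.Sequential(
            nn.Conv2d(in_ch, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 64),
        )

    rgb_enc = make_encoder(3)       # step 1 (not shown): train this encoder on RGB re-id labels
    depth_enc = make_encoder(1)     # step 2: distil the RGB structure into the depth encoder
    for p in rgb_enc.parameters():  # the teacher stays frozen during distillation
        p.requires_grad = False

    opt = torch.optim.Adam(depth_enc.parameters(), lr=1e-3)
    rgb = torch.randn(8, 3, 128, 64)             # paired person crops (random stand-ins)
    depth = torch.randn(8, 1, 128, 64)
    for _ in range(5):                           # a few toy distillation steps
        loss = nn.functional.mse_loss(depth_enc(depth), rgb_enc(rgb))
        opt.zero_grad(); loss.backward(); opt.step()
    print(float(loss))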


          Predicting Hurricane Trajectories using a Recurrent Neural Network. (arXiv:1802.02548v3 [cs.LG] CROSS LISTED)

Authors: Sheila Alemany, Jonathan Beltran, Adrian Perez, Sam Ganzfried

Hurricanes are cyclones circulating about a defined center whose closed wind speeds exceed 75 mph originating over tropical and subtropical waters. At landfall, hurricanes can result in severe disasters. The accuracy of predicting their trajectory paths is critical to reduce economic loss and save human lives. Given the complexity and nonlinearity of weather data, a recurrent neural network (RNN) could be beneficial in modeling hurricane behavior. We propose the application of a fully connected RNN to predict the trajectory of hurricanes. We employed the RNN over a fine grid to reduce typical truncation errors. We utilized their latitude, longitude, wind speed, and pressure publicly provided by the National Hurricane Center (NHC) to predict the trajectory of a hurricane at 6-hour intervals. Results show that this proposed technique is competitive to methods currently employed by the NHC and can predict up to approximately 120 hours of hurricane path.
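
In practice this means feeding a window of (latitude, longitude, wind speed, pressure) observations taken at 6-hour intervals into a recurrent network and reading out the next position. The PyTorch sketch below only shows that input/output contract with illustrative layer sizes; it is not the authors' grid-based configuration.

    import torch
    import torch.nn as nn

    class TrajectoryRNN(nn.Module):
        # A past window of (latitude, longitude, wind speed, pressure) observations,
        # one row per 6-hour interval, mapped to the next (latitude, longitude).
        def __init__(self, hidden=32):
            super().__init__()
            self.rnn = nn.RNN(input_size=4, hidden_size=hidden, batch_first=True)
            self.head = nn.Linear(hidden, 2)

        def forward(self, seq):                  # seq: (batch, time, 4)
            out, _ = self.rnn(seq)
            return self.head(out[:, -1])         # position 6 hours after the last input

    tracks = torch.randn(16, 8, 4)               # 16 storms, 8 past observations each
    print(TrajectoryRNN()(tracks).shape)         # (16, 2)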


          What experts in artificial intelligence recommend studying to work in the field and make a living from it


According to Indeed, a US tech job portal, job postings related to Artificial Intelligence (AI) practically doubled between June 2015 and June 2018. Machine learning engineer, data scientist and computer vision engineer are the positions that, according to this analysis, best match AI job offers. And the forecast is that there will be more openings than professionals able to fill those vacancies.

At first glance, STEM degrees (Science, Technology, Engineering and Mathematics) seem the most suitable path into AI. But those currently working in Artificial Intelligence insist that multidisciplinary profiles and teams are increasingly necessary.

So, what should you study to work in Artificial Intelligence? How can you train for it? What do you have to do if you want to build your career in this field?

First point: decide which area of AI you want to work in

Saying Artificial Intelligence is like saying technology: it is such a broad and varied field that what is basic for some of its branches need not even be necessary for others.

That is why several of the experts we spoke with say, as a first step, that anyone who wants to work in AI should first consider which branch they want to work in.

José Javier Gutiérrez Pulgar works for Grupo Santander, where he is responsible for the Virtual Assistants strategy and for applying artificial intelligence in the contact center. "The field of AI is very broad and you cannot know it all. You have to know what you want to specialize in" (development, algorithms, designing business functions...), he says. That is why he recommends starting with an analysis of each person's skills and interests.

David Pereira, telecommunications engineer and Head of Artificial Intelligence at Everis

An opinion shared by Richard Benjamins, Data & AI Ambassador at Telefonica. "You have to see whether data interests you. On the internet we can find plenty of data about interesting things. With that data, I would tell this person to play with it, to try to visualize it, and then also to look into algorithms for discovering patterns, to learn the tools, and to learn how to program..."

A STEM foundation is advisable, but is it essential?

All the experts we spoke with agree that to work in AI it is important to have a strong foundation in mathematics or engineering. "In general terms I think you need a medium-to-high mathematical grounding, programming knowledge, the ability to build models, and a lot of creativity," summarizes José Antonio Torres, a physicist specializing in Computing and Artificial Intelligence and Co-founder and Chief Scientist Officer at Altoro Analytics.

How deep that knowledge of the more physical or mathematical side needs to be will again depend on the kind of AI work you want to do. "Anyone with programming knowledge can build AI applications without going down to the lowest level, because the digital giants such as Microsoft, Amazon, IBM or Google let us consume artificial intelligence through their services without needing exhaustive, first-principles knowledge," explains David Pereira, Head of Artificial Intelligence at Everis.

AI will increasingly demand multidisciplinary profiles for its development. Computational linguists, legal experts, philosophers and musicians are some of the humanities backgrounds in demand

A view Richard Benjamins shares. "The FAANG companies (Facebook, Amazon, Apple, Netflix, Google) have so much data and such good engineers that they offer AI as a service. That means you will be able to use their APIs to analyze text and build a service to sell to other professions." Although these companies have "lots of data and very good engineers" and can offer these services "at a low price," Benjamins warns that when using these third-party resources "you have to trust that the code is good and transparent."

At this point, we can distinguish two types of AI-related jobs: the more technical ones (such as developing the algorithms) and the business-oriented ones (applying AI to each industry or area).

For the more technical side, you need at least a minimum grounding in mathematics and statistics, according to the professionals consulted. This training is fundamental to understanding how artificial intelligence works. "These are mathematical operations that you study at university and perhaps, at the time, you don't know what they are for until you get into this kind of work and see their practical application," explains the Everis lead, a telecommunications engineer by training.

University, yes or no?

But is a university education necessary or mandatory to work in AI? All the experts consulted by Xataka agree: it is not a sine qua non, but it is highly advisable.

Diana de la Iglesia Jiménez holds a PhD in Artificial Intelligence and works as a Computer Scientist at the CNIO (Spanish National Cancer Research Centre), applying AI to cancer research. In her group there are other profiles from the biomedical field who also know how to program, but she believes that "it is important that the people who develop AI, who devise new algorithms and spot the flaws in existing ones, have the mathematical foundation that an engineering degree gives you."

"Up to a point you can learn on your own. The internet is an incredible repository of information and you can learn whatever you want if you put in the time," Diana de la Iglesia acknowledges. "But there is a limit to understanding how all those algorithms work. Applying them is one thing; understanding them in detail is another," she adds.

Diana de la Iglesia, PhD in Artificial Intelligence and Computer Scientist at CNIO

The truth is, though, that there are plenty of training resources on the internet, and business schools and universities increasingly offer programs tailored to the AI world; U-tad, Unir, Universidad Politécnica de Madrid, ESADE and IE are examples. Richard Benjamins believes a university background gives you a head start, but he thinks that with a solid mathematical foundation you can take advantage of the "many very good courses on data mining, and the ever more numerous master's programs," to round out that training.

So, as José Antonio Torres puts it, "I would tell my son to study at university even if he could build super-intelligent robots at 15." Why? "It is essential for this discipline that those who work in it (or at least part of them) have solid scientific foundations, and until now, university has been a good route to that."

If I already have that foundation, how do I specialize in AI?

If you already have that STEM foundation and want to work in AI, David Pereira recommends the courses available at Fast.ai to specialize in Artificial Intelligence. Many of the resources on that site "are free and of very high quality," in his assessment.

To go deeper into data analysis and visualization techniques, he recommends the book he himself read ("Python for Data Analysis" by Wes McKinney) as well as Stanford's online course "Convolutional Neural Networks for Visual Recognition," created by Tesla's head of AI and taught by a faculty that includes the AI lead at Google Cloud. It is free both through Stanford and on YouTube. "It is a more advanced course because it requires a more solid mathematical base, almost at the level of engineering, mathematics or physics," he cautions.

Which languages to master

Although, as José Antonio Torres recalls, Claude Shannon "never wrote a line of code and was one of the founders of AI," knowing how to program does seem indispensable for working in AI.

Again, depending on the business area you are going to work in, you will need to master one programming language or another, since there is no single language specific to AI.

Not essential, but almost. A university education in STEM fields makes training for and finding a job in AI easier

Python is one of the languages most in vogue, above all in data science. It is geared toward analyzing data at scale and has an open-source development community behind it that is constantly working on it and improving it. "It is always better to know several languages with different approaches, because that helps you program better," explains Diana de la Iglesia. A view shared by the co-founder and Chief Scientist Officer at Altoro Analytics: "These days you have to know how to program, and ideally in several languages. What I answer now will be different from what I would answer a year from now, because this evolves very quickly and there is a big battle over new standards, but three of the most popular are Java, Python and Scala."

José Javier Gutiérrez Pulgar explains that at Banco Santander, Java prevails as house policy, but he also mentions Python and Spring Boot. As a bet on the future, though, he points to Akka. "It is in high demand but there are hardly any professionals, so if you train in it you will have a great future and plenty of opportunities, especially in large companies."

Richard Benjamins, PhD in Artificial Intelligence and Data & AI Ambassador at Telefonica

The more humanistic side of AI

As we said before, beyond the more technical AI professionals, there are other profiles also in demand in this field: humanities graduates, legal experts, linguists...

Richard Benjamins, Data & AI Ambassador at Telefonica, studied psychology in the Netherlands. That curriculum included a branch of cognitive science: how people reason and how to program it into a computer. "That was the nineties, and it has nothing to do with how it is done now," this expert admits; he went on to study subjects related to natural language and computational linguistics.

For Benjamins, multidisciplinary teams are essential for the development of AI, and not only because of the more legal questions (here he refers to the classic problem of an autonomous car crashing into another). "There will be many profiles we don't even know will exist yet that will be necessary for the development of AI. If you want to build sustainable AI you cannot put only engineers on the team." Social, design and legal profiles will be basic to ensuring that apps do not discriminate by sex, race or religion. "If you only have technical people the problems show up at the end, but with a multidisciplinary team you anticipate them before an app goes to production."

Beyond formal education, there are many courses available on the internet (some of them free) for training in AI

David Pereira, however, believes the problem right now is that there is a very large gap in the spectrum of humanities profiles in artificial intelligence. "We have to make a very big effort to launch specific training for these kinds of profiles. We do not know what the legal frameworks are, or the ethical questions around developing intelligence and applying the GDPR, for example."

As Diana de la Iglesia concludes, "if we want to imitate human intelligence we need to understand the human being, understand their reasoning, the way they use language, and how thought is made communicable. AI cannot be separated from the humanities or from the creation of art. AI must span all areas if it wants to be complete."

The ethical training of the AI practitioner

Speaking of this more humanistic side and of the controversy that always surrounds Artificial Intelligence, we wanted to ask these experts whether professionals (especially the more technical ones, who develop algorithms and code) should strengthen their ethical and humanistic training, or whether multidisciplinary teams are enough.

"The problem with AI is that it keeps learning on its own once it leaves its environment. It is not so much what it does internally as what it can learn outside," explains Banco Santander's head of AI for the contact center, who believes there have to be people who identify these problems "so that the machines can unlearn them."

For José Antonio Torres, most of the controversy is "a fake controversy, a junk controversy" because, in his view, AI professionals should have the same ethical and moral values as any other scientist working at the cutting edge of innovation, such as a nuclear power plant engineer, a genetic engineer, or someone who designs and tests drugs.

So, as David Pereira states, anyone working in artificial intelligence should have at least a basic ethical grounding. "Precisely because technology moves faster than regulation, it is important that those of us working in this field have ethical training. This is about people doing things for the benefit of other people."

The story What experts in artificial intelligence recommend studying to work in the field and make a living from it was originally published in Xataka by Arantxa Herranz.


          Data Scientist - Big Data - belairdirect - Montréal, QC
Mastery of applied analytical techniques (clustering, decision trees, neural networks, SVM (support vector machines), collaborative filtering, k-nearest...
From belairdirect - Thu, 13 Sep 2018 14:41:28 GMT - View all Montréal, QC jobs
          Data Scientist - Big Data - Intact - Montréal, QC
Mastery of applied analytical techniques (clustering, decision trees, neural networks, SVM (support vector machines), collaborative filtering, k-nearest...
From Intact - Thu, 13 Sep 2018 00:55:20 GMT - View all Montréal, QC jobs
          Data Scientist - Big Data - belairdirect - Montréal, QC
Fluency in applied analytical techniques including regression analysis, clustering, decision trees, neural networks, SVM (support vector machines),...
From belairdirect - Thu, 13 Sep 2018 00:51:55 GMT - View all Montréal, QC jobs
          AI may not suffice to analyse data across multiple health systems

[USA], Nov 7 (ANI): Researchers have observed that artificial intelligence (AI) tools trained to detect pneumonia using chest X-rays suffered significant decreases in performance when tested on data from outside health systems.

According to a study conducted at the Icahn School of Medicine and published in a special issue of PLOS Medicine, these findings suggest that AI in the medical space must be carefully tested for performance across a wide range of populations; otherwise, the deep learning models may not perform as accurately as expected.

As interest grows in using convolutional neural networks (CNNs) to analyse medical imaging and provide computer-aided diagnoses, recent studies have suggested that AI image classification may not generalise to new data as well as commonly portrayed.

Researchers assessed how AI models identified pneumonia in 158,000 chest X-rays across three medical institutions: the National Institutes of Health; The Mount Sinai Hospital; and Indiana University Hospital. They chose to study the diagnosis of pneumonia on chest X-rays for its common occurrence, clinical significance, and prevalence in the research community.

In three out of five comparisons, the CNNs' performance in diagnosing disease on X-rays from hospitals outside their own network was significantly lower than on X-rays from the original health system. However, the CNNs were able to detect the hospital system where an X-ray was acquired with a high degree of accuracy, and they exploited the prevalence of pneumonia at the training institution to shortcut their predictive task.

Researchers found that the difficulty of using deep learning models in medicine is that they use a massive number of parameters, making it challenging to identify specific variables driving predictions, such as the types of CT scanners used at a hospital and the resolution quality of imaging.

"Our findings should give pause to those considering rapid deployment of artificial intelligence platforms without rigorously assessing their performance in real-world clinical settings reflective of where they are being deployed," said senior author Eric Oermann, MD. "Deep learning models trained to perform medical diagnosis can generalise well, but this cannot be taken for granted since patient populations and imaging techniques differ significantly across institutions."(ANI)


          A common neural network differentially mediates direct and social fear learning.

A common neural network differentially mediates direct and social fear learning.

Neuroimage. 2018 02 15;167:121-129

Authors: Lindström B, Haaker J, Olsson A

Abstract
Across species, fears often spread between individuals through social learning. Yet, little is known about the neural and computational mechanisms underlying social learning. Addressing this question, we compared social and direct (Pavlovian) fear learning, showing that they produced indistinguishable behavioral effects and involved the same cross-modal (self/other) aversive learning network, centered on the amygdala, the anterior insula (AI), and the anterior cingulate cortex (ACC). Crucially, the information flow within this network differed between social and direct fear learning. Dynamic causal modeling combined with reinforcement learning modeling revealed that the amygdala and AI provided input to this network during direct and social learning, respectively. Furthermore, the AI gated learning signals based on surprise (associability), which were conveyed to the ACC, in both learning modalities. Our findings provide insights into the mechanisms underlying social fear learning, with implications for understanding common psychological dysfunctions, such as phobias and other anxiety disorders.

PMID: 29170069 [PubMed - indexed for MEDLINE]


          Lotto Sorcerer 9.0.4
This program uses neural network technology to find patterns in prior draws.
          Comparison of the automatic segmentation of multiple organs at risk in CT images of lung cancer between deep convolutional neural network-based and atlas-based techniques.

Comparison of the automatic segmentation of multiple organs at risk in CT images of lung cancer between deep convolutional neural network-based and atlas-based techniques.

Acta Oncol. 2018 Nov 06;:1-8

Authors: Zhu J, Zhang J, Qiu B, Liu Y, Liu X, Chen L

Abstract
BACKGROUND: In this study, a deep convolutional neural network (CNN)-based automatic segmentation technique was applied to multiple organs at risk (OARs) depicted in computed tomography (CT) images of lung cancer patients, and the results were compared with those generated through atlas-based automatic segmentation.
MATERIALS AND METHODS: An encoder-decoder U-Net neural network was produced. The trained deep CNN performed the automatic segmentation of CT images for 36 cases of lung cancer. The Dice similarity coefficient (DSC), the mean surface distance (MSD) and the 95% Hausdorff distance (95% HD) were calculated, with manual segmentation results used as the standard, and were compared with the results obtained through atlas-based segmentation.
RESULTS: For the heart, lungs and liver, both the deep CNN-based and atlas-based techniques performed satisfactorily (average values: 0.87 < DSC < 0.95, 1.8 mm < MSD < 3.8 mm, 7.9 mm < 95% HD < 11 mm). For the spinal cord and the oesophagus, the two methods had statistically significant differences. For the atlas-based technique, the average values were 0.54 < DSC < 0.71, 2.6 mm < MSD < 3.1 mm and 9.4 mm < 95% HD < 12 mm. For the deep CNN-based technique, the average values were 0.71 < DSC < 0.79, 1.2 mm < MSD < 2.2 mm and 4.0 mm < 95% HD < 7.9 mm.
CONCLUSION: Our results showed that automatic segmentation based on a deep convolutional neural network enabled us to complete automatic segmentation tasks rapidly. Deep convolutional neural networks can be satisfactorily adapted to segment OARs during radiation treatment planning for lung cancer patients.

PMID: 30398090 [PubMed - as supplied by publisher]
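
For reference, the Dice similarity coefficient reported above can be computed directly from two binary masks; a minimal numpy version follows (the surface-based metrics, MSD and 95% HD, additionally require extracting contour points and are omitted here).

    import numpy as np

    def dice_coefficient(pred, ref):
        # DSC = 2|A ∩ B| / (|A| + |B|); 1.0 means perfect overlap with the manual contour.
        pred, ref = pred.astype(bool), ref.astype(bool)
        denom = pred.sum() + ref.sum()
        return 1.0 if denom == 0 else 2.0 * np.logical_and(pred, ref).sum() / denom

    manual = np.zeros((64, 64), dtype=int); manual[16:48, 16:48] = 1   # toy reference mask
    auto = np.zeros((64, 64), dtype=int); auto[20:52, 16:48] = 1       # toy automatic mask
    print(round(dice_coefficient(auto, manual), 3))                    # 0.875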


          Comment by Maudit on the post "AMD unveils Zen 2: big improvements to the CPU architecture, crazy hybrid MCM in servers"
It is not me who is using alternative facts. "Jan Olšan 17.9.2018 at 15:13" "What you see is simply the effect of DLSS working as a spatial filter, with just a single image. Because it does not work temporally (with multiple frames)" https://www.cnews.cz/nvidia-dlss-deep-learning-supersampling-upscaling-princip-fungovani/#comment-194887 To this day (and I have asked for it several times) you have not provided a single source where this is actually written. There is, however, a whole range of sources, including Jensen Huang's own presentation of the Turing architecture, that clearly talk about a temporal DNN. "Network has to remember part of the past." "Network has to be temporaly stable." https://youtu.be/Mrixi27G9yM?t=52m35s Your claim about some kind of "spatial filter" is likewise an "alternative truth". Renowned reviewers who had direct access to the review guide clearly talk about recognizing objects in the scene, just as Huang does: "For each DLSS game, NVIDIA receives early builds from game developers and trains that neural network to recognize common forms and shapes of the models, textures, and terrain to build a 'ground truth' database that is distributed through Game Ready driver updates." https://www.techpowerup.com/reviews/NVIDIA/GeForce_RTX_2080_Founders_Edition/38.html Your whole article about DLSS is a fabrication. I have supplied facts and sources. You just talk. Let everyone form their own picture. I am well aware that all the AMD fanboys here will pat you on the back, but as the recent pre-election study on the distribution of opinion in the United States showed, the silent majority will keep its own opinion.
          Multi-Modal Spectral Image Super-Resolution
Recent advances have shown the great power of deep convolutional neural networks (CNNs) to learn the relationship between low and high-resolution image patches. However, these methods only take a single-scale image as input and require large amounts of data to train without the risk of overfitting. In this paper, we tackle the problem of multi-modal spectral image super-resolution while constraining ourselves to a small dataset. We propose the use of different modalities to improve the performance of neural networks on the spectral super-resolution problem. First, we use multiple downscaled versions of the same image to infer a better high-resolution image for training; we refer to these inputs as a multi-scale modality. Furthermore, color images are usually taken at a higher resolution than spectral images, so we make use of color images as another modality to improve the super-resolution network. By combining both modalities, we build a pipeline that learns to super-resolve using multi-scale spectral inputs guided by a color image. Finally, we validate our method and show that it is economical in terms of parameters and computation time, while still producing state-of-the-art results.
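
One simple way to combine the two modalities described above is to upsample the low-resolution spectral cube, concatenate it with the registered high-resolution color image, and let a small CNN predict a residual correction. The PyTorch sketch below assumes that reading, with made-up band counts and layer sizes; it is not the challenge pipeline evaluated in the paper.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class GuidedSpectralSR(nn.Module):
        # Upsample the low-res spectral cube, concatenate the registered high-res RGB
        # guide, and refine with a small CNN that predicts a residual correction.
        def __init__(self, n_bands=14):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(n_bands + 3, 64, 3, padding=1), nn.ReLU(),
                nn.Conv2d(64, n_bands, 3, padding=1),
            )

        def forward(self, lr_spec, hr_rgb):
            up = F.interpolate(lr_spec, size=hr_rgb.shape[-2:], mode="bicubic",
                               align_corners=False)
            return up + self.net(torch.cat([up, hr_rgb], dim=1))

    lr = torch.randn(1, 14, 60, 60)              # low-resolution spectral image
    rgb = torch.randn(1, 3, 180, 180)            # registered high-resolution color image
    print(GuidedSpectralSR()(lr, rgb).shape)     # (1, 14, 180, 180)
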
          Data Engineer 2 - IMO - Intelligent Medical Objects, Inc. - Northbrook, IL
Familiarity with machine learning methods, such as clustering analysis and neural networks. Downtown commuters will enjoy free shuttle service to IMO’s...
From IMO - Intelligent Medical Objects, Inc. - Mon, 24 Sep 2018 17:52:43 GMT - View all Northbrook, IL jobs

