
Java Deep Learning Projects

eBook Details: Paperback, 436 pages | Publisher: WOW! eBook (June 29, 2018) | Language: English | ISBN-10: 178899745X | ISBN-13: 978-1788997454. eBook Description: Java Deep Learning Projects: Build and deploy powerful neural network models using the latest Java deep learning libraries.

The post Java Deep Learning Projects appeared first on WOW! eBook: Free eBooks Download.

Machine learning technique reconstructs images passing through a multimode fiber
(The Optical Society) Through innovative use of a neural network that mimics image processing by the human brain, a research team reports accurate reconstruction of images transmitted over optical fibers for distances of up to a kilometer.
The chAIr Project – Reversing the role of human and machine in the design process

The chAIr Project is a series of four chairs created with a generative adversarial network (GAN) trained on a dataset of iconic 20th-century chairs, with the goal of "generating a classic". The results are semi-abstract visual prompts for a human designer, who used them as starting points for actual chair design concepts.
Neural network-based multivariable fixed-time terminal sliding mode control for re-entry vehicles
This study develops a neural network (NN)-based multivariable fixed-time terminal sliding mode control (MFTTSMC) strategy for re-entry vehicles (RVs) with uncertainties. A coupled MFTTSMC scheme is designed for the attitude system on the basis of feedback linearisation. A saturation function is introduced to avoid the singularity problem. Adaptive NNs are employed to approximate the uncertainties in RVs, thus alleviating chattering without sacrificing robustness. The whole closed-loop system is proven to be bounded and tracking errors are fixed-time stable. Simulations verify the effectiveness of the proposed strategy.
IBM demonstrates AI malware that targets its victims selectively
At the Black Hat conference in Las Vegas, IBM is demonstrating advanced malware that uses a neural network to activate its payload. The complexity of such a network makes reverse engineering nearly impossible, and IBM believes the technique may already be in use today. At Black Hat, IBM presents DeepLocker, AI-powered malware that uses a convolutional neural network to […]
Chemical Engineers Simplify Models Via AI

Nikolaos Sahinidis

Researchers in Carnegie Mellon University’s Department of Chemical Engineering are using a novel machine learning approach, called ALAMO, to build simple but accurate models that can quickly make sense of massive amounts of data.

“We don't just use algorithms that others develop,” said Nikolaos Sahinidis, the John E. Swearingen Professor of Chemical Engineering, developer of ALAMO and a CMU alumnus. “In our group, we also develop the algorithms ourselves, and then we apply them to many application domains, both within and outside of process systems engineering.”

Process systems engineering involves making decisions about chemical processes, from designing molecules to designing entire supply chains. All of these domains present decision-making problems in which optimization algorithms are useful.

While deep neural networks provide accurate models, those models are very complex. Leveraging mathematical optimization techniques, ALAMO was developed as a new methodology to represent complex processes simply and accurately while accounting for physical constraints.

“What we started looking at back seven years ago was the modeling and optimization of very complex processes for which we don't have analytical models,” Sahinidis said. “So then the question was, ‘can we use data to build mathematical models that we can then use to analyze and optimize these processes?’”

To create these models, the ALAMO methodology uses a small set of experimental or simulation data and builds models that are as simple as possible. Along the way, the team has also worked out how to enforce the physical constraints of a process during modeling.
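A minimal sketch of the basic idea behind an ALAMO-style surrogate (illustrative only; the real ALAMO uses integer programming and can enforce physical constraints): from a small dataset, enumerate candidate basis functions and keep the simplest subset that fits accurately. The toy data, candidate bases, and error threshold below are all made up.

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0.5, 2.0, size=30)
y = 2.0 * x + 0.5 * x**2 + rng.normal(0, 0.01, size=30)  # "simulation" data

# Candidate basis functions the surrogate may use.
basis = {
    "x":      x,
    "x^2":    x**2,
    "log(x)": np.log(x),
    "1/x":    1.0 / x,
}

def fit_subset(names):
    """Least-squares fit of y on the chosen basis columns; return (rmse, coeffs)."""
    A = np.column_stack([basis[n] for n in names])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    rmse = np.sqrt(np.mean((A @ coef - y) ** 2))
    return rmse, coef

# Enumerate subsets from simplest to most complex; keep the first accurate one.
best = None
for k in range(1, len(basis) + 1):
    for names in itertools.combinations(basis, k):
        rmse, coef = fit_subset(names)
        if rmse < 0.05:
            best = (names, coef)
            break
    if best:
        break

print(best[0])  # simplest accurate subset
```

The exhaustive enumeration here is only workable for a handful of candidate bases; ALAMO's integer-programming formulation is what makes the same idea scale.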

A number of students in Sahinidis’ group are applying the ALAMO methodology to multiple chemical engineering problems.

Fifth-year Ph.D. student Zachary Wilson is applying the ALAMO method to reaction engineering, creating models that predict which reactions or reaction mechanisms are occurring inside a chemical reactor, based on process data. In many problems that computer scientists tackle, such as computer vision, the main goal of a model is to generalize and predict well; understanding and interpreting its inner workings is often a secondary priority. But in engineering, the parameters that researchers need to estimate often carry physical meaning, so interpretability is imperative.

“We’ve taken the integer programming methodology in ALAMO, which discretely considers sub-models, and have applied it to these engineering domains,” Wilson said.

Another application is in thermodynamics. Third-year Ph.D. student Marissa Engle is extending the ALAMO approach to incorporate all of the datasets measuring different properties of the same fluid, creating one big picture to characterize its thermodynamic properties. Using data on pressure, volume, temperature, heat capacities and speed of sound, Engle is developing machine learning techniques to find one optimized equation.

“The problem with these equations is that they get very complex,” Engle said. “Using an ALAMO-like approach, we can suggest basis functions and limit how many terms are being used. We want to improve on these empirical equations so that they are simple, but accurate in the regions where new technologies are starting to push into areas where the thermodynamics get complicated, so we can accurately represent them and control them.”

Artificial intelligence and machine learning are providing new avenues for scientists and engineers to do their work better. But not all types of machine learning work for every problem. ALAMO is one example of how engineers are leveraging these techniques in order to accurately solve the problems that face engineers of every discipline.

“In some cases you can model from first principles,” Sahinidis said. “If the problem is too complex or too modern for first principles, then that's where we see the potential usefulness of machine learning.”

On the Convergence of A Class of Adam-Type Algorithms for Non-Convex Optimization. (arXiv:1808.02941v1 [cs.LG])

Authors: Xiangyi Chen, Sijia Liu, Ruoyu Sun, Mingyi Hong

This paper studies a class of adaptive-gradient-based momentum algorithms that update the search directions and learning rates simultaneously using past gradients. This class, which we refer to as "Adam-type", includes popular algorithms such as Adam, AMSGrad and AdaGrad. Despite their popularity in training deep neural networks, the convergence of these algorithms for solving nonconvex problems remains an open question. This paper provides a set of mild sufficient conditions that guarantee convergence for the Adam-type methods. We prove that under our derived conditions, these methods can achieve a convergence rate of order $O(\log{T}/\sqrt{T})$ for nonconvex stochastic optimization. We show the conditions are essential in the sense that violating them may make the algorithm diverge. Moreover, we propose and analyze a class of (deterministic) incremental adaptive gradient algorithms with the same $O(\log{T}/\sqrt{T})$ convergence rate. Our study could also be extended to a broader class of adaptive gradient methods in machine learning and optimization.
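The "Adam-type" update the paper analyzes, a search direction from exponentially averaged past gradients and a coordinate-wise learning rate scaled by past squared gradients, can be sketched as follows. This is a generic textbook-style Adam implementation with the usual default hyperparameters and our own toy objective, not the paper's code.

```python
import numpy as np

def adam_minimize(grad, x0, steps=2000, lr=0.05,
                  beta1=0.9, beta2=0.999, eps=1e-8):
    x = np.asarray(x0, dtype=float)
    m = np.zeros_like(x)  # first moment: momentum over past gradients
    v = np.zeros_like(x)  # second moment: adaptive per-coordinate scaling
    for t in range(1, steps + 1):
        g = grad(x)
        m = beta1 * m + (1 - beta1) * g
        v = beta2 * v + (1 - beta2) * g**2
        m_hat = m / (1 - beta1**t)        # bias correction
        v_hat = v / (1 - beta2**t)
        x -= lr * m_hat / (np.sqrt(v_hat) + eps)
    return x

# Nonconvex toy objective f(x) = (x^2 - 1)^2 with minima at x = +/-1;
# its gradient is 4x(x^2 - 1).
x_star = adam_minimize(lambda x: 4 * x * (x**2 - 1), x0=[0.3])
print(x_star)  # converges near a minimizer
```

Starting from a positive initial point, the iterate settles near the minimizer at x = 1; the paper's conditions concern exactly when such adaptive schemes are guaranteed to reach stationary points on nonconvex objectives.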

Exploiting Effective Representations for Chinese Sentiment Analysis Using a Multi-Channel Convolutional Neural Network. (arXiv:1808.02961v1 [cs.CL])

Authors: Pengfei Liu, Ji Zhang, Cane Wing-Ki Leung, Chao He, Thomas L. Griffiths

Effective representation of a text is critical for various natural language processing tasks. For the particular task of Chinese sentiment analysis, it is important to understand and choose an effective representation of a text from different forms of Chinese representations such as word, character and pinyin. This paper presents a systematic study of the effect of these representations on Chinese sentiment analysis by proposing a multi-channel convolutional neural network (MCCNN), where each channel corresponds to a representation. Experimental results show that: (1) word wins on datasets with a low OOV rate, while character wins otherwise; (2) using these representations in combination generally improves performance; (3) the MCCNN-based representations outperform conventional n-gram features with an SVM; (4) the proposed MCCNN model achieves competitive performance against the state-of-the-art model fastText for Chinese sentiment analysis.

Controllable Image-to-Video Translation: A Case Study on Facial Expression Generation. (arXiv:1808.02992v1 [cs.CV])

Authors: Lijie Fan, Wenbing Huang, Chuang Gan, Junzhou Huang, Boqing Gong

The recent advances in deep learning have made it possible to generate photo-realistic images using neural networks, and even to extrapolate video frames from an input video clip. In this paper, to further this exploration and pursue our own interest in a realistic application, we study image-to-video translation, focusing in particular on videos of facial expressions. This problem challenges deep neural networks with an additional temporal dimension compared to image-to-image translation. Moreover, its single input image defeats most existing video generation methods, which rely on recurrent models. We propose a user-controllable approach to generate video clips of various lengths from a single face image, where the lengths and types of the expressions are controlled by users. To this end, we design a novel neural network architecture that can incorporate the user input into its skip connections, and propose several improvements to the adversarial training method for the neural network. Experiments and user studies verify the effectiveness of our approach. In particular, even for face images in the wild (downloaded from the Web, plus the authors' own photos), our model can generate high-quality facial expression videos, about 50% of which are labeled as real by Amazon Mechanical Turk workers.

Object Detection in Satellite Imagery using 2-Step Convolutional Neural Networks. (arXiv:1808.02996v1 [cs.CV])

Authors: Hiroki Miyamoto, Kazuki Uehara, Masahiro Murakawa, Hidenori Sakanashi, Hirokazu Nosato, Toru Kouyama, Ryosuke Nakamura

This paper presents an efficient object detection method for satellite imagery. Among a number of machine learning algorithms, we propose a combination of two convolutional neural networks (CNNs) aimed at high precision and high recall, respectively. We validated our models using golf courses as target objects. The proposed deep learning method demonstrated higher accuracy than previous object identification methods.

Radon Inversion via Deep Learning. (arXiv:1808.03015v1 [cs.CV])

Authors: Ji He, Jianhua Ma

The Radon transform is widely used in the physical and life sciences; one of its major applications is X-ray computed tomography (X-ray CT), which is significant in modern health examination. Radon inversion, or image reconstruction, is challenging due to potentially defective Radon projections. Conventionally, the reconstruction process contains several ad hoc stages approximating the corresponding Radon inversion, each highly dependent on the results of the previous stage. In this paper, we propose a novel unified framework for Radon inversion via deep learning (DL), which approximates the inversion in an end-to-end fashion instead of processing step by step with multiple stages. For brevity, we refer to the proposed framework as iRadonMap (inverse Radon transform approximation). Specifically, we implement iRadonMap as a dedicated neural network whose architecture can be divided into two segments. In the first segment, a learnable fully connected filtering layer filters the Radon projections along the view-angle direction, followed by a learnable sinusoidal back-projection layer that transfers the filtered projections into an image. The second segment is a common neural network architecture that further improves reconstruction performance in the image domain. iRadonMap is optimized end-to-end by training on a large number of generic images from the ImageNet database. To evaluate its performance, clinical patient data is used. Qualitative results show promising reconstruction performance.
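As a toy illustration of why a learnable layer can stand in for part of the inversion: the discrete Radon transform is linear, so over a class of images its inverse can be fitted from (image, projection) training pairs. The sketch below uses a random matrix in place of a real discretized Radon operator, purely as an assumption for illustration; it is not the iRadonMap architecture.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 16            # flattened "image" size (toy scale)
m = 24            # number of projection measurements

A = rng.normal(size=(m, n))          # stand-in for a discretized Radon matrix

# Training set of generic "images" and their projections.
X_train = rng.normal(size=(200, n))
Y_train = X_train @ A.T

# Learn the linear inverse W (projections -> image) by least squares,
# i.e. "train" a fully connected inversion layer.
W, *_ = np.linalg.lstsq(Y_train, X_train, rcond=None)

# Reconstruct an unseen image from its projections.
x_new = rng.normal(size=n)
x_rec = (x_new @ A.T) @ W
print(np.max(np.abs(x_rec - x_new)))  # small reconstruction error
```

In the paper's setting the learned layers are structured (filtering along view angles plus sinusoidal back-projection) rather than one dense matrix, which keeps the parameter count tractable at realistic image sizes.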

Image Inspired Poetry Generation in XiaoIce. (arXiv:1808.03090v1 [cs.AI])

Authors: Wen-Feng Cheng, Chao-Chung Wu, Ruihua Song, Jianlong Fu, Xing Xie, Jian-Yun Nie

Vision is a common source of inspiration for poetry. The objects and the sentimental imprints that one perceives from an image may lead to various feelings depending on the reader. In this paper, we present a system of poetry generation from images to mimic the process. Given an image, we first extract a few keywords representing objects and sentiments perceived from the image. These keywords are then expanded to related ones based on their associations in human written poems. Finally, verses are generated gradually from the keywords using recurrent neural networks trained on existing poems. Our approach is evaluated by human assessors and compared to other generation baselines. The results show that our method can generate poems that are more artistic than the baseline methods. This is one of the few attempts to generate poetry from images. By deploying our proposed approach, XiaoIce has already generated more than 12 million poems for users since its release in July 2017. A book of its poems has been published by Cheers Publishing, which claimed that the book is the first-ever poetry collection written by an AI in human history.

Rhythm-Flexible Voice Conversion without Parallel Data Using Cycle-GAN over Phoneme Posteriorgram Sequences. (arXiv:1808.03113v1 [cs.SD])

Authors: Cheng-chieh Yeh, Po-chun Hsu, Ju-chieh Chou, Hung-yi Lee, Lin-shan Lee

Speaking rate refers to the average number of phonemes within some unit time, while rhythmic patterns refer to duration distributions for realizations of different phonemes within different phonetic structures. Both are key components of prosody in speech, and both differ across speakers. Models like the cycle-consistent adversarial network (Cycle-GAN) and variational auto-encoder (VAE) have been successfully applied to voice conversion tasks without parallel data. However, due to the neural network architectures and feature vectors chosen for these approaches, the length of the predicted utterance has to be fixed to that of the input utterance, which limits the flexibility in mimicking the speaking rates and rhythmic patterns of the target speaker. On the other hand, sequence-to-sequence learning models have been used to remove this length constraint, but they need parallel training data. In this paper, we propose an approach utilizing a sequence-to-sequence model trained with an unsupervised Cycle-GAN to perform the transformation between phoneme posteriorgram sequences of different speakers. In this way, the length constraint mentioned above is removed, offering rhythm-flexible voice conversion without requiring parallel data. Preliminary evaluation on two datasets showed very encouraging results.

Training De-Confusion: An Interactive, Network-Supported Visual Analysis System for Resolving Errors in Image Classification Training Data. (arXiv:1808.03114v1 [cs.CV])

Authors: Alex Bäuerle, Heiko Neumann, Timo Ropinski

Convolutional neural networks are gaining popularity in image classification tasks, since they are often able to outperform human classifiers. While much research has targeted network architecture optimization, the optimization of the labeled training data has not been explicitly addressed yet. Since labeling training data is time-consuming, it is often performed by less experienced domain experts or even outsourced to online services. Unfortunately, this results in labeling errors, which directly impact the classification performance of the trained network. To overcome this problem, we propose an interactive visual analysis system that helps spot and correct errors in the training dataset. For this purpose, we have identified instance interpretation errors, class interpretation errors and similarity errors as frequently occurring errors, which must be resolved to improve classification performance. Once these errors are detected, users are guided towards them through a two-step visual analysis process in which they can directly reassign labels. Thus, with the proposed visual analysis system, the user has to inspect far fewer items to resolve labeling errors in the training dataset, and arrives at satisfying training results more quickly.

Building a Kannada POS Tagger Using Machine Learning and Neural Network Models. (arXiv:1808.03175v1 [cs.CL])

Authors: Ketan Kumar Todi, Pruthwik Mishra, Dipti Misra Sharma

POS tagging serves as a preliminary task for many NLP applications. Kannada is a relatively resource-poor Indian language with a very limited number of quality NLP tools available. An accurate and reliable POS tagger is essential for many NLP tasks such as shallow parsing, dependency parsing, sentiment analysis and named entity recognition. We present a statistical POS tagger for Kannada using different machine learning and neural network models. Our Kannada POS tagger outperforms the state-of-the-art Kannada POS tagger by 6%. Our contribution in this paper is threefold: building a generic POS tagger, comparing the performances of different modeling techniques, and exploring the use of character and word embeddings together for Kannada POS tagging.

Data-driven polynomial chaos expansion for machine learning regression. (arXiv:1808.03216v1 [stat.ML])

Authors: E. Torre, S. Marelli, P. Embrechts, B. Sudret

We present a regression technique for data-driven problems based on polynomial chaos expansion (PCE). PCE is a popular technique in the field of uncertainty quantification (UQ), where it is typically used to replace a runnable but expensive computational model subject to random inputs with an inexpensive-to-evaluate polynomial function. The resulting metamodel enables a reliable estimation of the statistics of the output, provided that a suitable probabilistic model of the input is available.

In classical machine learning (ML) regression settings, however, the system is only known through observations of its inputs and output, and the interest lies in obtaining accurate pointwise predictions of the latter. Here, we show that a PCE metamodel purely trained on data can yield pointwise predictions whose accuracy is comparable to that of other ML regression models, such as neural networks and support vector machines. The comparisons are performed on benchmark datasets available from the literature. The methodology also enables the quantification of the output uncertainties and is robust to noise. Furthermore, it enjoys additional desirable properties, such as good performance for small training sets and simplicity of construction, with only little parameter tuning required. In the presence of statistically dependent inputs, we investigate two ways to build the PCE, and show through simulations that one approach is superior to the other in the stated settings.
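A minimal one-dimensional sketch of data-driven PCE regression (not the authors' implementation): expand the model in probabilists' Hermite polynomials, which are orthogonal under a standard-normal input, and fit the coefficients by least squares on observed data. The target function and noise level are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=400)                       # random standard-normal inputs
y = np.sin(x) + 0.1 * rng.normal(size=400)     # noisy observations

# Probabilists' Hermite polynomials He_0..He_degree, orthogonal w.r.t. N(0,1),
# built via the recurrence He_{k+1}(x) = x He_k(x) - k He_{k-1}(x).
def hermite_design(x, degree=4):
    H = [np.ones_like(x), x]
    for k in range(1, degree):
        H.append(x * H[k] - k * H[k - 1])
    return np.column_stack(H)

# Fit the PCE coefficients by ordinary least squares.
coef, *_ = np.linalg.lstsq(hermite_design(x), y, rcond=None)

# Pointwise prediction at new inputs.
x_test = np.linspace(-1.5, 1.5, 7)
y_pred = hermite_design(x_test) @ coef
print(np.max(np.abs(y_pred - np.sin(x_test))))  # close to sin on this range
```

With an orthogonal basis, the fitted coefficients also directly encode output statistics (the He_0 coefficient is the predicted mean under the input distribution), which is the UQ side benefit the abstract mentions.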

Identifying Protein-Protein Interaction using Tree LSTM and Structured Attention. (arXiv:1808.03227v1 [q-bio.QM])

Authors: Mahtab Ahmed, Jumayel Islam, Muhammad Rifayat Samee, Robert E. Mercer

Identifying interactions between proteins is important for understanding underlying biological processes. Extracting a protein-protein interaction (PPI) from raw text is often very difficult. Previous supervised learning methods have used handcrafted features on human-annotated data sets. In this paper, we propose a novel tree recurrent neural network with a structured attention architecture for PPI extraction. Our architecture achieves state-of-the-art results (precision, recall and F1-score) on the AIMed and BioInfer benchmark data sets. Moreover, our models achieve a significant improvement over previous best models without any explicit feature extraction. Our experimental results show that traditional recurrent networks have inferior performance compared to tree recurrent networks for the supervised PPI problem.

Augmenting Physical Simulators with Stochastic Neural Networks: Case Study of Planar Pushing and Bouncing. (arXiv:1808.03246v1 [cs.RO])

Authors: Anurag Ajay, Jiajun Wu, Nima Fazeli, Maria Bauza, Leslie P. Kaelbling, Joshua B. Tenenbaum, Alberto Rodriguez

An efficient, generalizable physical simulator with universal uncertainty estimates has wide applications in robot state estimation, planning, and control. In this paper, we build such a simulator for two scenarios, planar pushing and ball bouncing, by augmenting an analytical rigid-body simulator with a neural network that learns to model uncertainty as residuals. Combining symbolic, deterministic simulators with learnable, stochastic neural nets provides us with expressiveness, efficiency, and generalizability simultaneously. Our model outperforms both purely analytical and purely learned simulators consistently on real, standard benchmarks. Compared with methods that model uncertainty using Gaussian processes, our model runs much faster, generalizes better to new object shapes, and is able to characterize the complex distribution of object trajectories.
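The hybrid idea, an analytical simulator plus a learned residual, can be sketched in a toy one-dimensional bouncing example. Here a polynomial fit stands in for the stochastic neural network, and the "real" dynamics are invented for illustration; this is not the authors' setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def analytical_bounce(v_in, restitution=0.8):
    """Ideal rigid-body model: rebound speed is a fixed fraction of impact speed."""
    return restitution * v_in

# "Real" data: restitution actually degrades at higher impact speeds.
v_in = rng.uniform(0.5, 5.0, size=100)
v_out_real = (0.8 - 0.03 * v_in) * v_in + rng.normal(0, 0.01, size=100)

# Learn the residual (real minus analytical) as a function of impact speed.
residual = v_out_real - analytical_bounce(v_in)
res_model = np.polynomial.Polynomial.fit(v_in, residual, deg=2)

def hybrid_bounce(v):
    """Analytical prediction corrected by the learned residual."""
    return analytical_bounce(v) + res_model(v)

v = 3.0
print(abs(hybrid_bounce(v) - (0.8 - 0.03 * v) * v))  # hybrid error is small
```

The analytical part supplies structure and generalization; the learned part only has to model the (small) gap to reality, which is what makes the combination data-efficient.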

3D Shape Perception from Monocular Vision, Touch, and Shape Priors. (arXiv:1808.03247v1 [cs.CV])

Authors: Shaoxiong Wang, Jiajun Wu, Xingyuan Sun, Wenzhen Yuan, William T. Freeman, Joshua B. Tenenbaum, Edward H. Adelson

Perceiving accurate 3D object shape is important for robots interacting with the physical world. Current research in this direction has relied primarily on visual observations. Vision, however useful, has inherent limitations due to occlusions and 2D-3D ambiguities, especially for perception with a monocular camera. In contrast, touch provides precise local shape information, though its efficiency for reconstructing an entire shape can be low. In this paper, we propose a novel paradigm that efficiently perceives accurate 3D object shape by incorporating visual and tactile observations, as well as prior knowledge of common object shapes learned from large-scale shape repositories. We use vision first, applying neural networks with learned shape priors to predict an object's 3D shape from a single-view color image. We then use tactile sensing to refine the shape; the robot actively touches the object regions where the visual prediction has high uncertainty. Our method efficiently builds the 3D shape of common objects from a color image and a small number of tactile explorations (around 10). Our setup is easy to apply and has the potential to help robots better perform grasping or manipulation tasks on real-world objects.

Application of Bounded Total Variation Denoising in Urban Traffic Analysis. (arXiv:1808.03258v1 [cs.LG])

Authors: Shanshan Tang, Haijun Yu

While it is believed that denoising is not always necessary in many big data applications, we show in this paper that denoising is helpful in urban traffic analysis by applying the method of bounded total variation denoising to the urban road traffic prediction and clustering problem. We propose two easy-to-implement methods to estimate the noise strength parameter in the denoising algorithm, and apply the denoising algorithm to GPS-based traffic data from the Beijing taxi system. For the traffic prediction problem, we combine a neural network with a history matching method for roads randomly chosen from an urban area of Beijing. Numerical experiments show that prediction accuracy is improved significantly by applying the proposed bounded total variation denoising algorithm. We also test the algorithm on a clustering problem, where a recently developed clustering analysis method is applied to more than one hundred urban road segments in Beijing based on their velocity profiles. Better clustering results are obtained after denoising.
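As a rough sketch of the role TV denoising plays here (the paper's bounded-TV algorithm and its noise-strength estimation are not reproduced), one can minimize a smoothed one-dimensional total variation objective, 0.5*||x - y||^2 + lam * sum_i sqrt((x_{i+1} - x_i)^2 + eps), by plain gradient descent on a noisy piecewise-constant signal. All parameter values below are illustrative.

```python
import numpy as np

def tv_denoise(y, lam=0.5, steps=3000, lr=0.05, eps=1e-3):
    x = y.copy()
    for _ in range(steps):
        d = np.diff(x)
        w = d / np.sqrt(d**2 + eps)          # gradient of the smoothed |d_i|
        g = x - y                            # gradient of the data-fit term
        g[:-1] -= lam * w                    # d(TV)/dx_i contribution
        g[1:]  += lam * w                    # d(TV)/dx_{i+1} contribution
        x -= lr * g
    return x

# Piecewise-constant "velocity profile" with additive noise.
rng = np.random.default_rng(0)
clean = np.repeat([0.0, 1.0, 0.3], 50)
noisy = clean + rng.normal(0, 0.1, size=clean.size)
denoised = tv_denoise(noisy)
print(np.mean((denoised - clean) ** 2), np.mean((noisy - clean) ** 2))
```

TV regularization suppresses noise inside flat stretches while keeping the jumps, which is exactly the behavior one wants for piecewise-steady traffic speed data.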

Spurious Local Minima are Common in Two-Layer ReLU Neural Networks. (arXiv:1712.08968v3 [cs.LG] UPDATED)

Authors: Itay Safran, Ohad Shamir

We consider the optimization problem associated with training simple ReLU neural networks of the form $\mathbf{x}\mapsto \sum_{i=1}^{k}\max\{0,\mathbf{w}_i^\top \mathbf{x}\}$ with respect to the squared loss. We provide a computer-assisted proof that even if the input distribution is standard Gaussian, even if the dimension is arbitrarily large, and even if the target values are generated by such a network, with orthonormal parameter vectors, the problem can still have spurious local minima once $6\le k\le 20$. By a concentration of measure argument, this implies that in high input dimensions, \emph{nearly all} target networks of the relevant sizes lead to spurious local minima. Moreover, we conduct experiments which show that the probability of hitting such local minima is quite high, and increasing with the network size. On the positive side, mild over-parameterization appears to drastically reduce such local minima, indicating that an over-parameterization assumption is necessary to get a positive result in this setting.
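The objective studied can be reproduced numerically in a few lines (a sketch of the setup, not the paper's computer-assisted proof): a two-layer ReLU network f(x) = sum_i max(0, w_i . x) with orthonormal target vectors, squared loss over standard Gaussian inputs.

```python
import numpy as np

rng = np.random.default_rng(0)
k, d = 6, 6                      # k neurons in dimension d (paper: 6 <= k <= 20)
W_target = np.eye(d)[:k]         # orthonormal target parameter vectors

def net(W, X):
    """Two-layer ReLU network: sum_i max(0, w_i . x) for each row x of X."""
    return np.maximum(0.0, X @ W.T).sum(axis=1)

def loss(W, X):
    """Empirical squared loss against the target network."""
    return np.mean((net(W, X) - net(W_target, X)) ** 2)

X = rng.normal(size=(5000, d))   # standard Gaussian inputs

# The target parameters are a global minimum (zero loss); a random
# parameter setting is not. The paper shows this landscape also contains
# spurious local minima for these sizes.
print(loss(W_target, X), loss(rng.normal(size=(k, d)), X))
```

The paper's contribution is about the landscape of this loss: gradient-based training on it can get stuck at parameter settings with strictly positive loss, and mild over-parameterization (more than k neurons) empirically mitigates this.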

Multi-Label Learning from Medical Plain Text with Convolutional Residual Models. (arXiv:1801.05062v2 [stat.ML] UPDATED)

Authors: Xinyuan Zhang, Ricardo Henao, Zhe Gan, Yitong Li, Lawrence Carin

Predicting diagnoses from Electronic Health Records (EHRs) is an important medical application of multi-label learning. We propose a convolutional residual model for multi-label classification from doctor notes in EHR data. A given patient may have multiple diagnoses, and therefore multi-label learning is required. We employ a Convolutional Neural Network (CNN) to encode plain text into a fixed-length sentence embedding vector. Since diagnoses are typically correlated, a deep residual network is employed on top of the CNN encoder, to capture label (diagnosis) dependencies and incorporate information directly from the encoded sentence vector. A real EHR dataset is considered, and we compare the proposed model with several well-known baselines, to predict diagnoses based on doctor notes. Experimental results demonstrate the superiority of the proposed convolutional residual model.

Multi-region segmentation of bladder cancer structures in MRI with progressive dilated convolutional networks. (arXiv:1805.10720v2 [cs.CV] UPDATED)

Authors: Jose Dolz, Xiaopan Xu, Jerome Rony, Jing Yuan, Yang Liu, Eric Granger, Christian Desrosiers, Xi Zhang, Ismail Ben Ayed, Hongbing Lu

Precise segmentation of bladder walls and tumor regions is an essential step towards non-invasive identification of tumor stage and grade, which is critical for treatment decisions and prognosis of patients with bladder cancer (BC). However, automatic delineation of bladder walls and tumors in magnetic resonance images (MRI) is a challenging task, due to significant bladder shape variations, strong intensity inhomogeneity in urine, and very high variability across the population, particularly in tumor appearance. To tackle these issues, we propose to use a deep fully convolutional neural network. The proposed network includes dilated convolutions to increase the receptive field without extra cost or degraded performance. Furthermore, we introduce progressive dilations in each convolutional block, thereby enabling extensive receptive fields without the need for large dilation rates. The proposed network is evaluated on 3.0T T2-weighted MRI scans from 60 pathologically confirmed patients with BC. Experiments show the proposed model achieves high accuracy, with a mean Dice similarity coefficient of 0.98, 0.84 and 0.69 for the inner wall, outer wall and tumor region, respectively. These results represent very good agreement with reference contours and an increase in performance over existing methods. In addition, inference takes less than a second for a whole 3D volume, two to three orders of magnitude faster than related state-of-the-art methods for this application. We showed that a CNN can yield precise segmentation of bladder walls and tumors in bladder cancer patients on MRI. The whole segmentation process is fully automatic and yields results in very good agreement with the reference standard, demonstrating the viability of deep learning models for automatic multi-region segmentation of bladder cancer MRI images.


This is how a computer creates proverbs: "Death when it comes will have no sheep"

A round of proverbs. Consider three of those solemn phrases that have been with us for centuries: a good anvil is not the one that makes the most noise; raise crows and they will pluck out your eyes; no matter how early you rise, dawn comes no sooner. Well, one of these proverbs does not come from the Spanish tradition.

If machines are capable of recognizing faces or 'writing' poems, why not a proverb that sounds like the truth of past generations? Janelle Shane is an American researcher and creative known for her work with neural networks, always with a touch of humor and extravagance. For example, she created a bot that came up with pickup lines, with outlandish results such as "if they gave me a rose every time I think of you, I have a tight price" or "I have to give you a book, because you're the only thing in your eyes". She has also created Halloween costumes and color names: sugar green, annoying pink…

Her neural networks have even dared to create names for heavy metal bands, such as Inhuman Sand, which according to the artificial intelligence would play melodic death metal from Russia, or Black Clone Sky, whose black metal would reach us from Greece. "One of the things I like about machine learning is how unpredictable it can be. You can be surprised by results the computer comes up with without knowing at all how it does it, but somehow they are better than, or different from, what a human intelligence might have invented," Shane explains.

This is what a machine's proverbs look like

Nothing, however, as much fun as the proverbs. On this occasion, she had the help of Anthony Mandelli, a young collector of classic sayings who assembled the database from which the artificial intelligence, called char-rnn, gave birth to its creations: 2,000 sayings in English in total. The result was sequences of words that can be read and understood (they have their verbs, their articles, their complements…) but no sense or meaning, which we speakers now have to supply.

The resulting list sounds ancient, like something handed down by oral tradition. Shane's favorite is "Death when it comes will have no sheep". But there are many more: "A good face is a letter to get out of the fire" or "There is no smoke without the best sin". Alongside them sit remixes of real ones, such as "A good anvil does not make the most noise" or "A good excuse is as good as a rest".

The ox and the fox are two animals that crop up again and again in Shane's proverbs. The artist believes that, since the sequence -ox was common in the proverbs of the database, the machine may have decided those words were important for making combinations. So we get "An ox is not fill when he will eat forever" or "A fox smells it better than a fool's for a day".

Although it is hard to find meaning in many of the combinations (would "a fool in a cup of tea is a silence for a needle in the sale" pass as a proverb?), the machine seems to have grasped the outer scaffolding many proverbs share, since most follow a logic of "if you do A, then B" or "C is a D". "You could say the network understands the structure of proverbs a little," explains Mandelli, who also says he has incorporated into his daily life "A good wine makes the best sermon", "An ox is never known till needed" and the aforementioned "Death when it comes will have no sheep".

The one about wine and sermons he uses when out for drinks with friends: "Nobody ever asks me where it comes from, since it sounds like a legitimate proverb and works in context when you toast before drinking." The one about the ox strikes him as very profound: to him it means that "we don't know what is necessary in our lives until it becomes evident that it is missing". That is why it is his favorite.

Mandelli is not the only one trying to find meaning in the proverbs. Other people have contacted Shane wanting to put them to use: "They want to use them specifically for fiction writing; for fantasy, in particular," because, they tell her, they need proverbs that sound old. The phrases also already decorate parody motivational posters, as if they were famous quotes.

The char-rnn artificial intelligence had already created cooking recipes in the past ("not at all delicious," the researcher recalls). It even dared to spawn Pokémon with the same network it would later use for the proverbs. The results were absurd (there was a turtle with a kind of lake or pond on its shell and a bird with a paper bag on its head), but surely some could belong to Nintendo's already long official list of creatures. Other networks have been used to write Harry Potter fan fiction and to come up with beer names.

Shane does not rule out continuing to work on the proverbs, while Mandelli is training in artificial intelligence and machine learning to join her. For him, working with neural networks helps improve our understanding of what computers can do to interpret large amounts of data and perform "complex tasks" with them. Now all that remains is for us humans to give meaning to all those new proverbs.

The images are the property of the Portland Guinea Pig Rescue and Steve Jurvetson

Can IBM Watermark Neural Networks?
Leave it to IBM to figure out how to put their stamp on their AI models. Of course, as with other intellectual property, AI code can be stolen, so this is a welcome development for the field. In the article “IBM Patenting Watermark Technology to Protect Ownership of AI Models” at Neowin, we learn the […]
Comment on More on Evolution: From the Mail Room, by Fred Reed
> you’re saying that naturalists understand that the big bang, abiogenesis and evolution are acknowledged as ‘pretend’ stories?

No. Try again, halfwit.

> both are based in belief and neither one can be proved

Believe you can fly when you jump off a building. Superjew said you (plus a couple other fruitcakes) can pray in his name and he'll do anything for you. See how that works.

> ‘Better’ is a philosophical position… not a ‘scientific’ one… are you getting any of this?

Yes, I'm getting just how utterly retarded you are, with a little visit to with the search term "better":

- Building better batteries (M Armand, JM Tarascon, Nature, 2008)
- Prediction of protein secondary structure at better than 70% accuracy (B Rost, C Sander, Journal of Molecular Biology, 1993, Elsevier)
- Approximate is better than “exact” for interval estimation of binomial proportions (A Agresti, BA Coull, The American Statistician, 1998, Taylor & Francis)
- Ensembling neural networks: many could be better than all (ZH Zhou, J Wu, W Tang, Artificial Intelligence, 2002, Elsevier)
What's new on arXiv
Rethinking Numerical Representations for Deep Neural Networks — "With ever-increasing computational demand for deep learning, it is critical to investigate the …"

Continue reading

UCI Health Opens Center for Artificial Intelligence, Deep Learning
The team will focus on developing deep learning neural networks and applying them to diagnostics, disease prediction, and treatment planning.
Choosing Between GAN Or Encoder Decoder Architecture For ML Applications Is Like Comparing …
Since the deep learning boom started, numerous researchers have begun building many architectures around neural networks. It is often ...
Machine learning technique reconstructs images passing through a multimode fiber
Through innovative use of a neural network that mimics image processing by the human brain, a research team reports accurate reconstruction of images transmitted over optical fibers for distances of up to a kilometer.
          Deep convolutional autoencoder for radar-based classification of similar aided and unaided human activities      Cache   Translate Page   Web Page Cache   
Radar-based activity recognition is a problem that has been of great interest due to applications such as border control and security, pedestrian identification for automotive safety, and remote health monitoring. This paper seeks to show the efficacy of micro-Doppler analysis to distinguish even those gaits whose micro-Doppler signatures are not visually distinguishable. Moreover, a three-layer, deep convolutional autoencoder (CAE) is proposed, which utilizes unsupervised pretraining to initialize the weights in the subsequent convolutional layers. This architecture is shown to be more effective than other deep learning architectures, such as convolutional neural networks and autoencoders, as well as conventional classifiers employing predefined features, such as support vector machines (SVM), random forest, and extreme gradient boosting. Results show the performance of the proposed deep CAE yields a correct classification rate of 94.2% for micro-Doppler signatures of 12 different human activities measured indoors using a 4 GHz continuous wave radar—17.3% improvement over SVM.
Samsung Announces The Galaxy Note 9: 4000mAh And New S-Pen

Today at the Unpacked event at the Barclays Center in New York, Samsung launched the main subject of the show, the new Galaxy Note 9, alongside a new smartwatch, the Galaxy Watch, and a new home speaker, the Galaxy Home.

Samsung’s marketing for the Note 9 focused primarily on two improved features: increased battery capacity and increased storage capacity.

Let’s go over the specifications to see all of the new internals of the new summer flagship:

Samsung Galaxy Family
|  | Samsung Galaxy Note 8 | Samsung Galaxy Note 9 |
|---|---|---|
| SoC | (US, China, Japan) Qualcomm Snapdragon 835: 4x Kryo 280 (CA73) @ 2.35GHz, 4x Kryo 280 (CA53) @ 1.90GHz, Adreno 540 @ 670MHz | (Americas, China, Japan) Qualcomm Snapdragon 845: 4x Kryo 385 (CA75) @ 2.8GHz, 4x Kryo 385 (CA55) @ 1.77GHz, Adreno 630 @ 710MHz |
|  | (Rest of World) Samsung Exynos 8895: 4x Exynos M2 @ 2.30GHz, 4x Cortex-A53 @ 1.70GHz, ARM Mali-G71MP20 @ 546MHz | (Rest of World) Samsung Exynos 9810: 4x Exynos M3 @ 1.8-2.7GHz, 4x Cortex-A55 @ 1.76GHz, ARM Mali-G72MP18 @ 572MHz |
| Display | 6.3-inch 2960x1440 (18.5:9) SAMOLED (curved edges) | 6.4-inch 2960x1440 (18.5:9) SAMOLED (curved edges) |
| Dimensions | 162.5 x 74.8 x 8.6 mm | 161.9 x 76.4 x 8.8 mm |
| NAND | 64GB / 128GB (UFS) + microSD | 128GB / 512GB (UFS) + microSD |
| Battery | 3300mAh (12.7Wh) | 4000mAh (15.4Wh) |
| Front Camera | 8MP, f/1.7 | 8MP, f/1.7 |
| Rear Cameras | 12MP, 1.4µm pixels, dual-pixel PDAF, OIS; 2x zoom telephoto: 12MP, f/2.4, OIS | 12MP, 1.4µm pixels, f/1.5 / f/2.4 adaptive aperture, dual-pixel PDAF, OIS; 2x zoom telephoto: 12MP, f/2.4, OIS |
| Modem | Snapdragon X16 LTE (integrated), 2G / 3G / 4G LTE (Category 16/13); Samsung LTE (integrated), Category 16/13 | Snapdragon X20 LTE (integrated), 2G / 3G / 4G LTE (Category 18/13); Samsung LTE (integrated), Category 18/13 |
| SIM Size | NanoSIM | NanoSIM |
| Wireless | 802.11a/b/g/n/ac 2x2 MU-MIMO, BT 5.0 LE, NFC, GPS/Glonass/Galileo/BDS | (same) |
| Connectivity | USB Type-C, 3.5mm headset | (same) |
| Features | fingerprint sensor, heart-rate sensor, iris scanner, face unlock, fast charging (Qualcomm QC 2.0, Adaptive Fast Charging, USB PD), wireless charging (WPC & PMA), IP68, Mobile HDR Premium | (same) |
| Launch OS | Android 7.1.1 Samsung Experience | Android 8.1 Samsung Experience |

In terms of SoC, the Note9 follows in the steps of the Galaxy S9 and employs the same Snapdragon 845 and Exynos 9810. We’ve extensively covered the two new chipsets in our review of the S9 earlier in the year. The Snapdragon 845 is a fantastic chipset for 2018 – while the Exynos 9810 showed some weakness in terms of performance as well as power efficiency – showcasing a particularly large gap this year.

It’s been some time since Samsung actually talked about the internals of its Galaxy devices, so it was particularly surprising to hear an emphasis on the performance of the chipsets this year round. Gaming was one topic where Samsung pulled in outside help to promote the Note 9: Tim Sweeney was on stage unveiling the early Android beta launch of Fortnite for Samsung Galaxy phones, as well as an exclusive in-game skin for Note 9 users.

Both the Snapdragon and Exynos S9 showcased thermal throttling in 3D games, and it seems Samsung took note of this and actually introduced a new beefier thermal dissipation solution in order to better cool the SoC and maintain higher performance.

The Note 9 comes in 6GB and 8GB RAM variants, paired with a doubled base storage capacity of 128GB and an enormous 512GB in the higher-tier variant.

In terms of display, we see the same resolution as on the Note 8 and S9, employing an 18.5:9 2960x1440 AMOLED screen. The diagonal has increased this time around to 6.4”, and this has in turn increased the footprint of the device, as the width has widened by 1.6mm to 76.4mm.

A notable increase in comparison to the Note 8 is the Note 9’s battery capacity. Here we see a 21% increase to reach the 4000mAh mark. While this is certainly a psychologically large number, the 14% boost versus the Galaxy S9+ is still within reasonable levels of improvement, and I think Samsung is taking a bit of a heavy-handed marketing approach here when it comes to the battery promises – especially for the Exynos variants in EMEA and SEA markets.

The camera of the Note 9 is the same as on the S9+: this includes the main 12MP sensor with 1.4µm pixel pitch, a full dual-pixel PDAF layout, and a variable aperture of either f/1.5 or f/2.4. The telephoto lens is also the same, with 2x optical zoom capability. Both modules support OIS.

One area where the Note 9 does promise to improve camera capture is the introduction of neural-network-based inferencing and scene recognition. The Note 9 is said to recognize 20 scene types and apply corresponding image processing, such as colour temperature adjustments. This is something various vendors have introduced in the past, but if there's one thing we've come to discover over the last few months, it's that the implementation can be hit & miss – so Samsung will have to focus on getting this right.

The fingerprint sensor sees the same design change as on the S9 and is now located centrally below the camera modules.

Finally, the biggest feature improvement on the Note 9 is the new S-Pen: the new unit is no longer just a passive component, but is now an active remote working over Bluetooth LE with its own little battery incorporated. The phone is said to charge the S-Pen in 40 seconds for 30 minutes of usage – and even when it is fully discharged, the traditional functions, which still work passively, remain available. In addition to the new remote functionality, the new S-Pen has a finer tip and increases the pressure sensitivity to 4096 levels.

The Note 9 is now available for preorder with availability on August 24th at prices of USD $999 for the 6GB/128GB base variant and $1249 for the 8GB/512GB version.

Proof of Concept Malware Uses Intelligence
I know executives who are alarmed when they see programmers or major companies outline how new kinds of malware can be built. Isn't it bad enough already? But it does help to outline the vulnerabilities in order to plan for them. The idea might be outlined as: if we can use intelligence to feel you out and trick you as to who we are, we are more likely to be successful. All phishing does this, and recent demos of Google Duplex hint at it. It is somewhat unusual to see IBM doing this. Expect to see more of it.

IBM's proof-of-concept malware uses AI for spear phishing   in V3
The neural network running DeepLocker hides its intent until it finds the right victim

The world is beginning to transition from the cloud era to the artificial intelligence (AI) era, as systems and networks grow and learn. But just as the web and cloud eras had their own threats, the same applies to this new landscape - and it is AI itself.

Excitement and confusion abound over AI, but despite - or perhaps because of - this, the technology can pose a real danger to computer users.

"As machine learning matures into AI, nascent use of AI for cyber threat defense will likely be countered by threat actors using AI for offense," Rick Hemsley, managing director at Accenture Security, told us earlier this year.

So on the face of it, IBM's development of DeepLocker - ‘a new breed of highly targeted and evasive attack tools powered by AI' - seems like it sets a dangerous precedent.

There is method to the madness. IBM reasons that cybercriminals are already working to weaponise AI, and the best way to counter such a threat is to watch how it works ... " 

Here is IBM's report on DeepLocker.

(USA-CA-San Jose) Python Engineer Software 2
At Northrop Grumman, our work with **cutting-edge technology** is driven by something **human** : **the lives our technologies protects** . It's the value of innovation that makes a difference today and tomorrow. Here you'll have the opportunity to connect with coworkers in an environment that's uniquely caring, diverse, and respectful; where employees share experience, insights, perspectives and creative solutions through integrated product & cross-functional teams, and employee resource groups. Don't just build a career, build a life at Northrop Grumman. The Cyber Intelligence Mission Solutions team is seeking an Engineer Software 2 to join our team in San Jose as we kick off a new 10-year program to protect our nation's security. You will be using your Python skills to perform advanced data analytics on a newly architected platform. Hadoop, Spark, Storm, and other big data technologies will be used as the basic framework for the program's enterprise. **Roles and Responsibilities:** + Python development of new functionality and automation tools using Agile methodologies + Build new framework using Hadoop, Spark, Storm, and other big data technologies + Migrate legacy enterprise to new platform + Test and troubleshoot using Python and some Java on Linux + Function well as a team player with great communication skills **Basic Qualifications:** + Bachelor Degree in a STEM discipline (Science, Technology, Engineering or Math) from an accredited institution with 2+ years of relevant work experience, or Masters in a STEM discipline with 0+ years of experience + 1+ years of Python experience in a work setting + Active SCI clearance **Preferred Qualifications:** + Machine learning / AI / Deep Learning / Neural Networks + Familiar with Hadoop, Spark or other Big Data technologies + Familiar with Agile Scrum methodology + Familiar with Rally, GitHub, Jenkins, Selenium applications Northrop Grumman is committed to hiring and retaining a diverse workforce. 
We are proud to be an Equal Opportunity/Affirmative Action-Employer, making decisions without regard to race, color, religion, creed, sex, sexual orientation, gender identity, marital status, national origin, age, veteran status, disability, or any other protected class. For our complete EEO/AA statement, please visit . U.S. Citizenship is required for most positions.
(USA) Software Engineer- Infrastructure
Software Engineer- Infrastructure Job Summary Apply Now + Job:18968-MKAI + Location:US-MA-Natick + Department:Product Development We are looking for a versatile, enthusiastic computer scientist or engineer capable of multi-tasking to join the Control & Identification team. You will help develop software tools to facilitate the application of reinforcement learning to practical industrial application in areas such as robotics and other autonomous systems. You will need skills that cross traditional domain boundaries in areas such as machine learning, optimization, object-oriented programming, and graphical user interface design. Responsibilities + Develop and implement new software tools to help our customers apply reinforcement learning to their applications. + Work on improving the integration and deployment of reinforcement learning tools with workflows utilizing GPUs, parallel computing and cloud computing. + Contribute to all aspects of the product development process from writing functional specifications to designing software architecture to implementing software features. + Work with quality engineering, documentation, and usability teams to develop state-of-the-art software tools. Minimum Qualifications + A bachelor's degree and 3 years of professional work experience (or a master's degree) is required. Additional Qualifications In addition, a combination of some of the follow skills is important: + Knowledge of numerical algorithms. + Experience with MATLAB or Simulink. + Experience with machine learning. + Experience with neural networks. + Experience with object-oriented design and programming. + Experience with GPUs and parallel computing. + Experience with IoT and cloud computing is a plus. + Experience with software development lifecycle is a plus. + Experience with other programming languages is a nice to have. Why MathWorks? It’s the chance to collaborate with bright, passionate people. 
It’s contributing to software products that make a difference in the world. And it’s being part of a company with an incredible commitment to doing the right thing – for each individual, our customers, and the local community. MathWorks develops MATLAB and Simulink, the leading technical computing software used by engineers and scientists. The company employs 4000 people in 16 countries, with headquarters in Natick, Massachusetts, U.S.A. MathWorks is privately held and has been profitable every year since its founding in 1984. Apply Now Contact us if you need reasonable accommodation because of a disability in order to apply for a position. The MathWorks, Inc. is an equal opportunity employer. We evaluate qualified applicants without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, veteran status, and other protected characteristics. View the EEO is the Law poster and its supplement. The pay transparency policy is available here. MathWorks participates in E-Verify. View the E-Verify posters here. Apply Now + Job:18968-MKAI + Location:US-MA-Natick + Department:Product Development
DNN-based approach for fault detection in a direct drive wind turbine
Incipient fault detection of wind turbines is beneficial for making maintenance strategy and avoiding catastrophic result in a wind farm. A deep neural network (DNN)-based approach is proposed to deal with the challenging task for a direct drive wind turbine, involving four steps: a preprocessing method considering operational mechanism is presented to get rid of the outliers in supervisory control and data acquisition (SCADA); the conventional random forest method is used to evaluate the importance of variables related to the target variable; the historical healthy SCADA data excluding outliers is used to train a deep neural network; and the exponentially weighted moving average control chart is adopted to determine the fault threshold. With the online data being input into the trained deep neural network model of a wind turbine with healthy state, the testing error is regarded as the metric of fault alarm of the wind turbine. The proposed approach is successfully applied to the fault detection of the fall off of permanent magnets in a direct drive wind turbine generator.
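The exponentially weighted moving average (EWMA) control chart mentioned above is a standard tool; a minimal sketch of how such a fault threshold could be computed over a stream of testing errors (residuals) might look like the following. The smoothing factor `lam` and control-limit width `L` are illustrative assumptions, not the paper's actual parameters.

```python
# EWMA control chart over a stream of model residuals (illustrative sketch).
# lam (smoothing factor) and L (control-limit width) are assumed values.

def ewma_alarms(residuals, mean, sigma, lam=0.2, L=3.0):
    """Return indices where the EWMA statistic leaves the control limits."""
    z = mean  # EWMA statistic starts at the in-control mean
    alarms = []
    for t, x in enumerate(residuals, start=1):
        z = lam * x + (1 - lam) * z
        # Time-varying control limit for the EWMA statistic
        width = L * sigma * (lam / (2 - lam) * (1 - (1 - lam) ** (2 * t))) ** 0.5
        if abs(z - mean) > width:
            alarms.append(t - 1)
    return alarms

# A flat in-control stream raises no alarms; a sustained shift eventually does.
healthy = [0.0] * 20
faulty = [0.0] * 10 + [5.0] * 10
print(ewma_alarms(healthy, mean=0.0, sigma=1.0))  # -> []
print(ewma_alarms(faulty, mean=0.0, sigma=1.0))
```

Once the EWMA of the online testing error crosses the control limit, the turbine would be flagged for inspection; in practice the in-control mean and sigma would be estimated from the healthy training residuals.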
Enhanced Multi-Level Signal Recovery in Mobile Fronthaul Network Using DNN Decoder
A deep neural network (DNN) decoder which combines the function of both equalization and decoding is proposed and experimentally demonstrated for mobile fronthaul (MFH) transmission. This DNN consists of one input layer, one output layer, and several hidden layers. Adamax algorithm is implemented for finding the global minima while dropout mechanism and early stopping are utilized to avoid overfitting. The DNN accepts the received samples as the input and output the decoded samples directly to recover the transmitted samples. Using this DNN decoder, a record-high data-rate transmission-distance product at 1800-Gb/s $cdot $ km based on the directly modulated laser (DML) with intensity-modulation direct-detection is obtained. Besides, the PAM8 modulation with 3-bps/Hz spectral efficiency is implemented. Due to the smaller source spectrum bandwidth compared with traditionally widely used PAM4 and OOK at a certain data rate, the power fading limited transmission distance is extended by 1.5 and three times, respectively. The DML is a low-cost device and this simple transmission system eliminates the need of single-side-band modulation or dispersion compensation, which makes this system an ideal candidate for a low-cost enhanced MFH network.
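The overall shape of such a decoder (input layer, several hidden layers, output layer mapping received samples directly to decoded samples) can be sketched as a plain fully connected forward pass. The layer sizes, ReLU activation, and random weights below are illustrative assumptions, not the paper's trained architecture:

```python
import numpy as np

# Minimal forward-pass sketch of a fully connected DNN decoder that maps a
# window of received samples to decoded samples. Layer sizes and activation
# are illustrative, and random weights stand in for trained parameters.

rng = np.random.default_rng(0)

def init_layer(n_in, n_out):
    return rng.normal(scale=0.1, size=(n_in, n_out)), np.zeros(n_out)

layers = [init_layer(16, 32), init_layer(32, 32), init_layer(32, 8)]

def decode(received):
    """received: (batch, 16) window of samples -> (batch, 8) decoded samples."""
    h = received
    for i, (W, b) in enumerate(layers):
        h = h @ W + b
        if i < len(layers) - 1:   # hidden layers use a nonlinearity,
            h = np.maximum(h, 0)  # the output layer stays linear
    return h

out = decode(rng.normal(size=(4, 16)))
print(out.shape)  # -> (4, 8)
```

In the paper, training would add the Adamax optimizer, dropout, and early stopping on top of this basic structure.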
Bioelectronics and Biosensors Market Research Report 2018-2025 by Key Players Bayer, Abbott Point of Care, F. Hoffmann-La Roche, AgaMatrix
Bioelectronics contributes and evaluates new technologies that expand the biological sector and raise the efficiency of the medical community. Bioelectronics draws on a number of ideas, methods, and technologies, including bioelectromagnetics, instrumentation, neural networks, robotics, and sensor technologies. Sample Copy of This Report: Scope of the Report: This report focuses on the Bioelectronics and Biosensors in global...
Researchers Move Closer to Completely Optical Artificial Neural Network
Machine Learning SW Engineer
MD-Rockville, Our client has an open Machine Learning SW Engineer position. It is a SW engineering role with machine learning; experience with deep learning concepts (algorithms using neural networks) would be a plus, but machine learning is required. Mission: there is a new data strategy for our client to enable solving data problems more efficiently. Day to day: will be working with the Product Tea
Il labirinto di luce

RESEARCH – A team of electrical and electronic engineers at UCLA (University of California, Los Angeles) has created a physical artificial neural network, that is, a device implementing neural network models and deep learning algorithms that can analyze large amounts of data at the speed of light. What is deep learning? Deep learning, […]

The article Il labirinto di luce ("The labyrinth of light") originally appeared on OggiScienza.

New Nvidia Paper Accelerates Large-Scale Language Modelling
Nvidia's paper Large Scale Language Modeling: Converging on 40GB of Text in Four Hours introduces a model that uses mixed precision arithmetic and a 32k batch size distributed across 128 Nvidia Tesla V100 GPUs to improve scalability and transfer in Recurrent Neural Networks (RNNs) for Natural Language tasks.
109: Neural Network C# Predictions for Everyone
It is that time again for more machine learning! This time it is actually something that you can totally build and something that Frank shipped inside of his application to do code prediction using Python, Keras, PlaidML, and CoreML! We talk about the main use case, the route Frank took to create the machine learning model, what hardware and software he used, and the final outcome to predict code while you type. Follow Us Frank: Twitter, Blog, GitHub James: Twitter, Blog, GitHub Merge Conflict: Twitter, Facebook, Website Music : Amethyst Seer - Citrine by Adventureface ⭐⭐ Review Us ( ⭐⭐ SUPPORT US ON PATREON: Special Thanks to Syncfusion: Download their e-books: * Xamarin.Forms Succinctly ( * Xamarin.Forms for macOS Succinctly (
First Class GPUs support in Apache Hadoop 3.1, YARN & HDP 3.0

This blog is also co-authored by Zian Chen and Sunil Govindan from Hortonworks.

Introduction

Without the speedup from GPUs, some computations take forever! (Image from the movie “Howl’s Moving Castle”)

GPUs are increasingly becoming a key tool for many big data applications. Deep learning / machine learning, data analytics, genome sequencing, etc. all have applications that rely on GPUs for tractable performance. In many cases, GPUs can get up to 10x speedups, and in some reported cases (like this), GPUs can get up to 300x speedups! Many modern deep-learning applications directly build on top of GPU libraries like cuDNN (CUDA Deep Neural Network library). It’s not a stretch to say that many applications like deep learning cannot live without GPU support.

Starting with Apache Hadoop 3.1 and HDP 3.0, we have first-class support for operators and admins to configure YARN clusters to schedule and use GPU resources.

Previously, without first-class GPU support, YARN had a not-so-comprehensive story around GPU support. Without this new feature, users had to use node labels (YARN-796) to partition clusters to make use of GPUs, which simply puts machines equipped with GPUs into a different partition and requires jobs that need GPUs to be submitted to that specific partition. For a detailed example of this pattern of GPU usage, see Yahoo!’s blog post about Large Scale Distributed deep-learning on Hadoop Clusters.

Without native and more comprehensive GPU support, there’s no isolation of GPU resources either! For example, multiple tasks could compete for a GPU resource simultaneously, which could cause task failures, GPU memory exhaustion, etc.

To this end, the YARN community looked for a comprehensive solution to natively support GPU resources on YARN.

First class GPU support on YARN

GPU scheduling using “extensible resource-types” in YARN

We need to recognize GPU as a resource type when doing scheduling. YARN-3926 extends the YARN resource model to a more flexible model which makes it easier to add new countable resource types. It also considers the related aspect of “resource profiles”, which allow users to easily specify the resources they need for containers. Once we have the GPU type added to YARN, YARN can schedule applications on GPU machines. By specifying the number of GPUs requested by containers, YARN can find machines with available GPUs to satisfy container requests.

GPU isolation

With GPU scheduling support, containers with GPU requests can be placed on machines with enough available GPU resources. We still need to solve the isolation problem: when multiple applications use GPU resources on the same machine, they should not affect each other.

Even if a GPU has many cores, there’s no easy isolation story for processes sharing the same GPU. For instance, Nvidia Multi-Process Service (MPS) provides isolation for multiple processes accessing the same GPU; however, it only works for the Volta architecture, and MPS is not widely supported by deep learning platforms yet. So our isolation, for now, is per GPU device: each container can ask for an integer number of GPU devices along with memory and vcores (for example 4G memory, 4 vcores and 2 GPUs). With this, each application uses its assigned GPUs exclusively.

We use cgroups to enforce the isolation. This works by putting a YARN container's process tree into a cgroup that allows access to only the prescribed GPU devices. When Docker containers are used on YARN, nvidia-docker-plugin (an optional plugin that admins have to configure) is used to enforce GPU resource isolation.

GPU discovery

To properly do scheduling and isolation, we need to know how many GPU devices are available in the system. Admins can configure this manually on a YARN cluster, but it may also be desirable to discover GPU resources through the framework automatically. Currently, we’re using the Nvidia system management interface (nvidia-smi) to get the number of GPUs in each machine and the usage of these GPU devices. An example output of nvidia-smi looks like below:

[Screenshot: example nvidia-smi output]
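As an illustration of the discovery step, the sketch below parses the machine-readable CSV output that nvidia-smi can emit. The query flags and the three-column layout are assumptions for this sketch, not YARN's actual discovery code, which may read different fields:

```python
import csv
import io

def parse_gpu_report(csv_text):
    """Parse output in the style of
    `nvidia-smi --query-gpu=index,memory.used,memory.total --format=csv,noheader,nounits`.

    Returns one dict per GPU device with its index and memory figures (MiB).
    """
    gpus = []
    for row in csv.reader(io.StringIO(csv_text)):
        if not row:
            continue
        index, mem_used, mem_total = (field.strip() for field in row)
        gpus.append({
            "index": int(index),
            "memory_used_mib": int(mem_used),
            "memory_total_mib": int(mem_total),
        })
    return gpus

# Canned sample standing in for a real nvidia-smi invocation:
sample = "0, 1024, 16160\n1, 0, 16160\n"
print(parse_gpu_report(sample))
```

The per-device numbers recovered this way are what a NodeManager would report upward so the scheduler knows how many GPUs each machine can offer.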
Web UI

We also added GPU information to the new YARN web UI. On the ResourceManager page, we show total, used, and available GPU resources across the cluster along with other resources like memory and CPU.

[Screenshot: ResourceManager UI showing cluster-wide GPU totals]

On the NodeManager page, YARN shows per-GPU device usage and metrics:

[Screenshot: NodeManager UI showing per-GPU usage and metrics]

To enable GPU support in YARN, administrators need to set configs for GPU scheduling and GPU isolation.

GPU Scheduling

(1) yarn.resource-types in resource-types.xml

This gives YARN the list of resource types available for users to request. We need to add “” here if we want to support GPU as a resource type.

(2) yarn.scheduler.capacity.resource-calculator in capacity-scheduler.xml

DominantResourceCalculator MUST be configured to enable GPU scheduling. It has to be set to org.apache.hadoop.yarn.util.resource.DominantResourceCalculator.
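To see why this matters, here is a minimal sketch (not YARN code) of dominant-resource accounting: a memory-only calculator would see the 2-GPU container from the earlier example as a tiny ask, while its dominant resource, the GPU, is a quarter of the cluster. The cluster totals below are invented for illustration:

```python
def dominant_share(request, cluster):
    """A task's dominant share: the largest of its per-resource shares."""
    return max(request[r] / cluster[r] for r in request)

# Invented cluster totals, and the "4G memory, 4 vcores, 2 GPUs" container:
cluster = {"memory_mb": 102400, "vcores": 48, "yarn.io/gpu": 8}
container = {"memory_mb": 4096, "vcores": 4, "yarn.io/gpu": 2}

print(container["memory_mb"] / cluster["memory_mb"])  # 0.04: memory-only view
print(dominant_share(container, cluster))             # 0.25: GPU dominates
```

A scheduler comparing only memory would pack far more of these containers onto the cluster than its GPUs can actually support; comparing dominant shares keeps every resource type visible.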

GPU Isolation

(1) yarn.nodemanager.resource-plugins in yarn-site.xml

This enables the GPU isolation module on the NodeManager side. By default, YARN will automatically detect and configure GPUs when the above config is set. It should also add “”.

(2) yarn.nodemanager.resource-plugins.gpu.allowed-gpu-devices in yarn-site.xml

Specifies the GPU devices that can be managed by the YARN NodeManager, separated by commas. The number of GPU devices will be reported to the RM to make scheduling decisions. Set to auto (the default) to let YARN automatically discover GPU resources from the system.

Manually specify GPU devices if auto-detection of GPU devices failed, or if the admin only wants a subset of GPU devices to be managed by YARN.
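Pulling the four settings together, a consolidated configuration sketch might look like the following. The GPU resource-type name yarn.io/gpu is taken from the Apache Hadoop 3.1 documentation rather than from this post, so verify the property names and values against your exact release:

```xml
<!-- resource-types.xml: register GPU as a schedulable resource type -->
<configuration>
  <property>
    <name>yarn.resource-types</name>
    <value>yarn.io/gpu</value>
  </property>
</configuration>

<!-- capacity-scheduler.xml: GPU scheduling requires the dominant-resource calculator -->
<configuration>
  <property>
    <name>yarn.scheduler.capacity.resource-calculator</name>
    <value>org.apache.hadoop.yarn.util.resource.DominantResourceCalculator</value>
  </property>
</configuration>

<!-- yarn-site.xml: enable the NodeManager GPU plugin and device discovery -->
<configuration>
  <property>
    <name>yarn.nodemanager.resource-plugins</name>
    <value>yarn.io/gpu</value>
  </property>
  <property>
    <name>yarn.nodemanager.resource-plugins.gpu.allowed-gpu-devices</name>
    <value>auto</value>
  </property>
</configuration>
```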
A little bit of Machine Learning: Playing with Google's Prediction API
Before we get started, let’s begin by making clear that this isn’t going to be a deep dive on TensorFlow, neural networks, inductive logic, Bayesian networks, genetic algorithms or any other sub-heading from the Machine Learning Wikipedia article. Nor is this really a Go-heavy article, but rather an introduction to machine learning via a simple consumption of the Google Prediction API.

How the Google Prediction API works

The Google Prediction API attempts to guess answers to questions by either predicting a numeric value between 0 and 1 for that item based on similar valued examples in its training data (“regression”), or choosing a category that describes it given a set of similar categorized items in its training data (“categorical”).
Using artificial neural networks for open-loop tomography
Date and time of publication: 2011-12-22T15:35:08Z. Authors and institutions: James Osborn, Francisco Javier De Cos Juez, Dani Guzman, Timothy Butterley, Richard Myers, Andres Guesalaga, Jesus Laine. Journal link: not found. Comments on the article: 15 pages, 11 figures, submitted to Optics Express 6/10/11. Primary category: astro-ph.IM. All categories: astro-ph.IM. Summary of the article: [...]
Theorizing that Adult Neurogenesis is Linked to Olfactory Function
Neurogenesis is the production and integration of new neurons into neural networks in the brain. Along with synaptic plasticity, it determines the ability of the brain to recover from damage. There is some controversy over the degree to which it occurs in adult humans; the consensus is that it does, but the vast majority of research on this topic has been carried out in mice, not humans. If there is little or no natural neurogenesis in the adult human brain, a situation quite different from that of mice, then the prospects diminish for the development of therapies to hold back aging that work by increasing neurogenesis. This is an important topic in the field of regenerative research. The open access paper noted here offers an […]
Economist on "Words that make you sound cool and "in" the millennial culture"

We were hearing about "neural networks" in the 1980s. - Ol' Timer

Machine Learning SW Engineer
MD-Rockville: Our client now has an open Machine Learning SW Engineer position. It is a SW engineering role with machine learning. Deep learning concepts (algorithms using neural networks) would be a plus, but at least machine learning is required. Mission: there is a new data strategy for our client to enable solving data problems more efficiently. Day to day: will be working with the Product Team.
          Deep phenotyping: deep learning for temporal phenotype/genotype classification.      Cache   Translate Page   Web Page Cache   

Deep phenotyping: deep learning for temporal phenotype/genotype classification.

Plant Methods. 2018;14:66

Authors: Taghavi Namin S, Esmaeilzadeh M, Najafi M, Brown TB, Borevitz JO

Background: High resolution and high throughput genotype to phenotype studies in plants are underway to accelerate breeding of climate ready crops. In the recent years, deep learning techniques and in particular Convolutional Neural Networks (CNNs), Recurrent Neural Networks and Long-Short Term Memories (LSTMs), have shown great success in visual data recognition, classification, and sequence learning tasks. More recently, CNNs have been used for plant classification and phenotyping, using individual static images of the plants. On the other hand, dynamic behavior of the plants as well as their growth has been an important phenotype for plant biologists, and this motivated us to study the potential of LSTMs in encoding these temporal information for the accession classification task, which is useful in automation of plant production and care.
Methods: In this paper, we propose a CNN-LSTM framework for plant classification of various genotypes. Here, we exploit the power of deep CNNs for automatic joint feature and classifier learning, compared to using hand-crafted features. In addition, we leverage the potential of LSTMs to study the growth of the plants and their dynamic behaviors as important discriminative phenotypes for accession classification. Moreover, we collected a dataset of time-series image sequences of four accessions of Arabidopsis, captured in similar imaging conditions, which could be used as a standard benchmark by researchers in the field. We made this dataset publicly available.
Conclusion: The results provide evidence of the benefits of our accession classification approach over using traditional hand-crafted image analysis features and other accession classification frameworks. We also demonstrate that utilizing temporal information using LSTMs can further improve the performance of the system. The proposed framework can be used in other applications such as in plant classification given the environment conditions or in distinguishing diseased plants from healthy ones.

PMID: 30087695 [PubMed]

Embedded Computing on the Edge

Embedded computing has passed—more or less unscathed—through many technology shifts and marketing fashions. But the most recent—the rise of edge computing—could mean important new possibilities and challenges.

So what is edge computing (Figure 1)? The cynic might say it is just a grab for market share by giant cloud companies that have in the past struggled in the fragmented embedded market, but now see their chance. That theory goes something like this.

Figure 1. Computing at the network edge puts embedded systems in a whole new world.

With the concept of the Internet of Things came a rather naïve new notion of embedded architecture: all the embedded system’s sensors and actuators would be connected directly to the Internet—think smart wall switch and smart lightbulb—and all the computing would be done in the cloud. Naturally, this proved wildly impractical for a number of reasons, so the gurus of the IoT retreated to a more tenable position: some computing had to be local, even though the embedded system was still very much connected to the Internet.

Since the local processing would be done at the extreme periphery of the Internet, where IP connectivity ended and private industrial networks or dedicated connections began, the cloud- and network-centric folks called it edge computing. They saw the opportunity to lever their command of the cloud and network resources to redefine embedded computing as a networking application, with edge computing as its natural extension.

A less cynical and more useful view looks at edge computing as one facet of a new partitioning problem that the concurrence of cloud computing, widespread broadband access, and some innovations in LTE cellular networks have created. Today, embedded systems designers must, from requirements definition on through the design process, remember that there are several very different processing sites available to them (Figure 2). There is the cloud. There is the so-called fog. And there is the edge. Partitioning tasks and data among these sites has become a skill necessary to the success of an embedded design project. If you don’t use the new computing resources wisely, you will be vulnerable to a competitor who does—not only in terms of features, performance, and cost advantages to be gained, but in consideration of the growing value of data that can be collected from embedded systems in operation.

Figure 2. Edge computing offers the choice of three different kinds of processing sites.

The Joy of Partitioning

Unfortunately, partitioning is not often a skill embedded-system designers cultivate. Traditional embedded designs employ a single processor, or at worst a multi-core SoC with an obvious division of labor amongst the cores.

But edge computing creates a new scale of difficulty. There are several different kinds of processing sites, each with quite distinct characteristics. And the connections between processors are far more complicated than the nearly transparent inter-task communications of shared-memory multicore systems. So, doing edge computing well requires a rather formal partitioning process. It begins with defining the tasks and identifying their computing, storage, bandwidth, and latency requirements. Then the process continues by characterizing the compute resources you have available, and the links between them. Finally, partitioning must map tasks onto processors and inter-task communications onto links so that the system requirements are met. This is often an iterative process that at best refines the architecture and at worst turns into a protracted, multi-party game of Whack-a-Mole. It is helpful, perhaps, to look at each of these issues: tasks, processing and storage sites, and communications links, in more detail.
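As a toy illustration of the mapping step described above, the sketch below greedily assigns each task to the first site (edge, then fog, then cloud) that satisfies both its latency bound and its compute demand. All the site and task numbers are invented for the example:

```python
# Illustrative site characteristics: round-trip link latency from the
# sensors/actuators (ms) and available compute (arbitrary units).
SITES = {
    "edge":  {"latency_ms": 1,   "compute": 10},
    "fog":   {"latency_ms": 10,  "compute": 100},
    "cloud": {"latency_ms": 100, "compute": 10000},
}

def place(task):
    """Return the first site, ordered edge -> fog -> cloud, that meets
    both the task's deadline and its compute demand, or None."""
    for name in ("edge", "fog", "cloud"):
        site = SITES[name]
        if site["latency_ms"] <= task["max_latency_ms"] and \
           site["compute"] >= task["compute"]:
            return name
    return None  # infeasible: refactor the task or add local resources

tasks = [
    {"name": "motor control loop", "max_latency_ms": 2,      "compute": 1},
    {"name": "vision inference",   "max_latency_ms": 50,     "compute": 80},
    {"name": "fleet analytics",    "max_latency_ms": 10_000, "compute": 5000},
    {"name": "AI-in-the-loop",     "max_latency_ms": 2,      "compute": 500},
]

for t in tasks:
    print(t["name"], "->", place(t))
```

The last task, a heavyweight algorithm with a tight deadline, maps to no site at all: exactly the conflict, discussed later, that forces designers to refactor the task or put more resources at the edge.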

The Tasks

There are several categories of tasks in a traditional embedded system, and a couple of categories that have recently become important for many designs. Each category has its own characteristic needs in computing, storage, I/O bandwidth, and task latency.

In any embedded design there are supervisory and housekeeping tasks that are necessary, but are not particularly compute- or I/O- intensive, and that have no hard deadlines. This category includes most operating-system services, user interfaces, utilities, system maintenance and update, and data logging.

A second category of tasks with very different characteristics is present in most embedded designs. These tasks directly influence the physical behavior of the system, and they do have hard real-time deadlines, often because they are implementing algorithms within feedback control loops responsible for motion control or dynamic process control. Or they may be signal-processing or signal interpretation tasks that lie on a critical path to a system response, such as object recognition routines behind a camera input.

Often these tasks don’t have complex I/O needs: just a stream or two of data in and one or two out. But today these data rates can be extremely high, as in the case of multiple HD cameras on a robot or digitized radar signals coming off a target-acquisition and tracking radar. Algorithm complexity has traditionally been low, held down by the history of budget-constrained embedded designs in which a microcontroller had to implement the digital transfer function in a control loop. But as control systems adopt more modern techniques, including stochastic state estimation, model-based control, and, recently, insertion of artificial intelligence into control loops, in some designs the complexity of algorithms inside time-critical loops has exploded. As we will see, this explosion scatters shrapnel over a wide area.

The most important issue for all these time-critical tasks is that the overall delay from sensor or control input to actuator response be below a set maximum latency, and often that it lies within a narrow jitter window. That makes partitioning of these tasks particularly interesting, because it forces designers to consider both execution time—fully laden with indeterminacies, memory access and storage access delays—and communications latencies together. The fastest place to execute a complex algorithm may be unacceptably far from the system.

We also need to recognize a third category of tasks. These have appeared fairly recently for many designers, and differ from both supervisory and real-time tasks. They arise from the intrusion of three new areas of concern: machine learning, functional safety, and cyber security. The distinguishing characteristic of these tasks is that, while each can be performed in miniature with very modest demands on the system, each can quickly develop an enormous appetite for computing and memory resources. And, most unfortunately, each can end up inside delay-sensitive control loops, posing very tricky challenges for the design team.

Machine learning is a good case in point. Relatively simple deep-learning programs are already being used as supervisory tasks to, for instance, examine sensor data to detect progressive wear on machinery or signs of impending failure. Such tasks normally run in the cloud without any real-time constraints, which is just as well, as they do best with access to huge volumes of data. At the other extreme, trained networks can be ported to quite compact blocks of code, especially with the use of small hardware accelerators, making it possible to use a neural network inside a smart phone. But a deep-learning inference engine trained to detect, say, excessive vibration in a cutting tool during a cut or the intrusion of an unidentified object into a robot’s planned trajectory—either of which could require immediate intervention—could end up being both computationally intensive and on a time-critical path.

Similarly for functional safety and system security, simple rule-based safety checks or authentication/encryption tasks may present few problems for the system design. But simple often, in these areas, means weak. Systems that must operate in an unfamiliar environment or must actively repel novel intrusion attempts may require very complex algorithms, including machine learning, with very fast response times. Intrusion detection, for instance, is much less valuable as a forensic tool than as a prevention.


Traditionally, the computing and storage resources available to an embedded system designer were easy to list. There were microcontroller chips, single-board computers based on commercial microprocessors, and in some cases boards or boxes using digital signal processing hardware of one sort or another. Any of these could have external memory, and most could attach, with the aid of an operating system, mass storage ranging from a thumb drive to a RAID disk array. And these resources were all in one place: they were physically part of the system, directly connected to sensors, actuators, and maybe to an industrial network.

But add Internet connectivity, and this simple picture snaps out of focus. The original system is now just the network edge. And in addition to edge computing, there are two new locations where there may be important computing resources: the cloud, and what Cisco and some others are calling the fog.

The edge remains much as it has been, except of course that everything is growing in power. In the shadow of the massive market for smart-phone SoCs, microcontrollers have morphed into low-cost SoCs too, often with multiple 32-bit CPU cores, extensive caches, and dedicated functional IP suited to a particular range of applications. Board-level computers have exploited the monotonically growing power of personal computer CPU chips and the growth in solid-state storage. And the commoditization of servers for the world’s data centers has put even racks of data-center-class servers within the reach of well-funded edge computing sites, if the sites can provide the necessary space, power, and cooling.

Recently, with the advent of more demanding algorithms, hardware accelerators have become important options for edge computing as well. FPGAs have long been used to accelerate signal-processing and numerically intensive transfer functions. Today, with effective high-level design tools they have broadened their use beyond these applications into just about anything that can benefit from massively parallel or, more importantly, deeply pipelined execution. GPUs have applications in massively data-parallel tasks such as vision processing and neural network training. And as soon as an algorithm becomes stable and widely used enough to have good library support—machine vision, location and mapping, security, and deep learning are examples—someone will start work on an ASIC to accelerate it.

The cloud, of course, is a profoundly different environment: a world of essentially infinite numbers of big x86 servers and storage resources. Recently, hardware accelerators from all three races—FPGAs, GPUs, and ASICs—have begun appearing in the cloud as well. All these resources are available for the embedded system end-user to rent on an as-used basis.

The important questions in the cloud are not about how many resources are available—there are more than you need—but about terms and conditions. Will your workload run continuously, and if not, what is the activation latency? What guarantees of performance and availability are there? What will this cost the end user? And what happens if the cloud platform provider—who in specialized application areas is often not a giant data-center owner, but a small company that itself leases or rents the cloud resources—suffers a change in situation? These sorts of questions are generally not familiar to embedded-system developers, nor to their customers.

Recently there has been discussion of yet another possible processing site: the so-called fog. The fog is located somewhere between the edge and the cloud, both physically and in terms of its characteristics.

As network operators and wireless service providers turn from old dedicated switching hardware to software on servers, increasingly, Internet connections from the edge will run not through racks of networking hardware, but through data centers. For edge systems relying on cloud computing, this raises an important question: why send your inter-task communications through one data center just to get it to another one? It may be that the networking data center can provide all the resources your task needs without having to go all the way to a cloud service provider (CSP). Or it may be that a service provider can offer hardware or software packages to allow some processing in your edge-computing system, or in an aggregation node near your system, before having to make the jump to a central facility. At the very least you would have one less vendor to deal with. And you might also have less latency and uncertainty introduced by Internet connections. Thus, you can think of fog computing as a cloud computing service spread across the network and into the edge, with all the advantages and questions we have just discussed.


When all embedded computing is local, inter-task communications can almost be neglected. There are situations where multiple tasks share a critical resource, like a message-passing utility in an operating system, and on extremely critical timing paths you must be aware of the uncertainty in the delay in getting a message between tasks. But for most situations, how long it takes to trigger a task and get data to it is a secondary concern. Most designs confine real-time tasks to a subset of the system where they have a nearly deterministic environment, and focus their timing analyses there.

But when you partition a system between edge, fog, and cloud resources, the kinds of connections between those three environments, their delay characteristics, and their reliability all become important system issues. They may limit where you can place particular tasks. And they may require—by imposing timing uncertainty and the possibility of non-delivery on inter-task messages—the use of more complex control algorithms that can tolerate such surprises.

So what are the connections? We have to look at two different situations: when the edge hardware is connected to an internet service provider (ISP) through copper or fiber-optics (or a blend of the two), and when the connection is wireless (Figure 3).

Figure 3. Tasks can be categorized by computational complexity and latency needs.

The two situations have one thing in common. Unless your system will have a dedicated leased virtual channel to a cloud or fog service provider, part of the connection will be over the public Internet. That part could be from your ISP’s switch plant to the CSP’s data center, or it could be from a wireless operator’s central office to the CSP’s data center.

That Internet connection has two unfortunate characteristics, from this point of view. First, it is a packet-switching network in which different packets may take very different routes, with very different latencies. So, it is impossible to predict more than statistically what the transmission delay between two points will be. Second, Internet Protocol by itself offers only best-effort, not guaranteed, delivery. So, a system that relies on cloud tasks must tolerate some packets simply vanishing.

An additional point worth considering is that so-called data locality laws—which limit or prohibit transmission of data outside the country of origin—are spreading around the world. Inside the European Union, for instance, it is currently illegal to transmit data containing personal information across the borders of a number of member countries, even to other EU members. And in China, which uses locality rules for both privacy and industrial policy purposes, it is illegal to transmit virtually any sort of data to any destination outside the country. So, designers must ask whether their edge system will be able to exchange data with the cloud legally, given the rapidly evolving country-by-country legislation.

These limitations are one of the potential advantages of the fog computing concept. By not traversing the public network, systems relying on ISP or wireless-carrier computing resources or local edge resources can exploit additional provisions to reduce the uncertainty in connection delays.

But messages still have to get from your edge system to the service provider’s aggregation hardware or data center. For ISPs, that will mean a physical connection, typically using Internet Protocol over fiber or hybrid copper/fiber connections, often arranged in a tree structure. Such connections allow for provisioning of fog computing nodes at points where branches intersect. But as any cable TV viewer can attest, they also allow for congestion at nodes or on branches to create great uncertainties in available bandwidth and latency. Suspension of net neutrality in the US has added a further uncertainty, allowing carriers to offer different levels of service to traffic from different sources, and to charge for quality-of-service guarantees.

If the connection is wireless, as we are assured many will be once 5G is deployed, the uncertainties multiply. A 5G link will connect your edge system through multiple parallel RF channels and multiple antennas to one or more base stations. The base stations may be anything from a small cell with minimal hardware to a large local processing site with, again, the ability to offer fog-computing resources, to a remote radio transceiver that relies on a central data center for all its processing. In at least the first two cases, there will be a separate backhaul network, usually either fiber or microwave, connecting the base station to the service provider’s central data center.

The challenges include, first, that latency will depend on what kind of base stations you are working with—something often completely beyond your control. Second, changes in RF transmission characteristics along the mostly line-of-sight paths can be caused by obstacles, multipath shifts, vegetation, and even weather. If the channel deteriorates, retry rates will go up, and at some point the base station and your edge system will negotiate a new data rate, or roll the connection over to a different base station. So even for a fixed client system, the characteristics of the connection may change significantly over time, sometimes quite rapidly.


Connectivity opens a new world for the embedded-system designer, offering amounts of computing power and storage inconceivable in local platforms. But it creates a partitioning problem: an iterative process of locating tasks where they have the resources they need, but with the latencies, predictability, and reliability they require.

For many tasks location is obvious. Big-data analyses that comb terabytes of data to predict maintenance needs or extract valuable conclusions about the user can go in the cloud. So can compute-intensive real-time tasks, when acceptable latency is long and the occasional lost message is survivable or handled in a higher-level networking protocol. A smart speaker in your kitchen can always reply “Let me think on that a moment,” or “Sorry, what?”

Critical, high-frequency control loops must stay at or very near the edge. Conventional control algorithms can’t tolerate the delay and uncertainty of any other choice.

But what if there is a conflict: a task too big for the edge resources, but too time-sensitive to be located across the Internet? Fog computing may solve some of these dilemmas. Others may require you to place more resources in your system.

Just how far today’s technology has enriched the choices was illustrated recently by a series of Microsoft announcements. Primarily involved in edge computing as a CSP, Microsoft has for some time offered the Azure Stack—essentially, an instance of their Azure cloud platform—to run on servers on the customer premises. Just recently, the company enriched this offering with two new options: FPGA acceleration, including Microsoft’s Project Brainwave machine-learning acceleration, for Azure Stack installations, and Azure Sphere, a way of encapsulating Azure’s security provisions in an approved microcontroller, secure operating system, and coordinated cloud service for use at the edge. Similarly, Intel recently announced the OpenVINO™ toolkit, a platform for implementing vision-processing and machine intelligence algorithms at the edge, relying on CPUs with optional support from FPGAs or vision-processing ASICs. Such fog-oriented provisions could allow embedded-system designers to simply incorporate cloud-oriented tasks into hardware within the confines of their own systems, eliminating the communications considerations and making ideas like deep-learning networks within control loops far more feasible.

In other cases, designers may simply have to refactor critical tasks into time-critical and time-tolerant portions. Or they may have to replace tried and true control algorithms with far more complex approaches that can tolerate the delay and uncertainty of communications links. For example, a complex model-based control algorithm could be moved to the cloud, and used to monitor and adjust a much simpler control loop that is running locally at the edge.

Life at the edge, then, is full of opportunities and complexities. It offers a range of computing and storage resources, and hence of algorithms, never before available to most embedded systems. But it demands a new level of analysis and partitioning, and it beckons the system designer into realms of advanced system control that go far beyond traditional PID control loops. Competitive pressures will force many embedded systems into this new territory, so it is best to get ahead of the curve.






Understanding Neuromorphic Computing

The phrase neuromorphic computing has a long history, dating back at least to the 1980s, when legendary Caltech researcher Carver Mead proposed designing ICs to mimic the organization of living neuron cells. But recently the term has taken on a much more specific meaning, to denote a branch of neural network research that has diverged significantly from the orthodoxy of convolutional deep-learning networks. So, what exactly is neuromorphic computing now? And does it have a future of important applications, or is it just another fertile ground for sowing thesis projects?

A Matter of Definition

As the name implies—if you read Greek, anyway—neuromorphic networks model themselves closely on biological nerve cells, or neurons. This is quite unlike modern deep-learning networks, so it is worthwhile to take a quick look at biological neurons.

Living nerve cells have four major components (Figure 1). Electrochemical pulses enter the cell through tiny interface points called synapses. The synapses are scattered over the surfaces of tree-root-like fibers called dendrites, which reach out into the surrounding nerve tissue, gather pulses from their synapses, and conduct the pulses back to the heart of the neuron, the cell body.

Figure 1. A schematic diagram shows synapses, dendrites, the cell body, and an axon.

In the cell body are structures that transform the many pulse trains arriving over the dendrites into an output pulse train. At least 20 different transform types have been identified in nature, ranging from simple logic-like functions to some rather sophisticated transforms. One of the most interesting for researchers—and the most widely used in neuromorphic computing—is the leaky integrator: a function that adds up pulses as they arrive, while constantly decrementing the sum at a fixed rate. If the sum exceeds a threshold, the cell body outputs a pulse.
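The leaky integrator described above is easy to sketch in a few lines; the leak rate and threshold below are arbitrary illustration values, not parameters from any particular neuromorphic chip:

```python
def leaky_integrate_and_fire(spikes, leak=0.5, threshold=1.5):
    """Minimal leaky integrate-and-fire cell body.

    `spikes` gives the number of input pulses arriving at each time step.
    The running sum leaks by a fixed amount per step; when it reaches
    `threshold`, the node emits an output pulse and resets.
    """
    potential = 0.0
    out = []
    for s in spikes:
        potential = max(0.0, potential - leak) + s  # leak, then integrate
        if potential >= threshold:
            out.append(1)       # output spike
            potential = 0.0     # reset after firing
        else:
            out.append(0)
    return out

# Pulses close together in time cross the threshold; isolated pulses leak away:
print(leaky_integrate_and_fire([1, 1, 0, 0, 1, 1, 1]))
print(leaky_integrate_and_fire([1, 0, 1, 0, 1]))
```

Note how the output depends on pulse timing, not just pulse counts: that timing sensitivity is exactly what distinguishes this style of node from the arithmetic nodes of conventional networks.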

Synapses, dendrites, and cell bodies are three of the four components. The fourth one is the axon: the tree-like fiber that conducts output pulses from the cell body into the nervous tissue, ending at synapses on other cells’ dendrites or on muscle or organ synapses.

So neuromorphic computers use architectural structures modeled on neurons. But there are many different implementation approaches, ranging from pure software simulations to dedicated ICs. The best way to define the field as it exists today may be to contrast it against traditional neural networks. Both are networks in which relatively simple computations occur at the nodes. But beyond that generalization there are many important differences.

Perhaps the most fundamental difference is in signaling. The nodes in traditional neural networks communicate by sending numbers across the network, usually represented as either floating-point or integer digital quantities. Neuromorphic nodes send pulses, or sometimes strings of pulses, in which timing and frequency carry the information—in other words, forms of pulse code modulation. This is similar to what we observe in biological nervous systems.
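As a rough illustration of frequency-based pulse signaling, the sketch below encodes a scalar in [0, 1] as the firing rate of a spike train. The step count and random generator are arbitrary choices for illustration, not drawn from any real system:

```python
import random

def rate_encode(value, n_steps=100, seed=42):
    """Encode `value` in [0, 1] as a spike train whose average
    firing rate approximates the value (Poisson-style rate coding)."""
    rng = random.Random(seed)
    return [1 if rng.random() < value else 0 for _ in range(n_steps)]

def rate_decode(spikes):
    """Recover the value as the observed firing rate."""
    return sum(spikes) / len(spikes)

spikes = rate_encode(0.3)
print(rate_decode(spikes))  # close to 0.3
```

The longer the observation window, the better the decoded rate approximates the original value, which is one reason pulse-coded systems trade latency for precision.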

A second important difference is in the function performed in each node. Conventional network nodes do arithmetic: they multiply the numbers arriving on each of their inputs by predetermined weights and add up the products. Mathematicians see this as a simple dot product of the input vector and the weight vector. The resulting sum may then be subjected to some non-linear function such as normalization, min or max setting, or whatever other creative impulse moves the network designer. The number is then sent on to the next layer in the network.
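In code, a conventional node is little more than a dot product followed by a non-linearity. ReLU is used here as one common choice; the inputs and weights are made-up values:

```python
def conventional_node(inputs, weights, bias=0.0):
    """One node of a conventional network: the dot product of the
    input vector and weight vector, then a ReLU non-linearity."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return max(0.0, total)  # ReLU: clip negative sums to zero

print(conventional_node([1.0, 2.0, 3.0], [0.5, -0.25, 0.1]))  # ≈ 0.3
```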

In contrast, neuromorphic nodes, like neuron cell bodies, can perform a large array of pulse-oriented functions. Most commonly used, as we have mentioned, is the leaky integrate and spike function, but various designers have implemented many others. Like real neurons, neuromorphic nodes usually have many input connections feeding in, but usually only one output. In reference to living cells, neuromorphic inputs are often called synapses or dendrites, the node may be called a neuron, and the output tree an axon.

The topologies of conventional and neuromorphic networks also differ significantly. Conventional deep-learning networks comprise strictly cascaded layers of computing nodes. The outputs from one layer of nodes go only into selected inputs of the next layer (Figure 2). In inference mode—when the network is already trained and is in use—signals flow only in one direction. (During training, signals flow in both directions, as we will discuss in a moment.)

Figure 2. The conventional deep-learning network is a cascaded series of computing nodes.

There are no such restrictions on the topology of neuromorphic networks. As in real nervous tissue, a neuromorphic node may get inputs from any other node, and its axon may extend to anywhere (Figure 3). Thus, configurations such as feedback loops and delay-line memories, anathema in conventional neural networks, are in principle quite acceptable in the neuromorphic field. This allows the topologies of neuromorphic networks to extend well beyond what can be done in conventional networks, into areas of research such as long short-term memory networks and other recurrent networks.

Figure 3. Connections between living neurons can be complex and three-dimensional.



Carver Mead may have dreamt of implementing the structure of a neuron in silicon, but developers of today’s deep-learning networks have abandoned that idea for a much simpler approach. Modern, conventional neural networks are in effect software simulations—computer programs that perform the matrix arithmetic defined by the neural network architecture. The network is just a graphic representation of a large linear algebra computation.

Given the inefficiencies of simulation, developers have been quick to adopt optimizations to reduce the computing load, and hardware accelerators to speed execution. Data compression, use of shorter number formats for the weights and outputs, and use of sparse-matrix algorithms have all been applied. GPUs, clever arrangements of multiply-accumulator arrays, and FPGAs have been used as accelerators. An interesting recent trend has been to explore FPGAs or ASICs organized as data-flow engines with embedded RAM, in an effort to reduce the massive memory traffic loads that can form around the accelerators—in effect, extracting a data-flow graph from the network and encoding it in silicon.

In contrast, silicon implementations of neuromorphic processors tend to resemble architecturally the biological neurons they consciously mimic, with identifiable hardware blocks corresponding to synapses, dendrites, cell bodies, and axons. The implementations are usually, but not always, digital, allowing them to run much faster than organic neurons or analog emulations, but they retain the pulsed operation of the biological cells and are often event-driven, offering the opportunity for huge energy savings compared to software or to synchronous arithmetic circuits.

Some Examples

The grandfather of neuromorphic chips is IBM’s TrueNorth, a 2014 spin-off from the US DARPA research program Systems of Neuromorphic Adaptive Plastic Scalable Electronics. (Now that is really working for an acronym.) The heart of TrueNorth is a digital core that is replicated within a network-on-chip interconnect grid. The core contains five key blocks:

  1. The neuron: a time-multiplexed pulse-train engine that implements the cell-body functions for a group of 256 virtual neurons.
  2. A local 256 x 410-bit SRAM which serves as a crossbar connecting synapses to neurons and axons to synapses, and which stores neuron state and parameters.
  3. A scheduler that manages sequencing and processing of pulse packets.
  4. A router that manages transmission of pulse packets between cores.
  5. A controller that sequences operations within the core.

The TrueNorth chip includes 4,096 such cores.

The components in the core cooperate to perform a hardware emulation of neuron activity. Pulses move through the crossbar switch from axons to synapses to the neuron processor, and are transformed for each virtual neuron. Pulse trains pass through the routers to and from other cores as encoded packets. Since transforms like leaky integration depend on arrival time, the supervisory hardware in the cores maintains a time-stamping mechanism so that the intended arrival time of each packet is preserved.

Like many other neuromorphic implementations, TrueNorth’s main neuron function is a leaky pulse integrator, but designers have added a number of other functions, selectable via control bits in the local SRAM. As an exercise, IBM designers showed that their neuron was sufficiently flexible to mimic 20 different functions that have been observed in living neurons.


So far we have discussed mostly behavior of conventional and neuromorphic networks that have already been fully trained. But of course that is only part of the story. How the networks learn defines another important distinction between conventional and neuromorphic networks. And that subject will introduce another IC example.

Let’s start with networks of living neurons. Learning in these living organisms is not well understood, but a few of the things we do know are relevant here. First, there are two separate aspects to learning: real nerve cells are able to reach out and establish new connections, in effect rewiring the network as they learn. And they also have a wide variety of functions available in cell bodies. So, learning can involve both changing connections and changing functions. Second, real nervous systems learn very quickly. Humans can learn to recognize a new face or a new abstract symbol from one or two instances. Conventional convolutional deep-learning networks might require tens of thousands of training examples to master the new item.

This observation suggests, correctly, that training of deep-learning networks is profoundly different from biological learning. To begin with, the two aspects of learning are separated. Designers specify a topology before training, and it does not change unless the network requires redesign. Only the weights applied to the inputs at each node are altered during training.

The process itself is also different. The implementation of the network that gets trained is generally a software simulation running on server CPUs, often with graphics processing unit (GPU) acceleration. Trainers must assemble huge numbers—often tens or hundreds of thousands—of input data sets, and label each one with the correct classification values. Then one by one, trainers feed an input data set into the simulation’s inputs, and simultaneously input the labels. The software compares the output of the network to the correct classification and adjusts the weights of the final stage to bring the output closer to the right answers, generally using a gradient descent algorithm. Then the software moves back to the previous stage and repeats the process, and so on, until all the weights in the network have been adjusted to be a bit closer to yielding the correct classification for this example. Then on to the next example. Obviously this is time- and compute-intensive.
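A toy version of that weight-update step, for a single linear node and squared-error loss, looks like the sketch below. The learning rate and training data are invented for illustration:

```python
def train_step(weights, inputs, target, lr=0.1):
    """One gradient-descent update for a single linear node.
    Loss is 0.5 * (output - target)**2, so d(loss)/d(w_i) = error * x_i."""
    output = sum(w * x for w, x in zip(weights, inputs))
    error = output - target
    return [w - lr * error * x for w, x in zip(weights, inputs)]

weights = [0.0, 0.0]
for _ in range(50):  # repeatedly present the same toy example
    weights = train_step(weights, [1.0, 2.0], target=1.0)
print(weights)  # converges toward [0.2, 0.4], which yields output 1.0
```

Real training repeats this over every layer (via backpropagation) and every example in the data set, which is where the enormous compute cost comes from.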

Once the network has been trained and tested—there is no guarantee that training on a given network and set of examples will be successful—designers extract the weights from the trained network, optimize the computations, and port the topology and weights to an entirely different piece of software with a quite different sort of hardware acceleration, this time optimized for inference. This is how a convolutional network that required days of training in a GPU-accelerated cloud can end up running in a smart phone.

Neuromorphic Learning

Learning in TrueNorth is quite a different matter. The system includes its own programming language that allows users to set up the parameters in each core’s local SRAM, defining synapses within the core, selecting weights to apply to them, and choosing the functions for the virtual neurons, as well as setting up the routing table for connections with other cores. There is no learning mode per se, but apparently the programming environment can be set up so that TrueNorth cores can modify their own SRAMs, allowing for experiments with a wide variety of learning models.

That brings us to one more example, the Loihi chip described this year by Intel. Superficially, Loihi resembles TrueNorth rather closely. The chip is built as an orthogonal array of cores that contain digital emulations of cell-body functions and SRAM-based synaptic connection tables. Both use digital pulses to carry information. But that is about the end of the similarity.

Instead of one time-multiplexed neuron processor in each core, each Loihi core contains 1,024 simple pulse processors, preconnected in what Intel describes as tree-like groups. Communications between these little pulse processors are said to be entirely asynchronous. The processors themselves perform leaky integration via a digital state machine. Synapse weights vary the influence of each synapse on the neuron body. Connectivity is hierarchical, with direct tree connections within a group, links between groups within a core, and a mesh packet network connecting the 128 cores on the die.

The largest difference between Loihi and TrueNorth is in learning. Each Loihi core includes a microcoded Learning Engine that captures trace data from each neuron’s synaptic inputs and axon outputs and can modify the synaptic weights during operation. The fact that the engine is programmable allows users to explore different kinds of learning, including unsupervised approaches, where the network learns without requiring tagged examples.
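Loihi’s learning rules are microcoded and not detailed here, but one classic rule such an engine can express is spike-timing-dependent plasticity (STDP), in which a synapse is strengthened when its input spike precedes the output spike and weakened when it follows. The constants below are illustrative, not Intel’s:

```python
import math

def stdp_update(weight, dt, a_plus=0.10, a_minus=0.12, tau=20.0):
    """STDP sketch. dt = t_post - t_pre in ms. A pre-synaptic spike
    shortly before the post-synaptic spike strengthens the synapse;
    one shortly after weakens it. Weight is clamped to [0, 1]."""
    if dt > 0:
        weight += a_plus * math.exp(-dt / tau)   # potentiation
    elif dt < 0:
        weight -= a_minus * math.exp(dt / tau)   # depression
    return max(0.0, min(1.0, weight))

print(stdp_update(0.5, dt=5.0))   # > 0.5: strengthened
print(stdp_update(0.5, dt=-5.0))  # < 0.5: weakened
```

Because the rule depends only on locally observable spike times, it needs no labeled examples, which is what makes it a candidate mechanism for unsupervised learning.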

Where are the Apps?

We have only described two digital implementations of neuromorphic networks. There are many more examples, both digital and mixed-signal, as well as some rather speculative projects such as an MIT analog device using crystalline silicon-germanium to implement synapses. But are these devices only research aids and curiosities, or will they have practical applications? After all, conventional deep-learning networks, for all their training costs and—probably under-appreciated—limitations, are quite good at some kinds of pattern recognition.

It is just too early to say. Critics point out that in the four years TrueNorth has been available to researchers, the most impressive demo has been a pattern recognition implementation that was less effective than convolutional neural networks, and to make things even less impressive, was constructed by emulating a conventional neural network in the TrueNorth architecture. As for the other implementations, some were intended only for neurological research, some have been little-used, and some, like Loihi, are too recent to have been explored much.

But neuromorphic networks offer two tantalizing promises. First, because they are pulse-driven, potentially asynchronous, and highly parallel, they could be a gateway to an entirely new way of computing at high performance and very low energy. Second, they could be the best vehicle for developing unsupervised learning—a goal that may prove necessary for key applications like autonomous vehicles, security, and natural-language comprehension. Succeed or fail, they will create a lot more thesis projects.

        Copyright © 1995-2016 Altera Corporation, 101 Innovation Drive, San Jose, California 95134, USA

[From the Sandbox] Detecting sarcasm with convolutional neural networks

Hi, Habr! Here is a translation of the article "Detecting Sarcasm with Deep Convolutional Neural Networks" by Elvis Saravia.

One of the key problems in natural language processing is sarcasm detection. Sarcasm detection is also important in other areas, such as affective computing and sentiment analysis, since it can reflect the polarity of a sentence.

This article shows how to detect sarcasm, and also provides a link to a neural-network sarcasm detector.
Read more →
In The Age Of Relevancy, Will Impressions Matter?
With the increasing sophistication of deep learning algorithms and neural networks, this is becoming less of a problem. Deep learning can do a lot for ...
Time to get smart about artificial intelligence
The current technology puts electronic neural networks through a deep-learning process using large datasets. The resulting AI is not perfect, but it is ...
Finally, Computers Can Learn To Count Better
With the onslaught of neural networks and deep learning, the breadth of tasks carried out by a computer has grown very fast. Neural networks have ...
08-15-2018 Joint INCOSE/IEEE SMCS Webinar
Speaker: Thomas McDermott, Jr., Sunil Bharitkar, and Chistopher Nemeth, Stevens Institute of Technology, HP Labs, and Applied Research Associates Talk Title: Bridging the Gulf of Execution Series: INCOSE Speaker Series Abstract: Research results routinely fail to survive into the development phase of Research and Development projects. This gulf of execution that blocks research findings from being realized in the development phase of many projects continues to bog down R and D practice. Concurrent engineering was supposed to be a solution, but was it? Open innovation models were designed to bridge the gap, but have they? What is the gulf, and how did we get here? It might be a matter of professional focus. Research invests in understanding the problem, and Development invests in producing the solution. Innovation happens when these link up around people in a culture that promotes risk-taking. Or is it communication? Innovation happens when people from different disciplines or roles come together with common understanding. The issue spans multiple sectors. Large industries struggle to build an innovation culture when delivery of existing products and services is at the forefront. Universities have an innovation culture, but industry needs to adopt a systems approach to realize value from that culture. Industry-university partnerships are effective when both parties realize relationships across a broad range of university programs, from students to startups, and learn how to couple the university innovation system to the industry innovation enterprise. Three examples from INCOSE and IEEE SMCS members will suggest ways to resolve this enduring issue. Georgia Tech--We view this relationship as a system-of-systems model, where the industry product/service enterprise is coupled to the university innovation enterprise in a larger sociotechnical systems context, and where the relationship promotes all three innovation horizons--sustaining, disruptive, and transformational. 
Our experience in building such relationships at Georgia Tech indicates both parties can realize success when a range of enablers to industry-university interaction promote a range of innovation opportunities--basic and translational--over a long term partnerships. This systems-of-systems model will be presented as a general context, then generalized examples from Georgia Tech industry partnership efforts will be discussed to illustrate the model. Applied Research Associates--Our team developed a system for DoD over 3 years to support real time decision and communication support among Burn Intensive Care Unit clinicians. This example will describe collaboration among 35 members from military healthcare professionals, to cognitive psychologists and software and machine learning developers. HP Labs--In the Emerging Computer Lab within HP Labs, among other research areas we are involved in the areas of speech analysis and interpretation, audio signal processing in conjunction with machine learning. In this part of the webinar we will explore one research topic in audio processing that we undertook, after identifying the deficiencies on HP devices, and the challenges encountered during development. We also present examples of the solutions to overcome these challenges which have helped contribute towards a scalable deployment of the technology based off of this research. Biography: Thomas A. (Tom) McDermott, Jr is a leader, educator, and innovator in multiple technology fields. He currently serves as Deputy Director of the Systems Engineering Research Center at Stevens Institute of Technology in Hoboken, NJ, as well as a consultant specializing in strategic planning for uncertain environments. He studies systems engineering, systems thinking, organizational dynamics, and the nature of complex human socio-technical systems. 
He teaches system architecture concepts, systems thinking and decision making, and the composite skills required at the intersection of leadership and engineering. Tom has over 30 years of background and experience in technical and management disciplines, including over 15 years at the Georgia Institute of Technology and 18 years with Lockheed Martin. He is a graduate of the Georgia Institute of Technology, with degrees in Physics and Electrical Engineering. With Lockheed Martin he served as Chief Engineer and Program Manager for the F-22 Raptor Avionics Team, leading the program to avionics first flight. Tom was GTRI Director of Research and interim Director from 2007-2013. During his tenure the impact of GTRI significantly expanded, research awards doubled to over 300M dollars, faculty research positions increased by 60 percent, and the organization was recognized as one of Atlanta's best places to work. He also has a visiting appointment in the Georgia Tech Sam Nunn School of International Affairs. Tom is one of the creators of Georgia Tech's Professional Masters degree in Applied Systems Engineering and lead instructor of the Leading Systems Engineering Teams course. Sunil Bharitkar received his Ph.D. in Electrical Engineering, minor in Mathematics from the University of Southern California in 2004 and is presently the speech-audio research Distinguished Technologist at HP Labs. He is involved in research in array signal processing, speech/audio analysis and processing, biomedical signal processing, and machine learning. From 2011-2016 he was the Director of Audio Technology at Dolby leading-guiding research in audio, signal processing, haptics, machine learning, hearing augmentation, &standardization activities at ITU, SMPTE, AES. He co-founded the company Audyssey Labs in 2002 where he was VP Research responsible for inventing new technologies which were licensed to companies including IMAX, Denon, Audi, Sharp, etc. 
He also taught in the Department of Electrical Engineering at USC. Sunil has published over 50 technical papers and has over 20 patents in the area of signal processing applied to acoustics, neural networks and pattern recognition, and a textbook, Immersive Audio Signal Processing, from Springer-Verlag. Chris Nemeth is a Principal Scientist with Applied Research Associates, a 1200 member national science and engineering consulting firm. His recent research interests include technical work in complex high stakes settings, research methods in individual and distributed cognition, and understanding how information technology erodes or enhances system resilience. He has served as a committee member of the National Academy of Sciences, is widely published in technical journals. Dr. Nemeth earned his PhD in human factors and ergonomics from the Union Institute and University in 2003, and an MS in product design from the Institute of Design at Illinois Institute of Technology in 1984. His design and human factors consulting practice and his corporate career have encompassed a variety of application areas, including health care, transportation and manufacturing. As a consultant, he has performed human factors analysis and product development, and served as an expert witness in litigation related to human performance. His 26-year academic career has included seven years in the Department of Anesthesia and Critical Care at the University of Chicago Medical Center, and adjunct positions with the Northwestern University McCormick College of Engineering and Applied Sciences, and Illinois Institute of Technology. He is a Fellow of the Design Research Society, a Life Senior Member of the Institute of Electrical and Electronic Engineers and has served 8 years on the IEEE Systems, Man and Cybernetics Society Board of Governors. He retired from the Navy in 2001 at the rank of Captain after a 30-year active duty and reserve career. 
More Info: Event number: 592 564 704, Event password: INCOSE115 Webcast:
