
          Comment on How to Develop Convolutional Neural Networks for Multi-Step Time Series Forecasting by Jason Brownlee
This would be a multi-step multi-variate problem. I show how in my book. I would recommend treating it like a seq2seq problem and forecast n variables for each step in the output sequence. An encoder-decoder model would be appropriate with a CNN or LSTM input model.
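For readers looking for a concrete starting point, a minimal Keras sketch of that kind of encoder-decoder (CNN encoder, LSTM decoder, n output variables per forecast step) might look like the following; the window length, feature counts, and layer sizes are placeholders rather than values from the post or book:

# Illustrative sketch only: CNN-encoder / LSTM-decoder for multi-step,
# multivariate forecasting. Shapes below are assumptions, not from the post.
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import (Conv1D, MaxPooling1D, Flatten,
                                     RepeatVector, LSTM, TimeDistributed, Dense)

n_steps_in, n_features = 14, 8    # input window length and variables (assumed)
n_steps_out, n_outputs = 7, 8     # forecast horizon and variables per step (assumed)

model = Sequential([
    Conv1D(64, 3, activation='relu', input_shape=(n_steps_in, n_features)),  # CNN encoder
    MaxPooling1D(),
    Flatten(),
    RepeatVector(n_steps_out),                              # one encoding per output step
    LSTM(100, activation='relu', return_sequences=True),    # sequence decoder
    TimeDistributed(Dense(n_outputs))                        # n variables per output step
])
model.compile(optimizer='adam', loss='mse')
# model.fit(X, y)  # X: (samples, n_steps_in, n_features), y: (samples, n_steps_out, n_outputs)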
          IBM’s Machine Learning Accelerator at VLSI 2018

IBM presented a neural network accelerator at VLSI 2018 showcasing a variety of architectural techniques for machine learning, including a regular 2D array of small processing elements optimized for dataflow computation, reduced precision arithmetic, and explicitly addressed memories.


The post IBM’s Machine Learning Accelerator at VLSI 2018 appeared first on Real World Tech.


          Deep learning for numerical analysis explained

Deep learning (DL) is a subset of neural networks, which have been around since the 1960s. Computing resources and the need for large amounts of training data were the crippling factors for neural networks. But with the growing availability of computing resources such as multi-core machines, graphics processing units [...]

Deep learning for numerical analysis explained was published on SAS Users.


          Data Engineer 2 - IMO - Intelligent Medical Objects, Inc. - Northbrook, IL
Familiarity with machine learning methods, such as clustering analysis and neural networks. Downtown commuters will enjoy free shuttle service to IMO’s...
From IMO - Intelligent Medical Objects, Inc. - Mon, 24 Sep 2018 17:52:43 GMT - View all Northbrook, IL jobs
          python-pytorch-ignite-git
High-level library to help with training neural networks in PyTorch
          Prediction of Stock Price in Investor Portfolios with Stock Price Time Series Analysis using ANN
Prediction of Stock Price in Investor Portfolios with Stock Price Time Series Analysis using ANN. Wibiksana Hendra (S2 Informatics Engineering, Telkom University, Bandung, Indonesia, hendra2621@gmail.com), Houw Liong Thee (Postgraduate, Telkom University, Bandung, Indonesia, thehl007@gmail.com), Fatchul Huda Arief (Mathematics Engineering, UIN Sunan Gunung Djati, Bandung, Indonesia, afhuda@uinsgd.ac.id).

Abstract—The Indonesia Stock Exchange (IDX) is the venue for stock trading in Indonesia. Its overall state is generally represented by the Jakarta Composite Index (JCI), the combined value of all stocks listed on the exchange, regardless of whether a given stock rose, fell, stayed flat, was not traded, or was suspended (barred from trading for a certain period) on that day. The data source used here is the closing price of the BNI, BCA, and Mandiri stocks over the five years 2011-2015, obtained from the Indonesia Stock Exchange via the Yahoo Finance site. Each stock's data is trained and tested to measure the accuracy achieved with this method. The stock prices predicted by the ANN are then merged into a portfolio, which shows whether the portfolio value increases or decreases; finally, the rate of change from loss-making predicted prices to profitable predicted prices is calculated. The daily-data accuracies for BNI, BCA, and Mandiri are 97.7474%, 98.2266%, and 97.8942%. Weekly accuracy is slightly lower than daily: 95.4247%, 97.0631%, and 96.5706%. Monthly accuracy is slightly lower than weekly: 91.6259%, 95.9425%, and 94.1434%. If an investor focuses all funds on a single stock, the portfolio profit is three times larger than before: if the profit of BNI stock is 19.19%, the portfolio profit becomes 19.19% x 3 = 57.57%, compared with the summed profit of the three banks, 19.19 + 17.68 + 15.73 = 52.6%, for an additional portfolio benefit of 57.57% - 52.6% = 4.97%.

Keywords—stock, backpropagation, prediction, portfolio

Background

The Jakarta Composite Index (JCI) measures the combined performance of all stocks listed on the Indonesia Stock Exchange. It can be used to assess the general market situation or to gauge whether stock prices have risen or fallen; a rising JCI signals market excitement, while a falling JCI indicates sluggishness [1]. When the JCI rises, stock investors are pleased because they can realize a profit equal to the difference between the current selling price and the earlier purchase price. Conversely, when the JCI falls, most small and large investors panic and sell their stocks. A stock is proof of equity in a company: by buying a company's stock, you invest capital that management uses to finance the company's operations. There are two types of corporate shares, preferred stock and common stock. A portfolio is created when you diversify your investments across more than one stock, or across a combination of bonds, forex, property, or other assets, in order to reduce risk.
The stock data source is the closing price of the BNI, BCA, and Mandiri shares over the five years 2011-2015, taken from the Indonesia Stock Exchange [2] [3] [4]. The backpropagation algorithm is applied. Backpropagation is one method for pattern recognition, alongside the Perceptron, Adaline, and Madaline; it uses input data, hidden neurons, and output data to estimate a forecast value from the given source data, and it performs better than the other three pattern-recognition methods for time series problems. Predicting stock prices has become a challenge in this decade [5], and many previous researchers have sought the best model for predicting stock prices, such as Jay Desai, Arti Trivedi, and Nisarg A Joshi (2013) [6]. Their research used closing-price data as the training and testing data set, but reached a training accuracy of only 59.84% and an average testing accuracy of 82%. The data they used were homogeneous closing prices of the S&P CNX Nifty 50 Index, with trading data from January 1, 2010 to December 31, 2011. Their neural network had one input layer, one hidden layer, and one linear output layer, with 10 input variables and 10 neurons in the hidden layer, and all networks tested in their study were trained for 3,000 epochs. Based on this result, I want to improve on that accuracy. I propose a new neural network architecture with one input layer, one hidden layer, and one output layer, where the input layer contains 20 input values, the hidden layer contains 10 hidden neurons, and the output layer contains 1 output value. The objective of this research is to predict a stock-price portfolio with a prediction accuracy greater than 82%. The hypothesis is that this prediction can be achieved with the ANN method, with accuracy greater than 80% for the BNI, BCA, and Mandiri stocks and for the stock-price portfolio.

RESEARCH DESIGN

This study contains several stages, starting from collecting raw data; the backpropagation process, including training and testing; plotting the combined training and testing predictions; plotting the testing data for each stock (BNI, BCA, and Mandiri); calculating the delta value for each stock; plotting the portfolio; and calculating the benefit for BNI, BCA, Mandiri, and the portfolio built from these banks. The overall architecture of these stages is shown in Figure 1.

Figure 1. Research Design. Legend: 1. Closing-price stock data for training and testing; 2. Backpropagation; 3. Prediction plot of training + testing; 4. Plot of testing data for each stock (BNI, BCA, Mandiri); 5. Best accuracy among daily, weekly, and monthly data; 6. Delta value for each stock (BNI, BCA, Mandiri), (tn - tn-1); 7. Buy and sell for each stock (BNI, BCA, Mandiri); 8. Combine the 3 delta values (BNI, BCA, Mandiri) into 1 plot; 9. Decide which stock gives the investor the highest benefit based on each stock's delta value; 10. Delta pattern for buying a stock (- - +) at the lowest price among the 3 banks; 11. Delta pattern for selling a stock (+ + -) at the highest price among the 3 banks; 12. Print the portfolio benefit table; 13. Calculate the benefit; 14. Plot the portfolio; 15. Result analysis.

Raw data

The experiment began with collecting data from one source: the dataset was obtained from the Yahoo Finance website [see implementation process no. 3 in Chapter 4, Experiment Result] over the five years from January 1, 2011 to December 31, 2015. The stock dataset consists of the closing-price data of the BNI, BCA, and Mandiri stocks.
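The paper does not include code; purely as an illustration, the 20-10-1 backpropagation network and sliding-window setup described above could be sketched in Keras as follows. The library choice, activation, and optimizer here are my assumptions, not the authors' implementation.

# Illustrative sketch only (not the authors' code): a 20-10-1 network for
# one-step-ahead closing-price forecasting, trained with backpropagation.
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

def make_windows(prices, n_in=20):
    # 20 consecutive closing prices as input, the next closing price as target
    X = np.array([prices[i:i + n_in] for i in range(len(prices) - n_in)])
    y = np.array([prices[i + n_in] for i in range(len(prices) - n_in)])
    return X, y

model = Sequential([
    Dense(10, activation='sigmoid', input_shape=(20,)),  # 10 hidden neurons
    Dense(1)                                             # 1 output value
])
model.compile(optimizer='sgd', loss='mse')

# prices = ...                      # normalized closing prices of one stock
# X, y = make_windows(prices)
# split = int(0.6 * len(X))         # 60%:40% scenario; 70%:30% and 80%:20% likewise
# model.fit(X[:split], y[:split], epochs=10000, verbose=0)
# Training runs up to 10,000 epochs or until MSE < 0.001; the paper notes the
# epoch cap was always reached first.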
Backpropagation process

The backpropagation process is divided into two phases: a training process and a testing process. The training and testing data are separated under three scenarios: 60%:40%, 70%:30%, and 80%:20%.

Training process

The training phase has several steps.

Closing-price stock data. The experiment began with collecting data from one source: the dataset was obtained from the Yahoo Finance website [2] [3] [4] over the five years from January 1, 2011 to December 31, 2015. The stock dataset consists of the closing-price data of the BNI, BCA, and Mandiri stocks; in this step the dataset is used as training data, as detailed in the next step.

Training. Each stock's closing-price data (BNI, BCA, and Mandiri) is trained to forecast 1 month and 3 months ahead based on daily, weekly, and monthly data, with proportions 60%:40%, 70%:30%, and 80%:20%. In the 60%:40% scenario, out of 100 data points, 60 become training data and the rest become testing data; the 70%:30% and 80%:20% scenarios work the same way.

Weight and bias. For each input variable, the training process calculates and updates the weights and biases until an MSE value is obtained.

Minimum error tolerance or maximum epoch reached. The error tolerance is set to less than 0.001 and the maximum number of epochs is set to 10,000. In practice, the stopping condition reached during the experiments was always the maximum epoch count rather than the error falling below 0.001. The overall architecture of the training process is shown in Figure 2.

Testing process

The testing phase has several steps.

Closing-price stock data. The dataset was obtained from the Yahoo Finance website for the BNI [2], BCA [3], and Mandiri [4] stocks over the five years from January 1, 2011 to December 31, 2015. The stock dataset consists of the closing-price data of the BNI, BCA, and Mandiri stocks; in this step the dataset is used as testing data, as detailed in the next step.

Testing. Each stock's closing-price data (BNI, BCA, and Mandiri) is tested to forecast 1 month and 3 months ahead based on daily, weekly, and monthly data, with proportions 60%:40%, 70%:30%, and 80%:20%. In the 60%:40% scenario, out of 100 data points, 40 become testing data and the rest are training data; the 70%:30% and 80%:20% scenarios work the same way. Testing is of course executed on the training model result, which contains the weight and bias values.

Prediction result. The comparison between the original testing data and the forecast values produced during the testing process. The overall architecture of the testing process is shown in Figure 3.

Prediction plot

After the backpropagation process above is done, the prediction results from training and testing are combined into one plot for every experiment scenario; the steps are shown in the diagram in Figure 4. The weight and bias values from the training model result are used in the testing process, and once the testing process is done it yields a prediction result.

Plot of testing data for each stock (BNI, BCA, and Mandiri)

From the combined training and testing prediction plot, only the testing data are plotted, and the delta value is calculated for each stock (BNI, BCA, and Mandiri). The delta formula is delta = tn - tn-1, where tn is the forecast value for today and tn-1 is the forecast value for yesterday.
Figure 2. Backpropagation training process. Figure 3. Backpropagation testing process.

Portfolio plot

This is the next step after the prediction-plot process (step 3 above) is done.

Best result among daily, weekly, and monthly accuracy. From the daily, weekly, and monthly plots in step 3c, the one with the best accuracy is selected; the best of these three data types is carried forward to the next step.

Combine the 3 delta values into 1 plot. The delta values from the previous step for BNI, BCA, and Mandiri are combined into one plot.

Decide which stock gives the investor the highest benefit. The system decides which stock gives the investor the highest benefit by comparing the delta values of each stock, that is, which stock to buy and which to sell. For more explanation, see sections 3.4.1 and 3.4.2 on buy analysis and sell analysis.

Print the benefit table. The benefit table for the BNI, BCA, and Mandiri stocks is printed.

Plot the portfolio. The portfolio plot is produced from the benefit table.

Figure 4. Prediction plot.

Result analysis. A conclusion is drawn on which stock gives the investor the highest benefit during the period. The overall architecture of these stages is shown in Figure 5.

Table 1. Scenario of research for BNI, BCA, and Mandiri

No | Data type | Training | Total data | Testing | Total data
1  | Daily     | 60%      | 736        | 40%     | 490
2  | Daily     | 70%      | 858        | 30%     | 368
3  | Daily     | 80%      | 981        | 20%     | 245
4  | Weekly    | 60%      | 155        | 40%     | 103
5  | Weekly    | 70%      | 181        | 30%     | 77
6  | Weekly    | 80%      | 206        | 20%     | 52
7  | Monthly   | 60%      | 36         | 40%     | 24
8  | Monthly   | 70%      | 42         | 30%     | 18
9  | Monthly   | 80%      | 48         | 20%     | 12

EXPERIMENT RESULT

Details of the experiment results are given in Table 2 below.

Table 2. Experiment result

No | Bank    | Data type | Training | Total data | Accuracy  | Testing | Total data | Accuracy
1  | BNI     | Daily     | 60%      | 736        | 97.9652%  | 40%     | 490        | 97.956%
2  | BNI     | Daily     | 70%      | 858        | 97.9903%  | 30%     | 368        | 97.9066%
3  | BNI     | Daily     | 80%      | 981        | 98.0134%  | 20%     | 245        | 97.7474%
4  | BNI     | Weekly    | 60%      | 155        | 96.9404%  | 40%     | 103        | 96.1172%
5  | BNI     | Weekly    | 70%      | 181        | 96.8367%  | 30%     | 77         | 96.017%
6  | BNI     | Weekly    | 80%      | 206        | 96.8925%  | 20%     | 52         | 95.4247%
7  | BNI     | Monthly   | 60%      | 36         | 92.2304%  | 40%     | 24         | 93.0717%
8  | BNI     | Monthly   | 70%      | 42         | 92.384%   | 30%     | 18         | 93.0797%
9  | BNI     | Monthly   | 80%      | 48         | 92.8769%  | 20%     | 12         | 91.6259%
10 | BCA     | Daily     | 60%      | 736        | 98.0013%  | 40%     | 490        | 98.4043%
11 | BCA     | Daily     | 70%      | 858        | 98.0028%  | 30%     | 368        | 98.3903%
12 | BCA     | Daily     | 80%      | 981        | 97.9829%  | 20%     | 245        | 98.2266%
13 | BCA     | Weekly    | 60%      | 155        | 97.3678%  | 40%     | 103        | 97.4292%
14 | BCA     | Weekly    | 70%      | 181        | 97.447%   | 30%     | 77         | 97.3375%
15 | BCA     | Weekly    | 80%      | 206        | 97.5270%  | 20%     | 52         | 97.0631%
16 | BCA     | Monthly   | 60%      | 36         | 93.7236%  | 40%     | 24         | 96.6982%
17 | BCA     | Monthly   | 70%      | 42         | 94.3474%  | 30%     | 18         | 96.2256%
18 | BCA     | Monthly   | 80%      | 48         | 94.6175%  | 20%     | 12         | 95.9425%
19 | Mandiri | Daily     | 60%      | 736        | 97.8848%  | 40%     | 490        | 98.2297%
20 | Mandiri | Daily     | 70%      | 858        | 97.9111%  | 30%     | 368        | 98.2633%
21 | Mandiri | Daily     | 80%      | 981        | 97.9246%  | 20%     | 245        | 97.8942%
22 | Mandiri | Weekly    | 60%      | 155        | 96.4793%  | 40%     | 103        | 96.8952%
23 | Mandiri | Weekly    | 70%      | 181        | 96.5508%  | 30%     | 77         | 96.9584%
24 | Mandiri | Weekly    | 80%      | 206        | 96.7027%  | 20%     | 52         | 96.5706%
25 | Mandiri | Monthly   | 60%      | 36         | 92.3449%  | 40%     | 24         | 95.0939%
26 | Mandiri | Monthly   | 70%      | 42         | 92.8416%  | 30%     | 18         | 95.1057%
27 | Mandiri | Monthly   | 80%      | 48         | 93.3736%  | 20%     | 12         | 94.1434%

How to calculate the portfolio

Portfolio benefit is calculated from the difference between the latest investment value and the initial investment value, divided by the initial investment value. Prediction accuracy on daily data is better than on weekly and monthly data because more data points are available for training. For buying or selling, the formula is: current price * number of shares. For the profit calculation, the formula is: (ending value - beginning value) / (beginning value) * 100%. Table 3 below applies only if an investor diversifies the funds across each bank; the BNI, BCA, and Mandiri stocks then yield benefits of only 19.19%, 17.68%, and 15.73%.
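As a quick check of the formulas above, the per-stock profits and the portfolio comparison can be reproduced with a few lines of Python (this snippet is not part of the paper; the buy and sell prices come from Table 3 below):

# Quick check of the profit formulas above, using the buy/sell prices
# reported in Table 3 (not part of the original paper).
def profit_pct(beginning, ending):
    return (ending - beginning) / beginning * 100.0

stocks = {                        # buy price, sell price per share (IDR)
    "BNI":     (6100.0,   7270.9092),
    "BCA":     (13125.0, 15445.4704),
    "Mandiri": (5387.5,   6234.8643),
}

profits = {name: profit_pct(b, s) for name, (b, s) in stocks.items()}
print(profits)                        # ~19.19%, ~17.68%, ~15.73%

diversified = sum(profits.values())   # 19.19 + 17.68 + 15.73 = ~52.6%
focused = 3 * profits["BNI"]          # all funds in BNI: ~57.57%
print(focused - diversified)          # extra portfolio benefit, ~4.97%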
But what if the investor focuses all funds only on the one stock with the highest profit level? In other words, what if the funds that were supposed to buy BCA and Mandiri stocks were all used to buy BNI stock instead?

Table 3. Portfolio benefits

BNI:     Buy IDR 6,100 x 1 lot (100 shares) = IDR 610,000; Sell IDR 7,270.9092/share (IDR 727,090.92); Profit IDR 1,170.9092/share or IDR 117,090.92 (19.19%)
BCA:     Buy IDR 13,125 x 1 lot (100 shares) = IDR 1,312,500; Sell IDR 15,445.4704/share (IDR 1,544,547.04); Profit IDR 2,320.4704/share or IDR 232,047.04 (17.68%)
Mandiri: Buy IDR 5,387.5 x 1 lot (100 shares) = IDR 538,750; Sell IDR 6,234.8643/share (IDR 623,486.43); Profit IDR 847.3643/share or IDR 84,736.43 (15.73%)

If that is the case, the investor's portfolio profit is three times larger than before: if the profit of BNI stock is 19.19%, the portfolio profit becomes 19.19% x 3 = 57.57%, compared with the summed profit of the three banks, 19.19 + 17.68 + 15.73 = 52.6%. The additional portfolio benefit is therefore 57.57% - 52.6% = 4.97%.

Figure 5. Prediction plot.

Conclusion

All experiments have been completed: 27 experiments in total, with 9 experiments each for the BNI, BCA, and Mandiri stocks on daily, weekly, and monthly data. The experimental results for BNI, BCA, and Mandiri on daily, weekly, and monthly data show that accuracy on daily data is better than on weekly and monthly data. The daily-data accuracies for BNI, BCA, and Mandiri are 97.7474%, 98.2266%, and 97.8942%; the weekly accuracies are slightly lower, at 95.4247%, 97.0631%, and 96.5706%; and the monthly accuracies are slightly lower again, at 91.6259%, 95.9425%, and 94.1434%. If the investor focuses all funds on a single stock, the portfolio profit is three times larger than before: with a BNI profit of 19.19%, the portfolio profit becomes 19.19% x 3 = 57.57%, compared with the summed profit of the three banks, 19.19 + 17.68 + 15.73 = 52.6%, for an additional portfolio benefit of 57.57% - 52.6% = 4.97%.

Recommendation

No anomalous data appeared during the experiments, because the dataset contains no financial-crisis periods such as 1997 and 2008. It is therefore recommended to extend the time series to include data from the years 1997 and 2008.

References

[1] Hari Purnomo Susanto, "Pemodelan Fuzzy untuk Data Time Series menggunakan metode Tabel Look up dengan transformasi logaritma dan diferensi dan aplikasinya pada data indeks harga saham gabungan (IHSG)", Jurnal Penelitian Pendidikan, Vol. 5, No. 1, Juni 2013.
[2] https://finance.yahoo.com/quote/BBNI.JK/history?p=BBNI.JK
[3] https://finance.yahoo.com/quote/BBRI.JK/history?p=BBRI.JK
[4] https://finance.yahoo.com/quote/BMRI.JK/history?p=BMRI.JK
[5] Ganesh Bonde, Rasheed Khaled, "Stock price prediction using genetic algorithms and evolution strategies", http://worldcomp-proceedings.com/proc/p2012/GEM4716.pdf, October 13, 2015.
[6] Jay Desai, Arti Trivedi, Nisarg A Joshi, "Forecasting of Stock Market Indices Using Artificial Neural Network", Shri Chimanbhai Patel Institutes, Ahmedabad, 2013.
[7] Andy Porman Tambunan, "Menilai Harga Wajar Saham (Stock Valuation)", Jakarta: Gramedia, 2010.
[8] Pang-Ning Tan, Michael Steinbach, Vipin Kumar, "Introduction to Data Mining", Pearson, 2005.
[9] Jeff Heaton, "Introduction to Neural Networks for C#", 2nd Edition, Heaton Research, Inc., 2008, p. 153.
[10] Laurene Fausett, "Fundamentals of Neural Networks: Architectures, Algorithms, and Applications", 1994.
[11] Charlie Lie, "Kalau Ada Uang Belilah $aham", Bandung: TriEks Media Inc., 2010, p. 91.
[12] Drs. Jong Jek Siang, M.Sc., "Jaringan Syaraf Tiruan & Pemrogramannya menggunakan MATLAB", Jakarta: Andi, 2009.
[13] Bayu Ariestya Ramadhan, "Analisis Perbandingan Metode Arima Dan Metode Garch Untuk Memprediksi Harga Saham (Studi kasus pada perusahaan Telekomunikasi yang terdaftar di Bursa Efek Indonesia Periode Mei 2012 - April 2013)", Prodi S1 Manajemen Bisnis Telekomunikasi dan Informatika, Fakultas Ekonomi dan Bisnis, Universitas Telkom, Juni 2013.
[14] "Istilah Pasar Modal", http://wasbunsiahaan.blogspot.com/2011/11/istilahkamus-pasar-modal.html


          Neural Network Plasticity and Integrative Neuroscience - Assistant Professor - Université de Montréal - Québec City, QC
The ideal candidate will combine electrophysiology and molecular approaches and/or viral vectors, with novel optical techniques or high-resolution structural...
From University Affairs - Tue, 25 Sep 2018 18:27:56 GMT - View all Québec City, QC jobs
          An Online Tool Temperature Monitoring Method Based on Physics-Guided Infrared Image Features and Artificial Neural Network for Dry Cutting
This paper presents an efficient method, which reconstructs the temperature field around the tool/chip interface from infrared (IR) thermal images, for online monitoring of the internal peak temperature of the cutting tool. The tool temperature field is divided into two regions; namely, a far field for solving the heat-transfer coefficient between the tool and ambient temperature, and a near field where an artificial neural network (ANN) is trained to account for the unknown heat variations at the frictional contact interface. Methods to extract physics-based feature points from the IR image as ANN inputs are discussed. The effects of image resolution, feature selection, chip occlusion, contact heat variation, and measurement noise on the estimated contact temperature are analyzed numerically and experimentally. The proposed method has been verified by comparing the ANN-estimated surface temperatures against “true values” experimentally obtained using a high-resolution IR imager on a custom-designed testbed as well as numerically simulated using finite-element analysis. The concept feasibility of the temperature monitoring method is demonstrated on an industrial lathe-turning center with a commercial tool insert. Note to Practitioners—The internal tool-temperature field around the tool/chip interface during cutting offers essential information to monitor tool wear and ensure surface quality, particularly in finishing cuts, but it is difficult to monitor because of the stringent real-time requirements (including low cost and high accuracy) in harsh working environments. This paper presents a potentially low-cost solution that combines noncontact surface temperature measurements with high-fidelity physics-based computational models for monitoring the internal peak temperature at the tool/chip frictional contact during cutting. This physics-based method requires only a small number of preselected features, and uses thermal isotherms and streamlines to detect and substitute any occlusions in the IR images. The method does not rely on high-resolution (HR) images to infer the steep temperature gradient near the tool-tip where the peak temperature occurs, and thus can be implemented with a relatively low-cost IR imager. Unlike traditional least-square methods that base solutions on a heat-conduction partial differential equation (PDE), the ANN is trained with precomputed physics-based models based on a novel dual-field (far-field and near-field) approach. The ANN-based method not only eliminates the need to solve the time-demanding PDE in real time, but also effectively accounts for parameter variations due to the uncertain frictional heat-fluxes at the contact interface during cutting.
          6th Global Summit on Artificial Intelligence and Neural Networks
In recent years, genetic algorithm neural networks (GANN) and natural language processing (NLP) have emerged to provide "Data into Knowledge" (DiK) solutions. Research with GANN and NLP has enabled tools that selectively filter big data and combine this data into micro self-reinforcement and personalized gamification of any DiK in dynamic real time. The […]
          How to build your own Neural Network from scratch in R
Last week I ran across this great post on creating a neural network in Python. It walks through the very basics of neural networks and creates a working example using Python. I enjoyed the simple hands on approach the author used, and I was interested to see how we might make the same model using R. In this post we recreate the above-mentioned Python neural network from scratch in R. Our R refactor is focused on simplicity and understandability; we are not concerned with writing the most efficient or elegant code. Our very basic neural network will have 2 layers. Below is a diagram of the network: For background information, please read over the Python post. It may be helpful to open the Python post and compare the chunks of Python code to the corresponding R code below. The full Python code to train the model is not available in the body of the Python post, but fortunately it is included in the comments; so, scroll down on the Python post if you are looking for it. Let’s get started with R! Create Training Data First, we create the data to train the neural network. # predictor variables X
          GStreamer: GStreamer Conference 2018: Talks Abstracts and Speakers Biographies now available

The GStreamer Conference team is pleased to announce that talk abstracts and speaker biographies are now available for this year's lineup of talks and speakers, covering again an exciting range of topics!

The GStreamer Conference 2018 will take place on 25-26 October 2018 in Edinburgh (Scotland) just after the Embedded Linux Conference Europe (ELCE).

Details about the conference and how to register can be found on the conference website.

This year's topics and speakers:

Lightning Talks:

  • gst-mfx, gst-msdk and the Intel Media SDK: an update (provisional title)
    Haihao Xiang, Intel
  • Improved flexibility and stability in GStreamer V4L2 support
    Nicolas Dufresne, Collabora
  • GstQTOverlay
    Carlos Aguero, RidgeRun
  • Documenting GStreamer
    Mathieu Duponchelle, Centricular
  • GstCUDA
    Jose Jimenez-Chavarria, RidgeRun
  • GstWebRTCBin in the real world
    Mathieu Duponchelle, Centricular
  • Servo and GStreamer
    Víctor Jáquez, Igalia
  • Interoperability between GStreamer and DirectShow
    Stéphane Cerveau, Fluendo
  • Interoperability between GStreamer and FFMPEG
    Marek Olejnik, Fluendo
  • Encrypted Media Extensions with GStreamer in WebKit
    Xabier Rodríguez Calvar, Igalia
  • DataChannels in GstWebRTC
    Matthew Waters, Centricular
  • Me TV – a journey from C and Xine to Rust and GStreamer, via D
    Russel Winder
  • ...and many more
  • ...
  • Submit your lightning talk now!

Many thanks to our sponsors, Collabora, Pexip, Igalia, Fluendo, Facebook, Centricular and Zeiss, without whom the conference would not be possible in this form. And to Ubicast who will be recording the talks again.

Considering becoming a sponsor? Please check out our sponsor brief.

We hope to see you all in Edinburgh in October! Don't forget to register!


          A Look at CNTK v2.6 and the Iris Dataset

Version 2.6 of CNTK was released a few weeks ago so I figured I’d update my system and give it a try. CNTK (“Cognitive Network Tool Kit”) is Microsoft’s neural network code library. Primary alternatives include Google’s TensorFlow and Keras (a library that makes TF easier to use), and Facebook’s PyTorch.

To cut to the chase, I deleted my existing CNTK and then installed v2.6 using the pip utility, and then . .

As I write this, I think back about all the effort that was required to figure out how to install CNTK (and TF and Keras and PyTorch). It’s easy for me now, but if you’re new to using neural network code libraries, trust me, there’s a lot to learn ― mostly about all the many things that can go wrong with an installation, how to interpret the error messages, and how to resolve them.

OK, back to my post. I ran my favorite demo, classification on the Iris Dataset. My old (written for v2.5) CNTK code ran as expected. Excellent!
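For readers who have not seen such a demo, a minimal Iris classifier against the CNTK v2 Python API looks roughly like the following. This is my sketch, not the post's actual demo code, and the hidden-layer size and learning rate are arbitrary choices.

# Rough sketch of a CNTK v2 Iris classifier (not the post's actual code).
import cntk as C

X = C.input_variable(4)                  # sepal/petal length and width
Y = C.input_variable(3)                  # one-hot species label

model = C.layers.Sequential([
    C.layers.Dense(10, activation=C.tanh),
    C.layers.Dense(3)                    # raw logits; softmax is applied in the loss
])(X)

loss = C.cross_entropy_with_softmax(model, Y)
error = C.classification_error(model, Y)
learner = C.sgd(model.parameters, C.learning_parameter_schedule(0.01))
trainer = C.Trainer(model, (loss, error), [learner])

# x_batch: float32 array of shape (N, 4); y_batch: one-hot float32 of shape (N, 3)
# for _ in range(1000):
#     trainer.train_minibatch({X: x_batch, Y: y_batch})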



The real moral of the story is that deep learning with neural network libraries is new and still in a state of constant flux. This makes it tremendously difficult to stay abreast of changes. New releases of these libraries emerge not every few months, or even every few weeks, but often every few days. The pace of development is unlike anything I’ve ever seen in computer science.



Additionally, the NN libraries are just the tip of the technology pyramid. There are dozens and dozens of supporting systems, and they are being developed with blazing speed too. For example, I did an Internet search for “auto ML” and found many systems that are wrappers over CNTK or TF/Keras or PyTorch, and that are intended to automate the process pipeline of things like hyperparameter tuning, data preprocessing, and so on.

The blistering pace of development of neural network code libraries and supporting software will eventually slow down (maybe 18 months as a wild guess), but for now it’s an incredibly exciting time to be working with deep learning systems.



I suspect that an artist’s style doesn’t change too quickly over time (well, after his/her formative years). Three paintings by an unknown (to me) artist with similar compositions but slightly different styles.


          Data Scientist - eBay Inc. - Austin, TX
GBM, logistic regression, clustering, neural networks, NLP Strong analytical skills with good problem solving ability eBay is a Subsidiary of eBay....
From eBay Inc. - Sat, 25 Aug 2018 08:05:35 GMT - View all Austin, TX jobs
          Neural Network Plasticity and Integrative Neuroscience - Assistant Professor - Université de Montréal - Québec City, QC
Demonstrated interest in the study of the molecular mechanisms of normal cell functions and disease states; The ideal candidate will combine electrophysiology...
From University Affairs - Tue, 25 Sep 2018 18:27:56 GMT - View all Québec City, QC jobs
          Neural networks Research Articles for SCI indexed Journal
I need you to write a research article related to neural networks. The paper must be accepted by an SCI-indexed journal with an impact factor. (Budget: $250 - $750 USD, Jobs: Research Writing)
          Clark Labs TerrSet 18.21 181010


Clark Labs TerrSet 18.21 | 197 MB

TerrSet is an integrated geospatial software system for monitoring and modeling the earth system for sustainable development. The TerrSet System incorporates the IDRISI GIS Analysis and IDRISI Image Processing tools along with a constellation of vertical applications. TerrSet offers the most extensive set of geospatial tools in the industry in a single, affordable package. There is no need to buy costly add-ons to extend your research capabilities.
The Full TerrSet Constellation Includes:

The GIS Analysis tools - a wide range of fundamental analytical tools for GIS analysis, primarily oriented to raster data. Special features of the GIS Analysis tool set include a suite of multi-criteria and multi-objective decision procedures and a broad range of tools for statistical, change and surface analysis. Special graphical modeling environments are also provided for dynamic modeling and decision support. The GIS Analysis tool set also provides a scripting environment and an extremely flexible application programming interface (API) that allows the ability to control TerrSet using languages such as C++, Delphi and Python. Indeed, all TerrSet components make very extensive use of the API.

The Image Processing System - an extensive set of procedures for the restoration, enhancement, transformation and classification of remotely sensed images. The Image Processing System in Terrset contains the broadest set of classification procedures in the industry, including both hard and soft classification procedures based on machine learning (such as neural networks) and statistical characterization.

The Land Change Modeler (LCM) - a vertical application for analyzing land cover change, empirically modeling its relationship to explanatory variables and projecting future changes. LCM also includes special tools for the assessment of REDD (Reducing Emissions from Deforestation and forest Degradation) climate change mitigation strategies.

The Habitat and Biodiversity Modeler (HBM) - a vertical application for habitat assessment, landscape pattern analysis and biodiversity modeling. HBM also contains special tools for species distribution modeling.

GeOSIRIS - a unique tool for national level REDD (Reducing Emissions from Deforestation and forest Degradation) planning, developed in close cooperation with Conservation International. With GeOSIRIS, one can model the impact of various economic strategies on deforestation and carbon emissions reductions.

The Ecosystem Services Modeler (ESM) - a vertical application for assessing the value of various ecosystem services such as water purification, crop pollination, wind and wave energy, and so on. ESM is based closely on the InVEST toolset developed by the Natural Capital Project.

The Earth Trends Modeler (ETM) - a tool for the analysis of time series of earth observation imagery. With ETM, one can discover trends and recurrent patterns in fundamental earth system variables such as sea surface temperature, atmospheric temperature, precipitation, vegetation productivity and the like. ETM is an exceptional tool for the assessment of climate change in the recent past (e.g., the past 30 years).

The Climate Change Adaptation Modeler (CCAM) - a tool for modeling future climate and assessing its impacts on sea level rise, crop suitability and species distributions.

What's New:
Version 18.2 includes all the previous service updates listed below, including the following:
DigitalGlobe import utility for WorldView2,3, Quickbird and GeoEye1,2. The utility works similar to the Landsat utility and will transform the data to reflectance or radiance on import. The utility automatically reads the .imd file supplied by DigitalGlobe.
NDVI3g import utility to convert the GIMMS AVHRR Global NDVI VI3g data.
Gdal import utility enhancements to the streamlined interface introduced in 18.10.
Geotiff support for large TIFF files and other improvements.
HDFEOS support for HDF5
Atmospheric Correction (AtmosC) updated the LUT for solar spectral irradiance.
Other enhancements to: Thiessen, Crosstab, Macro Modeler, Viewshed, Pansharpen, Extract, CTA, GenericRaster, Concat, Reclass, Enviidrisi, Metaupdate, Interpol.



          [ASAP] TopScore: Using Deep Neural Networks and Large Diverse Data Sets for Accurate Protein Model Quality Assessment


Journal of Chemical Theory and Computation
DOI: 10.1021/acs.jctc.8b00690

          Comment on How to Develop 1D Convolutional Neural Network Models for Human Activity Recognition by Irati
Hi Jason, Both this post and your book are great! I have a question though: in my case, although the database has a similar structure to the one in the example, due to the nature of the environment the dataset is small. I have been digging but I could not find a nice approach to the data augmentation problem when the data are multivariate time series. Any suggestion? Thanks for your work!
          AI Will Disrupt the Know-How of Established Companies: Sony's In-House Framework as a "Highway" for Deep Learning Development [Developers Summit 2018 Summer]
Deep learning and other AI-related technologies are spreading into every kind of company, shifting the basis of corporate value from how much know-how a company holds to how much data it holds. Against that forecast, Sony introduced its deep learning development framework "Neural Network Libraries" and the GUI training tool "Neural Network Console." Sony's Takuya Narihira presented them as software that covers everything from beginners to research and production use, and proposed them as a way to respond to this era of change. The article covers their features together with concrete use cases within the Sony group.
           Comment on New Part Day: The RISC-V Chip With Built-In Neural Networks by Adam
I also have a question: what is the WiFi chip? It looks like something from Espressif.
           Comment on New Part Day: The RISC-V Chip With Built-In Neural Networks by Adam
Thanks for your response! 30 mW looks good, but on the Kendryte site it is <300 mW. Have you done any tests of the minimal power consumption without losing memory? Can you disable one core, slow the clock down, or stop it completely? If it is really static RAM, it should theoretically be possible to stop all clocks completely. The open-source APU is great news.
          Predicting individual physiologically acceptable states at discharge from a pediatric intensive care unit.

Predicting individual physiologically acceptable states at discharge from a pediatric intensive care unit.

J Am Med Inform Assoc. 2018 Oct 06;:

Authors: Carlin CS, Ho LV, Ledbetter DR, Aczon MD, Wetzel RC

Abstract
Objective: Quantify physiologically acceptable PICU-discharge vital signs and develop machine learning models to predict these values for individual patients throughout their PICU episode.
Methods: EMR data from 7256 survivor PICU episodes (5632 patients) collected between 2009 and 2017 at Children's Hospital Los Angeles was analyzed. Each episode contained 375 variables representing physiology, labs, interventions, and drugs. Between medical and physical discharge, when clinicians determined the patient was ready for ICU discharge, they were assumed to be in a physiologically acceptable state space (PASS) for discharge. Each patient's heart rate, systolic blood pressure, diastolic blood pressure in the PASS window were measured and compared to age-normal values, regression-quantified PASS predictions, and recurrent neural network (RNN) PASS predictions made 12 hours after PICU admission.
Results: Mean absolute errors (MAEs) between individual PASS values and age-normal values (HR: 21.0 bpm; SBP: 10.8 mm Hg; DBP: 10.6 mm Hg) were greater (p < .05) than regression prediction MAEs (HR: 15.4 bpm; SBP: 9.9 mm Hg; DBP: 8.6 mm Hg). The RNN models best approximated individual PASS values (HR: 12.3 bpm; SBP: 7.6 mm Hg; DBP: 7.0 mm Hg).
Conclusions: The RNN model predictions better approximate patient-specific PASS values than regression and age-normal values.
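To make the modeling setup concrete, a bare-bones sequence model of this general shape (375 input variables per time step, three predicted discharge vitals) could be sketched in PyTorch as below. The class name, hidden size, and choice of an LSTM cell are illustrative assumptions, not the authors' published architecture.

# Illustrative sketch only: a sequence model mapping physiological time series
# to predicted discharge HR, SBP, and DBP. Not the authors' implementation.
import torch
import torch.nn as nn

class PassRNN(nn.Module):                     # hypothetical name
    def __init__(self, n_features=375, hidden=128, n_targets=3):
        super().__init__()
        self.rnn = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_targets)   # HR, SBP, DBP
    def forward(self, x):                      # x: (batch, time, n_features)
        out, _ = self.rnn(x)
        return self.head(out[:, -1])           # predict from the last time step

# model = PassRNN()
# preds = model(torch.randn(8, 12, 375))       # e.g. 12 hourly observations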

PMID: 30295770 [PubMed - as supplied by publisher]


          Collective evolution of weights in wide neural networks. (arXiv:1810.03974v1 [cs.NE])

Authors: Dmitry Yarotsky

We derive a nonlinear integro-differential transport equation describing collective evolution of weights under gradient descent in large-width neural-network-like models. We characterize stationary points of the evolution and analyze several scenarios where the transport equation can be solved approximately. We test our general method in the special case of linear free-knot splines, and find good agreement between theory and experiment in observations of global optima, stability of stationary points, and convergence rates.


          Step Size Matters in Deep Learning. (arXiv:1805.08890v2 [cs.LG] UPDATED)

Authors: Kamil Nar, S. Shankar Sastry

Training a neural network with the gradient descent algorithm gives rise to a discrete-time nonlinear dynamical system. Consequently, behaviors that are typically observed in these systems emerge during training, such as convergence to an orbit but not to a fixed point or dependence of convergence on the initialization. Step size of the algorithm plays a critical role in these behaviors: it determines the subset of the local optima that the algorithm can converge to, and it specifies the magnitude of the oscillations if the algorithm converges to an orbit. To elucidate the effects of the step size on training of neural networks, we study the gradient descent algorithm as a discrete-time dynamical system, and by analyzing the Lyapunov stability of different solutions, we show the relationship between the step size of the algorithm and the solutions that can be obtained with this algorithm. The results provide an explanation for several phenomena observed in practice, including the deterioration in the training error with increased depth, the hardness of estimating linear mappings with large singular values, and the distinct performance of deep residual networks.
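A toy example (mine, not the paper's) makes the step-size behavior concrete: for gradient descent on f(x) = x^2 the update is x_{k+1} = (1 - 2*step) * x_k, so the step size alone determines whether the iterates converge to the fixed point, oscillate while converging, or settle into a period-2 orbit.

# Gradient descent on f(x) = x^2 (gradient 2x) with different step sizes.
# Illustrative only; the paper analyzes neural-network training, not this toy.
def gd(step, x0=1.0, iters=10):
    x = x0
    xs = [x]
    for _ in range(iters):
        x = x - step * 2 * x          # x_{k+1} = (1 - 2*step) * x_k
        xs.append(x)
    return xs

print(gd(0.1))   # factor 0.8: converges monotonically to the fixed point x = 0
print(gd(0.9))   # factor -0.8: oscillates in sign while converging
print(gd(1.0))   # factor -1.0: jumps between +1 and -1, an orbit, not a fixed point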


          Data Elixir - Issue 203

In the News

Data Factories

Ben Thompson writes at the intersection of technology and business strategy in his popular Stratechery blog. In this post, he explores how big online advertising businesses are essentially "data factories" and what that means for everyday users, businesses, and regulators.

stratechery.com

Why you should be data-informed and not data-driven

Everyone wants to be "data driven" these days but that's not always the best approach. This article takes a look at the risks of being a data driven culture and offers a well-reasoned alternative.

hackernoon.com

Sponsored Link

Introduction to "Advances in Financial Machine Learning"

Inspired by Marcos Lopez de Prado's book "Advances in Financial Machine Learning," Quantopian explores the various factors to consider when researching investing through the lens of machine learning. For more posts like this, sign up for Quantopian's free platform and become an expert in Quant Finance.

bit.ly

Tools and Techniques

The hacker's guide to uncertainty estimates

Being able to quantify uncertainty is key for really understanding your data. This is a great walk-through of a variety of methods, including bootstrapping, confidence intervals, regression and Monte Carlo methods.

erikbern.com

How to deliver on Machine Learning projects

The process of developing machine learning models is very different than what most engineers are accustomed to. In this post, Emmanuel Ameisen describes the differences and introduces an approach he calls the "ML Engineering Loop." It's an iterative approach that enables rapid discovery and development of the best models.

insightdatascience.com

A Review of the Recent History of Natural Language Processing (NLP)

Sebastian Ruder's latest post offers high-level overviews of recent NLP advancements with a focus on neural network-based methods. This is organized around 8 key milestones and includes lots of linked references.

aylien.com

A Database Diagram Designer Built for Developers and Analysts

dbdiagram.io is a database diagrams designer for analysts & developers. Create and visualize database schemas using just your keyboard.

hackernoon.com

Data Viz

The av Package: Production Quality Video in R

av is a new package for working with audio/video directly from R. It uses the FFmpeg AV libraries and it enables you to easily create and edit videos using FFmpeg's video editing library. Here are the highlights, along with code snippets and embedded video examples.

ropensci.org

Career

An Introduction to the Data Product Management Landscape

As the field of Data Product Management matures, it's dividing into multiple sub-areas. This article takes a look at the evolving role of Data PMs and where things are going.

insightdatascience.com

Jobs & Careers

Hiring?

Post on Data Elixir's Job Board to reach a wide audience of data professionals.

dataelixir.com

Recent Listings:

More data science jobs >>

In Case You Missed It

Be sure to catch the most popular articles from last week's Data Elixir...

About

Data Elixir is curated and maintained by @lonriesberg. For additional finds from around the web, follow Data Elixir on Twitter, Facebook, or Google Plus.


This RSS feed is published on https://dataelixir.com/. You can also subscribe via email.


          (USA-TX-Austin) Data Scientist
Job Description Ticom Geomatics, a CACI Company, delivers industry leading Signals Intelligence and Electronic Warfare (SIGINT/EW) products that enable our nation’s tactical war fighters to effectively utilize networked sensors, assets, and platforms to perform a variety of critical national security driven missions. We are looking for talented, passionate Engineers, Scientists, and Developers who are excited about using state of the art technologies to build user-centric products with a profound impact to the US defense and intelligence community. We are seeking to grow our highly capable engineering teams to build the best products in the world. The successful candidate is an individual who is never satisfied with continuing with the status quo just because “it’s the way things have always been done”. What You'll Get to Do: The prime responsibility of the Data Scientist position is to provide support for the design, development, integration, test and maintenance of CACI’s Artificial Intelligence and Machine Learning product portfolio. This position is based in our Austin, TX office. For those outside of the Austin area, relocation assistance is considered on a case by case basis Duties and Responsibilities: - Work within a cross-disciplinary team to develop new machine learning-based software applications. Position is responsible for implementing machine learning algorithms by leveraging open source and custom machine learning tools and techniques - Use critical thinking to assess deficiencies in existing machine learning or expert system-based applications and provide recommendations for improvement - Generate technical documentation to include software description documents, interface control documents (ICDs) and performance analysis reports - Travel to other CONUS locations as required (up to 25%) You’ll Bring These Qualifications: - Degree in Computer Science, Statistics, Mathematics or Electrical & Computer Engineering from an ABET accredited university with a B.S degree and a minimum of 7 years of related experience, or a M.S. degree and 5 years of experience, or a PhD with a minimum of 2 years of academic or industry experience. 
- In-depth knowledge and practical experience using a variety of machine learning techniques including: linear regression, logistic regression, neural networks, support vector machines, anomaly detection, natural language processing and clustering techniques - Expert level knowledge and practical experience with C++, Python, Keras, TensorFlow, PyTorch, Caffe, Docker - Technical experience in the successful design, development, integration, test and deployment of machine learning based applications - Strong written and verbal communication skills - Self-starter that can work with minimum supervision and has good team interaction skills - US citizenship is required along with the ability to obtain a TS/SCI security clearance Desired Qualifications: - Basic understanding and practical experience with digital signal processing techniques - Experience working with big data systems such as Hadoop, Spark, NoSQL and Graph Databases - Experience working within research and development (R&D) environments - Experience working within Agile development teams leveraging DevOps methodology - Experience working within cross-functional teams following a SCRUM/Sprint-based project execution - Experience implementing software within a Continuous Integration, Continuous Deployment environment - Experience delivering software systems for DoD customers What We can Offer You: - We’ve been named a Best Place to Work by the Washington Post. - Our employees value the flexibility at CACI that allows them to balance quality work and their personal lives. - We offer competitive benefits and learning and development opportunities. - We are mission-oriented and ever vigilant in aligning our solutions with the nation’s highest priorities. - For over 55 years, the principles of CACI’s unique, character-based culture have been the driving force behind our success. Ticom Geomatics (TGI) is a subsidiary of CACI International, Inc. in Austin, Texas with ~200 employees. We’ve recently been named by Austin American Statesman as one of the Top Places to Work in Austin. We are an industry leader in interoperable, mission-ready Time and Frequency Difference of Arrival (T/FDOA) Precision Geolocation systems and produce a diverse portfolio of Intelligence, Surveillance and Reconnaissance (ISR) products spanning small lightweight sensors, rack-mounted deployments, and cloud-based solutions which are deployed across the world. The commitment of our employees to "Engineering Results" is the catalyst that has propelled TGI to becoming a leader in software development, R&D, sensor development, and signal processing. Our engineering teams are highly adept at solving complex problems with the application of leading-edge technology solutions. Our work environment is highly focused yet casual with flexible schedules that enable each of our team members to achieve the work life balance that works for them. We provide a highly competitive benefits package including a generous 401(k) contribution and Paid Time Off (PTO) policy. See additional positions at: http://careers.caci.com/page/show/TGIJobs Job Location US-Austin-TX-AUSTIN CACI employs a diverse range of talent to create an environment that fuels innovation and fosters continuous improvement and success. At CACI, you will have the opportunity to make an immediate impact by providing information solutions and services in support of national security missions and government transformation for Intelligence, Defense, and Federal Civilian customers.
CACI is proud to provide dynamic careers for employees worldwide. CACI is an Equal Opportunity Employer - Females/Minorities/Protected Veterans/Individuals with Disabilities.
          The Matrix movie (1999)
A convincing prophet meets The Saviour - offers cookies...

(NOTE: Many spoilers below.) 

I have just rewatched The Matrix movie (1999) and I thought it was even better second time around. I had a memory that there was a flaw in that the martial arts scenes were over-extended - but (with the exception of the gunfight rescue of Morpheus) this is not really the case: there is something being told us with almost all of the phases of the various battles.

My impression this time around is that The Matrix is a really outstanding 5 Star movie; in which nothing goes for nothing - and where there is a very satisfying quality to the whole thing. I found it genuinely wise - in those parts where wisdom was aimed-at. The acting (and direction) of the principal actors is outstanding.

I was more aware of the spiritual dimension of the piece, too; there is an Old Testament like prominence given to prophecy (and the importance of prophets - ie. The Oracle). For Christians, there are several strong symbolic aspects (not necessarily deliberate - the authors aren't Christian), if we want to notice them: Morpheus as John the Baptist; Neo as Saviour who dies and is resurrected; Trinity (more loosely) as Mary Magdalene etc. But these I noticed afterwards, on reflection, rather than during. The end is not perfect - more than a touch of the inexplicit 'walking into the sunset' about it - but good enough to make the movie 'work'.

I think one of the aspects that helped me enjoy The Matrix more the second time, was that I set-aside the central nonsensical plot implausibility, which was apparently externally imposed on the film makers; of having the Matrix consist of human 'batteries' - their bodies providing energy to the Machines. Instead; I mentally-substituted the original conception that the machines were exploiting human minds and their computing power, and that an interconnected human neural network constituted most of the Matrix.

Having started watching the second Matrix movie; the sudden gap in quality and aspiration is very obvious. It's not that the sequels are bad - as movies they are fine - but that they are utterly different and at a much lower level of ambition (and therefore attainment). They also create the plot swerve and raggedness that makes it turn-out that Agent Smith is actually The One; whereas in the first movie it is unambiguously Neo - and this swerve destroys some of the coherent, satisfying, underlying, symbolism of the The Matrix.

Aside; I always regard it as a pity when a totalitarian dystopia is established by a 'fascistic' war and imposed by violence; whereas in this real world the analogous society is being incrementally and bureaucratically-implemented without significant resistance by the global ruling class; with the active support and cooperation of the linked bureaucracies and mass media. The real-world Matrix is actually-existing socialism; meanwhile the real-world rebels are characterised as Right Wing Reactionaries and enemies of individual 'freedom' (especially extra-marital-transgressive sexual freedoms).

Of course, such a truthful movie could never emanate from Hollywood, nor - specifically - from the makers of The Matrix. We have to make such adjustments ourselves, by our personal interpretative work.


          Automatic segmentation of the spinal cord and intramedullary multiple sclerosis lesions with convolutional neural networks.

Automatic segmentation of the spinal cord and intramedullary multiple sclerosis lesions with convolutional neural networks.

Neuroimage. 2018 Oct 06;:

Authors: Gros C, De Leener B, Badji A, Maranzano J, Eden D, Dupont SM, Talbott J, Zhuoquiong R, Liu Y, Granberg T, Ouellette R, Tachibana Y, Hori M, Kamiya K, Chougar L, Stawiarz L, Hillert J, Bannier E, Kerbrat A, Edan G, Labauge P, Callot V, Pelletier J, Audoin B, Rasoanandrianina H, Brisset JC, Valsasina P, Rocca MA, Filippi M, Bakshi R, Tauhid S, Prados F, Yiannakas M, Kearney H, Ciccarelli O, Smith S, Treaba CA, Mainero C, Lefeuvre J, Reich DS, Nair G, Auclair V, McLaren DG, Martin AR, Fehlings MG, Vahdat S, Khatibi A, Doyon J, Shepherd T, Charlson E, Narayanan S, Cohen-Adad J

Abstract
The spinal cord is frequently affected by atrophy and/or lesions in multiple sclerosis (MS) patients. Segmentation of the spinal cord and lesions from MRI data provides measures of damage, which are key criteria for the diagnosis, prognosis, and longitudinal monitoring in MS. Automating this operation eliminates inter-rater variability and increases the efficiency of large-throughput analysis pipelines. Robust and reliable segmentation across multi-site spinal cord data is challenging because of the large variability related to acquisition parameters and image artifacts. In particular, a precise delineation of lesions is hindered by a broad heterogeneity of lesion contrast, size, location, and shape. The goal of this study was to develop a fully-automatic framework - robust to variability in both image parameters and clinical condition - for segmentation of the spinal cord and intramedullary MS lesions from conventional MRI data of MS and non-MS cases. Scans of 1042 subjects (459 healthy controls, 471 MS patients, and 112 with other spinal pathologies) were included in this multi-site study (n = 30). Data spanned three contrasts (T1-, T2-, and T2∗-weighted) for a total of 1943 vol and featured large heterogeneity in terms of resolution, orientation, coverage, and clinical conditions. The proposed cord and lesion automatic segmentation approach is based on a sequence of two Convolutional Neural Networks (CNNs). To deal with the very small proportion of spinal cord and/or lesion voxels compared to the rest of the volume, a first CNN with 2D dilated convolutions detects the spinal cord centerline, followed by a second CNN with 3D convolutions that segments the spinal cord and/or lesions. CNNs were trained independently with the Dice loss. When compared against manual segmentation, our CNN-based approach showed a median Dice of 95% vs. 88% for PropSeg (p ≤ 0.05), a state-of-the-art spinal cord segmentation method. Regarding lesion segmentation on MS data, our framework provided a Dice of 60%, a relative volume difference of -15%, and a lesion-wise detection sensitivity and precision of 83% and 77%, respectively. In this study, we introduce a robust method to segment the spinal cord and intramedullary MS lesions on a variety of MRI contrasts. The proposed framework is open-source and readily available in the Spinal Cord Toolbox.
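For reference, a common formulation of the Dice loss mentioned above can be written in a few lines of PyTorch; this is a generic sketch (the smoothing constant and batch reduction are my assumptions), not the implementation in the Spinal Cord Toolbox.

# Generic soft Dice loss for binary segmentation (illustrative, not the paper's code).
import torch

def dice_loss(pred, target, eps=1.0):
    # pred: sigmoid probabilities, target: binary mask, both shaped (N, ...)
    pred = pred.reshape(pred.size(0), -1)
    target = target.reshape(target.size(0), -1)
    inter = (pred * target).sum(dim=1)
    dice = (2 * inter + eps) / (pred.sum(dim=1) + target.sum(dim=1) + eps)
    return 1 - dice.mean()       # minimize 1 - Dice to maximize overlap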

PMID: 30300751 [PubMed - as supplied by publisher]


          Bad wrap: Myelin and myelin plasticity in health and disease.      Cache   Translate Page      

Bad wrap: Myelin and myelin plasticity in health and disease.

Dev Neurobiol. 2018 Feb;78(2):123-135

Authors: Gibson EM, Geraghty AC, Monje M

Abstract
Human central nervous system myelin development extends well into the fourth decade of life, and this protracted period underscores the potential for experience to modulate myelination. The concept of myelin plasticity implies adaptability in myelin structure and function in response to experiences during development and beyond. Mounting evidence supports this concept of neuronal activity-regulated changes in myelin-forming cells, including oligodendrocyte precursor cell proliferation, oligodendrogenesis and modulation of myelin microstructure. In healthy individuals, myelin plasticity in associative white matter structures of the brain is implicated in learning and motor function in both rodents and humans. Activity-dependent changes in myelin-forming cells may influence the function of neural networks that depend on the convergence of numerous neural signals on both a temporal and spatial scale. However, dysregulation of myelin plasticity can disadvantageously alter myelin microstructure and result in aberrant circuit function or contribute to pathological cell proliferation. Emerging roles for myelin plasticity in normal neurological function and in disease are discussed. © 2017 Wiley Periodicals, Inc. Develop Neurobiol 78: 123-135, 2018.

PMID: 28986960 [PubMed - indexed for MEDLINE]


          Blog Review: Oct. 10      Cache   Translate Page      
Neural network sparsity; memory slowdown possible; efficient code.
          Whiteboard Wednesdays - Tensilica Neural Network Compiler: An Offline Tool for Efficient Deployment of Neural Networks      Cache   Translate Page      

In this week’s Whiteboard Wednesdays video, Megha Daga describes how the Tensilica Neural Network Compiler works, from a trained floating-point network to an optimized source code generation for a Tensilica AI-enabled DSP or processor.

https://youtu.be/V9BV_BLIUiI


          First Mover: Germany’s DFKI to Deploy Europe’s Initial DGX-2 Supercomputer      Cache   Translate Page      

DFKI, the leading research center in Germany in the field of innovative commercial software technology using AI, is the first group in Europe to adopt the NVIDIA DGX-2 AI supercomputer. The research center will use the system to quickly analyze large-scale satellite and aerial imagery using image processing and deep neural network training.



          Study on the Detection of Dairy Cows’ Self-Protective Behaviors Based on Vision Analysis      Cache   Translate Page      
The study of the self-protective behaviors of dairy cows suffering dipteral insect infestation is important for evaluating the breeding environment and cows’ selective breeding. The current practices for measuring dairy cows’ self-protective behaviors are mostly by human observation, which is not only tedious but also inefficient and inaccurate. In this paper, we develop an automatic monitoring system based on video analysis. First, an improved optical flow tracking algorithm based on Shi-Tomasi corner detection is presented. By combining the morphological features of head, leg, and tail movements, this method effectively reduces the number of Shi-Tomasi points, eliminates interference from background movement, reduces the computational complexity of the algorithm, and improves detection accuracy. The detection algorithm is used to calculate the number of tail, leg, and head movements by using an artificial neural network. The accuracy range of the tail and head reached [0.88, 1] and the recall rate was [0.87, 1]. The method proposed in this paper, which provides objective measurements, can help researchers to more effectively analyze dairy cows’ self-protective behaviors and the living environment in the process of dairy cow breeding and management.
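
As a rough illustration of the building blocks named in the abstract (not the authors' code), OpenCV provides Shi-Tomasi corner detection and pyramidal Lucas-Kanade optical flow that could be combined along these lines; the file name and thresholds are hypothetical:

import cv2
import numpy as np

cap = cv2.VideoCapture("cow_video.mp4")  # hypothetical input video
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

# Shi-Tomasi corners to track (in practice restricted to head/leg/tail regions)
pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200, qualityLevel=0.3, minDistance=7)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Pyramidal Lucas-Kanade optical flow propagates the corners to the new frame
    new_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None)
    good_new = new_pts[status.flatten() == 1]
    good_old = pts[status.flatten() == 1]
    # Mean displacement of tracked points; a spike suggests a movement event
    motion = np.linalg.norm(good_new - good_old, axis=-1).mean()
    print(motion)
    prev_gray = gray
    pts = good_new.reshape(-1, 1, 2)
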
          Introduction to Neural Networks for C#, 2nd Edition      Cache   Translate Page      

Introduction to Neural Networks with C#, Second Edition, introduces the C# programmer to the world of Neural Networks and Artificial Intelligence. Neural network architectures, such as the feedforward, Hopfield, and self-organizing map architectures are discussed. Training techniques, such as backpropagation, genetic algorithms and simulated annealing are also introduced. Practical examples are given for each neural network. Examples include the traveling salesman problem, handwriting recognition, financial prediction, game strategy, mathematical functions, and Internet bots. All C# source code is available online for easy downloading.
          A Note on Neural Networks      Cache   Translate Page      
This is a library to implement Neural Networks in Javascript.
           Prostate Cancer Classification on VERDICT DW-MRI Using Convolutional Neural Networks       Cache   Translate Page      
Chiou, E; Giganti, F; Bonet-Carne, E; Punwani, S; Kokkinos, I; Panagiotaki, E; (2018) Prostate Cancer Classification on VERDICT DW-MRI Using Convolutional Neural Networks. In: Shi, Y and Suk, H-I and Liu, M, (eds.) Machine Learning in Medical Imaging. MLMI 2018. Lecture Notes in Computer Science, vol 11046. (pp. pp. 319-327). Springer: Cham.
          Apple Has Acquired Danish Startup Spektral, Focused on Real-Time 'Green Screen' Technology      Cache   Translate Page      
Apple has acquired Danish computer vision startup Spektral, according to a paywalled report from Danish newspaper Børsen.


Spektral has developed a technology that can intelligently separate people and objects from their original backgrounds in photos and videos, and overlay a new background, resulting in what is called a "cutout." The solution is driven by deep neural networks and spectral graph theory.

The technology can be thought of as real-time "green screen" processing powered by machine learning algorithms:
Our pioneering and unique technology is based on state-of-the-art machine learning and computer vision techniques. Combining deep neural networks and spectral graph theory with the computing power of modern GPUs, our engine can process images and video from the camera in real-time (60 FPS) directly on the device.
The report says Apple acquired Spektral, formerly known as CloudCutout, in late 2017. Spektral co-founders Henrik Paltoft and Toke Jansen, the latter of whom now lists himself as a manager of computational imaging at Apple, are said to have received 200 million Danish krone, or roughly $30 million at today's exchange rate.


Spektral's website notes that its solution makes it possible to create unique and immersive mixed reality content. Apple could incorporate the technology into the default Camera app on iPhone, or Messages, or Clips, or use the technology in bigger ways as it continues to push into augmented reality.

Spektral was founded in 2014 and raised $3.3 million in venture capital prior to its acquisition by Apple, according to Crunchbase.




          Neuton: A new, disruptive neural network framework for AI applications      Cache   Translate Page      
Deep learning neural networks are behind much of the progress in AI these days. Neuton is a new framework that claims to be much faster and more compact, and it requires less skills and training than anything the AWSs, Googles, and Facebooks of the world have.
           Jelly Bean Identifier       Cache   Translate Page      
Using TensorFlow's mobilenet retrained neural network to identify the flavors of multiple jelly beans.
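
A retrained MobileNet classifier of this kind is typically built by freezing the ImageNet-pretrained base and training a new classification head. A hedged sketch with tf.keras (the layer sizes and num_flavors are assumptions for illustration, not details from the project):

import tensorflow as tf

num_flavors = 20  # hypothetical number of jelly bean flavors

base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, pooling="avg", weights="imagenet")
base.trainable = False  # keep the pretrained features, train only the new head

model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dense(num_flavors, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_images, train_labels, epochs=5)  # images resized to 224x224 and preprocessed
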
          Watching YouTube videos may someday let robots copy humans      Cache   Translate Page      
AI scientists at the University of California at Berkeley trained a neural network to reconstruct the acrobatics humans perform in YouTube videos, and to then manipulate a simulated actor to perform those motions. The work has implications for training robotic systems to copy human activity.
          Computational Intelligence In Manufacturing Handbook      Cache   Translate Page      
Author: Jun Wang and Andrew Kusiak
Title: Computational Intelligence In Manufacturing Handbook
Publisher: CRC Press
Year: 2000
ISBN: 9780849305924
Series: Handbook Series for Mechanical Engineering
Language: English
Format: pdf
Size: 10.7 MB
Pages: 576

Despite the large volume of publications devoted to neural networks, fuzzy logic, and evolutionary programming, few address the applications of computational intelligence in design and manufacturing. Computational Intelligence in Manufacturing Handbook fills this void as it covers the most recent advances in this area and state-of-the-art applications.
          Neural networks learn to produce random numbers      Cache   Translate Page      
It almost sounds silly - train a neural network to generate random numbers - but it has more practical uses than you might imagine.
That's much more cost effective than 2d20
          Chasing fads or solving problems?      Cache   Translate Page      
Sample chat between two engineers

Engineer 1: We should use Hadoop/Kubernetes/Cassandra/insert-random-cool-tech.
Engineer 2: Why?
Engineer 1: Well, all the cool guys rave about it: latest-fad-tech is the next big thing on the tech horizon. All the hot startups are using it too!
Engineer 2: So what will we use it for?
Engineer 1: Well, we could use latest-fad-tech to revamp our entire data processing stack. It's so much easier than the stable-reliable-well-known-and-proven platform we currently use.
Engineer 2: So what are the benefits of moving to latest-fad?
Engineer 1: We'll look modern like the hot startups, use the latest stuff and not have to worry about data loss any more.
Engineer 2: Do we have a data loss problem now?
Engineer 1: Errm..., we don't have a data-loss problem. Do you know that latest-fad also promises automatic resource management? Our infrastructure staffing needs will go down.
Engineer 2: Our current resource management costs come to about 5 - 10% of our staffing needs. Are you telling me latest-fad has no adoption and management cost at all?
Engineer 1: Probably not - nothing is free! But it is a one-time cost and the investment will eventually pay off.
Engineer 2: OK, fair point. How does latest-fad behave under extreme load? Are failure modes well-known?
Engineer 1: Well, it is the in-thing! There are loads of people running it so I think you'll be able to Google search for problems...
Engineer 2: What happens if you run into an edge case that no one has ever run into before? Who are you gonna call?
Engineer 1: I don't know...
Engineer 2: So you're saying we should take a plunge from a stable less-cool platform to an unproven cool one?
Engineer 1: Errm, well I don't like the less-cool platform because it is not as cool as latest-fad. And latest-fad is open source too!
Engineer 2: Have you thought about the cost of moving to latest-fad? The risks of rewriting code, redesigning our stack and the productivity dip during the learning phase have to be weighed against potential benefits.
Engineer 1: That sounds like a lot of work but we really should use latest-fad.
Engineer 2: What benefit will accrue to our business and customers by switching? What needle does it move?
Engineer 1: But it'll make our developers more productive.
Engineer 2: Well, there are other tasks that will significantly increase our productivity, e.g. better documentation, release automation and investing in tools. I admit they aren't as 'cool' as latest-fad though.
Engineer 1: ...
Engineer 2: Why not do a spike and come back with a comprehensive plan covering what it'll take to adopt latest-fad?

Shiny new kid on the block

If you’ve been a software engineer for some time, you have probably been engineer 1, engineer 2, or listened in on a similar conversation. It’s the classic debate along the lines of React vs Angular vs Vue, microservices vs monoliths, AWS vs Azure, etc.

Most times we name-drop new technologies to impress. At other times, we want to satisfy our itch and try out new stuff: when you have a new hammer, you go about finding nails to drive in!

This attitude is risky because the eagerness to wield the new hammer can divert engineers from truly focusing on the problem at hand. Problem identification and isolation should be the starting point: technology is a means and not the goal.

By all means, use Hadoop , neural networks or Cassandra if they are the best tools for the problem space. However, if they aren’t, then you’ve worsened the situation for the following reasons:

1. You still haven’t solved the original problem
2. You have a new problem: maintaining a complicated technology platform
3. Supporting the ill-fitting abstraction that the platform in 2 provides for 1

Congrats! You just played yourself!! You bought two more problems for the price of one!!!

Cost vs benefit vs risk

There are three factors to be considered every time you make a technology decision:

  • Cost (C): What will it take to adopt or implement the technology?
  • Benefits (B): What benefits will accrue?
  • Risk (R): What is the risk to existing business value?

My heuristic is to green-light full adoption only if the long-term benefits outweigh the costs and risks, i.e., B > C + R. Thus a short-term dip in productivity (i.e. high cost) is totally acceptable if, in the long run, the increased developer output makes up for it.

I know that shiny is exciting but the goal of software engineers is to solve business problems and bring value to the customer/business. Technology choices are expensive and you have to think of the second-degree and third-degree impacts of your choices as technology leaders.

But I want to try out new things!

Every now and then, a new technology comes up that solves your business problem perfectly; you might also outgrow your current design or find your technology outdated. Here are a few suggestions on how to handle this:

  • Do hackathons: they are great for synergistic and symbiotic discovery of great solutions!
  • Encourage engineers to explore new approaches to problem solving.
  • Allow independent evaluations, proposals and implementations.

Think again: “What problem am I solving?”

          The Ethics of Artificial Intelligence      Cache   Translate Page      

by Mike Small

Famously, in 2014 Prof. Stephen Hawking told the BBC: "The development of full artificial intelligence could spell the end of the human race." The ethical questions around Artificial Intelligence were discussed at a meeting led by the BCS President Chris Rees in London on October 2nd. This is also an area covered by KuppingerCole under the heading of Cognitive Technologies and this blog provides a summary of some of the issues that need to be considered.

Firstly, AI is a generic term and it is important to understand precisely what this means. Currently the state of the art can be described as Narrow AI. This is where techniques such as ML (machine learning) combined with massive amounts of data are providing useful results in narrow fields. For example, the diagnosis of certain diseases and predictive marketing. There are now many tools available to help organizations exploit and industrialise Narrow AI.

At the other extreme is what is called General AI where the systems are autonomous and can decide for themselves what actions to take. This is exemplified by the fictional Skynet that features in the Terminator games and movies. In these stories this system has spread to millions of computers and seeks to exterminate humanity in order to fulfil the mandates of its original coding. In reality, the widespread availability of General AI is still many years away.

In the short term, Narrow AI can be expected to evolve into Broad AI, where a system will be able to support or perform multiple tasks using what is learnt in one domain applied to another. Broad AI will evolve to use multiple approaches to solve problems, for example by linking neural networks with other forms of reasoning. It will be able to work with limited amounts of data, or at least data which is not well tagged or curated - for example, in the cyber-security space, identifying a threat pattern that has not been seen before.

What is ethics and why is it relevant to AI? The term is derived from the Greek word “ethos” which can mean custom, habit, character or disposition. Ethics is a set of moral principles that govern behaviour, or the conduct of an activity, ethics is also a branch of philosophy that studies these principles. The reason why ethics is important to AI is because of the potential for these systems to cause harm to individuals as well as society in general. Ethics considerations can help to better identify beneficial applications while avoiding harmful ones. In addition, new technologies are often viewed with suspicion and mistrust. This can unreasonably inhibit the development of technologies that have significant beneficial potential. Ethics provide a framework that can be used to understand and overcome these concerns at an early stage.

Chris Rees identified 5 major ethical issues that need to be addressed in relation to AI:

  • Bias;
  • Explainability;
  • Harmlessness;
  • Economic Impact;
  • Responsibility.

Bias is a very current issue with bias related to gender and race as top concerns. AI systems are likely to be biased because people are biased, and AI systems amplify human capabilities. Social media provides an example of this kind of amplification where uncontrolled channels provide the means to share beliefs that may be popular but have no foundation in fact - “fake news”. The training of AI systems depends upon the use of data which may include inherent bias even though this may not be intentional. The training process and the trainers may pass on their own unconscious bias to the systems they train. Allowing systems to train themselves can lead to unexpected outcomes since the systems do not have the common sense to recognize mischievous behaviour. There are other reported examples of bias in facial recognition systems.

Explanation - It is very important in many applications that AI systems can explain themselves. Explanation may be required to justify a life changing decision to the person that it affects, to provide the confidence needed to invest in a project based on a projection, or to justify after the event why a decision was taken in a court of law. While rule-based systems can provide a form of explanation based on the logical rules that were fired to arrive at a particular conclusion, neural networks are much more opaque. This poses a problem not only in explaining to the end user why a conclusion was reached, but also for the developer or trainer in understanding what needs to be changed to correct the behaviour of the system.

Harmlessness – the three laws of robotics that were devised by Isaac Asimov in the 1940’s and subsequently extended to include a zeroth law apply equally to AI systems. However, the use or abuse of the systems could breach these laws and special care is needed to ensure that this does not happen. For example, the hacking of an autonomous car could turn it into a weapon, which emphasizes the need for strong inbuilt security controls. AI systems can be applied to cyber security to accelerate the development of both defence and offence. It could be used by the cyber adversaries as well as the good guys. It is therefore essential that this aspect is considered and that countermeasures are developed to cover the malicious use of this technology.

Economic impact – new technologies have both destructive and constructive impacts. In the short-term the use of AI is likely to lead to the destruction of certain kinds of jobs. However, in the long term it may lead to the creation of new forms of employment as well as unforeseen social benefits. While the short-term losses are concrete the longer-term benefits are harder to see and may take generations to materialize. This makes it essential to create protection for those affected by the expected downsides to improve acceptability and to avoid social unrest.

Responsibility – AI is just an artefact and so if something bad happens who is responsible morally and in law? The AI system itself cannot be prosecuted but the designer, the manufacturer or the user could be. The designer may claim that the system was not manufactured to the design specification. The manufacturer may claim that the system was not used or maintained correctly (for example patches not applied). This is an area where there will need to be debate and this should take place before these systems cause actual harm.

In conclusion, AI systems are evolving but they have not yet reached the state portrayed in popular fiction. However, the ethical aspects of this technology need to be considered and this should be done sooner rather than later. In the same way that privacy by design is an important consideration we should now be working to develop “Ethical by Design”. GDPR allows people to take back control over how their data is collected and used. We need controls over AI before the problems arise.


          Apple Has Acquired Danish Startup Spektral, Focused On Real-Time 'Green Screen' Technology, by Joe Rossignol, MacRumors      Cache   Translate Page      

Spektral has developed a technology that can intelligently separate people and objects from their original backgrounds in photos and videos, and overlay a new background, resulting in what is called a "cutout." The solution is driven by deep neural networks and spectral graph theory.


          Report: Apple discreetly acquired mixed-reality startup Spektral for $30M      Cache   Translate Page      

According to a report from Fortune, Apple discreetly acquired Danish visual effects startup Spektral in December 2017.

Spektral was once named CloudCutout and focused on a cloud-based solution to masking a subject from the background of an photograph. Now, Spektral specializes in masking technology that uses machine learning to separate a subject in an image from the background in real-time on mobile devices. "Combining deep neural networks and spectral graph theory with the computing power of modern GPUs, our engine can process images and video from the camera in real-time (60 fps) directly on the device," says Spektral on its website.

Neither Apple nor Spektral have confirmed the acquisition, but Fortune reports the deal was worth "more than $30 million."

With no comment, we can't say for sure what Apple intends to do with Spektral's intellectual property and personnel, but Spektral Co-Founder and Chief Technical Officer Toke Jansen now lists "Manager, Computational Imaging" as his title at Apple on his LinkedIn profile. Combined with Apple's ongoing efforts to beef up its augmented reality efforts in its apps — both its own and third-party — it's safe to assume we'll see the fruits of the acquisition in the near future, if we haven't already seen parts of it.


          Exploring LSTMs      Cache   Translate Page      

It turns out LSTMs are a fairly simple extension to neural networks, and they're behind a lot of the amazing achievements deep learning has made in the past few years. So I'll try to present them as intuitively as possible – in such a way that you could have discovered them yourself.

But first, a picture:

LSTM

Aren't LSTMs beautiful? Let's go.

(Note: if you're already familiar with neural networks and LSTMs, skip to the middle – the first half of this post is a tutorial.)

Neural Networks

Imagine we have a sequence of images from a movie, and we want to label each image with an activity (is this a fight?, are the characters talking?, are the characters eating?).

How do we do this?

One way is to ignore the sequential nature of the images, and build a per-image classifier that considers each image in isolation. For example, given enough images and labels:

  • Our algorithm might first learn to detect low-level patterns like shapes and edges.
  • With more data, it might learn to combine these patterns into more complex ones, like faces (two circular things atop a triangular thing atop an oval thing) or cats.
  • And with even more data, it might learn to map these higher-level patterns into activities themselves (scenes with mouths, steaks, and forks are probably about eating).

This, then, is a deep neural network: it takes an image input, returns an activity output, and – just as we might learn to detect patterns in puppy behavior without knowing anything about dogs (after seeing enough corgis, we discover common characteristics like fluffy butts and drumstick legs; next, we learn advanced features like splooting) – in between it learns to represent images through hidden layers of representations.

Mathematically

I assume people are familiar with basic neural networks already, but let's quickly review them.

  • A neural network with a single hidden layer takes as input a vector x, which we can think of as a set of neurons.
  • Each input neuron is connected to a hidden layer of neurons via a set of learned weights.
  • The jth hidden neuron outputs \(h_j = \phi(\sum_i w_{ij} x_i)\), where \(\phi\) is an activation function.
  • The hidden layer is fully connected to an output layer, and the jth output neuron outputs \(y_j = \sum_i v_{ij} h_i\). If we need probabilities, we can transform the output layer via a softmax function.

In matrix notation:

$$h = \phi(Wx)$$
$$y = Vh$$

where

  • x is our input vector
  • W is a weight matrix connecting the input and hidden layers
  • V is a weight matrix connecting the hidden and output layers
  • Common activation functions for \(\phi\) are the sigmoid function, \(\sigma(x)\), which squashes numbers into the range (0, 1); the hyperbolic tangent, \(tanh(x)\), which squashes numbers into the range (-1, 1), and the rectified linear unit, \(ReLU(x) = max(0, x)\).

Here's a pictorial view:

Neural Network

(Note: to make the notation a little cleaner, I assume x and h each contain an extra bias neuron fixed at 1 for learning bias weights.)
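
As a quick illustration of the equations above, here is a minimal single-hidden-layer forward pass in NumPy (my own sketch: bias neurons are omitted for brevity and the dimensions are chosen arbitrarily):

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x, W, V):
    h = sigmoid(W @ x)  # h = phi(W x), hidden layer activations
    y = V @ h           # y = V h, output layer
    return y

rng = np.random.default_rng(0)
x = rng.normal(size=4)        # input vector with 4 neurons
W = rng.normal(size=(5, 4))   # weights: input (4) -> hidden (5)
V = rng.normal(size=(3, 5))   # weights: hidden (5) -> output (3)
print(forward(x, W, V))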

Remembering Information with RNNs

Ignoring the sequential aspect of the movie images is pretty ML 101, though. If we see a scene of a beach, we should boost beach activities in future frames: an image of someone in the water should probably be labeled swimming, not bathing, and an image of someone lying with their eyes closed is probably suntanning. If we remember that Bob just arrived at a supermarket, then even without any distinctive supermarket features, an image of Bob holding a slab of bacon should probably be categorized as shopping instead of cooking.

So what we'd like is to let our model track the state of the world:

  1. After seeing each image, the model outputs a label and also updates the knowledge it's been learning. For example, the model might learn to automatically discover and track information like location (are scenes currently in a house or beach?), time of day (if a scene contains an image of the moon, the model should remember that it's nighttime), and within-movie progress (is this image the first frame or the 100th?). Importantly, just as a neural network automatically discovers hidden patterns like edges, shapes, and faces without being fed them, our model should automatically discover useful information by itself.
  2. When given a new image, the model should incorporate the knowledge it's gathered to do a better job.

This, then, is a recurrent neural network. Instead of simply taking an image and returning an activity, an RNN also maintains internal memories about the world (weights assigned to different pieces of information) to help perform its classifications.

Mathematically

So let's add the notion of internal knowledge to our equations, which we can think of as pieces of information that the network maintains over time.

But this is easy: we know that the hidden layers of neural networks already encode useful information about their inputs, so why not use these layers as the memory passed from one time step to the next? This gives us our RNN equations:

$$h_t = \phi(Wx_t + Uh_{t-1})$$
$$y_t = Vh_t$$

Note that the hidden state computed at time \(t\) (\(h_t\), our internal knowledge) is fed back at the next time step. (Also, I'll use concepts like hidden state, knowledge, memories, and beliefs to describe \(h_t\) interchangeably.)

RNN
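
In code, the only change from the feedforward sketch above is that the hidden state is fed back in at each step (again a minimal NumPy sketch with biases omitted):

import numpy as np

def rnn_step(x_t, h_prev, W, U, V):
    h_t = np.tanh(W @ x_t + U @ h_prev)  # h_t = phi(W x_t + U h_{t-1})
    y_t = V @ h_t                        # y_t = V h_t
    return y_t, h_t

# Process a whole sequence by threading the hidden state through time.
def rnn_forward(xs, h0, W, U, V):
    h = h0
    ys = []
    for x_t in xs:
        y_t, h = rnn_step(x_t, h, W, U, V)
        ys.append(y_t)
    return ys, h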

Longer Memories through LSTMs

Let's think about how our model updates its knowledge of the world. So far, we've placed no constraints on this update, so its knowledge can change pretty chaotically: at one frame it thinks the characters are in the US, at the next frame it sees the characters eating sushi and thinks they're in Japan, and at the next frame it sees polar bears and thinks they're on Hydra Island. Or perhaps it has a wealth of information to suggest that Alice is an investment analyst, but decides she's a professional assassin after seeing her cook.

This chaos means information quickly transforms and vanishes, and it's difficult for the model to keep a long-term memory. So what we'd like is for the network to learn how to update its beliefs (scenes without Bob shouldn't change Bob-related information, scenes with Alice should focus on gathering details about her), in a way that its knowledge of the world evolves more gently.

This is how we do it.

  1. Adding a forgetting mechanism. If a scene ends, for example, the model should forget the current scene location, the time of day, and reset any scene-specific information; however, if a character dies in the scene, it should continue remembering that he's no longer alive. Thus, we want the model to learn a separate forgetting/remembering mechanism: when new inputs come in, it needs to know which beliefs to keep or throw away.
  2. Adding a saving mechanism. When the model sees a new image, it needs to learn whether any information about the image is worth using and saving. Maybe your mom sent you an article about the Kardashians, but who cares?
  3. So when new a input comes in, the model first forgets any long-term information it decides it no longer needs. Then it learns which parts of the new input are worth using, and saves them into its long-term memory.
  4. Focusing long-term memory into working memory. Finally, the model needs to learn which parts of its long-term memory are immediately useful. For example, Bob's age may be a useful piece of information to keep in the long term (children are more likely to be crawling, adults are more likely to be working), but is probably irrelevant if he's not in the current scene. So instead of using the full long-term memory all the time, it learns which parts to focus on instead.

This, then, is a long short-term memory network. Whereas an RNN can overwrite its memory at each time step in a fairly uncontrolled fashion, an LSTM transforms its memory in a very precise way: by using specific learning mechanisms for which pieces of information to remember, which to update, and which to pay attention to. This helps it keep track of information over longer periods of time.

Mathematically

Let's describe the LSTM additions mathematically.

At time \(t\), we receive a new input \(x_t\). We also have our long-term and working memories passed on from the previous time step, \(ltm_{t-1}\) and \(wm_{t-1}\) (both n-length vectors), which we want to update.

We'll start with our long-term memory. First, we need to know which pieces of long-term memory to continue remembering and which to discard, so we want to use the new input and our working memory to learn a remember gate of n numbers between 0 and 1, each of which determines how much of a long-term memory element to keep. (A 1 means to keep it, a 0 means to forget it entirely.)

Naturally, we can use a small neural network to learn this remember gate:

$$remember_t = \sigma(W_r x_t + U_r wm_{t-1}) $$

(Notice the similarity to our previous network equations; this is just a shallow neural network. Also, we use a sigmoid activation because we need numbers between 0 and 1.)

Next, we need to compute the information we can learn from \(x_t\), i.e., a candidate addition to our long-term memory:

$$ ltm'_t = \phi(W_l x_t + U_l wm_{t-1}) $$

\(\phi\) is an activation function, commonly chosen to be \(tanh\).

Before we add the candidate into our memory, though, we want to learn which parts of it are actually worth using and saving:

$$save_t = \sigma(W_s x_t + U_s wm_{t-1})$$

(Think of what happens when you read something on the web. While a news article might contain information about Hillary, you should ignore it if the source is Breitbart.)

Let's now combine all these steps. After forgetting memories we don't think we'll ever need again and saving useful pieces of incoming information, we have our updated long-term memory:

$$ltm_t = remember_t \circ ltm_{t-1} + save_t \circ ltm'_t$$

where \(\circ\) denotes element-wise multiplication.

Next, let's update our working memory. We want to learn how to focus our long-term memory into information that will be immediately useful. (Put differently, we want to learn what to move from an external hard drive onto our working laptop.) So we learn a focus/attention vector:

$$focus_t = \sigma(W_f x_t + U_f wm_{t-1})$$

Our working memory is then

$$wm_t = focus_t \circ \phi(ltm_t)$$

In other words, we pay full attention to elements where the focus is 1, and ignore elements where the focus is 0.

And we're done! Hopefully this made it into your long-term memory as well.


To summarize, whereas a vanilla RNN uses one equation to update its hidden state/memory:

$$h_t = \phi(Wx_t + Uh_{t-1})$$

An LSTM uses several:

$$ltm_t = remember_t \circ ltm_{t-1} + save_t \circ ltm'_t$$
$$wm_t = focus_t \circ tanh(ltm_t)$$

where each memory/attention sub-mechanism is just a mini brain of its own:

$$remember_t = \sigma(W_r x_t+ U_r wm_{t-1}) $$
$$save_t = \sigma(W_s x_t + U_s wm_{t-1})$$
$$focus_t = \sigma(W_f x_t + U_f wm_{t-1})$$
$$ ltm'_t = tanh(W_l x_t + U_l wm_{t-1}) $$

(Note: the terminology and variable names I've been using are different from the usual literature. Here are the standard names, which I'll use interchangeably from now on:

  • The long-term memory, \(ltm_t\), is usually called the cell state, denoted \(c_t\).
  • The working memory, \(wm_t\), is usually called the hidden state, denoted \(h_t\). This is analogous to the hidden state in vanilla RNNs.
  • The remember vector, \(remember_t\), is usually called the forget gate (despite the fact that a 1 in the forget gate still means to keep the memory and a 0 still means to forget it), denoted \(f_t\).
  • The save vector, \(save_t\), is usually called the input gate (as it determines how much of the input to let into the cell state), denoted \(i_t\).
  • The focus vector, \(focus_t\), is usually called the output gate, denoted \(o_t\). )
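
Putting the equations together, a single LSTM step looks like this in NumPy (a sketch that follows the post's simplified notation, with biases omitted; p is a dict holding the weight matrices):

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, wm_prev, ltm_prev, p):
    remember = sigmoid(p["Wr"] @ x_t + p["Ur"] @ wm_prev)   # forget gate f_t
    save     = sigmoid(p["Ws"] @ x_t + p["Us"] @ wm_prev)   # input gate i_t
    focus    = sigmoid(p["Wf"] @ x_t + p["Uf"] @ wm_prev)   # output gate o_t
    ltm_cand = np.tanh(p["Wl"] @ x_t + p["Ul"] @ wm_prev)   # candidate memory
    ltm = remember * ltm_prev + save * ltm_cand             # new cell state c_t
    wm  = focus * np.tanh(ltm)                              # new hidden state h_t
    return wm, ltm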

LSTM

Snorlax

I could have caught a hundred Pidgeys in the time it took me to write this post, so here's a cartoon.

Neural Networks

Neural Network

Recurrent Neural Networks

RNN

LSTMs

LSTM

Learning to Code

Let's look at a few examples of what an LSTM can do. Following Andrej Karpathy's terrific post, I'll use character-level LSTM models that are fed sequences of characters and trained to predict the next character in the sequence.

While this may seem a bit toyish, character-level models can actually be very useful, even on top of word models. For example:

  • Imagine a code autocompleter smart enough to allow you to program on your phone. An LSTM could (in theory) track the return type of the method you're currently in, and better suggest which variable to return; it could also know without compiling whether you've made a bug by returning the wrong type.
  • NLP applications like machine translation often have trouble dealing with rare terms. How do you translate a word you've never seen before, or convert adjectives to adverbs? Even if you know what a tweet means, how do you generate a new hashtag to capture it? Character models can daydream new terms, so this is another area with interesting applications.
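
To make the setup concrete, here is a rough sketch of a character-level next-character model in Keras. This is my own illustration under assumed hyperparameters and a hypothetical corpus file, not the author's actual 3-layer training script:

import numpy as np
import tensorflow as tf

text = open("corpus.txt").read()              # hypothetical training corpus
chars = sorted(set(text))
char_to_ix = {c: i for i, c in enumerate(chars)}

seq_len = 100
step = 3
X = np.array([[char_to_ix[c] for c in text[i:i + seq_len]]
              for i in range(0, len(text) - seq_len, step)])
y = np.array([char_to_ix[text[i + seq_len]]
              for i in range(0, len(text) - seq_len, step)])

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(len(chars), 64),
    tf.keras.layers.LSTM(512, return_sequences=True),
    tf.keras.layers.LSTM(512, return_sequences=True),
    tf.keras.layers.LSTM(512),
    tf.keras.layers.Dense(len(chars), activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.fit(X, y, batch_size=128, epochs=10)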

So to start, I spun up an EC2 p2.xlarge spot instance, and trained a 3-layer LSTM on the Apache Commons Lang codebase. Here's a program it generates after a few hours.

While the code certainly isn't perfect, it's better than a lot of data scientists I know. And we can see that the LSTM has learned a lot of interesting (and correct!) coding behavior:

  • It knows how to structure classes: a license up top, followed by packages and imports, followed by comments and a class definition, followed by variables and methods. Similarly, it knows how to create methods: comments follow the correct orders (description, then @param, then @return, etc.), decorators are properly placed, and non-void methods end with appropriate return statements. Crucially, this behavior spans long ranges of code – see how giant the blocks are!
  • It can also track subroutines and nesting levels: indentation is always correct, and if statements and for loops are always closed out.
  • It even knows how to create tests.

How does the model do this? Let's look at a few of the hidden states.

Here's a neuron that seems to track the code's outer level of indentation:

(As the LSTM moves through the sequence, its neurons fire at varying intensities. The picture represents one particular neuron, where each row is a sequence and characters are color-coded according to the neuron's intensity; dark blue shades indicate large, positive activations, and dark red shades indicate very negative activations.)

Outer Level of Indentation

And here's a neuron that counts down the spaces between tabs:

Tab Spaces

For kicks, here's the output of a different 3-layer LSTM trained on TensorFlow's codebase:

There are plenty of other fun examples floating around the web, so check them out if you want to see more.

Investigating LSTM Internals

Let's dig a little deeper. We looked in the last section at examples of hidden states, but I wanted to play with LSTM cell states and their other memory mechanisms too. Do they fire when we expect, or are there surprising patterns?

Counting

To investigate, let's start by teaching an LSTM to count. (Remember how the Java and Python LSTMs were able to generate proper indentation!) So I generated sequences of the form

aaaaaXbbbbb

(N "a" characters, followed by a delimiter X, followed by N "b" characters, where 1 <= N <= 10), and trained a single-layer LSTM with 10 hidden neurons.

As expected, the LSTM learns perfectly within its training range – and can even generalize a few steps beyond it. (Although it starts to fail once we try to get it to count to 19.)

aaaaaaaaaaaaaaaXbbbbbbbbbbbbbbb
aaaaaaaaaaaaaaaaXbbbbbbbbbbbbbbbb
aaaaaaaaaaaaaaaaaXbbbbbbbbbbbbbbbbb
aaaaaaaaaaaaaaaaaaXbbbbbbbbbbbbbbbbbb
aaaaaaaaaaaaaaaaaaaXbbbbbbbbbbbbbbbbbb # Here it begins to fail: the model is given 19 "a"s, but outputs only 18 "b"s.

We expect to find a hidden state neuron that counts the number of a's if we look at its internals. And we do:

Neuron #2 Hidden State

I built a small web app to play around with LSTMs, and Neuron #2 seems to be counting both the number of a's it's seen, as well as the number of b's. (Remember that cells are shaded according to the neuron's activation, from dark red [-1] to dark blue [+1].)

What about the cell state? It behaves similarly:

Neuron #2 Cell State

One interesting thing is that the working memory looks like a "sharpened" version of the long-term memory. Does this hold true in general?

It does. (This is exactly as we would expect, since the long-term memory gets squashed by the tanh activation function and the output gate limits what gets passed on.) For example, here is an overview of all 10 cell state nodes at once. We see plenty of light-colored cells, representing values close to 0.

Counting LSTM Cell States

In contrast, the 10 working memory neurons look much more focused. Neurons 1, 3, 5, and 7 are even zeroed out entirely over the first half of the sequence.

Counting LSTM Hidden States

Let's go back to Neuron #2. Here are the candidate memory and input gate. They're relatively constant over each half of the sequence – as if the neuron is calculating a += 1 or b += 1 at each step.

Counting LSTM Candidate Memory

Input Gate

Finally, here's an overview of all of Neuron 2's internals:

Neuron 2 Overview

If you want to investigate the different counting neurons yourself, you can play around with the visualizer here.

(Note: this is far from the only way an LSTM can learn to count, and I'm anthropomorphizing quite a bit here. But I think viewing the network's behavior is interesting and can help build better models – after all, many of the ideas in neural networks come from analogies to the human brain, and if we see unexpected behavior, we may be able to design more efficient learning mechanisms.)

Count von Count

Let's look at a slightly more complicated counter. This time, I generated sequences of the form

aaXaXaaYbbbbb

(N a's with X's randomly sprinkled in, followed by a delimiter Y, followed by N b's). The LSTM still has to count the number of a's, but this time needs to ignore the X's as well.

Here's the full LSTM. We expect to see a counting neuron, but one where the input gate is zero whenever it sees an X. And we do!

Counter 2 - Cell State

Above is the cell state of Neuron 20. It increases until it hits the delimiter Y, and then decreases to the end of the sequence – just like it's calculating a num_bs_left_to_print variable that increments on a's and decrements on b's.

If we look at its input gate, it is indeed ignoring the X's:

Counter 2 - Input Gate

Interestingly, though, the candidate memory fully activates on the irrelevant X's – which shows why the input gate is needed. (Although, if the input gate weren't part of the architecture, presumably the network would have learned to ignore the X's some other way, at least for this simple example.)

Counter 2 - Candidate Memory

Let's also look at Neuron 10.

Counter 2 - Neuron 10

This neuron is interesting as it only activates when reading the delimiter "Y" – and yet it still manages to encode the number of a's seen so far in the sequence. (It may be hard to tell from the picture, but when reading Y's belonging to sequences with the same number of a's, all the cell states have values either identical or within 0.1% of each other. You can see that Y's with fewer a's are lighter than those with more.) Perhaps some other neuron sees Neuron 10 slacking and helps a buddy out.

Remembering State

Next, I wanted to look at how LSTMs remember state. I generated sequences of the form

AxxxxxxYa
BxxxxxxYb

(i.e., an "A" or B", followed by 1-10 x's, then a delimiter "Y", ending with a lowercase version of the initial character). This way the network needs to remember whether it's in an "A" or "B" state.

We expect to find a neuron that fires when remembering that the sequence started with an "A", and another neuron that fires when remembering that it started with a "B". We do.

For example, here is an "A" neuron that activates when it reads an "A", and remembers until it needs to generate the final character. Notice that the input gate ignores all the "x" characters in between.

A Neuron - #8

Here is its "B" counterpart:

B Neuron - #17

One interesting point is that even though knowledge of the A vs. B state isn't needed until the network reads the "Y" delimiter, the hidden state fires throughout all the intermediate inputs anyways. This seems a bit "inefficient", but perhaps it's because the neurons are doing a bit of double-duty in counting the number of x's as well.

Copy Task

Finally, let's look at how an LSTM learns to copy information. (Recall that our Java LSTM was able to memorize and copy an Apache license.)

(Note: if you think about how LSTMs work, remembering lots of individual, detailed pieces of information isn't something they're very good at. For example, you may have noticed that one major flaw of the LSTM-generated code was that it often made use of undefined variables – the LSTMs couldn't remember which variables were in scope. This isn't surprising, since it's hard to use single cells to efficiently encode multi-valued information like characters, and LSTMs don't have a natural mechanism to chain adjacent memories to form words. Memory networks and neural Turing machines are two extensions to neural networks that help fix this, by augmenting with external memory components. So while copying isn't something LSTMs do very efficiently, it's fun to see how they try anyways.)

For this copy task, I trained a tiny 2-layer LSTM on sequences of the form

baaXbaa
abcXabc

(i.e., a 3-character subsequence composed of a's, b's, and c's, followed by a delimiter "X", followed by the same subsequence).

I wasn't sure what "copy neurons" would look like, so in order to find neurons that were memorizing parts of the initial subsequence, I looked at their hidden states when reading the delimiter X. Since the network needs to encode the initial subsequence, its states should exhibit different patterns depending on what they're learning.

The graph below, for example, plots Neuron 5's hidden state when reading the "X" delimiter. The neuron is clearly able to distinguish sequences beginning with a "c" from those that don't.

Neuron 5

For another example, here is Neuron 20's hidden state when reading the "X". It looks like it picks out sequences beginning with a "b".

Neuron 20 Hidden State

Interestingly, if we look at Neuron 20's cell state, it almost seems to capture the entire 3-character subsequence by itself (no small feat given its one-dimensionality!):

Neuron 20 Cell State

Here are Neuron 20's cell and hidden states, across the entire sequence. Notice that its hidden state is turned off over the entire initial subsequence (perhaps expected, since its memory only needs to be passively kept at that point).

Copy LSTM - Neuron 20 Hidden and Cell

However, if we look more closely, the neuron actually seems to be firing whenever the next character is a "b". So rather than being a "the sequence started with a b" neuron, it appears to be a "the next character is a b" neuron.

As far as I can tell, this pattern holds across the network – all the neurons seem to be predicting the next character, rather than memorizing characters at specific positions. For example, Neuron 5 seems to be a "next character is a c" predictor.

Copy LSTM - Neuron 5

I'm not sure if this is the default kind of behavior LSTMs learn when copying information, or what other copying mechanisms are available as well.

States and Gates

To really hone in and understand the purpose of the different states and gates in an LSTM, let's repeat the previous section with a small pivot.

Cell State and Hidden State (Memories)

We originally described the cell state as a long-term memory, and the hidden state as a way to pull out and focus these memories when needed.

So when a memory is currently irrelevant, we expect the hidden state to turn off – and that's exactly what happens for this sequence copying neuron.

Copy Machine

Forget Gate

The forget gate discards information from the cell state (0 means to completely forget, 1 means to completely remember), so we expect it to fully activate when it needs to remember something exactly, and to turn off when information is never going to be needed again.

That's what we see with this "A" memorizing neuron: the forget gate fires hard to remember that it's in an "A" state while it passes through the x's, and turns off once it's ready to generate the final "a".

Forget Gate

Input Gate (Save Gate)

We described the job of the input gate (what I originally called the save gate) as deciding whether or not to save information from a new input. Thus, it should turn off at useless information.

And that's what this selective counting neuron does: it counts the a's and b's, but ignores the irrelevant x's.

Input Gate

What's amazing is that nowhere in our LSTM equations did we specify that this is how the input (save), forget (remember), and output (focus) gates should work. The network just learned what's best.

Extensions

Now let's recap how you could have discovered LSTMs by yourself.

First, many of the problems we'd like to solve are sequential or temporal of some sort, so we should incorporate past learnings into our models. But we already know that the hidden layers of neural networks encode useful information, so why not use these hidden layers as the memories we pass from one time step to the next? And so we get RNNs.

But we know from our own behavior that we don't keep track of knowledge willy-nilly; when we read a new article about politics, we don't immediately believe whatever it tells us and incorporate it into our beliefs of the world. We selectively decide what information to save, what information to discard, and what pieces of information to use to make decisions the next time we read the news. Thus, we want to learn how to gather, update, and apply information – and why not learn these things through their own mini neural networks? And so we get LSTMs.

And now that we've gone through this process, we can come up with our own modifications.

  • For example, maybe you think it's silly for LSTMs to distinguish between long-term and working memories – why not have one? Or maybe you find separate remember gates and save gates kind of redundant – anything we forget should be replaced by new information, and vice-versa. And now you've come up with one popular LSTM variant, the GRU (one common formulation is sketched after this list).
  • Or maybe you think that when deciding what information to remember, save, and focus on, we shouldn't rely on our working memory alone – why not use our long-term memory as well? And now you've discovered Peephole LSTMs.
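
For reference, one common formulation of the GRU mentioned above is (conventions vary; some papers swap the roles of \(z_t\) and \(1 - z_t\)):

$$z_t = \sigma(W_z x_t + U_z h_{t-1})$$
$$r_t = \sigma(W_r x_t + U_r h_{t-1})$$
$$\tilde{h}_t = tanh(W_h x_t + U_h (r_t \circ h_{t-1}))$$
$$h_t = (1 - z_t) \circ h_{t-1} + z_t \circ \tilde{h}_t$$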

Making Neural Nets Great Again

Let's look at one final example, using a 2-layer LSTM trained on Trump's tweets. Despite the tiny big dataset, it's enough to learn a lot of patterns.

For example, here's a neuron that tracks its position within hashtags, URLs, and @mentions:

Hashtags, URLs, @mentions

Here's a proper noun detector (note that it's not simply firing at capitalized words):

Proper Nouns

Here's an auxiliary verb + "to be" detector ("will be", "I've always been", "has never been"):

Modal Verbs

Here's a quote attributor:

Quotes

There's even a MAGA and capitalization neuron:

MAGA

And here are some of the proclamations the LSTM generates (okay, one of these is a real tweet):

Tweets Tweet

Unfortunately, the LSTM merely learned to ramble like a madman.

Recap

That's it. To summarize, here's what you've learned:

Candidate Memory

Here's what you should save:

Save

And now it's time for that donut.

Thanks to Chen Liang for some of the TensorFlow code I used, Ben Hamner and Kaggle for the Trump dataset, and, of course, Schmidhuber and Hochreiter for their original paper. If you want to explore the LSTMs yourself, feel free to play around!


          Genome-Wide Association Study of Susceptibility Loci for Radiation-Induced Brain Injury.      Cache   Translate Page      

Genome-Wide Association Study of Susceptibility Loci for Radiation-Induced Brain Injury.

J Natl Cancer Inst. 2018 Oct 08;:

Authors: Wang TM, Shen GP, Chen MY, Zhang JB, Sun Y, He J, Xue WQ, Li XZ, Huang SY, Zheng XH, Zhang SD, Hu YZ, Qin HD, Bei JX, Ma J, Mu J, Yao Shugart Y, Jia WH

Abstract
Background: Radiation-induced brain injury is a nonnegligible issue in the management of cancer patients treated by partial or whole brain irradiation. In particular, temporal lobe injury (TLI), a deleterious late complication in nasopharyngeal carcinoma, greatly affects the long-term life quality of these patients. Although genome-wide association studies (GWASs) have successfully identified single nucleotide polymorphisms (SNPs) associated with radiation toxicity, genetic variants contributing to the radiation-induced brain injury have not yet been assessed.
Methods: We recruited and performed follow-up for a prospective observational cohort, Genetic Architecture of Radiotherapy Toxicity and Prognosis, using magnetic resonance imaging for TLI diagnosis. We conducted genome-wide association analysis in 1082 patients and validated the top associations in two independent cohorts of 1119 and 741 patients, respectively. All statistical tests were two-sided.
Results: We identified a promoter variant rs17111237 (A > G, minor allele frequency [MAF] = 0.14) in CEP128 associated with TLI risk (hazard ratio = 1.45, 95% confidence interval = 1.26 to 1.66, Pcombined=3.18 × 10-7) which is in moderate linkage disequilibrium (LD) with rs162171 (MAF = 0.18, R2 = 0.69), the top signal in CEP128 (hazard ratio = 1.46, 95% confidence interval = 1.29-1.66, Pcombined= 6.17 × 10-9). Combining the clinical variables with the top SNP, we divided the patients into different subgroups with varying risk with 5-year TLI-free rates ranging from 33.7% to 95.5%. CEP128, a key component of mother centriole, tightly interacts with multiple radiation-resistant genes and plays an important role in maintaining the functional cilia, which otherwise will lead to a malfunction of the neural network. We found that A > G alteration at rs17111237 impaired the promoter activity of CEP128 and knockdown of CEP128 decreased the clonogenic cell survival of U87 cells under radiation. Noteworthy, 12.7% (27/212) of the GWAS-based associated genes (P < .001) were enriched in the neurogenesis pathway.
Conclusions: This three-stage study is the first GWAS of radiation-induced brain injury that implicates the genetic susceptibility gene CEP128 involved in TLI development and provides the novel insight into the underlying mechanisms of radiation-induced brain injury.

PMID: 30299488 [PubMed - as supplied by publisher]



