
          

USING DEEP LEARNING ALGORITHM TO DIAGNOSE PARKINSON DISEASE WITH HIGH ACCURACY

Early diagnosis of Parkinson's disease, which causes serious and permanent impairment of both motor and non-motor functions, is very important to prevent further deterioration of the patient's condition. In the present study, the Parkinson's Disease (PD) data set from the UCI repository is classified using a deep learning architecture. The architecture is a feed-forward neural network (FFNN) built with Keras in Python; it consists of an input layer, two hidden layers with ReLU (Rectified Linear Unit) activations, and a softmax output layer. The network solves a binary classification problem, since the PD data set has two classes. To classify the PD data set, many tests were performed by splitting the data into test and training sets in different ratios. Classification of the PD data set was achieved with 100% accuracy using 20% of the data as the test set and the remainder as training data, at epoch 30.
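For illustration only, here is a minimal Keras sketch of the kind of feed-forward network described above, assuming a local copy of the UCI Parkinson's data file and made-up hidden-layer sizes (the paper does not publish its exact hyperparameters):

# Minimal sketch: FFNN with two hidden ReLU layers and a softmax output,
# trained on an 80/20 train/test split for 30 epochs.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from tensorflow import keras

df = pd.read_csv("parkinsons.data")                 # assumed local copy of the UCI file
X = df.drop(columns=["name", "status"]).values
y = df["status"].values                             # 1 = Parkinson's, 0 = healthy

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
scaler = StandardScaler().fit(X_train)
X_train, X_test = scaler.transform(X_train), scaler.transform(X_test)

model = keras.Sequential([
    keras.layers.Dense(64, activation="relu", input_shape=(X.shape[1],)),  # hidden layer 1
    keras.layers.Dense(32, activation="relu"),                             # hidden layer 2
    keras.layers.Dense(2, activation="softmax"),                           # two-class output
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.fit(X_train, y_train, epochs=30, batch_size=16, validation_data=(X_test, y_test))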
          

RoboNet: A Dataset for Large-Scale Multi-Robot Learning

In the last decade, we’ve seen learning-based systems provide transformative solutions for a wide range of perception and reasoning problems, from recognizing objects in images to recognizing and translating human speech. Recent progress in deep reinforcement learning (i.e. integrating deep neural networks into reinforcement learning systems) suggests that the same kind of success could be realized in automated decision making domains. If fruitful, this line of work could allow learning-based systems to tackle active control tasks, such as robotics and autonomous driving, alongside the passive perception tasks to which they have already been successfully applied.
          

Hidden Markov Model Based Islanding Prediction in Smart Grids

Islanding is an emerging issue for power system engineers. In every normal or faulty case, the power system passes through a sequence of states. These states are not directly observable, but they follow particular patterns, and those patterns are reflected in the measurements. The unobservable states are referred to as hidden states. This paper develops a novel hidden Markov model (HMM) based algorithm that generates a probability for an islanding event from the phasor measurements obtained from the smart grid. After processing the phasor data, an artificial neural network is trained to provide the emission probabilities of the hidden states; these emission probabilities are used in the evaluation of the HMM. An IEEE 9-bus system is selected to test the algorithm, and several case studies are performed to generate a statistical analysis of the HMM parameters. The posterior probability determines the final decision on the occurrence of islanding, and the accuracy of the algorithm is estimated from this final result. The nondetection zone is also estimated for the HMM-based islanding detection method. It is observed that the HMM-based method can detect islanding proficiently, with a small nondetection zone and in a short time.
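As a rough illustration of the final step (not the authors' implementation), the sketch below takes per-window emission probabilities of the kind a trained ANN could output and runs a normalized forward recursion of a two-state HMM (normal vs. islanded) to obtain the posterior probability of islanding; all numbers are made up:

import numpy as np

# Hypothetical two-state HMM: state 0 = normal operation, state 1 = islanded.
A = np.array([[0.95, 0.05],          # assumed state transition probabilities
              [0.10, 0.90]])
pi = np.array([0.99, 0.01])          # assumed initial state distribution

# Emission probabilities P(measurement window | state), e.g. from the ANN output.
B = np.array([[0.9, 0.1],
              [0.7, 0.3],
              [0.2, 0.8],
              [0.1, 0.9]])

# Normalized forward recursion: alpha[i] = P(state = i | observations so far).
alpha = pi * B[0]
alpha /= alpha.sum()
for b_t in B[1:]:
    alpha = (alpha @ A) * b_t
    alpha /= alpha.sum()

print("posterior probability of islanding:", alpha[1])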
          

A Data Clustering Based Probabilistic Power Flow Method for AC/VSC-MTDC

This paper investigates the role of modified clustering methods in accounting for correlated uncertainties in the probabilistic power flow problem for alternating current/voltage source converter-based multi-terminal direct current (ac/VSC-MTdc) systems. Such uncertainties are introduced by stochastic renewable generation units or variable loads. Wind farms are a type of stochastic generation unit whose presence in power networks is growing dramatically, and as the penetration level of renewable generation increases, so does the number of uncertainty sources. Clustering methods are suitable tools for handling stochastic power system problems because they provide acceptable results with less computation than other common methods such as Monte Carlo simulation. In this paper, two modified methods are used to cluster the uncertainty sets: a modified K-means algorithm and a modified self-organizing map neural network. After clustering, the power flow of the ac/VSC-MTdc system is carried out for the cluster centers of each method. Results of the probabilistic power flow for case studies based on the IEEE 24-bus and IEEE 118-bus test systems are presented and compared with different clustering techniques, Monte Carlo simulation, and Latin hypercube sampling (LHS). The results show that power flow calculation with clustering methods is more accurate than the LHS method and faster than Monte Carlo simulation.
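A minimal sketch of the cluster-then-solve idea (not the paper's modified algorithms): correlated wind and load samples are reduced to a handful of K-means cluster centers, and a deterministic power flow is then run once per center, weighted by cluster size. The sample statistics and the power flow call are placeholders:

import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Assumed correlated uncertainty sources (two wind farms and one load), in MW.
mean = np.array([50.0, 80.0, 120.0])
cov = np.array([[ 60.0,  30.0, -10.0],
                [ 30.0,  90.0, -15.0],
                [-10.0, -15.0,  40.0]])
samples = rng.multivariate_normal(mean, cov, size=5000)

# Reduce the 5000 scenarios to a few representative cluster centers.
km = KMeans(n_clusters=10, n_init=10, random_state=0).fit(samples)
centers = km.cluster_centers_
weights = np.bincount(km.labels_) / len(samples)

def solve_power_flow(injections):
    # Placeholder for a deterministic ac/VSC-MTdc power flow solver.
    return injections.sum()

# Probabilistic result = cluster-weighted combination of per-center solutions.
results = np.array([solve_power_flow(c) for c in centers])
print("expected power-flow result:", np.dot(weights, results))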
          

SDS 317: A Deep Dive Into Neural Nets

An incredible young guest joins this episode after attending DSGO. Edis is a 15-year-old building his own neural networks. We discussed his background, his process of building neural networks from scratch, Kaggle competitions, and the benefits of online data science education. Additional materials: www.superdatascience.com/317


          

rain Launches Intelligent 5G Transport Network in South Africa

South African mobile data network operator rain announced on Friday, 29 November 2019, that it has launched its intelligent 5G transport network in partnership with Huawei. Transport networks can be thought of as the “neural networks” that connect core networks to services. “With Huawei’s E2E solutions and new products, our first 5G users can experience [...]
          

Molecules, Vol. 24, Pages 4393: Construction of Quantitative Structure-Activity Relationship (QSAR) Models to Predict Potency of Structurally Diverse Janus Kinase 2 Inhibitors

Janus kinase 2 (JAK2) inhibitors represent a promising therapeutic class of anticancer agents against many myeloproliferative disorders. Bioactivity data on the pIC50 of 2229 JAK2 inhibitors were employed in the construction of quantitative structure-activity relationship (QSAR) models. The models were built from 100 data splits using decision tree (DT), support vector machine (SVM), deep neural network (DNN) and random forest (RF) algorithms. The predictive power of the RF models was assessed via 10-fold cross-validation, which afforded excellent predictive performance with R2 and RMSE of 0.74 ± 0.05 and 0.63 ± 0.05, respectively. Moreover, the test set showed excellent performance, with R2 of 0.75 ± 0.03 and RMSE of 0.62 ± 0.04. In addition, Y-scrambling was utilized to evaluate the possibility of chance correlation in the predictive model. A thorough analysis of the substructure fingerprint counts was conducted to provide insights into the inhibitory properties of JAK2 inhibitors. Molecular cluster analysis revealed that pyrazine scaffolds have nanomolar potency against JAK2.
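A rough sketch of the random-forest evaluation step (assumed file and column names; the paper's 100-split protocol and fingerprint descriptors are not reproduced here):

import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import KFold, cross_validate

# Assumed table of molecular descriptors/fingerprints with a pIC50 column.
data = pd.read_csv("jak2_descriptors.csv")
X = data.drop(columns=["pIC50"]).values
y = data["pIC50"].values

rf = RandomForestRegressor(n_estimators=500, random_state=0)
cv = KFold(n_splits=10, shuffle=True, random_state=0)
scores = cross_validate(rf, X, y, cv=cv,
                        scoring=("r2", "neg_root_mean_squared_error"))

rmse = -scores["test_neg_root_mean_squared_error"]
print("10-fold CV R2:   %.2f +/- %.2f" % (scores["test_r2"].mean(), scores["test_r2"].std()))
print("10-fold CV RMSE: %.2f +/- %.2f" % (rmse.mean(), rmse.std()))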
          

Sensors, Vol. 19, Pages 5277: Optimized CapsNet for Traffic Jam Speed Prediction Using Mobile Sensor Data under Urban Swarming Transportation

Urban swarming transportation (UST) is a type of road transportation where multiple types of vehicles, such as cars, buses, trucks, motorcycles, and bicycles, as well as pedestrians, are allowed and mixed together on the roads. Predicting traffic jam speed under UST is very different from, and more difficult than, the single-road-network traffic prediction that has been commonly studied in intelligent traffic system (ITS) research. In this research, road network wide (RNW) traffic prediction, which predicts the traffic jam speeds of multiple roads at once by utilizing citizens’ mobile GPS sensor records, is proposed to better predict traffic jams under UST. In order to conduct RNW traffic prediction, a specific data preprocessing step is needed to convert traffic data into an image representing the spatial-temporal relationships across the RNW. In addition, a revised capsule network (CapsNet), named OCapsNet, is proposed; it utilizes nonlinearity functions in the first two convolution layers and modified dynamic routing to optimize the performance of CapsNet. The experiments were conducted using real-world urban road traffic data from Jakarta to evaluate the performance. The results show that OCapsNet outperforms a convolutional neural network (CNN) and the original CapsNet in both accuracy and precision.
          

Deep Learning: GANs and Variational Autoencoders for $25

Expires November 23, 2022 23:59 PST
Buy now and get 86% off

KEY FEATURES

Variational autoencoders and GANs have been two of the most interesting recent developments in deep learning and machine learning. GAN stands for generative adversarial network, where two neural networks compete with each other. Unsupervised learning means you're not trying to map input data to targets, you're just trying to learn the structure of that input data. In this course, you'll learn the structure of data in order to produce more stuff that resembles the original data.

  • Access 41 lectures & 5.5 hours of content 24/7
  • Incorporate ideas from Bayesian Machine Learning, Reinforcement Learning, & Game Theory
  • Discuss variational autoencoder architecture
  • Discover GAN basics

PRODUCT SPECS

Details & Requirements

  • Length of time users can access this course: lifetime
  • Access options: web streaming, mobile streaming
  • Certification of completion not included
  • Redemption deadline: redeem your code within 30 days of purchase
  • Experience level required: all levels, but knowledge of calculus, probability, object-oriented programming, Python, Numpy, linear regression, gradient descent, and how to build a feedforward and convolutional neural network in Theano and TensorFlow is expected
  • All code for this course is available for download here, in the directory unsupervised_class3

Compatibility

  • Internet required

THE EXPERT

The Lazy Programmer is a data scientist, big data engineer, and full stack software engineer. For his master's thesis he worked on brain-computer interfaces using machine learning. These assist non-verbal and non-mobile persons to communicate with their family and caregivers.

He has worked in online advertising and digital media as both a data scientist and a big data engineer, and has built various high-throughput web services around said data. He has created new big data pipelines using Hadoop/Pig/MapReduce, built machine learning models to predict click-through rate and news feed recommender systems using linear regression, Bayesian bandits, and collaborative filtering, and validated the results using A/B testing.

He has taught data science, statistics, machine learning, algorithms, calculus, computer graphics, and physics to undergraduate and graduate students at universities such as Columbia University, NYU, Humber College, and The New School.

Multiple businesses have benefitted from his web programming expertise. He does all the backend (server), frontend (HTML/JS/CSS), and operations/deployment work. Some of the technologies he has used are: Python, Ruby/Rails, PHP, Bootstrap, jQuery (Javascript), Backbone, and Angular. For storage/databases he has used MySQL, Postgres, Redis, MongoDB, and more.

          

Why computational neuroscience and AI will converge


Neural networks have peaked in their ability to deliver effective and meaningful results. There are four big developments that will happen with AI. A crossover with computational neuroscience will happen soon. What will be the impact of this crossover? There will be an increasing focus on how AI algorithms interact with one another.

The post Why computational neuroscience and AI will converge appeared first on JAXenter.


          

EARTHQUAKE EARLY WARNING APP NOW AVAILABLE


By Miriam Raftery

November 30, 2019 (San Diego) – An earthquake early warning app is now available to the public in California through the Berkeley Seismology Lab.  MyShake collects motion data from your phone’s sensors and uses a patented neural network to determine whether that motion fits the model of an earthquake.

Through MyShake, you can have earthquake information at your fingertips, see damage reports shared by citizen scientists like you, and help build a global seismic network. Click here to download the MyShake app from Apple or the Google Play store.


          

Leveraging Micro-Stories to Build Engagement, Inclusion, and Neural Networking in Immunology Education


Storytelling is a highly effective strategy for delivering course content. It can provide real-world contexts and the relevance students desire. Through personal connections to narrative details, anecdotes facilitate the incorporation of content into pre-existing knowledge and neural networks, which enhances retention. In addition, stories can honor students' diverse backgrounds, which builds a sense of belonging and community. In turn, these aspects can drive intrinsic motivation to learn and increase students' alertness in class and overall engagement in the course. Despite the educational power of stories, there often is not enough time to integrate them into curricula. To address this dilemma, faculty can condense stories into micro-stories that require relatively little class time. Many aspects of stories that enhance learning can be leveraged in just a few sentences by focusing on narrative details that engage a variety of cognitive and emotional processes. In particular, the inclusion of multiple sensory descriptions and small details, like locations and names, can provide sufficient context to maintain the value stories provide. Micro-stories can function independently or extend a single theme throughout a course. Presented in this Perspective are examples of micro-stories for concepts in immunology and strategies for developing them. Proposals are made for leveraging micro-stories to enhance student engagement and course community, content retention and retrieval, and satisfaction with immunology courses of all sizes and levels.


          

Training Neural Networks

Neural Networks offer great promise with their ability to "create" algorithms to solve problems - without the programmer knowing how to solve the problem in the first place. Example-based problem solving, if you will. I would expect that if you knew precisely how to solve a particular problem to the same degree, you could certainly do it perhaps many orders of magnitude faster, and possibly at higher quality, by coding the solution directly -- however it's not always easy or practical to know such a so [...]
          

Machine Learning Mortality-Classification in Clinical Documentation with Increased Accuracy in Visual-Based Analyses.


Machine Learning Mortality-Classification in Clinical Documentation with Increased Accuracy in Visual-Based Analyses.

Acta Paediatr. 2019 Nov 24;:

Authors: Slattery SM, Knight DC, Weese-Mayer DE, Grobman WA, Downey DC, Murthy K

Abstract
AIM: The role of machine learning on clinical documentation for predictive outcomes remains undefined. We aimed to compare three neural networks on inpatient providers' notes to predict mortality in neonatal hypoxic-ischemic encephalopathy (HIE).
METHODS: Using Children's Hospitals Neonatal Database, non-anomalous neonates with HIE treated with therapeutic hypothermia were identified at a single-centre. Data were linked with the initial seven days of documentation. Exposures were derived using the databases and applying convolutional and two recurrent neural networks. The primary outcome was mortality. The predictive accuracy and performance measures for models were determined.
RESULTS: The cohort included 52 eligible infants. Most infants survived (n=36, 69%) and 23 had severe HIE (44%). The neural networks performed above baseline and differed in their median accuracy for predicting mortality (p=0.0001): recurrent models with long short-term memory 69% (65, 73%), gated recurrent units 65% (62, 69%), and convolutional models 72% (64, 96%). The convolutional networks' median specificity was 81% (72, 97%).
CONCLUSION: The neural network models demonstrated fundamental validity in predicting mortality using inpatient provider documentation. Convolutional models had high specificity for (excluding) mortality in neonatal HIE. These findings provide a platform for future model training and ultimately tool development to assist clinicians in patient assessments and risk-stratifications.

PMID: 31762098 [PubMed - as supplied by publisher]


          

Deep learning video action recognition professional

Open Position: An established company is looking for a deep learning/AI professional for video-based action detection and recognition. You will need to train neural networks with deep learning tools and techniques to identify figures' actions based on video frame sequences.
          

Handbook of Evolutionary Computation (Computational Intelligence Library)


Many scientists and engineers now use the paradigms of evolutionary computation (genetic algorithms, evolution strategies, evolutionary programming, genetic programming, classifier systems, and combinations or hybrids thereof) to tackle problems that are either intractable or unrealistically time consuming to solve through traditional computational strategies. Recently there have been vigorous initiatives to promote cross-fertilization between the EC paradigms, and also to combine these paradigms with other approaches such as neural networks to create hybrid systems with enhanced capabilities. To address the need for speedy dissemination of new ideas in these fields, and also to assist in cross-disciplinary communications and understanding, Oxford University Press and the Institute of Physics have joined forces to create a major reference publication devoted to EC fundamentals, models, algorithms and applications. This work is intended to become the standard reference resource for the evolutionary computation community. The Handbook of Evolutionary Computation will be available in loose-leaf print form, as well as in an electronic version that combines both CD-ROM and on-line (World Wide Web) access to its contents. Regularly published supplements will be available on a subscription basis.

          

Fresh efforts at Google to understand why an AI system says yes or no launches Explainable AI product

Google has announced a new Explainable AI feature for its cloud platform, which provides more information about the features that cause an AI prediction to come up with its results. Artificial neural networks, which are used by many of today’s machine learning and AI systems, are modelled to some extent on biological brains. One of [...]
          

Curriculum-based Teacher Ensemble for Robust Neural Network Distillation


Curriculum-based Teacher Ensemble for Robust Neural Network Distillation

Panagiotatos, G., Passalis, N., Iosifidis, A., Gabbouj, M. & Tefas, A., September 2019, 2019 27th European Signal Processing Conference (EUSIPCO). IEEE, 5 pages (European Signal Processing Conference).

Research output: peer-reviewed

Original language: English
Title: 2019 27th European Signal Processing Conference (EUSIPCO)
Publisher: IEEE
Number of pages: 5
ISBN (electronic): 978-9-0827-9703-9
ISBN (print): 978-1-5386-7300-3
Status: Published - September 2019
Publication type (OKM): A4 Article in conference proceedings
Event: EUROPEAN SIGNAL PROCESSING CONFERENCE
Duration: 1 January 1900 → …

Publication series

Name: European Signal Processing Conference
ISSN (print): 2219-5491
ISSN (electronic): 2076-1465

Conference

Conference: EUROPEAN SIGNAL PROCESSING CONFERENCE
Period: 1/01/00 → …

          

Multi-Level Dual-Attention Based CNN for Macular Optical Coherence Tomography Classification

In this letter, we propose a multi-level dual-attention model to classify two common macular diseases, age-related macular degeneration (AMD) and diabetic macular edema (DME), from normal macular eye conditions using the optical coherence tomography (OCT) imaging technique. Our approach unifies the dual-attention mechanism at multiple levels of a pre-trained deep convolutional neural network (CNN). It provides a focused learning mechanism by combining multi-level feature-based attention, which focuses on the salient coarser features, with a self-attention mechanism that attends to higher-entropy regions of the finer features. The proposed method enables the network to automatically focus on the relevant parts of the input images at different levels of the feature subspace. This leads to more locally deformation-aware feature generation and classification. The proposed approach does not require pre-processing steps such as extraction of the region of interest, denoising, or retinal flattening, making the network more robust and fully automatic. Experimental results on two macular OCT databases show the superior performance of our proposed approach compared to current state-of-the-art methodologies.
          

A Neural Network-Based Nonlinear Acoustic Echo Canceller

In this letter, we introduce a novel approach for nonlinear acoustic echo cancellation. The proposed approach uses the principle of transfer learning to train a neural network that approximates the nonlinear function responsible for the nonlinear distortions and generalizes this network to different acoustic conditions. The topology of the proposed network is inspired by the conventional adaptive filtering approaches for nonlinear acoustic echo cancellation. The network is trained to model the nonlinear distortions using the conventional error backpropagation algorithm. In deployment, and in order to account for any variation or discrepancy between training and deployment conditions, only a subset of the network's parameters is adapted using the significance-aware elitist resampling particle filter. The proposed approach is evaluated and verified using synthesized nonlinear distortions and real nonlinear distortions recorded by a commercial mobile phone.
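A toy sketch of the overall idea, not the letter's method: a small network is trained offline to model the nonlinear echo path, and at deployment only its last layer is left trainable (plain gradient updates stand in here for the significance-aware elitist resampling particle filter):

import numpy as np
from tensorflow import keras

FRAME = 64  # assumed frame length in samples

# Small network mapping a far-end (loudspeaker) frame to an estimate of the
# nonlinear echo picked up at the microphone.
model = keras.Sequential([
    keras.layers.Dense(128, activation="tanh", input_shape=(FRAME,)),  # nonlinearity model
    keras.layers.Dense(128, activation="tanh"),
    keras.layers.Dense(FRAME, activation=None),                        # linear echo-path stage
])
model.compile(optimizer="adam", loss="mse")

# Offline training on synthesized nonlinear distortions (placeholder data).
x_far = np.random.randn(10000, FRAME).astype("float32")
d_mic = 0.5 * np.tanh(2.0 * x_far)                 # toy nonlinear echo path
model.fit(x_far, d_mic, epochs=5, batch_size=64, verbose=0)

# Deployment: freeze the nonlinear front-end, adapt only the last layer to the
# new acoustic condition, then subtract the estimated echo from the microphone.
for layer in model.layers[:-1]:
    layer.trainable = False
model.compile(optimizer=keras.optimizers.Adam(1e-3), loss="mse")
x_new = np.random.randn(2000, FRAME).astype("float32")
d_new = 0.3 * np.tanh(2.0 * x_new)                 # changed echo-path gain
model.fit(x_new, d_new, epochs=3, batch_size=64, verbose=0)
echo_estimate = model.predict(x_new[:1])           # e = mic - echo_estimate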
          

Data Scientist Uses Deep Learning to Predict BTC Price in Real-Time

LSTM neural networks can purportedly be used to predict crypto prices in real-time, demonstrates data scientist
          

Finest Pies


onion cassette pie, stimulant cherry pie, cowdy-doo pie

People wonder about what it would be like if a super-intelligent AI decided to place all of humanity in a realistic simulation. I wonder what it would be like if the simulation were built by today’s AI instead - whose computing power is somewhere around the level of an earthworm’s.

Specifically, what would the pies be like?

In 2017 and 2018 I gave increasingly powerful neural networks about 2000 names of existing pie recipes, and their imitations included Caribou Meringue, Mur’s Tartless Tart, and Cromberry Yaas.


Caribou Meringue, Mur’s Tartless Tart, and Cromberry Yaas

But there are now even more advanced neural nets, including GPT-2, many times larger and pretrained on millions of pages from the internet. When I used gpt-2-simple to train the smallest version of GPT-2 (124M) on the list of pies, even it was able to come up with plausible (and highly repetitive) pies, many copied from the list I gave it. I can’t blame it for being a smartass and giving me the pies right back - technically this is a perfect solution to its task of “name some pies”.

I have ways of messing with the neural net, however. One thing I can do is increase the chaos level of the text it’s generating (in this case, called the “temperature”), forcing it to go with its less-likely guesses. Another thing I can do is give it weird text to complete. When I’m asking it to generate more text, I can give it a few example pies and ask it to add to the list (the original list of pies I showed it was alphabetical, so if I give it apple pies it tends to respond with more apple pies). OR, I can give it much weirder text. If I give it a bit of Harry Potter fanfiction, it draws on its previous training and can continue the story. But, similarly to the version of GPT-2 I trained on crochet patterns, it tends to twist the stories around to its special interest. In this case, pie.
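For reference, the gpt-2-simple calls involved look roughly like this (file name, step count, temperature, and prompt are placeholders rather than the settings actually used for this post):

import gpt_2_simple as gpt2

# Fine-tune the smallest released GPT-2 model on a plain-text list of pie names.
gpt2.download_gpt2(model_name="124M")
sess = gpt2.start_tf_sess()
gpt2.finetune(sess, "pies.txt", model_name="124M", steps=500)

# Higher temperature = more chaotic guesses; a prefix steers the continuation.
gpt2.generate(sess,
              prefix="Harry glared at Snape.",
              temperature=1.2,
              length=200,
              nsamples=3)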

Prompt: Harry glared at Snape.

Harry glared at Snape.
“No, I’m not even talking about the boy here. You’re talking about the duplicate.”
“Fine. Then I’ll get the rest of the ingredients.”
So then, they got canned fruit and whipped cream and whipped cream and whipped cream and whipped cream and whipped cream and whipped cream and whipped cream.
But one thing was for sure: there was no trace of rum.
There was.

Prompt: Voldemort ran his fingers gently over Darth Vader’s shoulder.

Voldemort ran his fingers gently over Darth Vader’s shoulder. “C-Custard”
“C-Custard”

Once it has steered the conversation, the neural net transitions to listing pies. Although some of them retain a certain flavor.

Voldemort Crumble Pie
Mudblood Mincemeat Pies
Snape Pie With Cheese

And its other pies are indeed more creative, definitely not found in the original training set. They don’t sound exactly delicious to me, but maybe this advanced artificial intelligence knows something we don’t. After all, it read most of the internet, including the cooking blogs.


Hot and sour apple pie, spiced coconut pizza chiffon pie, peach-pickle pie

Hot & Sour Apple Pie
Caramelized-Zucchini Nog
Nachos From Maple Syrup Pie
Syrupy Chicken Pie
Camel Crunch Crunch
Mustard Raisin Pie
Peach-pickle pie
Pie-Bro From Pizza Particles
Iceberg Chiffon Pie, Famous
Finger-To-Finger Meringue Pie
Spiced Coconut Pizza Chiffon Pie

On the other hand, maybe just maybe the neural net doesn’t know what it’s doing. These seem like uniformly a bad idea.


fog crust pecan pie, tweezers, olives, and seafood applie pie, frozen custard pie with three glorious chiles

Fog-Crust Pecan Pie
Tweezers, Olives, and Seafood Apple Pie
Frozen Custard Pie with Three Glorious Chilies
Half-Ton Pie Crust
Fog-Crust Pecan Pie
Pumpkin Ear Pie
Onion Cassette Pie
Tweezers, Olives, and Seafood Apple Pie
Frog-Fudge Pie
Ice-Egg Pie
Bedtime Oil Tart
Half-Tooth Nail Pie
Frozen Custard Pie with Three Glorious Chilies
Must-Fail Pecan Almonds with Pecan Almond Crust
Beef and Cheese Fried Ronald McDonald’s Walnut Chocolate Pie

These neural net inventions miiight be okay if we knew what they actually were. At the very least, approach with caution.


applewool pie, brandywine schnoches, meat pie**

Applewool Pie
Brandywine Schnoches
Pumpkin Pie Dough - Oh Noe
Cowdy-Doo Pie
Dried & Squirred Pumpkin Pie
Morgue Pie
Temptation To Fill A Beef Pies
Woollewinkle Pie
“Vinegar Pie”
Meat Pie**
Stimulant Cherry Pie
Deep Dish Curb And Spoon Pie
Standard Apples-and-Moolah
My Meat Crust (Might As Well)

And these particular examples seem to have outed themselves as the work of a less-than-wholly competent AI.


it's crunchy famous, ahh so good 'n easy meat pie, medieval pastries with liquid crystal

Vessel With Raisin Bran
It’s Crunchy Famous
Pie Crust (Spinach Friend)
Food And Flavors Whoopie Pie
Valet Restaurant’s Chompin Park Pie
Medieval Pastries With Liquid Crystal
Tomato/Banana Pie As I Layered
Mr. Robot’s Sweet Potato Pie
Fluffy Cheese Pie #2 By BestMakesCherryToEggs
Ahh So Good ‘n Easy Meat Pie
Scheduled Coconut Cream Pie
Standard Pie (Panties & Puff Pastry)
Smiled and Cheerful Thanksgiving Pie, Famous
Tarte Barre “Oral” Pie
Robocoin Pie

For some examples of how the neural net managed its transitions from fanfiction to pies more or less gracefully (too long to fit in this blog post), enter your email here.

My book on AI is out, and, you can now get it any of these several ways! Amazon - Barnes & Noble - Indiebound - Tattered Cover - Powell’s


          

Juan Pablo Vielma — Mixed Integer Programming Methods for Machine Learning and Statistics, Dec 2

Abstract: More than 50 years of development have made mixed integer programming (MIP) an extremely successful tool. MIP’s modeling flexibility allows it to describe a wide range of business, engineering, and scientific problems, and, while MIP is NP-hard, many of these problems are routinely solved in practice thanks to state-of-the-art solvers that nearly double their machine-independent speeds every year. In this talk we show how a careful application of MIP modeling techniques can lead to extremely effective MIP-based methods for three problems in machine learning and statistics.

The first problem concerns causal inference of treatment effects in observational studies [1]. For this problem we introduce a MIP-based matching method that directly balances covariates for multi-variate treatments and produces samples that are representative of a target population. We show how using the right MIP formulation for the problem is critical for large data sets, and illustrate the effectiveness of the resulting approach by estimating the effect that the different intensities of the 2010 Chilean earthquake had on educational outcomes. The second problem concerns the design of adaptive questionnaires for consumer preference elicitation [2]. For this problem we introduce an approximate Bayesian method for the design of the questionnaires, which can significantly reduce the variance of the estimates obtained for certain consumer preference parameters. We show how carefully modeling the associated question selection using MIP is crucial to achieving the required near-realtime selection of the next question asked to the consumer. The third problem concerns certifying that a trained neural network is robust to adversarial attacks [3]. For this problem we introduce strong MIP formulations that can significantly reduce the computational time needed to achieve the certification.
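To give a flavor of the third problem, a single ReLU unit y = max(0, w·x + b) with bounded inputs can be encoded exactly in a MIP with one binary variable using the textbook big-M constraints (this is the basic encoding, not the stronger formulations of [3]); a small sketch using the PuLP modeling library, with made-up weights:

import pulp

# Assumed trained weights of one ReLU unit y = max(0, w.x + b), inputs in [-1, 1].
w = [0.6, -0.4, 0.8]
b = 0.1
L, U = -2.0, 2.0   # valid bounds on the pre-activation w.x + b for x in [-1, 1]

prob = pulp.LpProblem("relu_big_M", pulp.LpMaximize)
x = [pulp.LpVariable(f"x{i}", lowBound=-1, upBound=1) for i in range(3)]
y = pulp.LpVariable("y", lowBound=0)
z = pulp.LpVariable("z", cat="Binary")          # z = 1 iff the unit is active
prob += 1.0 * y                                 # objective: maximize the unit's output

pre = pulp.lpSum(w[i] * x[i] for i in range(3)) + b
prob += y >= pre                                # active side: y matches w.x + b
prob += y <= pre - L * (1 - z)                  # tight when z = 1
prob += y <= U * z                              # forces y = 0 when z = 0

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print("max ReLU output:", pulp.value(y), "at x =", [pulp.value(v) for v in x])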

[1] Building Representative Matched Samples with Multi-valued Treatments in Large Observational Studies. M. Bennett, J. P. Vielma and J. R. Zubizarreta. Submitted for publication, 2019. arXiv:1810.06707

[2] Ellipsoidal methods for adaptive choice-based conjoint analysis. D. Saure and J. P. Vielma. Operations Research 67, 2019. pp. 295-597.

[3] Strong mixed-integer programming formulations for trained neural networks. R. Anderson, J. Huchette, C. Tjandraatmadja and J. P. Vielma. In A. Lodi and V. Nagarajan, editors, Proceedings of the 20th Conference on Integer Programming and Combinatorial Optimization (IPCO 2019), Lecture Notes in Computer Science 11480, 2019. pp. 27-42.

Bio: Juan Pablo Vielma is the Richard S. Leghorn (1939) Career Development Associate Professor at MIT Sloan School of Management and is affiliated to MIT’s Operations Research Center. Dr. Vielma has a B.S. in Mathematical Engineering from University of Chile and a Ph.D. in Industrial Engineering from the Georgia Institute of Technology. His current research interests include the theory and practice of mixed-integer mathematical optimization and applications in energy, natural resource management, marketing and statistics. In January of 2017 he was named by President Obama as one of the recipients of the Presidential Early Career Award for Scientists and Engineers (PECASE). Some of his other recognitions include the NSF CAREER Award and the INFORMS Computing Society Prize. He is currently an associate editor for Operations Research and Operations Research Letters, a member of the board of directors of the INFORMS Computing Society, and a member of the NumFocus steering committee for JuMP.
          

Blaize Emerges from Stealth to Transform AI Computing


Blaize, formerly known as Thinci, unveils the first true Graph-Native silicon architecture and software platform built to process neural networks and enable AI applications with unprecedented ...


          

Introducing knowledge distillation in a multi-task neural network

Hernández Munuera, Alejandro (2019). Introducing knowledge distillation in a multi-task neural network. Tesis (Master), E.T.S. de Ingenieros Informáticos (UPM) .
          

HPR2955: Machine Learning / Data Analysis Basics


In this episode, I talk about different techniques that we can use to predict the outcome of some question depending on input features.

The different techniques I will go through start with ZeroR and OneR, which create a baseline for the rest of the methods.

Next up, we have the Naive Bayes classifier that is simple but powerful for some applications.

Nearest neighbor and decision trees are next up; they require more training but are very efficient when you infer results.

The multi-layer perceptron (MLP) is the first technique that is close to the ones we usually see in the machine learning frameworks used today. But it is just a precursor to the convolutional neural network (CNN) because of its size requirements: MLPs have the same size for all the hidden layers, which makes them unfeasible for larger networks.

CNNs, on the other hand, use subsampling to shrink the layer maps, reducing the size of the network without reducing the accuracy of the predictions.
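To make the baseline idea concrete, here is a small scikit-learn comparison (ZeroR corresponds to the most-frequent dummy classifier; OneR has no direct scikit-learn equivalent and is omitted; the dataset is just a stand-in):

from sklearn.datasets import load_breast_cancer
from sklearn.dummy import DummyClassifier
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)

models = {
    "ZeroR (majority class)": DummyClassifier(strategy="most_frequent"),
    "Naive Bayes": GaussianNB(),
    "Nearest neighbor": KNeighborsClassifier(),
    "Decision tree": DecisionTreeClassifier(random_state=0),
    "MLP": MLPClassifier(max_iter=1000, random_state=0),
}

# Compare each technique against the ZeroR baseline with 5-fold cross-validation.
for name, model in models.items():
    acc = cross_val_score(model, X, y, cv=5).mean()
    print(f"{name:24s} accuracy: {acc:.3f}")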


          

Is Technical Analysis Prophetic or Preposterous? We Asked 7 Crypto Traders

CoinDesk     •     December 2, 2019, 9:00 am
Technical analysis, or the art of divining future price movements from historical data, divides opinion in the crypto world. So does it offer real insight into the markets?
99 Bitcoins     •     December 2, 2019, 6:00 am
Worth nearly $50 million in Ethereum, 342,000 ether was hacked from South Korea’s Upbit exchange.…
Cointelegraph.com News     •     December 2, 2019, 8:44 am
LSTM neural networks can purportedly be used to predict crypto prices in real-time, demonstrates data scientist
https://www.coindebate.com/

          

Haykin Neural Networks Solution

Haykin Neural Networks Solution
          


Multi-language identification and transcription in Video Indexer


Multi-language speech transcription was recently introduced into Microsoft Video Indexer at the International Broadcasters Conference (IBC). It is available as a preview capability and customers can already start experiencing it in our portal. More details on all our IBC2019 enhancements can be found here.

Multi-language videos are common media assets in the globalization context: global political summits, economic forums, and sports press conferences are examples of venues where speakers use their native language to convey their statements. Those videos pose a unique challenge for companies that need to provide automatic transcription for large volumes of video archives. Automatic transcription technologies expect users to explicitly specify the video language in advance in order to convert speech to text. This manual step becomes a scalability obstacle when transcribing multi-language content, as one would have to manually tag audio segments with the appropriate language.

Microsoft Video Indexer provides a unique capability of automatic spoken language identification for multi-language content. This solution allows users to easily transcribe multi-language content without going through tedious manual preparation steps before triggering it. In doing so, it can save anyone with a large archive of videos both time and money, and enable discoverability and accessibility scenarios.

Multi-language audio transcription in Video Indexer

The multi-language transcription capability is available as part of the Video Indexer portal. Currently, it supports four languages including English, French, German and Spanish, while expecting up to three different languages in an input media asset. While uploading a new media asset you can select the “Auto-detect multi-language” option as shown below.

1.	A new multi-language option available in the upload page of Video Indexer portal

Our application programming interface (API) supports this capability as well by enabling users to specify 'multi' as the language in the upload API. Once the indexing process is completed, the index JavaScript object notation (JSON) will include the underlying languages. Refer to our documentation for more details.

Additionally, each instance in the transcription section will include the language in which it was transcribed.

2.	A transcription snippet from Video Indexer timeline presenting different language segments

Customers can view the transcript and identified languages by time, jump to the specific places in the video for each language, and even see the multi-language transcription as video captions. The result transcription is also available as closed caption files (VTT, TTML, SRT, TXT, and CSV).


Methodology

Language identification from an audio signal is a complex task. Acoustic environment, speaker gender, and speaker age are among a variety of factors that affect this process. We represent the audio signal using a visual representation, such as a spectrogram, assuming that different languages induce unique visual patterns which can be learned using deep neural networks.

Our solution has two main stages to determine the languages used in multi-language media content. First, it employs a deep neural network to classify audio segments with very high granularity, in other words, segments of just a few seconds. While a good model will successfully identify the underlying language, it can still misidentify some segments due to similarities between languages. Therefore, we apply a second stage that examines these misses and smooths the results accordingly.
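A toy illustration of that two-stage idea, with made-up per-segment predictions and simple majority-vote smoothing (not necessarily what Video Indexer uses internally):

import numpy as np

# Hypothetical per-segment language labels from the first-stage classifier
# (0 = English, 1 = French), including a few likely misidentifications.
segments = np.array([0, 0, 0, 1, 0, 0, 1, 1, 1, 0, 1, 1])

def smooth(labels, window=3):
    # Replace each label with the majority vote over a sliding window.
    half = window // 2
    padded = np.pad(labels, half, mode="edge")
    return np.array([np.bincount(padded[i:i + window]).argmax()
                     for i in range(len(labels))])

print("raw:     ", segments)   # isolated flips at positions 3 and 9
print("smoothed:", smooth(segments))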

3.	A new insight pane showing the detected spoken languages and their exact occurrences on the timeline

Next steps

We introduced a differentiated capability for multi-language speech transcription. With this unique capability in Video Indexer, you can get more out of the content of your videos, as it allows you to immediately start searching across videos for segments in different languages. During the coming few months, we will be improving this capability by adding support for more languages and improving the model’s accuracy.

For more information, visit Video Indexer’s portal or the Video Indexer developer portal, and try this new capability. Read more about the new multi-language option and how to use it in our documentation.

Please use our UserVoice to share feedback and help us prioritize features or email visupport@microsoft.com with any questions.


          

Graduate student seminar "Deep Learning for Legal Texts", 25/11 at 16:00, Room A56


Date: Monday 25/11, Room: A56

Speaker: Ηλίας Χαλκίδης (Ilias Chalkidis), PhD candidate, Athens University of Economics and Business

Title: Deep Neural Networks for Information Mining from Legal Texts



          

How neural networks work—and why they’ve become a big business

Neural networks have grown from an academic curiosity to a massive industry.
          

All-optical diffractive neural networks process broadband light

Diffractive deep neural network is an optical machine learning framework that blends deep learning with optical diffraction and light-matter interaction to engineer diffractive surfaces that collectively perform optical computation at the speed of light. A diffractive neural network is first designed in a computer using deep learning techniques, followed by the physical fabrication of the designed layers of the neural network using e.g., 3-D printing or lithography. Since the connection between the input and output planes of a diffractive neural network is established via diffraction of light through passive layers, the inference process and the associated optical computation does not consume any power except the light used to illuminate the object of interest.

          

Machine-Learning Microscope Speeds Malaria Diagnosis

System combines deep neural network with programmable LED array to identify malaria parasite faster and more accurately in sparse samples. - Source: www.osa-opn.org


