
          Senior Developer - Computer Vision - IRIS Research & Development - Mississauga, ON
IRIS Research & Development is pushing the boundaries between technology and public infrastructure by leveraging AI, IoT, Computer Vision and Deep Learning....
From Indeed - Wed, 08 Aug 2018 12:23:05 GMT - View all Mississauga, ON jobs
          Document worth reading: “Mathematics of Deep Learning”
Recently there has been a dramatic increase in the performance of recognition systems due to the introduction of deep architectures …



          Embedded ML Developer - Erwin Hymer Group North America - Virginia Beach, VA
NVIDIA VisionWorks, OpenCV. Game Development, Accelerated Computing, Machine Learning/Deep Learning, Virtual Reality, Professional Visualization, Autonomous...
From Indeed - Fri, 22 Jun 2018 17:57:58 GMT - View all Virginia Beach, VA jobs
          Data Scientist - ZF - Northville, MI
Deep Learning, NVIDIA, NLP). You will run cost-effective data dive-ins on complex high volume data from a variety of sources and develop data solutions in close...
From ZF - Thu, 21 Jun 2018 21:14:15 GMT - View all Northville, MI jobs
          Artificial Intelligence Deep Learning Engineer - Advantest - San Jose, CA
Work with Advantest business units to incorporate developed AI/Deep Learning technologies into products and/or internal processes....
From Advantest - Mon, 06 Aug 2018 17:45:44 GMT - View all San Jose, CA jobs
          Solutions Architect - Autonomous Driving - NVIDIA - Santa Clara, CA
Be an internal champion for Deep Learning and HPC among the Nvidia technical community. You will assist field business development in guiding the customer...
From NVIDIA - Wed, 18 Jul 2018 07:54:45 GMT - View all Santa Clara, CA jobs
          Comment on Deep Learning And The Future by Deep Learning Training In Pune
Thank you for sharing information about Deep Learning
          Nut4Health: blockchain to end malnutrition

Big data, blockchain, deep learning… Many people wonder: what use are these technologies in everyday life?

One of their applications is solving problems as serious as famine. That is precisely the goal of Nut4Health, a technology platform based on blockchain "to end malnutrition; as ambitious as it sounds," says Borja Monreal, co-promoter of this project together with Blanca Pérez. "The two of us have launched several initiatives together within the NGO SIC4Change, from which we run technology projects to solve social problems. Our goal is not just to create viable companies: it is to show that profitable social enterprises can be built," he continues.

Nut4Health transforms interventions in countries suffering from malnutrition by combining blockchain technology with local development. It sets up an incentive system to drive the search for and identification of at-risk cases and to implement early prevention. To do so, it turns the volunteers in charge of the initial screening into diagnosis professionals, paying them for the results they obtain. Payments are automated with smart contracts, which allow complete transparency over the funds and guarantee the traceability of each transaction. Likewise, automating diagnosis makes it possible to minimize human error and to integrate the data for later analysis, both in treatment and in the health system's decision-making processes.

"Nut4Health was born from the pain of taking part in food emergencies and realizing that the methods and mechanisms being deployed were failing spectacularly," says Borja. "So we decided to undertake an exhaustive analysis of the whole system, from top to bottom: from those who put up the money, to the institutions in charge of implementing the interventions, to the technicians and volunteers working in the field. After six months of work alongside dozens of experts, we managed to sketch out Nut4Health," the entrepreneur explains.

To give the project its final push, Borja and Blanca applied to take part in the Espacio Coworking EOI - SPEGC in Gran Canaria, an initiative of the Government of Spain run by the Escuela de Organización Industrial (EOI) and the Sociedad de Promoción Económica de Gran Canaria to support the creation of innovative startups, co-financed by the European Social Fund. In Monreal's words: "We knew we needed support from experts in areas where we lacked expertise. And we had to find it outside our own circle: we already knew almost everything people like us could tell us, so we needed to test our ideas against people on a different wavelength. It worked."

Over five months, the Nut4Health team has received specialized training in the essentials of starting a business and has worked side by side with EOI entrepreneurship experts, who have given them the keys to driving this initiative forward.

Nut4Health is currently in development. "We have reached a collaboration agreement with Acción Contra el Hambre and presented the project to international and national funding organizations. We are in advanced talks to launch a pilot in Guatemala at the end of the year," Monreal explains.

This entrepreneur is very clear about his initiative's social goal: "We believe in technology as an engine of change. But we believe far more in the change that people must drive within organizations, especially companies, to move social goals to the center of the business model. If we do not take into account the environment in which we operate, there will be no future for anyone. Everything else is short-term thinking."


          Comment on User Research: Deep Learning for Gravitational Wave Detection with the Wolfram Language by David Collins
The mathematical basis for the Experiment-based Standard Model. I would like to show you further evidence that the Universe is quantized. I have used a Remainder Equation of Constants values in my work on 9! factorial (362880), a Fractal Harmonic Method with equations using the quantum Standard Model experimental CODATA & WIKI values. Examples are:

Matrix9! 362880/6.67 Newton Gravity Constant: 362880/6.67 = 54404.7976011994; 54404.7976011994/.7976011994 = 68210.52631581512; 68210.52631581512/.52631581512 = 129600.00 M9! Newton Q-Gravity Constant; 129600/600 = 216; 216/6 = 36; 36/6 = 6, the smallest Perfect Number Constant.

Planck length 1.61624 × 10-35 m - Matrix9! Quantum-Planck length 1.61624799572421 × 10-35 m: 362880/1.61624799572421 = 224520.000000000; 224520/20 = 11226; 11226/6 = 1871 M9! Quantum Gravity Planck length Constant.

M9! P-Length 1.61624799572421 = 2.585996793158735 nm; 362880/2.585996793158735 = 140325.000000 M9! Planck Length-Nanometer C.; 140325/25 = 5613.00000000000; 5613/3 = 1871.00000000000 M9! Higgs-Planck Length Constant.

Matrix9! 362880/125.36 Higgs Constant: 362880/125.36 = 2894.703254626675; 2894.703254626675/.703254626675 = 4116.152450091771; 4116.152450091771/.152450091771 = 27000.0000 M9! Higgs Field M-F S-T C.

1.274 MeV/c2 M9! Charm Quark. Matrix9! 362880/1.274 M9! Charm Quark Constant: 362880/1.274 = 284835.16483516485; 284835.16483516485/.16483516485 = 1728000.0000000.

Matrix9! 362880/172 M9! Top Quark Constant: 362880/172 = 2109.767441860465; 2109.767441860465/.767441860465 = 2749.090909090909; 2749.09090909090909/.09090909090909 = 30240.000000000.

Matrix9! 362880/137.036 Fine Structure Constant: 362880/137.036 = 2648.063282641058; 2648.063282641058/.063282641058 = 41845.0184501845; 41845.0184501845/.018450184501845 = 2268000.0000000 M9! Constant; 2268000/8000 = 283.5; 283.5/.5 = 567; 2268000/567 = 4000. And many more. This would be very inspirational for the students. Thanks xxxxxxxxxx

Gravitational Coupling Constant 1.7518 * 10^-45 (wiki). Matrix9! 362880/17518 Gravitational Coupling Constant: 362880/17518 = 20.71469345815733; 20.71469345815733/.71469345815733 = 28.9840255591052; 28.9840255591052/.9840255591052 = 29.45454545454545; 29.45454545454545454/.45454545454545454 = 64.8; 64.8/.8 = 81 M9! Constant; 81/9 = 9 Matrix9! Constant.

Matrix9! 362880/1.50659133709981 Mandelbrot Fractal Constant: 800/531 = 1.50659133709981; 362880/1.50659133709981 = 240861.600000000; 240861.6/.6 = 401436.000000000; 401436/6 = 66906.0000000000; 66906/6 = 11151.00000000000; 11151/531 = 21 Matrix9! Constant. Thanks
          Best Aggressive Stocks Based on Deep Learning: Returns up to 26.41% in 3 Days


Package Name: Risk-Conscious - Aggressive Stocks Forecast
Recommended Positions: Long
Forecast Length: 3 Days (08/03/2018 - 08/07/2018)
I Know First Average: 2.70%

Read The Full Forecast



The post Best Aggressive Stocks Based on Deep Learning: Returns up to 26.41% in 3 Days appeared first on Stock Forecast Based On a Predictive Algorithm | I Know First |.


          Deep Learning Engineer - Devoteam - Randstad
Unique culture where people speak frankly, have great mutual respect and are united by a genuine interest in technology. Is Deep Learning your competency realm?...
From Devoteam - Mon, 06 Aug 2018 13:27:24 GMT - View all jobs in Randstad
          Exploring Deep Learning for Efficient and Reliable Mobile Sensing

          Deep Learning Based Inference of Private Information Using Embedded Sensors in Smart Devices
Smart mobile devices and mobile apps have been rolling out at swift speed over the last decade, turning these devices into convenient and general-purpose computing platforms. Sensory data from smart devices are important resources for mobile services, and they are regarded as innocuous information that can be obtained without user permissions. In this article, we show that this seemingly innocuous information could cause serious privacy issues. First, we demonstrate that users' tap positions on the screens of smart devices can be identified from sensory data using deep learning techniques. Second, we show that tap-stream profiles can be collected for each type of app, so that a user's app usage habits can be accurately inferred. In our experiments, the sensory data and mobile app usage information of 102 volunteers were collected. The results demonstrate that the prediction accuracy of tap position inference can reach at least 90 percent using convolutional neural networks. Furthermore, based on the inferred tap position information, users' app usage habits and passwords may be inferred with high accuracy.
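
As a rough sketch of the kind of model the abstract describes (the window length, sensor channels, and grid size below are illustrative assumptions, not the paper's exact setup), a small convolutional network can map a window of motion-sensor samples to a screen-region class:

    import tensorflow as tf

    # Toy 1D CNN: a 128-sample window of 6 motion-sensor channels
    # (3-axis accelerometer + 3-axis gyroscope) -> one of 24 screen regions.
    model = tf.keras.Sequential([
        tf.keras.layers.Conv1D(32, 5, activation="relu", input_shape=(128, 6)),
        tf.keras.layers.MaxPooling1D(2),
        tf.keras.layers.Conv1D(64, 5, activation="relu"),
        tf.keras.layers.GlobalAveragePooling1D(),
        tf.keras.layers.Dense(24, activation="softmax"),  # e.g. a 4x6 tap grid
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
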
          Multistage and Elastic Spam Detection in Mobile Social Networks through Deep Learning
While mobile social networks (MSNs) enrich people's lives, they also bring many security issues. Many attackers spread malicious URLs through MSNs, posing serious threats to users' privacy and security. Researchers have made great efforts to provide users with a secure social environment. The majority of existing work deploys a detection system on the server and classifies messages or users in MSNs through graph-based algorithms, machine learning, or other methods. However, as instant messaging services, MSNs continually generate a large amount of user data, and it is difficult for existing detection mechanisms to achieve real-time detection in practice without affecting the user experience. To realize real-time message detection in MSNs, we can either build more powerful server clusters or improve the utilization of computing resources. Assuming that the computing resources of servers are limited, we use edge computing to improve their utilization. In this article, we propose a multistage and elastic detection framework based on deep learning, which sets up a detection system on the mobile terminal and the server, respectively. Messages are first screened on the mobile terminal, and the screening results are then forwarded to the server along with the messages. We also design a detection queue, according to which the server can detect messages elastically when computing resources are limited, so that more computing resources go to the most suspicious messages. We evaluate our detection framework on a Sina Weibo dataset. The experiment results show that our framework improves the utilization of computing resources and realizes real-time detection with a high detection rate at a low false positive rate.
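
A minimal sketch of the elastic-queue idea (the scoring function and budget below are invented stand-ins, not the paper's models): messages are scored cheaply on the terminal, and the server spends its limited budget on the most suspicious messages first.

    import heapq

    def device_score(message):
        # Stand-in for the lightweight on-terminal detector:
        # fraction of tokens drawn from a tiny suspicious-word list.
        suspicious = {"free", "prize", "click", "http"}
        tokens = message.lower().split()
        return sum(t in suspicious for t in tokens) / max(len(tokens), 1)

    messages = ["click here for a free prize http://spam.example", "lunch at noon?"]
    queue = []
    for msg in messages:
        heapq.heappush(queue, (-device_score(msg), msg))  # max-heap via negation

    server_budget = 1  # the server can afford one deep scan this cycle
    while queue and server_budget > 0:
        neg_score, msg = heapq.heappop(queue)
        server_budget -= 1
        # here the heavier server-side deep model would re-classify msg
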
          A Dropconnect Deep Computation Model for Highly Heterogeneous Data Feature Learning in Mobile Sensing Networks
The deep computation model, a tensor deep learning model, outperforms multi-modal deep learning models for feature learning on heterogeneous data. However, the deep computation model generalizes poorly to small heterogeneous data sets, since it typically requires many training objects to learn its parameters. In this article, we propose a dropconnect deep computation model (DDCM) for highly heterogeneous data feature learning in mobile sensing networks. Specifically, the dropconnect technique is used to improve the generalization of the large fully-connected layers in the deep computation model on small heterogeneous data sets. Furthermore, rectified linear units (ReLU) are used as the activation function to reduce computation and prevent overfitting. Finally, we compare the classification accuracy and the execution time for learning the parameters between our model and the traditional deep computation model on two highly heterogeneous data sets. Results show that our model achieves 2 percent higher classification accuracy and runs more efficiently than the deep computation model, demonstrating the potential of our proposed model for highly heterogeneous data learning in mobile sensing networks.
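
For reference, the dropconnect idea itself (randomly dropping individual weights rather than activations during training) can be sketched as a custom layer; this is a generic illustration, not the authors' tensor-based model:

    import tensorflow as tf

    class DropConnectDense(tf.keras.layers.Layer):
        """Fully-connected layer whose weights (not activations) are
        randomly dropped during training, as in DropConnect."""
        def __init__(self, units, drop_prob=0.5, **kwargs):
            super().__init__(**kwargs)
            self.units, self.drop_prob = units, drop_prob

        def build(self, input_shape):
            self.w = self.add_weight(shape=(input_shape[-1], self.units),
                                     initializer="glorot_uniform", trainable=True)
            self.b = self.add_weight(shape=(self.units,),
                                     initializer="zeros", trainable=True)

        def call(self, inputs, training=False):
            w = self.w
            if training:
                keep = tf.cast(tf.random.uniform(tf.shape(w)) >= self.drop_prob,
                               w.dtype)
                w = w * keep / (1.0 - self.drop_prob)  # rescale surviving weights
            return tf.nn.relu(tf.matmul(inputs, w) + self.b)  # ReLU, as in the paper
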
          Vehicle Safety Improvement through Deep Learning and Mobile Sensing
Information about vehicle safety, such as the driving safety status and the road safety index, is of great importance for protecting people and supporting safe driving route planning. Despite some research on driving safety analysis, the accuracy and granularity of driving safety assessment are both very limited. Also, the problem of precisely and dynamically predicting road safety throughout a city has not been sufficiently studied and remains open. With the proliferation of sensor-equipped vehicles and smart devices, a huge amount of mobile sensing data provides an opportunity to conduct vehicle safety analysis. In this article, we first discuss mobile sensing data collection in VANETs and then identify two main challenges in vehicle safety analysis in VANETs: driving safety analysis and road safety analysis. For each issue, we review and classify the state-of-the-art vehicle safety analysis techniques into different categories; for each category, a short description is given, followed by a discussion of limitations. In order to improve vehicle safety, we propose a new deep learning framework (DeepRSI) to conduct real-time road safety prediction from the data mining perspective. Specifically, the proposed framework considers the spatio-temporal relationship of vehicle GPS trajectories and external environment factors. The evaluation results demonstrate the advantages of our proposed scheme over other methods by utilizing mobile sensing data collected in VANETs.
          Urban Traffic Prediction from Mobility Data Using Deep Learning
Traffic information is of great importance for urban cities, and accurate prediction of urban traffic has been pursued for many years. Urban traffic prediction aims to exploit sophisticated models to capture hidden traffic characteristics from substantial historical mobility data and then use the trained models to predict future traffic conditions. Due to its powerful capabilities for representation learning and feature extraction, emerging deep learning has become a potent alternative for such traffic modeling. In this article, we envision the potential and broad usage of deep learning in predictions of various traffic indicators, for example, traffic speed, traffic flow, and accident risk. In addition, we summarize and analyze some early attempts that have achieved notable performance. By discussing these existing advances, we propose two future research directions to improve the accuracy and efficiency of urban traffic prediction on a large scale.
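
As a concrete, if simplistic, example of this style of model (the window length and single speed channel are assumptions, not the article's setup), a recurrent network can predict the next traffic-speed reading from the previous hour of 5-minute readings:

    import tensorflow as tf

    # Predict the next 5-minute average speed from the previous 12 readings.
    model = tf.keras.Sequential([
        tf.keras.layers.LSTM(64, input_shape=(12, 1)),
        tf.keras.layers.Dense(1),  # next-step speed
    ])
    model.compile(optimizer="adam", loss="mse")
    # model.fit(windows, next_values) with shapes (N, 12, 1) and (N, 1)
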
          Robust Mobile Crowd Sensing: When Deep Learning Meets Edge Computing
The emergence of mobile crowd sensing (MCS) technologies provides a cost-efficient solution to accommodate large-scale sensing tasks. However, despite the potential benefits of MCS, several critical issues remain to be solved, such as the lack of incentive-compatible mechanisms for recruiting participants, the lack of data validation, and high traffic load and latency. This motivates us to develop robust mobile crowd sensing (RMCS), a framework that integrates deep learning based data validation and edge computing based local processing. First, we present a comprehensive state-of-the-art literature review. Then, the conceptual design architecture of RMCS and practical implementations are described in detail. Next, a case study of smart transportation is provided to demonstrate the feasibility of the proposed RMCS framework. Finally, we identify several open issues and conclude the article.
          Machine Learning in Node.js With TensorFlow.js

TensorFlow.js is a new version of the popular open-source library that brings deep learning to JavaScript. Developers can now define, train, and run machine learning models using the high-level library API.

Pre-trained models mean developers can now easily perform complex tasks like visual recognition, music generation, or human pose detection with just a few lines of JavaScript.

TensorFlow.js started as a front-end library for web browsers, but recent updates added experimental support for Node.js. This allows TensorFlow.js to be used in backend JavaScript applications without having to use Python.

Reading about the library, I wanted to test it out with a simple task…

Use TensorFlow.js to perform visual recognition on images using JavaScript from Node.js

Unfortunately, most of the documentation and example code uses the library in a browser. The project utilities provided to simplify loading and using pre-trained models have not yet been extended with Node.js support. Getting this working meant spending a lot of time reading the TypeScript source files for the library. :-1:

However, after a few days' hacking, I managed to get this completed! Hurrah!

Before we dive into the code, let’s start with an overview of the different TensorFlow libraries.
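As a point of comparison only (this is the Python API, not the TensorFlow.js one the article uses, and "cat.jpg" is a placeholder path), the same visual-recognition task looks like this with a pre-trained network in Python:

    import numpy as np
    import tensorflow as tf

    # Classify an image with a pre-trained ImageNet model: the Python
    # analogue of the Node.js task described above.
    model = tf.keras.applications.MobileNetV2(weights="imagenet")

    img = tf.keras.preprocessing.image.load_img("cat.jpg", target_size=(224, 224))
    x = tf.keras.preprocessing.image.img_to_array(img)[np.newaxis, ...]
    x = tf.keras.applications.mobilenet_v2.preprocess_input(x)

    preds = model.predict(x)
    for _, label, score in tf.keras.applications.mobilenet_v2.decode_predictions(preds, top=3)[0]:
        print(f"{label}: {score:.2f}")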


          Choosing a Neural Network
Another excellent piece from Jason; I suggest you sign up for his service:

Jason Brownlee writes:   What neural network is appropriate for your predictive modeling problem?

It can be difficult for a beginner to the field of deep learning to know what type of network to use. There are so many types of networks to choose from and new methods being published and discussed every day.

To make things worse, most neural networks are flexible enough that they work (make a prediction) even when used with the wrong type of data or prediction problem.

In this post, you will discover the suggested use for the three main classes of artificial neural networks.

After reading this post, you will know:

- Which types of neural networks to focus on when working on a predictive modeling problem.
- When to use, not use, and possibly try using an MLP, CNN, and RNN on a project.
- ... To consider the use of hybrid models and to have a clear idea of your project goals before selecting a model.

Let’s get started.  ... "
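
To make the three classes concrete, here is a minimal sketch of each in Keras (input shapes and layer sizes are arbitrary placeholders, not recommendations from the post):

    import tensorflow as tf

    # MLP: tabular / fixed-length vector inputs
    mlp = tf.keras.Sequential([
        tf.keras.layers.Dense(64, activation="relu", input_shape=(20,)),
        tf.keras.layers.Dense(1),
    ])

    # CNN: image (or other grid-structured) inputs
    cnn = tf.keras.Sequential([
        tf.keras.layers.Conv2D(16, 3, activation="relu", input_shape=(28, 28, 1)),
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(10, activation="softmax"),
    ])

    # RNN: sequence inputs (time series, text)
    rnn = tf.keras.Sequential([
        tf.keras.layers.LSTM(32, input_shape=(100, 8)),
        tf.keras.layers.Dense(1),
    ])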


          Impact Of IP On Artificial Intelligence SoCs
Deep learning applications will call for specialized IP in the form of new processing and memory architectures.
          These 4 Antivirus Tools Are Using AI to Protect Your System

The future of antivirus protection is exciting. Much like our cars, trains, and boats, the future of antivirus runs on artificial intelligence. AI technology is one of the fastest-growing sectors around the world, and security researchers are continually evaluating and integrating the technology into their consumer products.

Consumer antivirus products with AI or machine learning elements are appearing thick and fast. Does your next antivirus subscription need to include AI, or is it just another security buzzword? Let’s take a look.

Traditional Antivirus vs. AI Antivirus

The term “artificial intelligence” once conjured fantastical images of futuristic technology, but AI is now a reality. To understand what AI antivirus is, you need to understand how traditional antivirus works.

Traditional Antivirus

A traditional antivirus uses file and data signatures, plus pattern analysis, to compare potentially malicious activity to previous instances. That is, the antivirus knows what a malicious file looks like and can move swiftly to stop such files from infecting your system, should you pick one up. That's a very basic explanation; you can read more about how scanning works, and which scan type to use, in "The 3 Types of Antivirus Scans and When to Use Each One."

The antivirus on your system works well, don't get me wrong. However, the number of malware attacks continues to rise, and security researchers regularly discover extremely advanced malware variants, such as Mylobot. Furthermore, some traditional or legacy antivirus solutions cannot compete with advanced threats such as the devastating WannaCry ransomworm, or the Petya ransomware that encrypts your Master Boot Record.

As the threat landscape shifts, so must the antivirus detection mechanisms.

AI Antivirus

AI antivirus (or, in some cases, machine learning antivirus―more on this distinction in a moment) works differently. There are a few different approaches, but broadly, an AI antivirus learns about specific threats within its network environment and executes defensive actions without prompting.

AI and machine learning antivirus products leverage sophisticated mathematical algorithms, combined with data from other deployments, to understand what the security baseline is for a given system. They also learn how to react to files that step outside that window of normal functionality.
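
One common way to operationalize such a "baseline of normal behavior" is classic anomaly detection. The sketch below uses scikit-learn's IsolationForest on made-up activity features; the feature set and numbers are assumptions for illustration, not any vendor's implementation:

    import numpy as np
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(0)
    # Hypothetical per-process features: [syscalls/s, files/min, connections, cpu%]
    baseline = rng.normal(loc=0.3, scale=0.1, size=(1000, 4))  # "normal" activity

    detector = IsolationForest(contamination=0.01, random_state=0).fit(baseline)

    event = np.array([[0.95, 0.90, 0.99, 0.97]])  # far outside the learned baseline
    print("suspicious" if detector.predict(event)[0] == -1 else "normal")
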

Machine Learning vs. Artificial Intelligence

Another important distinction in the future of antivirus is between machine learning algorithms and artificial intelligence. The two terms are sometimes used interchangeably, but they are not the same thing.

Artificial Intelligence (AI): AI refers to programs and machines that execute tasks with the characteristics of human intelligence, including problem-solving, forward planning, and learning. Broadly speaking, these are machines that can carry out human tasks in a manner we consider "intelligent."

Machine Learning (ML): ML refers to a broad spectrum of current applications of AI technologies, built on the idea that machines with data access and the correct programming can learn for themselves. Broadly speaking, machine learning is a means to an end for achieving AI.

Machine learning and AI are deeply intertwined, and you can see how the terms get occasionally misused. With regard to antivirus, though, the difference in meaning is an important distinction. Most (if not all) of the latest antivirus suites implement some form of machine learning, but some algorithms are more advanced than others.

Machine learning in antivirus technology isn't new. It is getting more intelligent, though, and it is an easier marketing tool to use now that the wider public is more aware of ML and AI.

How Security Companies Use AI in Antivirus

There are a few antivirus solutions that use advanced algorithms to protect your system, but the use of true AI is still rare. Still, there are several antivirus tools with excellent AI and ML implementations that show how the security industry is evolving to protect you from the latest threats.

1. Cylance Smart Antivirus

Cylance is a well-known name in machine learning and artificial intelligence cybersecurity. The enterprise-grade CylancePROTECT uses AI techniques to protect a huge number of businesses, and the company counts several Fortune 100 organizations among its clientele. Cylance Smart Antivirus is its first foray into consumer antivirus products, bringing that enterprise-level AI protection into your home.

Cylance Smart Antivirus relies entirely on AI and ML to distinguish malware from legitimate data. The result is an antivirus that doesn't bog your system down by constantly scanning and analyzing files (or informing you of its status every 15 minutes). Rather, Cylance Smart Antivirus waits until the moment of execution and immediately kills the threat―without human intervention.

“Consumers deserve security software that is fast, easy to use, and effective,” said Christopher Bray, senior vice president, Cylance Consumer. “The consumer antivirus market is long overdue for a ground-breaking solution built on robust technology that allows them to control their security environment.”

Thanks for the shout out @sawaba I can vouch that the primary reason we launched Cylance Smart Antivirus is because our customers have told us they’ve grown frustrated with everything on the market now.

― Hiep Dang (@Hiep_Dang) June 19, 2018

Smart Antivirus does, however, have some downsides. Unlike other antivirus suites with active monitoring, Cylance Smart Antivirus allows you to visit potentially malicious sites. I assume this reflects confidence that the product will stop malicious downloads, but it means the suite doesn't protect against phishing attacks or similar threats.

A single Cylance Smart Antivirus license costs $29 per year, while a $69 household pack lets you install it on five different systems.

2. Deep Instinct D-Client

Deep Instinct uses deep learning (a machine learning technique) to detect “any file before it is accessed or executed” on your system. The Deep Instinct D-Client makes use of static file analysis in conjunction with a threat prediction model that allows it to eliminate malware and other system threats autonomously.

Deep Instinct’s D-Client uses vast quantities of raw data to continue improving its detection algorithms. Deep Instinct is one of the only companies with private deep learning infrastructure dedicated to improving their detection accuracy, too.

3. Avast Free Antivirus

For most people, Avast is a familiar name in security. Avast Free Antivirus is the most popular antivirus on the market, and its history of protection goes back decades. Avast Free Antivirus has been "using AI and machine learning for years" to protect users from evolving threats. In 2012, the Avast Research Lab announced three powerful backend tools for its products.

The "Malware Similarity Search" allows almost instantaneous categorization of huge volumes of incoming malware samples; Avast Free Antivirus quickly analyzes similarities between existing malware files using both static and dynamic analysis. "Evo-Gen" is similar "but a bit subtler in nature": a genetic algorithm that works to find short, generic descriptions of malware in massive datasets. "MDE" is a database that works on top of the indexed data, allowing heavy parallel access.
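
A toy version of what a malware similarity search might do; this is a generic nearest-neighbour sketch over hashed static features, not Avast's actual system, and the sample "features" are invented:

    from sklearn.feature_extraction.text import HashingVectorizer
    from sklearn.neighbors import NearestNeighbors

    # Treat static features (imported APIs, section names) as tokens.
    samples = [
        "kernel32 CreateRemoteThread VirtualAllocEx .upx",
        "kernel32 CreateRemoteThread WriteProcessMemory .upx",
        "user32 MessageBoxA gdi32 .text",
    ]
    X = HashingVectorizer(n_features=2**16).transform(samples)

    nn = NearestNeighbors(metric="cosine").fit(X)
    dist, idx = nn.kneighbors(X[0], n_neighbors=2)
    print(idx[0], dist[0])  # sample 0's nearest neighbour is the related sample 1
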

These three machine learning technologies collectively evolved into the foundation for Avast's CyberCapture.

CyberCapture is a core feature of the Avast security suite, specifically targeting unknown malware and zero-days. When an unknown suspicious file enters a system, CyberCapture activates and immediately isolates the file from the host system. The suspect file is automatically uploaded to an Avast cloud server for analysis, after which the user receives a positive or negative notification regarding the status of the file. All the while, your data feeds back into the algorithms to further refine and enhance your and other users' system security.

Download: Avast Free Antivirus for Windows | Mac | Linux

Download: Avast Mobile Security for Android

4. Windows Defender Security Center

The Windows Defender Security Center for enterprise and business solutions will receive a phenomenal boost as Microsoft turns to artificial intelligence to bulk out its security. The 2017 WannaCry ransomworm ripped through Windows systems after hackers released a trove of NSA zero-day exploits into the wild.

Microsoft is creating a 400-million-computer-strong machine learning network to build its next generation of security tools. The new AI-backed security features will start with enterprise customers but will eventually filter down to Windows 10 systems for regular consumers. Windows Defender is constantly improving in other ways, too, and is now one of the top enterprise and consumer security solutions.



Want a prime example of how machine learning antivirus springs into action? Randy Treit, a senior security researcher for Windows Defender Research, writes up the Bad Rabbit ransomware detection example. It's worth a read (it's short!).

Antivirus: More Advanced Than You Realized

Is your antivirus suite more advanced than you realized? Machine learning and artificial intelligence are undoubtedly making larger inroads with security products. But their current prominence is more buzzword than effective deployment.

Try not to worry too much about whether your antivirus has AI or is implementing machine learning techniques. In the meantime, here's a comparison of the best free antivirus products for you to check out. AI or not, it is important to protect your system at all times.

Image Credit: Wavebreakmedia/Depositphotos


          LF Deep Learning Foundation Advances Open Source Artificial Intelligence With Major …
"The progression of artificial intelligence and machine learning technologies calls for a shift in how we design and implement networks and services," ...
          8/9/2018: IDEAS & DEBATES: The deep learning "conspiracy"

It is now the most prominent branch of artificial intelligence (AI). If your mobile phone obeys your voice (most of the time), if Google can (in principle) find your pet among your thousands of photos, if the...
          Rethinking Numerical Representations for Deep Neural Networks. (arXiv:1808.02513v1 [cs.LG])

Authors: Parker Hill, Babak Zamirai, Shengshuo Lu, Yu-Wei Chao, Michael Laurenzano, Mehrzad Samadi, Marios Papaefthymiou, Scott Mahlke, Thomas Wenisch, Jia Deng, Lingjia Tang, Jason Mars

With ever-increasing computational demand for deep learning, it is critical to investigate the implications of the numeric representation and precision of DNN model weights and activations for computational efficiency. In this work, we explore unconventional narrow-precision floating-point representations as they relate to inference accuracy and efficiency, to steer the improved design of future DNN platforms. We show that inference using these custom numeric representations on production-grade DNNs, including GoogLeNet and VGG, achieves an average speedup of 7.6x with less than 1% degradation in inference accuracy relative to a state-of-the-art baseline platform representing the most sophisticated hardware using single-precision floating point. To facilitate the use of such customized precision, we also present a novel technique that drastically reduces the time required to derive the optimal precision configuration.
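
The effect of narrow mantissas can be emulated in software. The helper below is a crude illustration of that idea, not the paper's methodology or platform model:

    import numpy as np

    def reduce_mantissa(x, bits):
        """Crudely emulate a float with `bits` mantissa bits by rounding
        the significand; the exponent range is left untouched."""
        m, e = np.frexp(x)            # x = m * 2**e, with 0.5 <= |m| < 1
        scale = 2.0 ** bits
        return np.ldexp(np.round(m * scale) / scale, e)

    w = np.array([0.1234567, -3.14159, 42.0])
    print(reduce_mantissa(w, 8))  # the weights as they would look at 8 mantissa bits
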


          A Semi-Supervised Data Augmentation Approach using 3D Graphical Engines. (arXiv:1808.02595v1 [cs.CV])

Authors: Shuangjun Liu, Sarah Ostadabbas

Deep learning approaches have been rapidly adopted across a wide range of fields because of their accuracy and flexibility, but they require large labeled training datasets. This presents a fundamental problem for applications with limited, expensive, or private data (i.e., small data), such as human pose and behavior estimation/tracking, which could be highly personalized. In this paper, we present a semi-supervised data augmentation approach that can synthesize large-scale labeled training datasets using 3D graphical engines based on a physically-valid low dimensional pose descriptor. To evaluate the performance of our synthesized datasets in training deep learning-based models, we generated a large synthetic human pose dataset, called ScanAva, using 3D scans of only 7 individuals based on our proposed augmentation approach. A state-of-the-art human pose estimation deep learning model was then trained from scratch using our ScanAva dataset and, after an efficient domain adaptation on the synthetic images, achieved a pose estimation accuracy of 91.2% at the PCK0.5 criterion; this accuracy is comparable to the same model trained on large-scale pose data from real humans, such as the MPII dataset, and much higher than the model trained on other synthetic human datasets such as SURREAL.


          Unsupervised/Semi-supervised Deep Learning for Low-dose CT Enhancement. (arXiv:1808.02603v1 [cs.CV])

Authors: Mingrui Geng, Yun Deng, Qian Zhao, Qi Xie, Dong Zeng, Wangmeng Zuo, Deyu Meng

Recently, deep learning (DL) methods have been proposed for low-dose computed tomography (LdCT) enhancement, and they obtain a good trade-off between computational efficiency and image quality. Most of them need a large number of pre-collected ground-truth/high-dose sinograms with less noise, and train the network in a supervised end-to-end manner. This may impose major limitations on these methods, because the number of such low-dose/high-dose training sinogram pairs affects the network's capability, and ground-truth sinograms are sometimes hard to obtain in large scale. Since a large number of low-dose sinograms are relatively easy to obtain, it is critical to make these sources play a role in network training in an unsupervised learning manner. To address this issue, we propose an unsupervised DL method for LdCT enhancement that incorporates unlabeled LdCT sinograms directly into the network training. The proposed method effectively considers the structure characteristics and noise distribution in the measured LdCT sinogram, and then learns the proper gradient of the LdCT sinogram in a purely unsupervised manner. Similar to the labeled ground-truth, the gradient information in an unlabeled LdCT sinogram can be used for sufficient network training. Experiments on patient data show the effectiveness of the proposed method.


          Parallax: Automatic Data-Parallel Training of Deep Neural Networks. (arXiv:1808.02621v1 [cs.DC])

Authors: Soojeong Kim, Gyeong-In Yu, Hojin Park, Sungwoo Cho, Eunji Jeong, Hyeonmin Ha, Sanha Lee, Joo Seong Jeong, Byung-Gon Chun

The employment of high-performance servers and GPU accelerators for training deep neural network models has greatly accelerated recent advances in machine learning (ML). ML frameworks, such as TensorFlow, MXNet, and Caffe2, have emerged to assist ML researchers in training their models in a distributed fashion. However, correctly and efficiently utilizing multiple machines and GPUs is still not a straightforward task for framework users, due to the non-trivial correctness and performance challenges that arise in the distribution process. This paper introduces Parallax, a tool for automatic parallelization of deep learning training in distributed environments. Parallax not only handles the subtle correctness issues, but also leverages various optimizations to minimize the communication overhead caused by scaling out. Experiments show that Parallax, built atop TensorFlow, achieves scalable training throughput on multiple CNN and RNN models while requiring little effort from its users.
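
For context, the data-parallel pattern Parallax automates looks roughly like this in stock TensorFlow (this uses tf.distribute, not Parallax itself, and the model is a placeholder):

    import tensorflow as tf

    # Replicate the model across all visible GPUs; each batch is split
    # among replicas and gradients are aggregated automatically.
    strategy = tf.distribute.MirroredStrategy()
    with strategy.scope():
        model = tf.keras.Sequential([
            tf.keras.layers.Dense(128, activation="relu", input_shape=(784,)),
            tf.keras.layers.Dense(10, activation="softmax"),
        ])
        model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
    # model.fit(dataset) now runs data-parallel training across devices
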


          Effective Character-augmented Word Embedding for Machine Reading Comprehension. (arXiv:1808.02772v1 [cs.CL])

Authors: Zhuosheng Zhang, Yafang Huang, Pengfei Zhu, Hai Zhao

Machine reading comprehension is a task of modeling the relationship between a passage and a query. In deep learning frameworks, most state-of-the-art models simply concatenate word- and character-level representations, which has been shown to be suboptimal for the task concerned. In this paper, we empirically explore different integration strategies for word and character embeddings and propose a character-augmented reader that attends over character-level representations to augment word embeddings, with a short list to improve representations of rare words in particular. Experimental results show that the proposed approach helps the baseline model significantly outperform state-of-the-art baselines on various public benchmarks.
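
The concatenation baseline the paper improves on can be sketched as follows; the vocabulary sizes, dimensions, and char-CNN encoder are placeholder choices, not the paper's configuration:

    import tensorflow as tf

    words = tf.keras.Input(shape=(None,), dtype="int32")       # word ids
    chars = tf.keras.Input(shape=(None, 16), dtype="int32")    # 16 char ids per word

    w_emb = tf.keras.layers.Embedding(30000, 100)(words)       # [B, T, 100]
    c_emb = tf.keras.layers.Embedding(100, 30)(chars)          # [B, T, 16, 30]
    c_emb = tf.keras.layers.TimeDistributed(
        tf.keras.layers.Conv1D(50, 3, padding="same", activation="relu"))(c_emb)
    c_emb = tf.keras.layers.TimeDistributed(
        tf.keras.layers.GlobalMaxPooling1D())(c_emb)           # [B, T, 50]

    token_repr = tf.keras.layers.Concatenate()([w_emb, c_emb])  # [B, T, 150]
    encoder = tf.keras.Model([words, chars], token_repr)
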


          Highly Accelerated Multishot EPI through Synergistic Combination of Machine Learning and Joint Reconstruction. (arXiv:1808.02814v1 [eess.IV])

Authors: Berkin Bilgic, Itthi Chatnuntawech, Mary Kate Manhard, Qiyuan Tian, Congyu Liao, Stephen F. Cauley, Susie Y. Huang, Jonathan R. Polimeni, Lawrence L. Wald, Kawin Setsompop

Purpose: To introduce a combined machine learning (ML) and physics-based image reconstruction framework that enables navigator-free, highly accelerated multishot echo planar imaging (msEPI), and demonstrate its application in high-resolution structural imaging.

Methods: Singleshot EPI is an efficient encoding technique, but does not lend itself well to high-resolution imaging due to severe distortion artifacts and blurring. While msEPI can mitigate these artifacts, high-quality msEPI has been elusive because of phase mismatch arising from shot-to-shot physiological variations which disrupt the combination of the multiple-shot data into a single image. We employ Deep Learning to obtain an interim magnitude-valued image with minimal artifacts, which permits estimation of image phase variations due to shot-to-shot physiological changes. These variations are then included in a Joint Virtual Coil Sensitivity Encoding (JVC-SENSE) reconstruction to utilize data from all shots and improve upon the ML solution.

Results: Our combined ML + physics approach enabled R=8-fold acceleration from 2 EPI-shots while providing a 1.8-fold error reduction compared to MUSSELS, a state-of-the-art reconstruction technique, which is also used as an input to our ML network. Using 3 shots allowed us to push the acceleration to R=10-fold, where we obtained a 1.7-fold error reduction over MUSSELS.

Conclusion: Combination of ML and JVC-SENSE enabled navigator-free msEPI at higher accelerations than previously possible while using fewer shots, with reduced vulnerability to poor generalizability and poor acceptance of end-to-end ML approaches.


          Backprop Evolution. (arXiv:1808.02822v1 [cs.LG])

Authors: Maximilian Alber, Irwan Bello, Barret Zoph, Pieter-Jan Kindermans, Prajit Ramachandran, Quoc Le

The back-propagation algorithm is the cornerstone of deep learning. Despite its importance, few variations of the algorithm have been attempted. This work presents an approach to discover new variations of the back-propagation equation. We use a domain-specific language to describe update equations as a list of primitive functions. An evolution-based method is used to discover new propagation rules that maximize the generalization performance after a few epochs of training. We find several update equations that train faster than standard back-propagation over short training times, and that perform similarly to standard back-propagation at convergence.


          Unsupervised Total Variation Loss for Semi-supervised Deep Learning of Semantic Segmentation. (arXiv:1605.01368v3 [cs.CV] UPDATED)

Authors: Mehran Javanmardi, Mehdi Sajjadi, Ting Liu, Tolga Tasdizen

We introduce a novel unsupervised loss function for learning semantic segmentation with deep convolutional neural nets (ConvNet) when densely labeled training images are not available. More specifically, the proposed loss function penalizes the L1-norm of the gradient of the label probability vector image, i.e., its total variation, produced by the ConvNet. This can be seen as a regularization term that promotes piecewise smoothness of the label probability vector image produced by the ConvNet during learning. The unsupervised loss function is combined with a supervised loss in a semi-supervised setting to learn ConvNets that can achieve high semantic segmentation accuracy even when only a tiny percentage of the pixels in the training images are labeled. We demonstrate significant improvements over the purely supervised setting on the Weizmann horse, Stanford background, and Sift Flow datasets. Furthermore, we show that using the proposed piecewise smoothness constraint in the learning phase significantly outperforms post-processing results from a purely supervised approach with Markov Random Fields (MRF). Finally, we note that the framework we introduce is general and can be used to learn to label other types of structures, such as curvilinear structures, by modifying the unsupervised loss function accordingly.
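
The loss itself is simple to write down in code; the sketch below is a direct reading of the description above, assuming NHWC probability tensors (batch, height, width, classes):

    import tensorflow as tf

    def total_variation_loss(prob):
        """L1 norm of the spatial gradient of a label-probability image.
        prob: [batch, height, width, classes] softmax output of the ConvNet."""
        dy = tf.abs(prob[:, 1:, :, :] - prob[:, :-1, :, :])
        dx = tf.abs(prob[:, :, 1:, :] - prob[:, :, :-1, :])
        return tf.reduce_mean(dy) + tf.reduce_mean(dx)

    # total loss = supervised cross-entropy on the few labeled pixels
    #              + lambda * total_variation_loss(predictions)
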


          Deep Rewiring: Training very sparse deep networks. (arXiv:1711.05136v5 [cs.NE] UPDATED)

Authors: Guillaume Bellec, David Kappel, Wolfgang Maass, Robert Legenstein

Neuromorphic hardware tends to pose limits on the connectivity of the deep networks one can run on it, and generic hardware and software implementations of deep learning also run more efficiently for sparse networks. Several methods exist for pruning the connections of a neural network after it was trained without connectivity constraints. We present an algorithm, DEEP R, that enables us to directly train a sparsely connected neural network. DEEP R automatically rewires the network during supervised training so that connections are placed where they are most needed for the task, while the total number of connections remains strictly bounded at all times. We demonstrate that DEEP R can be used to train very sparse feedforward and recurrent neural networks on standard benchmark tasks with just a minor loss in performance. DEEP R is based on a rigorous theoretical foundation that views rewiring as stochastic sampling of network configurations from a posterior.


          A Machine Learning Framework for Stock Selection. (arXiv:1806.01743v2 [q-fin.PM] UPDATED)

Authors: XingYu Fu, JinHong Du, YiFeng Guo, MingWen Liu, Tao Dong, XiuWen Duan

This paper demonstrates how to apply machine learning algorithms to distinguish good stocks from bad stocks. To this end, we construct 244 technical and fundamental features to characterize each stock, and label stocks according to their ranking with respect to the return-to-volatility ratio. Algorithms ranging from traditional statistical learning methods to the recently popular deep learning method, e.g., Logistic Regression (LR), Random Forest (RF), Deep Neural Network (DNN), and Stacking, are trained to solve the classification task. A Genetic Algorithm (GA) is also used to implement feature selection. The effectiveness of the stock selection strategy is validated in the Chinese stock market in both statistical and practical aspects, showing that: 1) Stacking outperforms the other models, reaching an AUC score of 0.972; 2) the Genetic Algorithm picks a subset of 114 features, and the prediction performance of all models remains almost unchanged after the selection procedure, which suggests some features are indeed redundant; 3) LR and DNN are radical models, RF is a risk-neutral model, and Stacking sits somewhere between DNN and RF; 4) the portfolios constructed by our models outperform the market average in back tests.
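
The labeling scheme is easy to reproduce in outline; the synthetic feature matrix, threshold, and forest size below are invented for illustration and are not the paper's data or settings:

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(0)
    features = rng.normal(size=(500, 244))        # 244 features per stock
    returns = rng.normal(0.05, 0.1, size=500)
    volatility = rng.uniform(0.1, 0.5, size=500)

    ratio = returns / volatility                  # return-to-volatility ranking
    labels = (ratio >= np.quantile(ratio, 0.7)).astype(int)  # top 30% = "good"

    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    clf.fit(features, labels)                     # RF, one of the model families used
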


          A Multi-task Deep Learning Architecture for Maritime Surveillance using AIS Data Streams. (arXiv:1806.03972v3 [cs.LG] UPDATED)

Authors: Duong Nguyen, Rodolphe Vadaine, Guillaume Hajduch, René Garello, Ronan Fablet

In a world of global trading, maritime safety, security, and efficiency are crucial issues. We propose a multi-task deep learning framework for vessel monitoring using Automatic Identification System (AIS) data streams. We combine recurrent neural networks with latent variable modeling and an embedding of AIS messages into a new representation space to jointly address the key issues raised by AIS data streams: a massive amount of streaming data, noisy data, and irregular time sampling. We demonstrate the relevance of the proposed deep learning framework on real AIS datasets for a three-task setting, namely trajectory reconstruction, anomaly detection, and vessel type identification.


          Deep learning anomalies with TensorFlow and Apache Spark
Deep learning is always among the hottest topics and TensorFlow is one of the most popular frameworks out there. In this session, Khanderao Kand ...
          Deep Learning and Disaster Management
The application of deep learning (DL) in disaster management can help mitigate natural and human-made catastrophes. We have more information ...
          Dell EMC Launches AI-Targeted Ready Solutions
Built for deep learning workloads, Ready Solutions for AI was co-engineered by Dell EMC and Nvidia and built around Dell EMC PowerEdge servers.
          Deep learning useful in diabetic retinopathy screening
Deep learning is a type of machine learning shown to be remarkably effective in the past few years, but is not a new science, she explained. “It's based ...
          Offer - Machine Learning classroom course Pune - INDIA
Python will introduce the learner to applied machine learning, focusing more on the techniques and methods than on the statistics behind these methods. Deep Learning with Python introduces the field of deep learning using the Python language. The training is organized by the machine learning training company NearLearn. Enroll today for Machine Learning Certification in Pune. NearLearn has been designed around the requirement of having a stronghold in building machine learning algorithms from the ground up, and has been favored as a robust platform for building machine learning systems. For more details contact us. Himansu: +91-9739305140. Email: info@nearlearn.com
          Software System Engineer - Software Development Kit, Neural netw
MA-Boston. If you are a Software System Engineer with experience, please read on! What You Will Be Doing:
- Assist in the build of a deep learning Software Development Kit for Optical Processing Unit accelerators
- Create a deep learning framework compiler for the Optical Processing Unit
- Develop a runtime engine, including the core API and driver
- Implement deep neural network models on the Optical Processing Unit
- Assist in
          Linux Foundation and Kernel Development
  • Containers Microconference Accepted into 2018 Linux Plumbers Conference

    The Containers Micro-conference at Linux Plumbers is the yearly gathering of container runtime developers, kernel developers and container users. It is the one opportunity to have everyone in the same room to both look back at the past year in the container space and discuss the year ahead.

    In the past, topics such as use of cgroups by containers, system call filtering and interception (Seccomp), improvements/additions of kernel namespaces, interaction with the Linux Security Modules (AppArmor, SELinux, SMACK), TPM based validation (IMA), mount propagation and mount API changes, uevent isolation, unprivileged filesystem mounts and more have been discussed in this micro-conference.

  • LF Deep Learning Foundation Advances Open Source Artificial Intelligence With Major Membership Growth

    The LF Deep Learning Foundation, an umbrella organization of The Linux Foundation that supports and sustains open source innovation in artificial intelligence, machine learning, and deep learning, today announced five new members: Ciena, DiDi, Intel, Orange and Red Hat. The support of these new members will provide additional resources to the community to develop and expand open source AI, ML and DL projects, such as the Acumos AI Project, the foundation's comprehensive platform for AI model discovery, development and sharing.

  • A quick history of early-boot memory allocators

One might think that memory allocation during system startup should not be difficult: almost all of memory is free, there is no concurrency, and there are no background tasks that will compete for memory. Even so, boot-time memory management is a tricky task. Physical memory is not necessarily contiguous, its extents change from system to system, and the detection of those extents may not be trivial. With NUMA, things are even more complex because, in order to satisfy allocation locality, the exact memory topology must be determined. To cope with this, sophisticated mechanisms for memory management are required even during the earliest stages of the boot process.

    One could ask: "so why not use the same allocator that Linux uses normally from the very beginning?" The problem is that the primary Linux page allocator is a complex beast and it, too, needs to allocate memory to initialize itself. Moreover, the page-allocator data structures should be allocated in a NUMA-aware way. So another solution is required to get to the point where the memory-management subsystem can become fully operational.

    In the early days, Linux didn't have an early memory allocator; in the 1.0 kernel, memory initialization was not as robust and versatile as it is today. Every subsystem initialization call, or simply any function called from start_kernel(), had access to the starting address of the single block of free memory via the global memory_start variable. If a function needed to allocate memory it just increased memory_start by the desired amount. By the time v2.0 was released, Linux was already ported to five more architectures, but boot-time memory management remained as simple as in v1.0, with the only difference being that the extents of the physical memory were detected by the architecture-specific code. It should be noted, though, that hardware in those days was much simpler and memory configurations could be detected more easily.

  • Teaching the OOM killer about control groups

    The kernel's out-of-memory (OOM) killer is summoned when the system runs short of free memory and is unable to proceed without killing one or more processes. As might be expected, the policy decisions around which processes should be targeted have engendered controversy for as long as the OOM killer has existed. The 4.19 development cycle is likely to include a new OOM-killer implementation that targets control groups rather than individual processes, but it turns out that there is significant disagreement over how the OOM killer and control groups should interact.

    To simplify a bit: when the OOM killer is invoked, it tries to pick the process whose demise will free the most memory while causing the least misery for users of the system. The heuristics used to make this selection have varied considerably over time — it was once remarked that each developer who changes the heuristics makes them work for their use case while ruining things for everybody else. In current kernels, the heuristics implemented in oom_badness() are relatively simple: sum up the amount of memory used by a process, then scale it by the process's oom_score_adj value. That value, found in the process's /proc directory, can be tweaked by system administrators to make specific processes more or less attractive as an OOM-killer target.

    No OOM-killer implementation is perfect, and this one is no exception. One problem is that it does not pay attention to how much memory a particular user has allocated; it only looks at specific processes. If user A has a single large process while user B has 100 smaller ones, the OOM killer will invariably target A's process, even if B is using far more memory overall. That behavior is tolerable on a single-user system, but it is less than optimal on a large system running containers on behalf of multiple users.

read more


          Linux Deep Learning expands: answer is still 42
          LF Deep Learning Foundation builds membership
The LF Deep Learning Foundation, whose mission is to support and sustain open source innovation in artificial intelligence, machine learning, and deep learning, announced five new members: Ciena, DiDi, Intel, Orange and Red Hat.

These companies join founding members Amdocs, AT&T, B.Yond, Baidu, Huawei, Nokia, Tech Mahindra, Tencent, Univa and ZTE.

“We are very pleased to build off the launch momentum of the LF Deep Learning Foundation and welcome new members with vast resources and technical expertise to support our growing community and ecosystem of AI projects,” said Lisbeth McNabb, Chief Operating Officer of The Linux Foundation.

Mazin Gilbert, Vice President of Advanced Technology and Systems at AT&T, has also been elected to the role of Governing Board Chair of LF Deep Learning. This position leads the board in supporting various AI and ML open source projects, including infrastructure and support initiatives related to each project.

“The Deep Learning Foundation is a significant achievement by the open source community to drive harmonization among tools and platforms in deep learning and artificial intelligence,” said Mazin Gilbert, Vice President of Advanced Technology and Systems at AT&T. “This effort will enable an open marketplace of analytics and machine learning capabilities to help expedite adoption and deployments of DL solutions worldwide.”

https://www.deeplearningfoundation.org
          Accessibility in Tech with Haben Girma

On this episode of the podcast we continue a conversation we started with Haben Girma, an advocate for equal rights for people with disabilities, regarding the value of tech accessibility. Melanie and Mark talk with her about common challenges and best practices when considering accessibility in technology design and development. Bottom line - we need one solution that works for all.

Haben Girma

The first Deafblind person to graduate from Harvard Law School, Haben Girma advocates for equal opportunities for people with disabilities. President Obama named her a White House Champion of Change, and Forbes recognized her in Forbes 30 Under 30. Haben travels the world consulting and public speaking, teaching clients the benefits of fully accessible products and services. Haben is a talented storyteller who helps people frame difference as an asset. She resisted society’s low expectations, choosing to create her own pioneering story. Because of her disability rights advocacy she has been honored by President Obama, President Clinton, and many others. Haben is also writing a memoir that will be published by Grand Central Publishing in 2019. Learn more at habengirma.com.

Cool things of the week
  • Istio reaches 1.0: ready for prod blog
  • Google for Nigeria: Making the internet more useful for more people blog
    • GCPPodcast Episode 17: The Cloud In Africa with Hiren Patel and Dale Humby podcast
  • Access Google Cloud services, right from IntelliJ IDEA blog
Interview
  • Haben Girma’s website site
  • Haben Girma’s presentation at NEXT video
  • GCPPodcast Episode 100: Vint Cerf: past, present, and future of the internet podcast
  • Web Content Accessibility Guidelines (WCAG) site
  • Android Accessibility Guidelines site
  • Apple Developer Accessibility Guidelines site
  • Black in AI site
  • Google Accessibility site
  • San Francisco Lighthouse for the Blind site
  • National Federation of the Blind site
  • National Association of the Deaf site
Question of the week

How do I perform large scale mutations in BigQuery? blog and site
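For reference, BigQuery performs large-scale mutations with DML statements (UPDATE, DELETE, MERGE), each of which can touch many rows in a single job. Below is a minimal sketch using the google-cloud-bigquery Python client; the project, dataset, table, and columns are hypothetical.

```python
from google.cloud import bigquery

client = bigquery.Client()  # picks up credentials from the environment

# A single DML statement mutates every matching row; the table and
# columns here are invented for the example.
sql = """
    UPDATE `my_project.my_dataset.users`
    SET status = 'inactive'
    WHERE last_seen < DATE_SUB(CURRENT_DATE(), INTERVAL 90 DAY)
"""
job = client.query(sql)  # starts the query job
job.result()             # blocks until the mutation finishes
print(f"{job.num_dml_affected_rows} rows updated")
```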

Where can you find us next?

Mark will be at Pax Dev and Pax West starting August 28th. In September, he’ll be at Tokyo NEXT.

Melanie is at Def Con, Black Hat, and BSides Las Vegas. In September, she will be at Deep Learning Indaba.


          Voices in AI – Episode 62: A Conversation with Atif Kureishy
About this Episode Episode 62 of Voices in AI features host Byron Reese and Atif Kureishy discussing AI, deep learning, and the…
          Deep Learning Engineer - Devoteam - Randstad
Unique culture where people speak frankly, have great mutual respect and are united by a genuine interest in technology. Is Deep Learning your competency realm?...
From Devoteam - Mon, 06 Aug 2018 13:27:24 GMT - View all jobs in Randstad
          What are the differences between Artificial Intelligence, Machine Learning, and Deep Learning?

To pin down the differences between Artificial Intelligence, Machine Learning, and Deep Learning, let's take a look at a brief definition of each.

Artificial Intelligence (AI) means enabling technology to replicate human behavior.


          Lucid Planet Radio with Dr. Kelly: Encore: The Singularity is HERE: Interface the Future, Explore Paradox, and Recode Mythology with Mitch Schultz
Guest: Today, visionary thinker, futurist, and filmmaker Mitch Schultz joins Dr. Kelly to explore humanity as we approach the technological singularity. What is the singularity, and what does it mean for humanity? Explore a transdisciplinary approach at the intersection of the arts, cognitive psychology, deep learning, and philosophy. Guided by Consciousness, Evolution, and Story. Beginning with what conscious state of being (terrestrial and universal) is perceiving. Followed the one consta ...
          Lucid Planet Radio with Dr. Kelly: The Singularity is HERE: Interface the Future, Explore Paradox, and Recode Mythology with Mitch Schultz
Episode: Today, visionary thinker, futurist, and filmmaker Mitch Schultz joins Dr. Kelly to explore humanity as we approach the technological singularity. What is the singularity, and what does it mean for humanity? Explore a transdisciplinary approach at the intersection of the arts, cognitive psychology, deep learning, and philosophy. Guided by Consciousness, Evolution, and Story. Beginning with what conscious state of being (terrestrial and universal) is perceiving. Followed the one consta ...
          DeepMotion Reveals “Digital Cerebellum” AI Animation Tech
DeepMotion, a pioneer in the field of Motion Intelligence, announced today that DeepMotion Neuron, the first tool for completely procedural, physical character animation, has launched for presale. The breakthrough cloud application trains digital characters to develop physical intelligence using advanced AI, physics and deep learning. With guidance and practice, digital characters can now achieve adaptive […]
          After NEXT 2018: Trends in higher education and research

From classrooms to campus infrastructure, higher education is rapidly adapting to cloud technology. So it’s no surprise that academic faculty and staff were well represented among panelists and attendees at this year’s Google Cloud Next. Several of our more than 500 breakout sessions at Next spoke to the needs of higher education, as did critical announcements like our partnership with the National Institutes of Health to make public biomedical datasets available to researchers. Here are ten major themes that came out of our higher education sessions at Next:

  1. Collaborating across campuses. Learning technologists from St. Norbert College, Lehigh University, University of Notre Dame, and Indiana University explained how G Suite and CourseKit, Google’s new integrated learning management tool, are helping teachers and students exchange ideas.
  2. Navigating change. Academic IT managers told stories of how they’ve overcome the organizational challenges of cloud migration and offered some tips for others: start small, engage key stakeholders, and take advantage of Google’s teams of engineers and representatives, who are enthusiastic and knowledgeable allies. According to Joshua Humphrey, Team Lead, Enterprise Computing, Georgia State University, "We've been using GCP for almost three years now and we've seen an average yearly savings of 44%. Whenever people ask why we moved to the cloud this is what we point to. Usability and savings."
  3. Fostering student creativity. In our higher education booth at Next, students demonstrated projects that extended their learning beyond the classroom. For example, students at California State University at San Bernardino built a mobile rover that checks internet connectivity on campus, and students at High Tech High used G Suite and Chromebooks to help them create their own handmade soap company.
  4. Reproducing scientific research. Science is built on consistent, reliable, repeatable findings. Academic research panelists at the University of Michigan are using Docker on Compute Engine to containerize pipeline tools so any researcher can run the same pipeline without having to worry about affecting the final outcome.
  5. Powering bioinformatics. Today’s biomedical research often requires storing and processing hundreds of terabytes of data. Teams at SUNY Downstate, Northeastern, and the University of South Carolina demonstrated how they used BigQuery and Compute Engine to build complex simulations and manage huge datasets for neuroscience, epidemiology, and environmental research.
  6. Accelerating genomics research. Moving data to the cloud enables faster processing to test more hypotheses and uncover insights. Researchers from Stanford, Duke, and Michigan showed how they streamlined their genomics workloads and cut months off their processing time using GCP.
  7. Democratizing access to deep learning. AutoML Vision, Natural Language, and Translation, all in beta, were announced at Next and can help researchers build custom ML models without specialized knowledge in machine learning or coding. As Google’s Chief Scientist of AI and Machine Learning Fei-Fei Li noted in her blog post, Google’s aim “is to make AI not just more powerful, but more accessible.”
  8. Transforming LMS analytics. Scalable tools can turn the data collected by learning management systems and student information services into insights about student behavior. Google’s strategic partnership with Unizen allows a consortium of universities to integrate data and learning sciences, while Ivy Tech used ML Engine to build a predictive algorithm to improve student performance in courses.
  9. Personalizing machine learning and AI for student services. We’re seeing a growing trend of universities investigating AI to create virtual assistants. Recently Strayer University shared with us how they used Dialogflow to do just that, and at Next, Carnegie Mellon walked us through their process of building SARA, a socially-aware robot assistant.
  10. Strengthening security for academic IT: Natural disasters threaten on-premise data centers, with earthquakes, flooding, and hurricanes demanding robust disaster-recovery planning. Georgia State, the University of Minnesota, and Stanford’s Graduate School of Business shared how they improved the reliability and cost-efficiency of their data backup by migrating to GCP.



To learn more about our solutions for higher education, visit our website, explore our credits programs for teaching and research, or speak with a member of our team.


          The Trillion Dollar Question

Recently, Apple’s stock price rose to the point where the company’s market valuation was above $1 trillion, the first U.S. company to reach that benchmark. Subsequently, numerous articles were published describing Apple’s journey to this point and why it got there. Most people describe Apple as a technology company. They make technology products: iPhones, iPads, Macs, etc. These are all computing devices. But there is another way to think of Apple and what kind of company they are as well as how they became so successful.

Neil Cybart, an analyst over at Above Avalon, likes to describe Apple as a design company focused on building useful tools for people. Of the latest round of profiles on Apple reaching a $1 trillion market valuation, he writes:

Despite supposedly being about chronicling how Apple went from near financial collapse in the late 1990s to a trillion-dollar market cap, a number of articles did not include any mention of Jony Ive [Apple’s Chief Design Officer], or even design for that matter. To not include Jony Ive in an article about Apple’s last 20 years is unfathomable, demonstrating a clear misunderstanding of Apple’s culture and the actual reasons that contributed to Apple’s success. Simply put, such profiles failed in their pursuit of describing Apple’s journey to a trillion dollars. Apple is where it is today because of design – placing an emphasis on how Apple products are used. Every other item or variable is secondary. [emphasis added]

As long as I have followed computers people have complained that Apple’s hardware is substandard. Other companies like Dell, Gateway, Acer, and Lenovo, had long been making computers that were “better” than Apple’s hardware. Apple’s value has always been selling good hardware coupled with premium software. But for a long time that was not appreciated by the market and Apple almost went bankrupt as a result.

The “Speeds and Feeds” Era for Data Analysis

When I was growing up, computers were all about so-called “speeds and feeds”. The only things people talked about were the megahertz of their processor or how many megabytes of RAM a computer had. A computer with a higher megahertz CPU was by definition better than a computer with a lower megahertz CPU. More RAM was better than less RAM and more disk space was better than less disk space. It was easy to compare different computers because we had quantitative metrics to go by. The hardware itself was a commodity and discussion about software was nonexistent because every computer ran the same software: Windows.

We are very much in the “speeds and feeds” era for data analysis right now. There is tremendous focus on and fascination with the tools and machinery underlying data analysis. Deep learning is only one such example, along with an array of related machine learning tools. Web sites like Kaggle promote a culture of “performance” where the person who can cobble together the most accurate algorithm is a winner. It’s easy to compare different algorithms to each other because there is often a single metric of performance that we can easily agree to compare.

Serious investment is being made in improving algorithms to make them more accurate, efficient, and powerful. We need these algorithms to be better so that we can have self-driving cars, intelligent assistants, fraud detection, and music discovery. Even the hardware itself is being optimized to improve the performance of these specific algorithms. This is the call of “more gigahertz, more RAM, more disk space” of our time. As easy hardware wins are fading into the past (as shown by Intel’s struggle), the focus is on improving the performance of machine learning software running on top of it.

All of this is necessary if we want to reap the benefits of machine learning algorithms in our daily lives. But if the computing industry has anything to teach the data science industry, it’s that perhaps the more interesting stuff is yet to come. Furthermore, it suggests that the companies (and perhaps individuals) with the best speeds and feeds will not necessarily be the winners.

What Comes Next?

Today, it could be argued that the most profitable “computer” in the world is the iPhone, which to be sure, has better “speeds and feeds” than any computer from my childhood. But it is by no means the fastest computer today. Nor does it have the most RAM, the most disk space, or the best graphics. How can that be?

Of course, the focus of computing changed from desktop to laptop to mobile, in part due to the great advancement in chip technology and miniaturization. So the benefit was not in greater speeds and feeds, but rather in smaller sizes for the same speeds and feeds. With these smaller, more personal, devices, the software and the design of the system became of greater importance. People were not using these devices to “crunch numbers” or do complex, but highly specialized, tasks. Rather, they were using them to do everyday tasks, like checking email, surfing the web, and communicating with friends. These were not business machines; they were for the mass market.

Arguably, the emphasis that Apple places on design has made it the most successful computer company of today, because design is what creates the best user experience in the mass market. Data science remains a niche area of work today even though its popularity and application have exploded over just a few years. It’s difficult for me to see how it might move into a mass market position, but I can see more and more people doing and consuming data analysis in the future. As the population of data analysis consumers grows, I think people will become less focused on accuracy and prediction metrics and more focused on whether a given analysis achieves a specified goal. In other words, data analyses will have to be designed to accomplish a certain task. The better individuals are at designing good data analyses, the more successful they will be.


          [עושים תוכנה] Saving Human Lives with Deep Learning

If you move in the software world, and probably even if you don't, you hear the word pair Machine Learning several times a day.
Here and there you have probably also heard the word pair of the new cool kid on the block: Deep Learning.

But... have you bothered to go deep (pun intended!) and understand what it actually means? Have you dared to try the field out yourselves?

The ratio between how often these words are spoken and how often they are actually used properly and correctly is entirely coincidental.
That is why, to help you understand a bit more, we brought Guy Reiner, one of the founders of aidoc, to the new episode of “עושים תוכנה”.

Guy, together with his wonderful partners Elad Walach and Michael Braginsky, built a system that helps radiologists analyze CT and X-ray scans, streamlining workflows and perhaps even saving human lives.
Without the use of Deep Learning it is not certain they would have pulled it off, and rest assured that the road they have traveled in recent years was not easy at all.

The post [עושים תוכנה] Saving Human Lives with Deep Learning appeared first on עושים היסטוריה.


          [From the Sandbox] A translation of Andrew Ng’s book “Machine Learning Yearning”, Chapters 1–14

Some time ago, a link to Andrew Ng’s book “Machine Learning Yearning” popped up in my Facebook feed; the title can be translated into Russian as “Страсть к машинному обучению” or “Жажда машинного обучения”.




People interested in machine learning, or working in the field, need no introduction to Andrew. For the uninitiated, suffice it to say that he is a world-class star in artificial intelligence: a scientist, an engineer, an entrepreneur, and one of the founders of Coursera. He is the author of an excellent introductory course on machine learning and of the courses that make up the Deep Learning specialization.

Read more →
          Vincent Granville posted a blog post

How to Stay up-to-date with your AI and ML Knowledge

Andrew Ng is a great fan of reading research papers as a long-term investment in your own study (On Life, Creativity, And Failure about Andrew Ng). Anyone who has worked in our field (AI, Machine Learning) can attest to that. AI is a complex and rapidly evolving field, and it’s a challenge to stay up to date with the latest technical details. Based on my experience, in this post I discuss how you can stay up to date by learning from the community. From a personal perspective, I work in two niche areas: Enterprise AI and my teaching for AI and IoT at the University of Oxford. My strategy for personal investment in my study is to study a broad set of topics in the following four categories:
  • Tutorials and Github
  • Leaders and networks
  • Deep Learning papers
  • Interview questions
I have tried to create a concise list below which should give you depth for AI and Deep Learning. This list also reflects my personal study bias (for example Python), hence it is not comprehensive. I am thankful to all the people/sources listed here for their willingness to share insights which have helped my own learning over the years. Read the full list of resources (by Ajit Joakar) here. See More

          Dell EMC Targets AI Workloads With Integrated Systems
The company is rolling out two new Ready Solutions for machine learning with Hadoop and deep learning with GPU accelerators from Nvidia.
          Weekly Machine Learning Opensource Roundup – Aug. 9, 2018
Examples
  • 100 Days Of ML Code: 100 Days of Machine Learning Coding
  • Making Alexa respond to Sign Language using Tensorflow.js: A project to make Amazon Echo respond to sign language using your webcam
  • Deep Hive: Using your audience as a hive mind for deep learning
Toolsets
  • gandiva: A Vectorized processing toolset for compiling and evaluating … Continue reading Weekly Machine Learning Opensource Roundup – Aug. 9, 2018
          Working toward a consensus between reformers and those often opposed to them

Michael Petrilli recently wrote an essay titled “Where Education Reform Goes From Here,” which garnered responses from Sandy Kress and Peter Cunningham, among others. These pieces include much that’s worthy of support, emphasis, and further discussion, as well as a few areas of disagreement.

Based on their collective comments, I think there is a good chance for reconciliation and a working consensus between “reformers” and those of us who have had major problems with reform policies, implementation, and assumptions. There seems to be a common emphasis on the following approaches to improving student and school performance: 

  • The centrality of curriculum and instruction
  • High-quality materials
  • Building the processes schools and districts (or CMO’s) use for school improvement, such as improving the capacity at each school for continuous growth
  • Attracting higher caliber teachers, improved induction, career ladders, and leadership, and a continued attention to improving performance for all
  • Alternate pathways for high school graduation, including career and technical education
  • Increased funding
  • Striking a balance between school and local control and district and state expectations and support
  • Avoiding the harsher anti-public-school and anti-teacher rhetoric
  • Looking to both traditional public schools and charter schools for models of high performance

These ideas also drove our efforts in California—where I live and once headed up the state education agency—to improve performance.

Mike’s, Peter’s, and Sandy’s willingness to be honest about problems with the reform movement and their sincere attempt to find common ground is to be commended. Both charters and traditional public schools need to improve, and there is a growing agreement on what that takes.

Mike writes that preparing students for democracy should be one of the purposes driving any improvement effort. There is a growing interest in civics and civic engagement in the country, and excellent exemplars now exist among charters (Democracy Prep, for example) and traditional public schools.

My only caveat is to add one more important purpose of education: the classic goal of a liberal education to help enrich each student’s life so they can reach their individual potential and develop character and a high moral stance. Mike mentions in passing literature, history, and the humanities as helping to find out how the world works, and he makes a glancing reference to character development in the service of citizenship. Yet I think this goal of broadening individual perspectives to lead a more fulfilling life should be explicitly expressed. (For a discussion of this point, see my essay titled “The Big Picture: The Three Goals of Public Education.”)

Mike also deserves kudos for promoting rigorous career and technical education as a pathway for students not bound for a four-year college. For a school, district, or state, the preparation-for-work goal should be to maximize the number of students prepared for a four-year college (or a pathway on which they transfer to one from a two-year school), and to prepare all others for a specific vocation. Presently, the country is preparing about 40 percent for four-year colleges. Even if we increase that to 50 percent (a formidable goal), that still leaves a large number of students not served. Most current policy at state and district levels basically ignores these students and assumes almost all can and should be prepared for a four-year college.

I do agree with those who are wary of an early placement test because of the danger of a premature choice, as we should give some students the chance to change perspectives in later grades. As one alternative, schools in the San Diego Unified School District have a Linked Learning college program that’s combined with a career path in which students who follow the latter early on are able to shift to the four-year-college track at a later time.

Many of Mike’s comments on literacy are also spot-on, including the importance of early foundation skills, and then content and vocabulary, as the major drivers of improving comprehension, as opposed to an over-emphasis on “comprehension skills.” One of the major deficiencies of annual statewide literacy tests is the lack of connection to content and the resulting default to comprehension strategies. Louisiana, for example, is attempting to correct this situation.

From our perspective, too many reformers are still too wedded to a strict accountability model based on a faulty theory of change. The initial reform paradigm was a simple structural leverage approach: Define student performance standards (mainly for accountability purposes, not to inform instructional improvement), assess whether the standards were being met, publicize those outcomes, provide consequences for results (bad and good), get out of the way of individual schools, and let pressure from harsh consequences and competition, especially from charters and parents, force improvement.

This strategy proved to be flawed in several respects and thus didn’t produce the hoped-for results.

First, the assumption that individual schools, if given freedom from district control and spurred by competition and consequences, would figure out how to improve on their own was highly simplistic, and it proved false for most schools. Many reformers now realize that the missing ingredient in that paradigm was direct attention to and support for the nuts and bolts of school improvement: curriculum, instructional materials, professional development, team building, principal and teacher leadership, effective district (or CMO) assistance, and help with getting these elements to cohere, as well as proper funding for those efforts. (Peter Cunningham’s response to Mike’s essay therefore deserves praise for asserting the importance of funding if improvement is to occur.) By comparison, the indirect method of attempting to improve performance by standards, primarily test-based assessments, and consequential accountability turned out to be a much weaker way to influence school performance, and it produced considerable collateral damage.

Another erroneous assumption underlying this simple reform paradigm was that educators would not improve unless compelled or pressured by fear of consequences or competition. Actually, most educators want to improve, but many did not know how, did not receive proper support, or were subject to leaders who were motivated by a test-and-punish philosophy relying on fear instead of the more engaging build-and-support approach. Appealing to teachers as professionals and engaging them in the work of improvement produces results; pressuring them often backfires. Deming and Drucker still apply.

Yet many reformers want to retain or strengthen accountability with consequences and embed the more direct approach in high-stakes accountability. The two strategies conflict because they stem from two radically different theories of how to encourage professionals to improve. More often than not, pressure and competition detract from high performance. High-stakes testing encourages schools or districts to become too fixated on test results and test items, to the detriment of deep learning and learning progressions. Campbell’s law is relevant; consequential accountability encourages educators to game the system, outright cheat, or become detached from the commitment to deeper learning and long-term continuous improvement by concentrating on short-term test results. Some reformers retort that teaching to the test and test prep are fine if complex skills are tested. But the tests don’t meet that standard. Dan Koretz’s The Testing Charade and Jim Popham’s work exemplify the problems with focusing on standardized test results, which are not of a fine enough grain size to help instruction.

As an example, tests don’t reflect the emerging idea of the importance of learning progressions, such as the development of proportional thinking in mathematics. These should be driving curriculum, instruction, classroom student assessment, and personalization. (See the recently released and excellent “Illustrative Mathematics” for a free curriculum based on learning progressions that was developed by Bill McCallum—one of the authors of Common Core Math—and his team for math in grades six through eight, and which received a top rating from EdReports.) Many reformers have advocated for more personalized, adaptive instruction. One impediment was the U.S. Department of Education’s original refusal to allow the Smarter Balanced Assessment Consortium to develop an adaptive test on broader strands across grades so students could adjust to higher or lower positions on these broader learning progressions. They insisted that the tests be limited to the standards of a particular grade.

Annual test results are a useful warning light and offer useful information about subgroups, but a whole array of formative evaluations, the use of instructional tasks as assessments, and teacher and student judgments are necessary to focus on what is needed to improve student performance. All too often, annual assessments drive instruction in superficial and shallow ways, instead of being one tool in the service of deeper learning. Many charters and traditional public schools, which live and die by annual test results, have become test-prep machines, narrowing the curriculum and harming student’s future performance. Also problematic is the tendency for some charter schools to trumpet bogus results by such ploys as not backfilling open slots over time and creating a rarified cohort. Competition and fear of consequences have similarly infected many traditional public schools with the same disease, including outright cheating or fiddling with who takes the test.

Finally, radical decentralization did not produce the advertised results. The theory was based in part on the idea that districts were a main part of the problem of low performance. Districts were consumed by politics, stakeholder resistance, and/or bureaucratic inefficiencies, and were thought to be ineffective because they were top-down compliance-oriented, or incapable of or not interested in improving results, but rather in protecting turf. They couldn’t or wouldn’t change. Decentralizing to individual schools, preferably charters, however, did not solve the problem of district effectiveness or individual schools and teachers needing support. Districts (or the central support structure in CMO’s) turn out to be crucial players in improving schools. Instead of end-running them, efforts should be made to improve their performance, and should be modeled after what our best districts have done. Contrary to the argument that districts were incapable of change, there is a growing number of districts in this country that have significantly improved their ability to support school improvement.

Districts in California—such as Long Beach (which only has a handful of charters), Garden Grove, Elk Grove, and Sanger—as well as comparable districts across the rest of the country, were able to engender school-site improvement by reorienting their management philosophy. They made the difficult shift from compliance orientation to support and engagement, but still insisted on high expectations—which, if not met, initiated discussions on how to improve. They placed solid curricula and effective classroom instruction at the center of improvement efforts and built supportive structures and processes to facilitate instructional improvement with impressive results. That strategy should guide improvement policies. Instead of giving up on districts, we should agree on and support approaches and policies geared to help the laggards improve.

Bravo also to Mike’s suggestion that teacher quality and teaching are not the only determinants of high student performance. Curricula, good materials, support processes, money, and community efforts are all also crucial. While reformers are now stressing the importance of curriculum and instruction, they and many traditional school leaders have not thought deeply enough about the complex school processes necessary to improve classroom instruction. Mike alludes to “professional development,” but an effective improvement strategy is much more complex than that. Educators and policymakers need to concentrate on how to develop coherence among coaching, professional development, team building, use of instructional materials, a broad array of classroom formative assessment techniques, teacher and principal leadership, support for struggling students, and what districts must do to support those efforts.

It is also gratifying to see many pro-public-school reformers become sensitive to and willing to oppose privatization forces high-jacking their rhetoric to replace or drastically cut funding for public schools, or to squelch teacher unions, as has happened in many Republican-led states and at the national level. Most reformers now resist the canard that the choice is between reformers’ policies favoring students or the status quo favoring adult and union interests. Both pro-public-education reformers and the anti-reform camp want to improve the quality of our schools; the debate is over which policies or strategies will best accomplish that goal. 

Many of us also agree with reformers’ proposals to concentrate more on the front end of the teacher pipeline. Welcome are suggestions to increase the quality of new teachers by strengthening teacher preparation programs, in part via higher admissions standards, and by lengthening the initial time for granting tenure, with streamlined due process protections as part of career-ladder progressions.

For existing teachers, many reformers have criticized the almost exclusive reform emphasis on firing the worst teachers by test-based and intricate principal evaluations. The effort was ruined by the use of faulty assessments and processes, and the policy itself detracted from more positive efforts to raise the performance of all staff. Moreover, concentrating on the worst often neglected supporting the best through such approaches as embedding the most effective teachers in a learning community and expanding their influence.

Rewarding excellent teachers with more cash has not worked and has caused collateral damage by lowering morale and jeopardizing team building. There is a simple way out of this: Pay the best teachers more, but also have them take on additional supportive roles. Career ladders and teacher-leadership positions need to become much more prevalent, as some reformers have argued. Convincing a top teacher to stay in the profession improves student and school performance much more than firing a laggard.

That’s not to say that the worst teachers should not be fired or counseled out. There are some excellent examples of effective teacher evaluation strategies, such as those in California’s San Jose and San Juan public school districts, where teachers have helped design and implement the programs. When there is teacher buy-in and evaluation is embedded in a comprehensive school improvement effort that includes the participation of teacher leaders at the school, the rates of dismissal or resignation of the weaker teachers are actually higher. Incompetent teachers can’t hide in group efforts; those who can improve do so, and the many who can’t just resign. Conversely, having principals spend an inordinate amount of time and paperwork conducting multiple classroom visits of every teacher for the purpose of formal evaluation severely hampers their more productive role of organizing their schools. Even the best teachers are willing to accept improvement advice as part of a collaborative improvement effort; but they tend to shut down, narrow their teaching, or resist when it is part of a formal evaluation process, especially from someone whom they don’t believe is more skilled than they are.

There are many more issues which could be discussed, but I hope that this commentary helps illuminate areas of agreement, areas needing further discussion, and areas that are still in dispute.

Bill Honig has been a practicing educator for more than forty-five years. He has taught in the inner-city schools of San Francisco, served as a local superintendent in Marin County, and was appointed to the State Board of Education by California governor Jerry Brown during his first term. In 1983, Honig was elected California state superintendent of public instruction, a position he held for ten years. In 1995, he founded the Consortium on Reaching Excellence. And he recently served as Vice-Chair of the California Instructional Quality Commission.

The views expressed herein represent the opinions of the author and not necessarily the Thomas B. Fordham Institute. 


          IBM Launches EdX Certificate Programs in Deep Learning and Chatbots
IBM is introducing two new Professional Certificate programs on the edX platform, focused on emerging tech in artificial intelligence: deep learning and chatbots.
          Video: New Cascade Lake Xeons to Speed Ai with Intel Deep Learning Boost

This week at the Data-Centric Innovation Summit, Intel laid out their near-term Xeon roadmap and plans to augment their AVX-512 instruction set to boost machine learning performance. "This dramatic performance improvement and efficiency - up to twice as fast as the current generation - is delivered by using a single instruction to handle INT8 convolutions for deep learning inference workloads which required three separate AVX-512 instructions in previous generation processors."

The post Video: New Cascade Lake Xeons to Speed Ai with Intel Deep Learning Boost appeared first on insideHPC.


          Infographic – A Complete Guide on Getting Started with Deep Learning in Python
Introduction: You seem to come across the term ‘Deep Learning’ everywhere these days. It’s all-pervasive and seems to be at the heart of ... The post Infographic – A Complete...


          Ghacks Deals: Pay What You Want: AI & Deep Learning Bundle

Pay What You Want: AI & Deep Learning Bundle offers a collection of ebooks and courses about Deep Learning and artificial intelligence. As is the case with all of these offers on […]

The post Ghacks Deals: Pay What You Want: AI & Deep Learning Bundle appeared first on gHacks Technology News.


          Comment on How to Use the Keras Functional API for Deep Learning by Jason Brownlee
If you fit an MLP on an image, the image pixels must be flattened to 1D before being provided as input.
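For example, a minimal Keras sketch (shapes are illustrative, assuming hypothetical 28x28 grayscale images) where a Flatten layer turns each image into a 1D vector before the fully connected layers:

```python
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Dense, Flatten

model = Sequential([
    Flatten(input_shape=(28, 28)),   # 28x28 image -> 784-element vector
    Dense(128, activation="relu"),   # ordinary MLP layers from here on
    Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```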
          Comment on Regression Tutorial with the Keras Deep Learning Library in Python by Jason Brownlee
The model is evaluated on the test dataset. You can learn more about test datasets here: https://machinelearningmastery.com/difference-test-validation-datasets/
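As an illustration of that train/test workflow (a sketch with synthetic data standing in for the tutorial's dataset; the shapes and hyperparameters are invented), a held-out test set can be split off before fitting and then passed to model.evaluate():

```python
import numpy as np
from sklearn.model_selection import train_test_split
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Dense

# Synthetic regression data; 13 features echo a housing-style example.
rng = np.random.default_rng(42)
X = rng.random((500, 13))
y = X.sum(axis=1) + rng.normal(scale=0.1, size=500)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)

model = Sequential([Dense(13, activation="relu", input_shape=(13,)),
                    Dense(1)])
model.compile(optimizer="adam", loss="mse")
model.fit(X_train, y_train, epochs=20, batch_size=16, verbose=0)

# Evaluation on data the model never saw during training.
print("held-out MSE:", model.evaluate(X_test, y_test, verbose=0))
```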

