
Image and Signal Analysis Journal Club by Sijie Shen
Tuesday, Mar 12
(10 a.m. - 10:50 a.m.)

Sijie Shen

Mathematical Sciences, UTD

Training Neural Networks Ib

Milestone announces MotoGP 19, arrives in June
Milestone announced today that this year will also bring a new edition of its motorcycle simulator, MotoGP 19, arriving June 6. The improvements to the franchise this time focus on the AI of the opponents, which will react more realistically thanks to the use of neural networks, and on the multiplayer experience, with the introduction of dedicated servers and other additions.

Can artificial intelligence solve the mysteries of quantum physics?
(The Hebrew University of Jerusalem) A new study published in Physical Review Letters by Prof. Shashua's computer science doctoral students at Hebrew University has demonstrated mathematically that algorithms based on deep neural networks can be applied to better understand the world of quantum physics, as well.
PyCoder’s Weekly: Issue #359 (March 12, 2019)

#359 – MARCH 12, 2019
View in Browser »

The PyCoder’s Weekly Logo

Writing Beautiful Pythonic Code With PEP 8

Learn how to write high-quality, readable code by using the Python style guidelines laid out in PEP 8. Following these guidelines helps you make a great impression when sharing your work with potential employers and team mates. Learn how to make your code PEP 8 compliant with these bite-sized lessons.

Enforcing The Single Responsibility Principle (SRP) in Python

The Single Responsibility Principle (or SRP) is an important concept in software development. The main idea of this concept is: all pieces of software must have only a single responsibility. Nikita’s article guides you through the complex process of writing simple code with some hands-on refactoring examples. You’ll use callable classes, SRP, dependency injection, and composition to write simple Python code. Nice read!
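As a rough illustration of the techniques the article covers (all class names here are hypothetical, not taken from the article), each class below owns exactly one responsibility and receives its collaborators via constructor injection:

```python
class StringSource:
    """Single responsibility: provide text."""
    def __init__(self, text):
        self.text = text

    def read(self):
        return self.text

class WordCounter:
    """Single responsibility: count words. A callable class, so it can be
    swapped for any function with the same signature."""
    def __call__(self, text):
        return len(text.split())

class Report:
    """Composes a source and a counter; both are injected, not hard-coded."""
    def __init__(self, source, counter):
        self.source = source
        self.counter = counter

    def run(self):
        return self.counter(self.source.read())

report = Report(StringSource("simple python code"), WordCounter())
print(report.run())  # 3
```

Because `Report` never constructs its own dependencies, either collaborator can be replaced (say, a file-backed source) without touching `Report` itself.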

Find a Python Job Through Vettery


Vettery specializes in developer roles and is completely free for job seekers. Interested? Submit your profile, and if accepted, you can receive interview requests directly from top companies seeking Python devs. Get started →
VETTERY sponsor

How to Set Up Your Python Project for Success With Tests, CI, and Code Coverage

How to add tests, CI, code coverage, and more. Very detailed writeup.

Detecting Real vs Fake Faces With Python and OpenCV

Learn how to perform liveness detection with OpenCV, Deep Learning, and Keras, so you can detect fake faces and add anti-face-spoofing to face recognition systems.

Managing Multiple Python Versions With pyenv

In this step-by-step tutorial, you’ll learn how to use pyenv to install multiple Python versions and switch between them with ease, including project-specific virtual environments, even if you don’t have sudo access.

Python Packages Growth Since 2005

“The Python ecosystem has been steadily growing [since 2005]. After the first few years of hyper growth as PyPI gained near-full adoption in the Python community, the number of packages actively developed each year—meaning they had at least one release or new distribution uploaded—has increased 28% to 48% every year.”


Loop With “Else” Clause

“What is the Pythonic way to handle the situation where if a condition exists the loop should be executed, but if it does not something else should be done?”
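The idiom the thread settles on is Python’s for…else, where the else block runs only when the loop completes without hitting break. A minimal sketch:

```python
def first_config(paths):
    """Return the first .cfg path, falling back to defaults.
    The else clause runs only if the for loop was never broken."""
    for p in paths:
        if p.endswith(".cfg"):
            found = p
            break
    else:
        found = "defaults"
    return found

print(first_config(["a.txt", "b.cfg"]))  # b.cfg
print(first_config(["a.txt", "b.txt"]))  # defaults
```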

Login: admin Password: admin


The Source for the Zen of Python Completely Violates the Zen of Python

“I was clicking around in PyCharm and noticed that the this module in CPython violates basically all of these principles.”

Python Jobs

Sr Enterprise Python Developer (Toronto, Canada)


Senior Systems Engineer (Hamilton, Canada)


Python Web Developer (Remote)

Premiere Digital Services

Software Developer (Herndon, VA)


Python Software Engineer (Berlin, Germany)


Computer Science Teacher (Pasadena, CA)

ArtCenter College of Design

Senior Python Engineer (New York, NY)


Software Engineer (Herndon, VA)

Charon Technologies

Web UI Developer (Herndon, VA)

Charon Technologies

More Python Jobs >>>

Articles & Tutorials

Don’t Make It Callable

You can make any Python object callable by adding a __call__ method to it. Like operator overloading, this seems like a nifty idea at first…but is it really? Moshe’s article goes over some use cases and examples to discuss whether making objects callable is a good idea or not.
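For reference, making an object callable takes only a __call__ method; this toy example (not from the article) shows the mechanism:

```python
class Multiplier:
    """A callable object: __call__ lets the instance be used like a
    function while still carrying state (here, a call counter)."""
    def __init__(self, factor):
        self.factor = factor
        self.calls = 0

    def __call__(self, x):
        self.calls += 1
        return x * self.factor

double = Multiplier(2)
print(double(21))    # 42
print(double.calls)  # 1
```

An explicitly named method (say, `double.apply(21)`) often communicates intent better, which is roughly the trade-off the article weighs.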

Python Pandas: Merging Dataframes Using Inner, Outer, Left and Right Joins

How to merge different DataFrames into a single DataFrame using Pandas’ DataFrame.merge() function. Merging is a big topic, so this part focuses on merging DataFrames using common columns as the join key, covering inner, right, left, and outer joins.
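The four join types can be sketched in a few lines (toy data, hypothetical column names):

```python
import pandas as pd

left = pd.DataFrame({"key": ["a", "b", "c"], "left_val": [1, 2, 3]})
right = pd.DataFrame({"key": ["b", "c", "d"], "right_val": [20, 30, 40]})

inner = left.merge(right, on="key", how="inner")   # only keys in both: b, c
outer = left.merge(right, on="key", how="outer")   # union of keys: a, b, c, d
left_j = left.merge(right, on="key", how="left")   # all keys from left: a, b, c

print(list(inner["key"]))   # ['b', 'c']
print(len(outer), len(left_j))
```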

Python Opportunities Come to You on Indeed Prime


Indeed prime is a hiring platform exclusively for tech talent like you. If you’re accepted, we’ll match you with companies and roles that line up with your skills, career goals and salary expectations. Apply for free today.
INDEED sponsor

An Introduction to Neural Networks With Python

A simple explanation of how neural networks work and how to implement one from scratch in Python. Nice illustrations!
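In the same spirit, here is a deliberately minimal from-scratch sketch (not the article's exact network): a single sigmoid neuron trained by gradient descent to learn logical OR:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Truth table for logical OR.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

w1 = w2 = b = 0.0  # deterministic start, no prior knowledge
lr = 1.0
for _ in range(2000):
    for (x1, x2), y in data:
        p = sigmoid(w1 * x1 + w2 * x2 + b)
        g = p - y  # gradient of cross-entropy loss w.r.t. the pre-activation
        w1 -= lr * g * x1
        w2 -= lr * g * x2
        b -= lr * g

preds = [round(sigmoid(w1 * x1 + w2 * x2 + b)) for (x1, x2), _ in data]
print(preds)  # [0, 1, 1, 1]
```

A real network stacks layers of such units and backpropagates the same kind of gradient through them.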

Import Almost Anything in Python

An intro to module loaders and finders so you can “hack” Python’s import system for fun and profit.
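As a taste of what such hacking looks like, this sketch (module name and source string are invented) registers a meta path finder that serves a module from an in-memory string instead of the filesystem:

```python
import importlib.abc
import importlib.util
import sys

class VirtualModuleFinder(importlib.abc.MetaPathFinder, importlib.abc.Loader):
    """Serves invented modules from in-memory source strings."""
    SOURCES = {
        "greeting": "def hello():\n    return 'hi from a virtual module'\n",
    }

    def find_spec(self, name, path, target=None):
        if name in self.SOURCES:
            return importlib.util.spec_from_loader(name, self)
        return None  # let the normal finders handle everything else

    def create_module(self, spec):
        return None  # use the default module creation

    def exec_module(self, module):
        exec(self.SOURCES[module.__name__], module.__dict__)

sys.meta_path.insert(0, VirtualModuleFinder())

import greeting  # resolved by our finder, not the filesystem
print(greeting.hello())  # hi from a virtual module
```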
ALEKSEY BILOGUR • Shared by Aleksey Bilogur

Private Python Package Management With Poetry and Packagr


I Learned Python in a Week and Only Sorta Regret It


Why You Want Formal Dependency Injection in Python Too

“In other languages, e.g., Java, explicit dependency injection is part of daily business. Python projects however very rarely make use of this technique. I’d like to make a case for why it might be useful to rethink this approach.”
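A minimal sketch of explicit constructor injection (all names hypothetical): the service never constructs its own mailer, so a test double can be swapped in for the real thing:

```python
from typing import Protocol

class Mailer(Protocol):
    def send(self, to: str, body: str) -> None: ...

class RecordingMailer:
    """A test double; in production you would inject a real SMTP mailer."""
    def __init__(self):
        self.sent = []

    def send(self, to: str, body: str) -> None:
        self.sent.append((to, body))

class SignupService:
    def __init__(self, mailer: Mailer):
        self.mailer = mailer  # injected, never constructed internally

    def signup(self, email: str) -> None:
        self.mailer.send(email, "welcome")

mailer = RecordingMailer()
SignupService(mailer).signup("a@example.com")
print(mailer.sent)  # [('a@example.com', 'welcome')]
```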

Understanding and Improving Conda’s Performance

Update from the Conda team regarding Conda’s speed, what they’re working on, and what performance improvements are coming down the pike.

Sentence Similarity in Python Using Doc2Vec

Using Python to estimate the similarity of two text documents using the Doc2Vec module.
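Doc2Vec itself comes from gensim; the final similarity step, cosine similarity between document vectors, can be sketched with plain bag-of-words counts (a deliberately simplified stand-in for learned embeddings):

```python
import math
from collections import Counter

def bow(text):
    """Bag-of-words vector; Doc2Vec would produce a dense learned vector."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    norm = lambda v: math.sqrt(sum(x * x for x in v.values()))
    return dot / (norm(a) * norm(b))

d1 = bow("the cat sat on the mat")
d2 = bow("the cat lay on the mat")
d3 = bow("stock prices fell sharply today")
print(cosine(d1, d2) > cosine(d1, d3))  # True
```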

Iterating with Simplicity: Evolving a Django app with Intercooler.js


Projects & Code

ctyped: Build Ctypes Interfaces for Shared Libraries With Type Hinting

GITHUB.COM/IDLESIGN • Shared by Juan Rodriguez

iodide-project/pyodide: Run CPython on WASM in the browser

And not just that, it’s a full Python scientific stack, compiled to WebAssembly for running in the browser. More info here.

Pyckitup: Python Game Engine That Runs on WebAssembly


PEP 8 Speaks: GitHub Integration for Python Code Style

A GitHub app to automatically review Python code style over Pull Requests.

ArchiveBox: Open Source Self-Hosted Web Archive


minik: Web Framework for the Serverless World

GITHUB.COM/EABGLOBAL • Shared by PythonistaCafe


Python Atlanta

March 14, 2019

Karlsruhe Python User Group (KaPy)

March 15, 2019

Django Girls Rivers 2019 Workshop

March 15 to March 17, 2019

PyCon Odessa

March 16 to March 17, 2019

PyCon SK 2019

March 22 to March 25, 2019

Happy Pythoning!
This was PyCoder’s Weekly Issue #359.
View in Browser »


[ Subscribe to 🐍 PyCoder’s Weekly 💌 – Get the best Python news, articles, and tutorials delivered to your inbox once a week >> Click here to learn more ]

x-dream-distribution and partners at CabSat2019

Following our success at CabSat 2018, we decided to bring our innovative products and services to it once again. Join x-dream-distribution GmbH in Dubai from 12-14 March 2019 at CabSat on the Bavarian booth in Hall 3.

x-dream-distribution GmbH presents innovative ingest, Social ingest andoutgestsolutions by Woody Technologies and transcoding and live broadcasting software by Capella Systems, as well as ingest and playout software by Libero Systems, and microservices toolkit for broadcaster by Squared Paper.

All the latest features presented at BVE2019, and even more, will be available to all our visitors at CabSat in Dubai, the Middle East & Africa's only event for content, broadcast, satellite, media & entertainment industry professionals looking to create inspiration, action & reaction.

Partner products and news CabSat 2019

Capella Systems (USA)

Cambria FTC // Cambria Cluster

Now in their 4th generation, Cambria FTC and Cambria Cluster are innovative transcoding products. Most recent standard features:

•    SD / HD / UHD and up to 8K
•    xAVC, ProRes, DNxHD, JPEG2000
•    H.264 & H.265
•    HDR support
•    Dolby E & Dolby Vision
•    S3 read&write

Cambria Live // Cambria Broadcast Manager // Live Edit

New release of Cambria Live Series v4.1, a software-based production suite for professional live streaming broadcast production. This all-in-one system handles live switching, production functions, encoding, and distribution.

•    MPEG-DASH and CMAF support for Akamai
•    Failover (backup stream) support for DASH/HLS with Akamai
•    Ad Pre-fetch request to Yospace HLS/DASH targets
•    Software-based cue tone trigger feature
•    Embed splice_event_id from SCTE into Ad Preset

Flow Works (Germany)

Distributed MAM with distributed workflow support for media processing.
•    FlowCenter - highly integrated, complete workflow and asset management solution.
•    Flow ANT - micro media management appliance with GPU acceleration
•    All-new Flow Archive GUI (Editorial GUI).

Libero (Turkey)

•    Libero Playout is a software-based playout automation system which provides powerful, flexible and user-friendly broadcasting solutions via a client-server architecture.
•    Libero Ingest is a flexible multi-channel ingest, transcoding and encoding software with powerful and user-friendly features.

Metaliquid (Italy)

Customized on-premises and cloud state-of-the-art AI recognition and classification services to meet specific industry needs. Metaliquid has developed a proprietary deep learning framework and neural network architectures.

•    Face recognition
•    Shot and setting recognition
•    Sensitive content detection
•    Opening and closing credits detection
•    Sport actions classification
•    Content type, audio and language classification

Squared Paper (United Kingdom)

The Busby Enterprise Service Bus & microservices toolkit is specially designed for the broadcast industry:

•    monitoring hardware and software systems and applications
•    workflow orchestration from small to large and complex
•    event recording for SLA reporting and later analysis
•    controlling external devices and services, etc.

Teamium (France)

Feature-rich, simple-to-use resource scheduling and collaboration management solution exclusively designed for video production.
•    Cloud-based production management
•    Resource planning and scheduling
•    User-defined business processes
•    Consumer-grade user experience
•    Real-time financial dashboard

Woody Technologies (France)

•    Version 3.1 of all Woody software will be released, bringing several major enhancements.
•    New Woody in2it Server, a unique client-server ingest tool for all media formats, with web-based intuitive UX and strong workflow control features, streamlining local and remote ingest workflows.

•    Woody in2it Go, the ultimate tool for reporters in the field to encode, transfer and notify their footage or stories to the broadcaster facility.
•    Woody Social, ingest from any social network directly to your production environment.
•    Woody in2it Server, Woody Ingest, Woody Outgest and Woody Social can now be deployed in a scalable architecture containing multiple nodes. This brings two major improvements, redundancy and load balancing, for large Woody deployments.

x-dream-media (Germany)

Software integrator fully committed to media IT, developing its own software products for file-based workflows and asset management.
•    Signiant Managers + Agents and XDM WFM – workflow manager with integrations to many 3rd parties file processing and publishing software
•    OneGUI – job, workflow and farm monitoring & reporting, search & filtering, multi-tenant, various 3rd parties (e.g. Harmonic, Telestream, Capella, MOG, Interra)
•    Ingest Browser – media browsing, previewing, trimming and workflow start, watchfolder, storage indexing, file search
•    MFP – multi format player with frame accurate positioning, side-by-side view, audio leveling, SDI output and playlist support
•    SERVUS node – software-only videoserver, recorder and IP-streamer

Google Rolling Out Instant On-Device Voice Recognition To Pixels
Google Pixel device owners will soon be able to enjoy instant voice transcription using on-device neural networking technology within...
4K Version Of Metroid Prime 2 Is Using Textures Upscaled By A Neural Network

Just like we’ve already seen with Doom, you can use a neural network to upscale pretty much any old video game textures, and the results are amazing. Metroid Prime 2 is no exception.


Real-time neural sliding mode field oriented control for a DFIG-based wind turbine under balanced and unbalanced grid conditions
This study proposes a real-time sliding mode field oriented control for a doubly-fed induction generator (DFIG)-based wind turbine prototype connected to the grid. The proposed controller is used to track the desired direct current (DC) voltage reference at the output of the DC link, to keep the grid power factor constant at the step-up transformer terminals controlled by the grid-side converter, and to independently force the stator active and reactive power to track desired values through the rotor currents controlled by the rotor-side converter. This control scheme is based on a recurrent high-order neural network (RHONN) identifier trained online by an extended Kalman filter. The RHONN is used to approximate the DC link and DFIG mathematical models. The adequate approximation helps to calculate the exact equivalent control part of the sliding mode controller and to eliminate the effects of disturbances and unknown dynamics appearing in the grid, which improves the robustness of the control scheme. The controller is experimentally validated on a 1/4 HP DFIG prototype and tested under variable wind speed to track a time-varying power reference and to extract the maximum power from the wind, under both balanced and unbalanced grid conditions.
Magister Dixit
“A neural network computes a function.” Arthur Choi, Ruocheng Wang, Adnan Darwiche ( 21.12.2018 )
Data Science: Deep Learning in Python (Updated)
Data Science: Deep Learning in Python (Updated)
.MP4 | Video: 1280x720, 30 fps(r) | Audio: AAC, 48000 Hz, 2ch | 1.43 GB
Duration: 9.5 hours | Genre: eLearning Video | Language: English

The MOST in-depth look at neural network theory, and how to code one with pure Python and Tensorflow.

Learn how Deep Learning REALLY works (not just some diagrams and magical black box code)

In depth: a layer-by-layer analysis of the neural network architecture behind Google's Machine Translation breakthrough
What is the neural network architecture behind Google's machine translation breakthrough? Google's paper on its neural machine translation system (GNMT), "Google’s Neural Machine Translation System: Bridging the Gap between Human and Machine Translation", describes an interesting […]
Machine Learning - Al Manal Training Center, Abu Dhabi, United Arab Emirates
Machine Learning Training:

Machine learning is a branch of computer science and one of the most widely studied subjects today, as the opportunities are abundant. If you want to take a machine learning course in Abu Dhabi, reach Al Manal Training Center. As new technologies have evolved, machine learning has changed greatly, and our machine learning classes in Abu Dhabi will give you a clear understanding.

Below are some of the topics that we will discuss in our machine learning training classes at our institute:

Machine Learning
Data Preprocessing
Introduction to Supervised Learning
Simple and Multiple Linear Regression
Polynomial Regression

Linear Methods for Classification
Logistic Regression
K-Nearest Neighbours
Support Vector Machines
Kernel SVM
Naive Bayes
Decision Tree
Random Forest

Introduction to Unsupervised Learning
Cluster Analysis
K Means Clustering

Reinforcement Learning

Natural Language Processing

Deep Learning
Artificial Neural Networks

Dimensionality Reduction

Model Selection Procedures

Cost: 3500 AED

Duration: Upto 30 Hours

Common foundations of biological and artificial vision
Trieste, Italy (SPX) Mar 13, 2019
"It is known that there are important similarities between the visual system of primates and the artificial neural networks of the latest generation. Our study shows how these similarities exist also with the visual system of rats, whose architecture is undoubtedly more primitive, if compared with the brain of primates, but whose functions and potential still remain largely unexplored". This is
Communication-efficient distributed SGD with Sketching. (arXiv:1903.04488v1 [cs.LG])

Authors: Nikita Ivkin, Daniel Rothchild, Enayat Ullah, Vladimir Braverman, Ion Stoica, Raman Arora

Large-scale distributed training of neural networks is often limited by network bandwidth, wherein the communication time overwhelms the local computation time. Motivated by the success of sketching methods in sub-linear/streaming algorithms, we propose a sketching-based approach to minimize the communication costs between nodes without losing accuracy. In our proposed method, workers in a distributed, synchronous training setting send sketches of their gradient vectors to the parameter server instead of the full gradient vector. Leveraging the theoretical properties of sketches, we show that this method recovers the favorable convergence guarantees of single-machine top-$k$ SGD. Furthermore, when applied to a model with $d$ dimensions on $W$ workers, our method requires only $\Theta(kW)$ bytes of communication, compared to $\Omega(dW)$ for vanilla distributed SGD. To validate our method, we run experiments using a residual network trained on the CIFAR-10 dataset. We achieve no drop in validation accuracy with a compression ratio of 4, or about 1 percentage point drop with a compression ratio of 8. We also demonstrate that our method scales to many workers.
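For intuition, the top-$k$ compression whose convergence guarantees the method recovers can be sketched as follows (toy gradients; this is the baseline, not the paper's sketching algorithm):

```python
def top_k_compress(grad, k):
    """Keep only the k largest-magnitude entries as (index, value) pairs."""
    idx = sorted(range(len(grad)), key=lambda i: abs(grad[i]), reverse=True)
    return [(i, grad[i]) for i in idx[:k]]

def decompress(pairs, d):
    g = [0.0] * d
    for i, v in pairs:
        g[i] = v
    return g

# Each worker sends k (index, value) pairs instead of all d coordinates.
worker_grads = [[0.1, -2.0, 0.0, 3.0], [0.2, -1.0, 0.5, 1.0]]
d, k = 4, 2
server_sum = [0.0] * d
for g in worker_grads:
    for i, v in enumerate(decompress(top_k_compress(g, k), d)):
        server_sum[i] += v
print(server_sum)  # [0.0, -3.0, 0.0, 4.0]
```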

Practical Multi-fidelity Bayesian Optimization for Hyperparameter Tuning. (arXiv:1903.04703v1 [cs.LG])

Authors: Jian Wu, Saul Toscano-Palmerin, Peter I. Frazier, Andrew Gordon Wilson

Bayesian optimization is popular for optimizing time-consuming black-box objectives. Nonetheless, for hyperparameter tuning in deep neural networks, the time required to evaluate the validation error for even a few hyperparameter settings remains a bottleneck. Multi-fidelity optimization promises relief using cheaper proxies to such objectives --- for example, validation error for a network trained using a subset of the training points or fewer iterations than required for convergence. We propose a highly flexible and practical approach to multi-fidelity Bayesian optimization, focused on efficiently optimizing hyperparameters for iteratively trained supervised learning models. We introduce a new acquisition function, the trace-aware knowledge-gradient, which efficiently leverages both multiple continuous fidelity controls and trace observations --- values of the objective at a sequence of fidelities, available when varying fidelity using training iterations. We provide a provably convergent method for optimizing our acquisition function and show it outperforms state-of-the-art alternatives for hyperparameter tuning of deep neural networks and large-scale kernel learning.

Artificial Intelligence-aided Receiver for A CP-Free OFDM System: Design, Simulation, and Experimental Test. (arXiv:1903.04766v1 [cs.IT])

Authors: Jing Zhang, Chao-Kai Wen, Shi Jin, Geoffrey Ye Li

Orthogonal frequency division multiplexing (OFDM), usually with sufficient cyclic prefix (CP), has been widely applied in various communication systems. The CP in OFDM consumes additional resource and reduces spectrum and energy efficiency. However, channel estimation and signal detection are very challenging for CP-free OFDM systems. In this paper, we propose a novel artificial intelligence (AI)-aided receiver (AI receiver) for a CP-free OFDM system. The AI receiver includes a channel estimation neural network (CE-NET) and a signal detection neural network based on orthogonal approximate message passing (OAMP), called OAMP-NET. The CE-NET is initialized by the least-square channel estimation algorithm and refined by a linear minimum mean-squared error neural network. The OAMP-NET is established by unfolding the iterative OAMP algorithm and adding several trainable parameters to improve the detection performance. We first investigate their performance under different channel models through extensive simulation and then establish a real transmission system using a 5G rapid prototyping system for an over-the-air (OTA) test. Based on our study, the AI receiver can estimate time-varying channels with a single training phase. It also has great robustness to various imperfections and has better performance than those competitive algorithms, especially for high-order modulation. The OTA test further verifies its feasibility to real environments and indicates its potential for future communications systems.

Constructing mass-decorrelated hadronic decay taggers in ATLAS
A large number of physics processes as seen by ATLAS at the LHC manifest as collimated, hadronic sprays of particles known as ‘jets’. Jets originating from the hadronic decay of a massive particle are commonly used in searches for both measurements of the Standard Model and searches for new physics. The ATLAS experiment has employed machine learning discriminants to the challenging task of identifying the origin of a given jet, but such multivariate classifiers exhibit strong non-linear correlations with the invariant mass of the jet, complicating many analyses which wish to make use of the mass spectrum. Adversarially trained neural networks (ANN) are presented as a way to construct mass-decorrelated jet classifiers by jointly training two networks in a domain-adversarial fashion. The use of neural networks further allows this method to benefit from high-performance computing platforms for fast development. A comprehensive study of different mass-decorrelation techniques is performed in ATLAS simulated datasets, comparing ANNs to designed decorrelated taggers (DDT), fixed-efficiency k-NN regression, convolved substructure (CSS), and adaptive boosting for uniform efficiency (uBoost). Performance is evaluated using metrics for background jet rejection and mass-decorrelation.
Hardware Accelerated ATLAS Workloads on the WLCG
In recent years the usage of machine learning techniques within data-intensive sciences in general and high-energy physics in particular has rapidly increased, in part due to the availability of large datasets on which such algorithms can be trained as well as suitable hardware, such as graphics or tensor processing units which greatly accelerate the training and execution of such algorithms. Within the HEP domain, the development of these techniques has so far relied on resources external to the primary computing infrastructure of the WLCG. In this paper we present an integration of hardware-accelerated workloads into the Grid through the declaration of dedicated queues with access to hardware accelerators and the use of linux container images holding a modern data science software stack. A frequent use-case of in the development of machine learning algorithms is the optimization of neural networks through the tuning of their hyper parameters. For this often a large range of network variations must be trained and compared, which for some optimization schemes can be performed in parallel -- a workload well suited for grid computing. An example of such a hyper-parameter scan on Grid resources for the case of Flavor Tagging within ATLAS is presented.
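The embarrassingly parallel nature of such a hyper-parameter scan can be illustrated with a toy example (the quadratic "validation error" is a hypothetical stand-in for actually training a network):

```python
import itertools
from concurrent.futures import ThreadPoolExecutor

def validation_error(params):
    """Hypothetical stand-in for training a network and measuring error."""
    lr, depth = params
    return (lr - 0.01) ** 2 + ((depth - 4) ** 2) * 1e-4

# Every configuration is independent, so the scan parallelizes trivially:
# on threads here, on Grid worker nodes in the setup described above.
grid = list(itertools.product([0.001, 0.01, 0.1], [2, 4, 8]))
with ThreadPoolExecutor(max_workers=4) as pool:
    errors = list(pool.map(validation_error, grid))

best = grid[errors.index(min(errors))]
print(best)  # (0.01, 4)
```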
A Scalable Near-Memory Architecture for Training Deep Neural Networks on Large In-Memory Datasets
Most investigations into near-memory hardware accelerators for deep neural networks have primarily focused on inference, while the potential of accelerating training has received relatively little attention so far. Based on an in-depth analysis of the key computational patterns in state-of-the-art gradient-based training methods, we propose an efficient near-memory acceleration engine called NTX that can be used to train state-of-the-art deep convolutional neural networks at scale. Our main contributions are: (i) a loose coupling of RISC-V cores and NTX co-processors reducing offloading overhead by 7× over previously published results; (ii) an optimized IEEE 754 compliant data path for fast high-precision convolutions and gradient propagation; (iii) evaluation of near-memory computing with NTX embedded into residual area on the Logic Base die of a Hybrid Memory Cube; and (iv) a scaling analysis to meshes of HMCs in a data center scenario. We demonstrate a 2.7× energy efficiency improvement of NTX over contemporary GPUs at 4.4× less silicon area, and a compute performance of 1.2 Tflop/s for training large state-of-the-art networks with full floating-point precision. At the data center scale, a mesh of NTX achieves above 95 percent parallel and energy efficiency, while providing 2.1× energy savings or 3.1× performance improvement over a GPU-based system.
Google squeezed an offline dictation AI into its keyboard app

Google has updated its Gboard keyboard app for Android with AI-powered dictation that works offline. The company says it’s effectively miniaturized a cloud-based neural network system for speech recognition into an 80MB mobile app update, and that it’ll allow for faster and more reliable dictation on the go. That’s big, because it means you don’t need your phone connecting to a server to deliver high-quality speech recognition results – and you also don’t need to have access to a high-speed Wi-Fi network to use the feature. The new system has been in the works since 2014, and it eschews the…

This story continues at The Next Web

Or just read more coverage about: Google

The Morning Brew #2703
Information JavaScript, CSS, HTML & Other Static Files in ASP .NET Core – Shahed Chowdhuri Why isn’t my session state working in ASP.NET Core? Session state, GDPR, and non-essential cookies – Andrew Lock How to Style console.log Contents in Chrome DevTools – Christian Nwamba Image Recognition with Neural Networks (part 2) – Patryk Borowa C# […]
Design Engineer - (Sr./Mid) SRAM/ACDs/Sigma Deltas
CA-Santa Clara, Located in beautiful Santa Clara, CA and Orange County we're developing deep neural network computing chips across several industries. We are very well funded and are growing out our team. If you're interested in deep learning, electronics, high performance computing, and the relevant industries that would use our amazing products (robotics, medical, self-driving cars); join us! If you're passiona
Video: Metroid Prime 2: Echoes Looks Lovely Running At 60fps And In 4K

HD Trilogy when?

The Metroid Prime series is hugely popular among the franchise's fanbase - you probably already knew that if you chose to read this article - but the games are starting to show their age a little as technology gallops forward. Metroid Prime 2: Echoes is now 15 years old (where does the time go?), with its GameCube visuals naturally looking noticeably dated by today's standards.

But what would the games look like if they were given a significant resolution upgrade? Well, you can take a look for yourself thanks to this video below which sees an emulated version of the game running at 60fps in 4K. The texture upscales have been achieved by using a neural network, with the credit going to BearborgOne.

Read the full article on

Researchers Have Taught Robots Self Awareness of Their Own Bodies

Columbia Engineering's robot learns what it is, with zero prior knowledge of physics, geometry, or motor dynamics. After a period of "babbling," and within about a day of intensive computing, the robot creates a self-simulation, which it can then use to contemplate and adapt to different situations, handling new tasks as well as detecting and repairing damage in its body. (Source: Robert Kwiatkowski/Columbia Engineering) 

What if a robot had the ability to become kinematically self-aware, in essence, developing its own model, based on observation and analysis of its own characteristics?

Researchers at Columbia University have developed a process in which a robot "can auto-generate its own self model," that will accurately simulate its forward kinematics, which can be run at any point in time to update and essentially calibrate the robot as it experiences wear, damage, or reconfiguration – thereby allowing an autonomous robotic control system to achieve the highest accuracy and performance. The same self model can then be used to learn additional tasks.

For robots designed to perform critical tasks, it is essential to have an accurate kinematic model describing the robot's mechanical characteristics. This will allow the controller to project response times, inertial behavior, overshoot, and other characteristics that could potentially lead the robot's response to diverge from an issued command, and compensate for them.

The robotic arm in multiple poses as it was collecting data through random motion. (Image source: Robert Kwiatkowski/Columbia Engineering)

This requirement presents several challenges: First, as robotic mechanisms get more complex, producing a mathematically accurate model becomes more difficult. This is especially true for soft robotics, which tend to exhibit highly non-linear behavior. Second, once in service, robots can change, either through wear or damage, or simply by experiencing different types of loads while in operation. Finally, the user may choose to reconfigure the robot to perform a different function from the one it was originally deployed for. In each of these cases, the kinematic model embedded in the controller may fail to achieve satisfactory results if not updated.

According to Robert Kwiatkowski, a doctoral student involved in the Columbia University research, a type of "self-aware robot" capable of overcoming these challenges was demonstrated in their laboratory. The team conducted the experiments using a four-degree-of-freedom articulated robotic arm. The robot moved randomly through 1,000 trajectories, collecting state data at 100 points along each one. The state data was derived from positional encoders on the motor and the end effector, and was then fed, along with the corresponding commands, into a deep learning neural network. "Other sensing technology, such as indoor GPS, would have likely worked just as well," according to Kwiatkowski.
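The article does not describe the network architecture or training code. Purely as an illustration of the general idea — learning forward kinematics from randomly collected (command, state) pairs — here is a toy sketch in plain numpy, using a hypothetical 2-link planar arm as a stand-in for the real four-degree-of-freedom robot:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for the robot: a 2-link planar arm whose true
# forward kinematics the network must discover purely from data.
L1, L2 = 1.0, 0.8
def true_forward_kinematics(angles):
    a1, a2 = angles[:, 0], angles[:, 1]
    x = L1 * np.cos(a1) + L2 * np.cos(a1 + a2)
    y = L1 * np.sin(a1) + L2 * np.sin(a1 + a2)
    return np.stack([x, y], axis=1)

# "Babbling": random motor commands paired with observed end-effector states.
commands = rng.uniform(-np.pi, np.pi, size=(2000, 2))
states = true_forward_kinematics(commands)

# One-hidden-layer network trained by full-batch gradient descent on MSE.
W1 = rng.normal(0, 0.5, (2, 64)); b1 = np.zeros(64)
W2 = rng.normal(0, 0.1, (64, 2)); b2 = np.zeros(2)
lr, n = 0.05, len(commands)
for step in range(5000):
    h = np.tanh(commands @ W1 + b1)
    pred = h @ W2 + b2
    err = pred - states
    d_h = (err @ W2.T) * (1 - h ** 2)          # backprop through tanh
    g_W2, g_b2 = h.T @ err / n, err.mean(0)
    g_W1, g_b1 = commands.T @ d_h / n, d_h.mean(0)
    W1 -= lr * g_W1; b1 -= lr * g_b1
    W2 -= lr * g_W2; b2 -= lr * g_b2

# Evaluate on held-out commands the network never saw during training.
test_cmds = rng.uniform(-np.pi, np.pi, size=(200, 2))
test_pred = np.tanh(test_cmds @ W1 + b1) @ W2 + b2
mae = np.abs(true_forward_kinematics(test_cmds) - test_pred).mean()
print(f"held-out mean absolute error: {mae:.3f}")
```

The real system learned a far richer model, from noisy sensors, but the loop has the same shape: collect random motion data, fit a network, and evaluate the learned self-model against held-out states.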

One point that Kwiatkowski emphasized was that this model had no prior knowledge of the robot's shape, size, or other characteristics, nor, for that matter, did it know anything about the laws of physics.

Initially, the models were very inaccurate. "The robot had no clue what it was, or how its joints were connected." But after 34 hours of training, the model became consistent with the physical robot to within about four centimeters.

This self-learned model was then installed in the robot, which was able to perform pick-and-place operations with a 100% success rate in a closed-loop test. In an open-loop test, which Kwiatkowski said is equivalent to picking up objects with your eyes closed (a task difficult even for humans), it achieved a 44% success rate.

Overall, the robot achieved an error rate comparable to that of its own re-installed operating system. The self-modeling capability makes the robot far more autonomous, Kwiatkowski said. To further demonstrate this, the researchers replaced one of the robotic linkages with one having different characteristics (weight, stiffness, and shape), and the system updated its model and continued to perform as expected.

This type of capability could be extremely useful for an autonomous vehicle that could continuously update its state model in response to changes due to wear, variable internal loads, and driving conditions.

Clearly, more work is required to achieve a model that can converge in seconds rather than hours. From here, the research will proceed to more complex systems.

RP Siegel, PE, has a master's degree in mechanical engineering and worked for 20 years in R&D at Xerox Corp. An inventor with 50 patents and now a full-time writer, RP finds his primary interest at the intersection of technology and society. His work has appeared in multiple consumer and industry outlets, and he also co-authored the eco-thriller  Vapor Trails.


          Data Elixir - Issue 224

In the News

The AI-Art Gold Rush Is Here

The gold rush started last October when Christie's sold an algorithm-generated print for $432,500. More recently, an AI artist had its own show at a gallery in Chelsea. There's definitely a lot of interest here, but is AI art really all that interesting? This long read in The Atlantic explores the burgeoning industry, with links to artwork so you can judge for yourself.


Why Data Science Teams Need Generalists, Not Specialists

A team of specialists works well in environments where the organization knows exactly what needs to be done and execution can be managed like an assembly line. This article by Eric Colson explores why that's rarely the case in data science and how specialization can get in the way.

Sponsored Link

Master of Management Analytics: Your degree for the world of data

Realize the promise of data analytics and find the opportunity in the numbers. The Master of Management Analytics from Smith School of Business is essential training to unleash the potential of data and generate competitive advantage.

Tools and Techniques

Viewing Matrices & Probability as Graphs

Nice post that starts by showing how every matrix is a graph. From there, it's a visual tour of matrix operations and probabilities. Great read!

Why Model Explainability is The Next Data Science Superpower

In this excerpt from his model explainability course, Dan Becker outlines the types of things that the very best data scientists are able to discern about their models and why that information is useful. This post also sparked a worthwhile discussion on Hacker News.

Exploring Neural Networks with Activation Atlases

Great interactive article on the Distill site that introduces a new technique for visualizing how decision-making happens in a neural network. It's a long read, but it's compelling all the way through.

Set Your Jupyter Notebook up Right with this Extension

By default, Jupyter Notebooks are unnamed, have no markdown cells, and no imports. Since people are notoriously bad at changing default settings, why not encourage better practices? This simple extension gently nudges you to create better notebooks.

Lessons learned building natural language processing systems in health care

Building NLP systems in a complex domain like health care is hard. Not only do these systems require broad domain knowledge, but every sub-specialty and form of communication is fundamentally different. In this post, David Talby outlines common issues and the lessons he's learned over 7 years of building NLP systems in health care.

Find A Data Science Job Through Vettery

Vettery specializes in tech roles and is completely free for job seekers. Interested? Submit your profile, and if accepted onto the platform, you can receive interview requests directly from top companies growing their data science teams.

// sponsored


Awesome Machine Learning Interpretability

This curated list of machine learning interpretability resources is definitely worthy of its "awesome" moniker. Includes a blueprint of use-cases, software examples, tutorials, packages, books, papers, etc.

Data Viz

Data Visualization Society Logo: Behind the scenes

"Logo design" may not sound interesting but this post describes the logo for the newly formed Data Visualization Society. The logo changes dynamically according to member skills and it's unlike any logo you've ever seen.

Jobs & Careers

Post on Data Elixir's Job Board to reach a wide audience of data professionals.


Data Elixir is curated and maintained by @lonriesberg. For additional finds from around the web, follow Data Elixir on Twitter or Facebook.

This RSS feed is also available via email subscription.

          Neural networks predict planet mass
(University of Bern) To find out how planets form astrophysicists run complicated and time consuming computer calculations. Members of the NCCR PlanetS at the University of Bern have now developed a totally novel approach to speed up this process dramatically. They use deep learning based on artificial neural networks, a method that is well known in image recognition.
          Re: The Reference Frame: AdS bulk is a neural network, entanglement is a quantum gauge field

A surprising political comment. The first reference in the paper by Czech et al. is this 2016 preprint:

The author list is Xi Dong, Daniel Harlow, and Aron Wall. Now, as you know, Daniel Harlow is one of the most aggressive far-left activists in the theoretical physics community - and the author of the anti-Strumia "particles for justice" hit piece, among other things. What about Aron Wall?

It turns out that last summer, a homosexualist outlet ran a hit piece against Wall, who had been hired by Cambridge yet dared to say that homosexuality was unnatural:

Despite these differences, Harlow and Wall could still have written a rather influential paper (118 citations now). Would it be possible today?

          neuroptica added to PyPI
Nanophotonic Neural Network Simulator
          Deep Learning: When Should You Use It?
By Tom Taulli

Deep learning, which is a subset of AI (artificial intelligence), has been around since the 1950s. It's focused on developing systems that mimic the brain's neural network structure. Yet it was not until the 1980s that deep learning started to show promise, spurred by the pioneering theories of researchers like Geoffrey Hinton, Yoshua Bengio and Yann LeCun. There was also the benefit of accelerating improvements in computer power. Despite all this, there remained lots of
          Review: Sophos Intercept X Stops Threats at the Gate
Tue, 03/12/2019 - 11:59

Traditional anti-malware products scan both memory and disk for particular threat signatures, which are updated daily (or even more often). But if a new threat appears before the pattern files are updated, these solutions won’t be able to detect or prevent the attack. 

In an effort to keep ahead of hackers, SophosLabs analyzes more than 400,000 new malware samples every day. The challenge is that the vast majority of malware is unique to individual organizations, so updating a pattern file is an inefficient, ineffective block for these attacks.

To fix that, Sophos Intercept X sits on top of traditional security software solutions to augment protection. The software prevents malware before it can be executed and stops threats, such as ransomware, from running. When ransomware does get into the network, the tool provides a root cause analysis to help users understand the forensic details.

MORE FROM EDTECH: Here are four ways universities can improve their endpoint protection.

Defeat Ransomware with Automatic Monitoring and File Rollbacks

Intercept X uses deep learning to detect new (and previously unseen) malware and unwanted applications. Deep learning is modeled after the human brain, using advanced neural networks that continuously learn as they accumulate more data.

It’s the same kind of machine learning that powers facial recognition, natural language processing and even self-driving cars, all inside an anti-malware program.


Ransomware has grown at a fast clip since the success of the WannaCry malware infection in May 2017. Ransomware installs itself on a computer and then encrypts important files, making them inaccessible to their owner. The owner then receives a message from the attackers saying that, in exchange for payment, they will decrypt the files.

Sophos Intercept X blocks these attacks by monitoring the file system, detecting any rapid encryption of files and terminating the process. It even rolls back the changes to the files, leaving them as if they had never been touched — and denying the cybercriminals a payoff.
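Sophos does not publish its detection logic, but a common heuristic in this space is flagging a burst of high-entropy writes, since encrypted data looks statistically random. The sketch below is a purely illustrative toy with hypothetical helper names, not Intercept X's algorithm:

```python
import collections
import math

def shannon_entropy(data: bytes) -> float:
    """Bits per byte of the payload; ciphertext approaches the maximum of 8.0."""
    if not data:
        return 0.0
    n = len(data)
    counts = collections.Counter(data)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def looks_like_mass_encryption(write_events, entropy_threshold=7.5, burst=5):
    """Flag a run of consecutive file writes whose payloads are near-random."""
    streak = 0
    for _path, payload in write_events:
        if shannon_entropy(payload) >= entropy_threshold:
            streak += 1
            if streak >= burst:
                return True
        else:
            streak = 0
    return False

# Ordinary text writes vs. simulated ciphertext (uniform byte distribution).
plain = [(f"doc{i}.txt", b"plain text content, low entropy " * 8) for i in range(10)]
crypt = [(f"doc{i}.txt", bytes(range(256)) * 16) for i in range(10)]
print(looks_like_mass_encryption(plain))   # False
print(looks_like_mass_encryption(crypt))   # True
```

A production product additionally has to roll back damaged files and avoid false alarms on legitimately compressed or encrypted data, which is where the deep learning layer described above comes in.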

Integrated Protections Give Admins Better Visibility

The software offers several additional protections. WipeGuard uses the same deep learning features to protect a computer’s Master Boot Record. (Ransomware attacks on the MBR prevent the computer from restarting — even restores from backups are impossible until the cybercriminals get their money.)

Safe Browsing includes policies to monitor a web browser’s encryption, presentation and network interfaces to detect “man in the browser” attacks that are common in many banking Trojan viruses.

Sophos Root Cause Analysis contains a list of infection types that have occurred in the past 90 days. There’s even a Visualize tab that connects devices, browsers and websites to track where the infection occurred and how it spread. 

This doesn’t mean users must take action immediately, but it could help them investigate the chain of events surrounding a malware infection and highlight any necessary security improvements.

One caveat: If users haven’t patched their software (especially Java and Adobe applications), Intercept X may detect false positives. Be sure to update all software to the most current versions — always a best practice — to avoid these accidental alerts.


Make Management Easier Through Sophos Central Dashboard

Endpoint protection is wonderful, but managing all those endpoints can be a chore. In addition to the usual laptops and desktops, security managers must pay attention to servers, mobile devices, email and web browsing. The potential threat surface can be overwhelming.

Sophos Central streamlines endpoint management, especially when deployed alongside other Sophos products. From the console, admins can manage Intercept X and endpoint protection either globally or by device. Web protection provides enterprise-grade browsing defense against malicious pop-ups, ads and risky file downloads. The mobile dashboard also shows device compliance, self-service portal registrations, platform versions and management status. 

Server security protects both virtual and physical servers. The Server Lockdown feature reduces the possibility of attack by ensuring that a server can configure and run only known, trusted executables.

Sophos wireless, encryption and email products also tie in to the console, and Sophos Wi-Fi access points can work alongside endpoint and mobile protection clients to provide integrated threat protection. 

That lets admins see what’s happening on wireless networks, APs and connecting clients to get insight into the inappropriate use of resources, including rogue APs. 

The Sophos Encryption dashboard provides centrally managed full-disk encryption using Windows BitLocker or Mac FileVault. Key management becomes a snap with the SafeGuard Management Center, which lets users recover damaged systems. 

Sophos email protection provides a safeguard against spam, phishing attempts and other malicious attacks through the most common user interface of all: email.

Sophos Central isn’t just for admins. Self-service is an important feature today, with user demands and IT budgets in constant conflict. 

Users can log in to the Sophos self-service portal to customize their security status, recover passwords and get notifications. In most IT departments, password recovery is the No. 1 help desk request, and eliminating those calls means technicians can spend more time on complex tasks.

Sophos Intercept X

OS: Windows 7, 8, 8.1 and 10, 32-bit and 64-bit; macOS
Speed: Extracts millions of file features in 20 milliseconds
Storage Requirement: 20MB on the endpoint
Server Requirement: Sophos Central supported on Windows 2008R2 and above

Dr. Jeffrey Sheen currently works as the supervisor of enterprise architecture services for Grange Mutual Casualty Group of Columbus, Ohio.

          Survey of Precision-Scalable Multiply-Accumulate Units for Neural-Network Processing
The current trend for deep learning has come with an enormous computational need for billions of Multiply-Accumulate (MAC) operations per inference. Fortunately, reduced precision has demonstrated large benefits with low impact on accuracy, paving the way towards processing in mobile devices and IoT nodes. Precision-scalable MAC architectures optimized for neural networks have recently gained interest thanks to their subword parallel or bit-serial capabilities. Yet, it has been hard to make a fair judgment of their relative benefits as they have been implemented with different technologies and performance targets. In this work, run-time configurable MAC units from ISSCC 2017 and 2018 are implemented and compared objectively under diverse precision scenarios. All circuits are synthesized in a 28nm commercial CMOS process with precision ranging from 2 to 8 bits. This work analyzes the impact of scalability and compares the different MAC units in terms of energy, throughput and area, aiming to understand the optimal architectures to reduce computation costs in neural-network processing.
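As a behavioral illustration (not the hardware circuits benchmarked in the paper), a precision-scalable MAC can be modeled as clamping both operands to a configurable bit width while keeping a full-precision accumulator:

```python
def quantize(x: int, bits: int) -> int:
    """Clamp an integer operand into the signed `bits`-bit range."""
    lo, hi = -(1 << (bits - 1)), (1 << (bits - 1)) - 1
    return max(lo, min(hi, x))

def mac(weights, activations, bits: int) -> int:
    """Multiply-accumulate with operands reduced to `bits` bits.
    The accumulator stays full precision, as in typical NN accelerators."""
    acc = 0
    for w, a in zip(weights, activations):
        acc += quantize(w, bits) * quantize(a, bits)
    return acc

w = [31, -14, 7, 100]
a = [5, 9, -12, 3]
for bits in (8, 4, 2):
    print(f"{bits}-bit operands -> {mac(w, a, bits)}")
# 8-bit operands -> 245, 4-bit operands -> -56, 2-bit operands -> -2
```

Real subword-parallel or bit-serial units also pack several narrow products into one wide datapath for throughput; this model only captures the accuracy side of precision scaling.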
          BLADE: A BitLine Accelerator for Devices on the Edge
The increasing ubiquity of edge devices in the consumer market, along with their ever more computationally expensive workloads, necessitate corresponding increases in computing power to support such workloads. In-memory computing is attractive in edge devices as it reuses preexisting memory elements, thus limiting area overhead. Additionally, in-SRAM Computing (iSC) efficiently performs computations on spatially local data found in a variety of emerging edge device workloads. We therefore propose, implement, and benchmark BLADE, a BitLine Accelerator for Devices on the Edge. BLADE is an iSC architecture that can perform massive SIMD-like complex operations on hundreds to thousands of operands simultaneously. We implement BLADE in 28nm CMOS and demonstrate its functionality down to 0.6V, lower than any conventional state-of-the-art iSC architecture. We also benchmark BLADE in conjunction with a full Linux software stack in the gem5 architectural simulator, providing a robust demonstration of its performance gain in comparison to an equivalent embedded processor equipped with a NEON SIMD co-processor. We benchmark BLADE with three emerging edge device workloads, namely cryptography, high efficiency video coding, and convolutional neural networks, and demonstrate 4x, 6x, and 3x performance improvement, respectively, in comparison to a baseline CPU/NEON processor at an equivalent power budget.
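The paper's circuits operate directly on SRAM bitlines. As a rough software analogy (assumed for illustration, not BLADE's implementation), activating two memory rows at once yields bitwise combinations of their contents, from which wider operations such as addition can be composed:

```python
# Two memory rows sharing the same bitlines.
row_a = 0b1011_0010
row_b = 0b0110_0110

# Simultaneous wordline activation gives bitwise primitives "for free".
and_result = row_a & row_b
or_result = row_a | row_b

def bitline_add(x: int, y: int) -> int:
    """Ripple add built only from the bitwise AND/XOR/shift primitives."""
    while y:
        carry = (x & y) << 1
        x, y = x ^ y, carry
    return x

print(bin(and_result), bin(or_result), bitline_add(row_a, row_b))
# 0b100010 0b11110110 280
```

Applied across hundreds of rows simultaneously, this is what gives in-SRAM computing its massive SIMD-like character on spatially local data.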
          Android Q Beta is here: these are its new features

The wait is over. Google has just released a preview of the next update to its mobile operating system. Android Q Beta is here, and as of today it can be installed on Pixel devices.

Android Q Beta 1 previews some of the new features the company has prepared for its operating system. Over the coming months, Google will release new beta versions with more features, which will be announced this May at Google I/O 2019.

Privacy and location

[Image: the new location permission dialog in Android Q]

Android Q takes our privacy much more seriously, starting with an improved location permission that lets us decide that an app may access our location only while it is in use. This way, an app will no longer be able to access our location in the background.

New permissions and more protections

To further safeguard our privacy and data, Android Q introduces new permissions that let us control which apps can access our photos, videos, and music.

Android Q will also prevent apps in the background from launching themselves to grab the user's attention. Instead, apps will have to show a high-priority notification.

Android Q also restricts access to the device's non-resettable identifiers, such as the IMEI, the serial number, and similar identifiers.

Foldable screens

[Image: Android Q running on foldable devices]

As Google announced at the end of last year, Android Q officially supports the new foldable devices that began to be unveiled at MWC 2019. The new version of Android allows up to three apps on screen at once, with new gestures to arrange the open apps across the display, and "screen continuity," which lets us pick up where we left off seamlessly when folding or unfolding the screen.

Faster sharing

[Image: the new Android Q share sheet]

One complaint about recent versions of Android is that the "Share with..." option was very slow, making us wait several seconds until all the options appeared. In Android Q everything is faster: as soon as we tap share, we see our most-used options, with no waiting.

Settings panels

[Image: a floating settings panel in Android Q]

Apps will now be able to embed Android Q's new settings panels, avoiding the need to open the full Settings app. If an app asks us to turn on Wi-Fi, it no longer has to open the Settings app; it can show a bottom panel with just the Wi-Fi options it needs.

Wi-Fi modes

Apps will be able to enable the high-performance or low-latency modes of our Wi-Fi connection. Low latency is important for improving the experience in real-time games or VoIP calls.

JPEG with depth

[Image: a depth-enabled portrait photo]

Portrait mode has become so popular that Google is supporting it officially in Android Q with its own image format, which stores XMP metadata with the image's depth alongside the JPEG, so we can use any gallery or photo editor to edit our photos.

Vulkan for everyone

The high-performance Vulkan 1.1 graphics API will now be a requirement for all 64-bit devices running Android Q and later, so developers can deliver games with console-quality graphics on more devices.

Android Q also adds experimental support for ANGLE, a graphics abstraction layer that lets OpenGL games run on top of Vulkan, significantly improving the performance of OpenGL games on Vulkan-capable devices.

Neural Networks API 1.2

Android Q adds support for the new version of Google's neural networks API to improve on-device artificial intelligence. It includes 60 new operations that accelerate and improve object and image detection.

ART performance

[Image: ART runtime performance improvements]

Android Q improves the performance of the ART application runtime so that apps launch even faster and use less battery.

Android Q Beta 1 now available on Google Pixel

Google is releasing the first beta of Android Q for all of its Pixel devices: the Pixel 1, Pixel 1 XL, Pixel 2, Pixel 2 XL, Pixel 3, and Pixel 3 XL.

To update your Pixel to Android Q, just enroll in the Android Beta program and wait for the OTA to arrive on your device.

More information | Google

Related:
- A leak suggests the next Android Q update will be version 10
- This is how Android Q will improve our privacy: it will warn us when an app is using sensitive permissions

- Android Q Beta is here: these are its new features, by Cosmos.

          New Study: Algorithms based on deep neural networks can be applied to quantum physics

A computer science research group from the Hebrew University of Jerusalem has mathematically proven that artificial intelligence (AI) can help us understand currently unreachable quantum physics phenomena. The results have been published in Physical Review Letters. "Our research proves that the AI algorithms can represent highly complex quantum systems significantly more efficiently than existing approaches," said Prof. Amnon Shashua, Intel senior vice president and Mobileye president and CEO."
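The release includes no code, but the line of work it describes builds on neural-network quantum states, where, for example, a restricted Boltzmann machine represents the wavefunction amplitude of each spin configuration. Below is a toy, real-valued sketch with random (unoptimized) parameters; in an actual calculation these would be trained variationally against the system's Hamiltonian:

```python
import numpy as np

rng = np.random.default_rng(1)
n_spins, n_hidden = 4, 8

# Random RBM parameters; a real calculation would optimize these
# variationally to minimize the energy of a given Hamiltonian.
a = rng.normal(0, 0.1, n_spins)            # visible (spin) biases
b = rng.normal(0, 0.1, n_hidden)           # hidden biases
W = rng.normal(0, 0.1, (n_spins, n_hidden))

def amplitude(s):
    """Unnormalized wavefunction amplitude psi(s), for s in {-1, +1}^n_spins."""
    theta = b + s @ W
    return np.exp(a @ s) * np.prod(2.0 * np.cosh(theta))

# Enumerate the full basis (feasible only for tiny systems) and normalize,
# turning |psi|^2 into a valid probability distribution over configurations.
states = np.array([[1 if (i >> k) & 1 else -1 for k in range(n_spins)]
                   for i in range(2 ** n_spins)])
psi = np.array([amplitude(s) for s in states])
probs = psi ** 2 / np.sum(psi ** 2)
print(f"{len(probs)} basis states, probabilities sum to {probs.sum():.6f}")
```

The efficiency claim in the paper concerns exactly this representation: the network's parameter count grows polynomially with system size, while the basis enumerated above grows exponentially.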

The post New Study: Algorithms based on deep neural networks can be applied to quantum physics appeared first on insideHPC.

          Introducing Android Q Beta

Posted by Dave Burke, VP of Engineering

In 2019, mobile innovation is stronger than ever, with new technologies from 5G to edge to edge displays and even foldable screens. Android is right at the center of this innovation cycle, and thanks to the broad ecosystem of partners across billions of devices, Android's helping push the boundaries of hardware and software bringing new experiences and capabilities to users.

As the mobile ecosystem evolves, Android is focused on helping users take advantage of the latest innovations, while making sure users' security and privacy are always a top priority. Building on top of efforts like Google Play Protect and runtime permissions, Android Q brings a number of additional privacy and security features for users, as well as enhancements for foldables, new APIs for connectivity, new media codecs and camera capabilities, NNAPI extensions, Vulkan 1.1 support, faster app startup, and more.

Today we're releasing Beta 1 of Android Q for early adopters and a preview SDK for developers. You can get started with Beta 1 today by enrolling any Pixel device (including the original Pixel and Pixel XL, which we've extended support for by popular demand!) Please let us know what you think! Read on for a taste of what's in Android Q, and we'll see you at Google I/O in May when we'll have even more to share.

Building on top of privacy protections in Android

Android was designed with security and privacy at the center. As Android has matured, we've added a wide range of features to protect users, like file-based encryption, OS controls requiring apps to request permission before accessing sensitive resources, locking down camera/mic background access, lockdown mode, encrypted backups, Google Play Protect (which scans over 50 billion apps a day to identify potentially harmful apps and remove them), and much more. In Android Q, we've made even more enhancements to protect our users. Many of these enhancements are part of our work in Project Strobe.

Giving users more control over location

With Android Q, the OS helps users have more control over when apps can get location. As in prior versions of the OS, apps can only get location once the app has asked you for permission, and you have granted it.

One thing that's particularly sensitive is apps' access to location while the app is not in use (in the background). Android Q enables users to give apps permission to see their location never, only when the app is in use (running), or all the time (when in the background).

For example, an app asking for a user's location for food delivery makes sense and the user may want to grant it the ability to do that. But since the app may not need location outside of when it's currently in use, the user may not want to grant that access. Android Q now offers this greater level of control. Read the developer guide for details on how to adapt your app for this new control. Look for more user-centric improvements to come in upcoming Betas. At the same time, our goal is to be very sensitive to always give developers as much notice and support as possible with these changes.

More privacy protections in Android Q

Beyond changes to location, we're making further updates to ensure transparency, give users control, and secure personal data.

In Android Q, the OS gives users even more control over apps, controlling access to shared files. Users will be able to control apps' access to the Photos and Videos or the Audio collections via new runtime permissions. For Downloads, apps must use the system file picker, which allows the user to decide which Download files the app can access. For developers, there are changes to how your apps can use shared areas on external storage. Make sure to read the Scoped Storage changes for details.

We've also seen that users (and developers!) get upset when an app unexpectedly jumps into the foreground and takes over focus. To reduce these interruptions, Android Q will prevent apps from launching an Activity while in the background. If your app is in the background and needs to get the user's attention quickly -- such as for incoming calls or alarms -- you can use a high-priority notification and provide a full-screen intent. See the documentation for more information.

We're limiting access to non-resettable device identifiers, including device IMEI, serial number, and similar identifiers. Read the best practices to help you choose the right identifiers for your use case, and see the details here. We're also randomizing the device's MAC address when connected to different Wi-Fi networks by default -- a setting that was optional in Android 9 Pie.

We are bringing these changes to you early, so you can have as much time as possible to prepare. We've also worked hard to provide developers detailed information up front, we recommend reviewing the detailed docs on the privacy changes and getting started with testing right away.

New ways to engage users

In Android Q, we're enabling new ways to bring users into your apps and streamlining the experience as they transition from other apps.

Foldables and innovative new screens

Foldable devices have opened up some innovative experiences and use-cases. To help your apps to take advantage of these and other large-screen devices, we've made a number of improvements in Android Q, including changes to onResume and onPause to support multi-resume and notify your app when it has focus. We've also changed how the resizeableActivity manifest attribute works, to help you manage how your app is displayed on foldable and large screens. To you get started building and testing on these new devices, we've been hard at work updating the Android Emulator to support multiple-display type switching -- more details coming soon!

Sharing shortcuts

When a user wants to share content like a photo with someone in another app, the process should be fast. In Android Q we're making this quicker and easier with Sharing Shortcuts, which let users jump directly into another app to share content. Developers can publish share targets that launch a specific activity in their apps with content attached, and these are shown to users in the share UI. Because they're published in advance, the share UI can load instantly when launched.

The Sharing Shortcuts mechanism is similar to how App Shortcuts works, so we've expanded the ShortcutInfo API to make the integration of both features easier. This new API is also supported in the new ShareTarget AndroidX library. This allows apps to use the new functionality, while allowing pre-Q devices to work using Direct Share. You can find an early sample app with source code here.

Settings Panels

You can now also show key system settings directly in the context of your app, through a new Settings Panel API, which takes advantage of the Slices feature that we introduced in Android 9 Pie.

A settings panel is a floating UI that you invoke from your app to show system settings that users might need, such as internet connectivity, NFC, and audio volume. For example, a browser could display a panel with connectivity settings like Airplane Mode, Wi-Fi (including nearby networks), and Mobile Data. There's no need to leave the app; users can manage settings as needed from the panel. To display a settings panel, just fire an intent with one of the new Settings.Panel actions.


In Android Q, we've extended what your apps can do with Android's connectivity stack and added new connectivity APIs.

Connectivity permissions, privacy, and security

Most of our APIs for scanning networks already require COARSE location permission, but in Android Q, for Bluetooth, Cellular and Wi-Fi, we're increasing the protection around those APIs by requiring the FINE location permission instead. If your app only needs to make peer-to-peer connections or suggest networks, check out the improved Wi-Fi APIs below -- they simplify connections and do not require location permission.

In addition to the randomized MAC addresses that Android Q provides when connected to different Wi-Fi networks, we're adding support for new Wi-Fi standards, WPA3 and OWE, to improve security for home and work networks as well as open/public networks.

Improved peer-to-peer and internet connectivity

In Android Q we refactored the Wi-Fi stack to improve privacy and performance, but also to improve common use-cases like managing IoT devices and suggesting internet connections -- without requiring the location permission.

The network connection APIs make it easier to manage IoT devices over local Wi-Fi, for peer-to-peer functions like configuring, downloading, or printing. Apps initiate connection requests indirectly by specifying preferred SSIDs & BSSIDs as WifiNetworkSpecifiers. The platform handles the Wi-Fi scanning itself and displays matching networks in a Wi-Fi Picker. When the user chooses, the platform sets up the connection automatically.
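As a sketch, a peer-to-peer request might look like the following (the SSID and passphrase are hypothetical placeholders for your IoT device's network; the ConnectivityManager and callback are assumed to come from your app):

```kotlin
import android.net.ConnectivityManager
import android.net.NetworkCapabilities
import android.net.NetworkRequest
import android.net.wifi.WifiNetworkSpecifier

fun connectToIotDevice(
    connectivityManager: ConnectivityManager,
    callback: ConnectivityManager.NetworkCallback
) {
    val specifier = WifiNetworkSpecifier.Builder()
        .setSsid("IoT-Device-Setup")            // hypothetical SSID
        .setWpa2Passphrase("device-passphrase") // hypothetical credential
        .build()

    val request = NetworkRequest.Builder()
        .addTransportType(NetworkCapabilities.TRANSPORT_WIFI)
        // Peer-to-peer connection: no internet capability required.
        .removeCapability(NetworkCapabilities.NET_CAPABILITY_INTERNET)
        .setNetworkSpecifier(specifier)
        .build()

    // The platform scans, shows the Wi-Fi Picker, and connects on the user's behalf.
    connectivityManager.requestNetwork(request, callback)
}
```

Note that no location permission is involved; the user approves the specific connection through the system picker.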

The network suggestion APIs let apps surface preferred Wi-Fi networks to the user for internet connectivity. Apps initiate connections indirectly by providing a ranked list of networks and credentials as WifiNetworkSuggestions. The platform will seamlessly connect based on past performance when in range of those networks.
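A minimal suggestion sketch, with a hypothetical network name and credential:

```kotlin
import android.net.wifi.WifiManager
import android.net.wifi.WifiNetworkSuggestion

fun suggestNetworks(wifiManager: WifiManager) {
    val suggestions = listOf(
        WifiNetworkSuggestion.Builder()
            .setSsid("ExampleCafe")               // hypothetical network
            .setWpa2Passphrase("cafe-passphrase") // hypothetical credential
            .build()
    )
    val status = wifiManager.addNetworkSuggestions(suggestions)
    if (status != WifiManager.STATUS_NETWORK_SUGGESTIONS_SUCCESS) {
        // Handle the error, e.g. duplicate suggestions or quota exceeded.
    }
}
```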

Wi-Fi performance mode

You can now request adaptive Wi-Fi in Android Q by enabling high performance and low latency modes. These will be of great benefit where low latency is important to the user experience, such as real-time gaming, active voice calls, and similar use-cases.

To use the new performance modes, call WifiManager.createWifiLock() with WIFI_MODE_FULL_LOW_LATENCY or WIFI_MODE_FULL_HIGH_PERF. In these modes, the platform works with the device firmware to meet the requirement with the lowest power consumption.
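For example, a sketch of holding a low-latency lock only for the duration of a latency-sensitive session (the lock tag is a hypothetical name):

```kotlin
import android.net.wifi.WifiManager

// Acquire the low-latency lock for a latency-sensitive session
// (e.g., a real-time game) and release it as soon as the session ends.
fun withLowLatencyWifi(wifiManager: WifiManager, block: () -> Unit) {
    val lock = wifiManager.createWifiLock(
        WifiManager.WIFI_MODE_FULL_LOW_LATENCY, "myapp:low-latency")
    lock.acquire()
    try {
        block()
    } finally {
        lock.release()
    }
}
```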

Camera, media, graphics

Dynamic depth format for photos

Many cameras on mobile devices can simulate narrow depth of field by blurring the foreground or background relative to the subject. They capture depth metadata for various points in the image and apply a static blur to the image, after which they discard the depth metadata.

Starting in Android Q, apps can request a Dynamic Depth image, which consists of a JPEG, XMP metadata describing the depth-related elements, and a depth and confidence map embedded in the same file, on devices that advertise support.

Requesting a JPEG + Dynamic Depth image makes it possible for you to offer specialized blurs and bokeh options in your app. You can even use the data to create 3D images or support AR photography use-cases in the future. We're making Dynamic Depth an open format for the ecosystem, and we're working with our device-maker partners to make it available across devices running Android Q and later.


New audio and video codecs

Android Q introduces support for the open source video codec AV1. This allows media providers to stream high quality video content to Android devices using less bandwidth. In addition, Android Q supports audio encoding using Opus - a codec optimized for speech and music streaming, and HDR10+ for high dynamic range video on devices that support it.

The MediaCodecInfo API introduces an easier way to determine the video rendering capabilities of an Android device. For any given codec, you can obtain a list of supported sizes and frame rates using MediaCodecInfo.VideoCapabilities.getSupportedPerformancePoints(). This allows you to pick the best quality video content to render on any given device.
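As a sketch, enumerating the advertised performance points of each video decoder might look like this (the method returns null on devices that don't report performance points):

```kotlin
import android.media.MediaCodecInfo
import android.media.MediaCodecList

// Walk every video decoder and log its supported size/frame-rate combinations.
fun logDecoderPerformancePoints() {
    val codecList = MediaCodecList(MediaCodecList.REGULAR_CODECS)
    for (info in codecList.codecInfos) {
        if (info.isEncoder) continue
        for (type in info.supportedTypes) {
            if (!type.startsWith("video/")) continue
            val videoCaps = info.getCapabilitiesForType(type).videoCapabilities
                ?: continue
            // Null when the device does not advertise performance points.
            val points: List<MediaCodecInfo.VideoCapabilities.PerformancePoint>? =
                videoCaps.supportedPerformancePoints
            points?.forEach { point ->
                // Each point describes a supported resolution/frame-rate pair,
                // e.g. whether the decoder covers 1920x1080 at 60 fps.
                println("$type supports $point")
            }
        }
    }
}
```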


Native MIDI API

For apps that perform their audio processing in C++, Android Q introduces a native MIDI API to communicate with MIDI devices through the NDK. This API allows MIDI data to be retrieved inside an audio callback using a non-blocking read, enabling low latency processing of MIDI messages. Give it a try with the sample app and source code here.

ANGLE on Vulkan

To enable more consistency for game and graphics developers, we are working towards a standard, updateable OpenGL driver for all devices built on Vulkan. In Android Q we're adding experimental support for ANGLE on top of Vulkan on Android devices. ANGLE is a graphics abstraction layer designed for high-performance OpenGL compatibility across implementations. Through ANGLE, the many apps and games using OpenGL ES can take advantage of the performance and stability of Vulkan and benefit from a consistent, vendor-independent implementation of ES on Android devices. In Android Q, we're planning to support OpenGL ES 2.0, with ES 3.0 next on our roadmap.

We'll expand the implementation with more OpenGL functionality, bug fixes, and performance optimizations. See the docs for details on the current ANGLE support in Android, how to use it, and our plans moving forward. You can start testing with our initial support by opting-in through developer options in Settings. Give it a try today!

Vulkan everywhere

We're continuing to expand the impact of Vulkan on Android, our implementation of the low-overhead, cross-platform API for high-performance 3D graphics. Our goal is to make Vulkan on Android a broadly supported and consistent developer API for graphics. We're working together with our device manufacturer partners to make Vulkan 1.1 a requirement on all 64-bit devices running Android Q and higher, and a recommendation for all 32-bit devices. Going forward, this will help provide a uniform high-performance graphics API for apps and games to use.

Neural Networks API 1.2

Since introducing the Neural Networks API (NNAPI) in 2017, we've continued to expand the number of operations supported and improve existing functionality. In Android Q, we've added 60 new ops including ARGMAX, ARGMIN, and quantized LSTM, alongside a range of performance optimizations. This lays the foundation for accelerating a much greater range of models -- such as those for object detection and image segmentation. We are working with hardware vendors and popular machine learning frameworks such as TensorFlow to optimize and roll out support for NNAPI 1.2.

Strengthening Android's Foundations

ART performance

Android Q introduces several new improvements to the ART runtime which help apps start faster and consume less memory, without requiring any work from developers.

Since Android Nougat, ART has offered Profile Guided Optimization (PGO), which speeds app startup over time by identifying and precompiling frequently executed parts of your code. To help with initial app startup, Google Play is now delivering cloud-based profiles along with APKs. These are anonymized, aggregate ART profiles that let ART pre-compile parts of your app even before it's run, giving a significant jump-start to the overall optimization process. Cloud-based profiles benefit all apps and they're already available to devices running Android P and higher.

We're also continuing to make improvements in ART itself. For example, in Android Q we've optimized the Zygote process by starting your app's process earlier and moving it to a security container, so it's ready to launch immediately. We're storing more information in the app's heap image, such as classes, and using threading to load the image faster. We're also adding Generational Garbage Collection to ART's Concurrent Copying (CC) Garbage Collector. Generational CC is more efficient as it collects young-generation objects separately, incurring much lower cost as compared to full-heap GC, while still reclaiming a good amount of space. This makes garbage collection overall more efficient in terms of time and CPU, reducing jank and helping apps run better on lower-end devices.

Security for apps

BiometricPrompt is our unified authentication framework to support biometrics at a system level. In Android Q we're extending support for passive authentication methods such as face, and adding implicit and explicit authentication flows. In the explicit flow, the user must explicitly confirm the transaction in the TEE during the authentication. The implicit flow is designed for a lighter-weight alternative for transactions with passive authentication. We've also improved the fallback for device credentials when needed.
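A minimal sketch of choosing between the two flows with the framework BiometricPrompt (the title and description strings are hypothetical; the executor and callback are assumed to come from your app):

```kotlin
import android.content.Context
import android.hardware.biometrics.BiometricPrompt
import android.os.CancellationSignal
import java.util.concurrent.Executor

// setConfirmationRequired(true) requests the explicit flow, where the user
// confirms the transaction; pass false to let a passive modality such as
// face complete a lighter-weight transaction implicitly.
fun authenticate(
    context: Context,
    executor: Executor,
    callback: BiometricPrompt.AuthenticationCallback
) {
    val prompt = BiometricPrompt.Builder(context)
        .setTitle("Confirm purchase")                // hypothetical UI strings
        .setDescription("Use biometrics to confirm")
        .setConfirmationRequired(true)
        .setDeviceCredentialAllowed(true)            // fall back to PIN/pattern
        .build()
    prompt.authenticate(CancellationSignal(), executor, callback)
}
```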

Android Q adds support for TLS 1.3, a major revision to the TLS standard that includes performance benefits and enhanced security. Our benchmarks indicate that secure connections can be established as much as 40% faster with TLS 1.3 compared to TLS 1.2. TLS 1.3 is enabled by default for all TLS connections. See the docs for details.

Compatibility through public APIs

Another thing we all care about is ensuring that apps run smoothly as the OS changes and evolves. Apps using non-SDK APIs risk crashes for users and emergency rollouts for developers. In Android Q we're continuing our long-term effort begun in Android P to move apps toward only using public APIs. We know that moving your app away from non-SDK APIs will take time, so we're giving you advance notice.

In Android Q we're restricting access to more non-SDK interfaces and asking you to use the public equivalents instead. To help you make the transition and prevent your apps from breaking, we're enabling the restrictions only when your app is targeting Android Q. We'll continue adding public alternative APIs based on your requests; in cases where there is no public API that meets your use case, please let us know.

It's important to test your apps for uses of non-SDK interfaces. We recommend using the StrictMode method detectNonSdkApiUsage() to warn when your app accesses non-SDK APIs via reflection or JNI. Even if the APIs are exempted (grey-listed) at this time, it's best to plan for the future and eliminate their use to reduce compatibility issues. For more details on the restrictions in Android Q, see the developer guide.
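A sketch of enabling that detection, typically from a debug build's Application.onCreate():

```kotlin
import android.os.Build
import android.os.StrictMode

// Logs any reflective or JNI access to non-SDK interfaces.
// detectNonSdkApiUsage() is available from API level 28 (Android 9 Pie).
fun enableNonSdkApiDetection() {
    if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.P) {
        StrictMode.setVmPolicy(
            StrictMode.VmPolicy.Builder()
                .detectNonSdkApiUsage()
                .penaltyLog() // write a warning to logcat on each violation
                .build()
        )
    }
}
```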

Modern Android

We're expanding our efforts to have all apps take full advantage of the security and performance features in the latest version of Android. Later this year, Google Play will require you to set your app's targetSdkVersion to 28 (Android 9 Pie) in new apps and updates. In line with these changes, Android Q will warn users with a dialog when they first run an app that targets a platform earlier than API level 23 (Android Marshmallow). Here's a checklist of resources to help you migrate your app.

We're also moving the ecosystem toward readiness for 64-bit devices. Later this year, Google Play will require 64-bit support in all apps. If your app uses native SDKs or libraries, keep in mind that you'll need to provide 64-bit compliant versions of those SDKs or libraries. See the developer guide for details on how to get ready.

Get started with Android Q Beta

With important privacy features that are likely to affect your apps, we recommend getting started with testing right away. In particular, you'll want to enable and test with Android Q storage changes, new location permission states, restrictions on background app launch, and restrictions on device identifiers. See the privacy documentation for details.

To get started, just install your current app from Google Play onto a device or Android Virtual Device running Android Q Beta and work through the user flows. The app should run and look great, and handle the Android Q behavior changes for all apps properly. If you find issues, we recommend fixing them in the current app, without changing your targeting level. Take a look at the migration guide for steps and a recommended timeline.

Next, update your app's targetSdkVersion to 'Q' as soon as possible. This lets you test your app with all of the privacy and security features in Android Q, as well as any other behavior changes for apps targeting Q.

Explore the new features and APIs

When you're ready, dive into Android Q and learn about the new features and APIs you can use in your apps. Take a look at the API diff report, the Android Q Beta API reference, and developer guides as a starting point. Also, on the Android Q Beta developer site, you'll find release notes and support resources for reporting issues.

To build with Android Q, download the Android Q Beta SDK and tools into Android Studio 3.3 or higher, and follow these instructions to configure your environment. If you want the latest fixes for Android Q related changes, we recommend you use Android Studio 3.5 or higher.

How do I get Android Q Beta?

It's easy - you can enroll here to get Android Q Beta updates over-the-air, on any Pixel device (and this year we're supporting all three generations of Pixel -- Pixel 3, Pixel 2, and even the original Pixel!). Downloadable system images for those devices are also available. If you don't have a Pixel device, you can use the Android Emulator, and download the latest emulator system images via the SDK Manager in Android Studio.

We plan to update the preview system images and SDK regularly throughout the preview. We'll have more features to share as the Beta program moves forward.

As always, your feedback is critical, so please let us know what you think — the sooner we hear from you, the more of your feedback we can integrate. When you find issues, please report them here. We have separate hotlists for filing platform issues, app compatibility issues, and third-party SDK issues.
