
Senior AI/Deep Learning Software Engineer - St Josephs Hospital and Medical Center - Phoenix, AZ
Ability to align business needs with development and machine learning or artificial intelligence solutions. Experience in natural language understanding, computer...
From Dignity Health - Tue, 27 Nov 2018 03:06:49 GMT - View all Phoenix, AZ jobs
R&D Engineer Deep Learning // Asaphus Vision GmbH

Looking for your next big adventure? Are you eager to work in a highly motivated team of experts in machine and deep learning? Then join our team! As R&D Engineer you develop innovative applications like face recognition and eye tracking. Qualifications: At least a Master’s degree in Computer Science or Mathematics with a very strong...

Check out all open positions at http://BerlinStartupJobs.com


Sr Principal Cognitive Sftwr - Explainable AI
Northrop Grumman Mission Systems in Beavercreek, Ohio is seeking a Sr Principal Cognitive Sftwr engineer to be an integral part of a Research, Technology Transition and Systems Development team that applies deep learning to problems ranging from machine translation, automated speech recognition, speech synthesis, image processing, and cyber solutions to remote sensing applications. The selected applicant will have the opportunity to advance the state of the art in intelligence production and analysis, and to perform independent research and development. The role conducts research in artificial intelligence (AI)/machine learning, prototypes advanced machine learning and deep learning techniques to stretch the capability of autonomous systems research and development programs, and defines, develops, and delivers novel mathematical and statistical modeling and algorithms to tackle the challenges of prediction, optimization, and classification.
PyCoder’s Weekly: Issue #359 (March 12, 2019)

#359 – MARCH 12, 2019


Writing Beautiful Pythonic Code With PEP 8

Learn how to write high-quality, readable code by using the Python style guidelines laid out in PEP 8. Following these guidelines helps you make a great impression when sharing your work with potential employers and team mates. Learn how to make your code PEP 8 compliant with these bite-sized lessons.
REAL PYTHON video
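For a quick taste of what PEP 8-style code looks like (a made-up example, not from the course):

```python
# PEP 8 conventions in one small function: snake_case names, 4-space
# indentation, spaces around operators, and a docstring.

def mean_of_positives(values):
    """Return the mean of the positive numbers in values, or 0.0 if none."""
    positives = [v for v in values if v > 0]
    if not positives:
        return 0.0
    return sum(positives) / len(positives)

print(mean_of_positives([-2, 4, 6]))  # 5.0
```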

Enforcing The Single Responsibility Principle (SRP) in Python

The Single Responsibility Principle (or SRP) is an important concept in software development. The main idea of this concept is: all pieces of software must have only a single responsibility. Nikita’s article guides you through the complex process of writing simple code with some hands-on refactoring examples. You’ll use callable classes, SRP, dependency injection, and composition to write simple Python code. Nice read!
NIKITA SOBOLEV
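To give a flavor of the techniques involved (a minimal sketch with made-up names, not code from the article):

```python
# Each class has one job; the formatter is injected into the report
# via its constructor (dependency injection + composition).

class UpperCaseFormatter:
    """Single responsibility: turn one record into a display string."""
    def __call__(self, record):
        return record.upper()

class Report:
    """Single responsibility: assemble lines; formatting is injected."""
    def __init__(self, formatter):
        self._format = formatter  # any callable(record) -> str

    def render(self, records):
        return "\n".join(self._format(r) for r in records)

report = Report(UpperCaseFormatter())
print(report.render(["alpha", "beta"]))
```

Swapping in a different formatter never touches Report, which is the whole point of the SRP.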

Find a Python Job Through Vettery


Vettery specializes in developer roles and is completely free for job seekers. Interested? Submit your profile, and if accepted, you can receive interview requests directly from top companies seeking Python devs. Get started →
VETTERY sponsor

How to Set Up Your Python Project for Success With Tests, CI, and Code Coverage

How to add tests, CI, code coverage, and more. Very detailed writeup.
JEFF HALE

Detecting Real vs Fake Faces With Python and OpenCV

Learn how to detect liveness with OpenCV, Deep Learning, and Keras, including how to spot fake faces and perform anti-face-spoofing in face recognition systems.
ADRIAN ROSEBROCK

Managing Multiple Python Versions With pyenv

In this step-by-step tutorial, you’ll learn how to use pyenv to install multiple Python versions and switch between them with ease, including project-specific virtual environments, even if you don’t have sudo access.
REAL PYTHON

Python Packages Growth Since 2005

“The Python ecosystem has been steadily growing [since 2005]. After the first few years of hyper growth as PyPI gained near-full adoption in the Python community, the number of packages actively developed each year—meaning they had at least one release or new distribution uploaded—has increased 28% to 48% every year.”
PYDIST.COM

Discussions

Loop With “Else” Clause

“What is the Pythonic way to handle the situation where if a condition exists the loop should be executed, but if it does not something else should be done?”
PYTHON.ORG
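The idiom the thread converges on is Python’s for...else: the else block runs only when the loop finishes without hitting break. A minimal sketch:

```python
# for...else: the else clause runs only if the loop
# completed without a `break`.

def find_first_even(numbers):
    for n in numbers:
        if n % 2 == 0:
            first = n
            break
    else:  # no break: no even number was found
        first = None
    return first

print(find_first_even([1, 3, 4]))  # 4
print(find_first_even([1, 3, 5]))  # None
```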

Login: admin Password: admin

TWITTER.COM/REALPYTHON

The Source for the Zen of Python Completely Violates the Zen of Python

“I was clicking around in PyCharm and noticed that the this module in CPython violates basically all of these principles.”
REDDIT

Python Jobs

Sr Enterprise Python Developer (Toronto, Canada)

Kognitiv

Senior Systems Engineer (Hamilton, Canada)

Preteckt

Python Web Developer (Remote)

Premiere Digital Services

Software Developer (Herndon, VA)

L2T, LLC

Python Software Engineer (Berlin, Germany)

Wooga

Computer Science Teacher (Pasadena, CA)

ArtCenter College of Design

Senior Python Engineer (New York, NY)

15Five

Software Engineer (Herndon, VA)

Charon Technologies

Web UI Developer (Herndon, VA)

Charon Technologies

More Python Jobs >>>

Articles & Tutorials

Don’t Make It Callable

You can make any Python object callable by adding a __call__ method to it. Like operator overloading this seems like a nifty idea at first…but is it really? Moshe’s article goes over some use cases and examples to discuss whether making objects callable is a good idea or not.
MOSHE ZADKA
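A tiny illustration of the mechanism itself (our example, not Moshe’s):

```python
# __call__ makes instances behave like functions.

class Multiplier:
    def __init__(self, factor):
        self.factor = factor

    def __call__(self, x):
        return self.factor * x

double = Multiplier(2)
print(double(21))        # 42 -- the instance is used like a function
print(callable(double))  # True
```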

Python Pandas: Merging Dataframes Using Inner, Outer, Left and Right Joins

How to merge different DataFrames into a single DataFrame using Pandas’ DataFrame.merge() function. Merging is a big topic, so this part focuses on merging DataFrames on common columns as the join key, using inner, right, left, and outer joins.
THISPOINTER.COM
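The core API is DataFrame.merge with an on key and a how mode; a small sketch with made-up data:

```python
import pandas as pd

# Two frames sharing the "id" column as the join key.
left = pd.DataFrame({"id": [1, 2, 3], "name": ["a", "b", "c"]})
right = pd.DataFrame({"id": [2, 3, 4], "score": [20, 30, 40]})

inner = left.merge(right, on="id", how="inner")  # only ids present in both
outer = left.merge(right, on="id", how="outer")  # all ids, NaN where missing

print(inner)
print(len(outer))  # 4 rows
```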

Python Opportunities Come to You on Indeed Prime


Indeed Prime is a hiring platform exclusively for tech talent like you. If you’re accepted, we’ll match you with companies and roles that line up with your skills, career goals and salary expectations. Apply for free today.
INDEED sponsor

An Introduction to Neural Networks With Python

A simple explanation of how neural networks work and how to implement one from scratch in Python. Nice illustrations!
VICTOR ZHOU
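The core building block in from-scratch intros like this is a single neuron: a weighted sum pushed through a nonlinearity. A sketch (weights are arbitrary illustration values, not from the article):

```python
import math

def sigmoid(x):
    """Squash any real number into the interval (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def neuron(inputs, weights, bias):
    # Weighted sum of inputs plus bias, then the activation function.
    total = sum(w * i for w, i in zip(weights, inputs)) + bias
    return sigmoid(total)

out = neuron([1.0, 0.0], [0.5, -0.5], bias=0.0)
print(round(out, 3))  # 0.622, i.e. sigmoid(0.5)
```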

Import Almost Anything in Python

An intro to module loaders and finders so you can “hack” Python’s import system for fun and profit.
ALEKSEY BILOGUR • Shared by Aleksey Bilogur
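The hook Python exposes for this is sys.meta_path; here is a minimal finder/loader that serves a made-up module name from an in-memory string (our sketch, not code from the article):

```python
import sys
import importlib.abc
import importlib.machinery

# Source code for modules that exist only in this dict.
SOURCE = {"virtualmod": "ANSWER = 42\n"}

class StringFinder(importlib.abc.MetaPathFinder, importlib.abc.Loader):
    def find_spec(self, name, path=None, target=None):
        if name in SOURCE:
            return importlib.machinery.ModuleSpec(name, self)
        return None  # defer to the normal finders for everything else

    def exec_module(self, module):
        exec(SOURCE[module.__name__], module.__dict__)

sys.meta_path.insert(0, StringFinder())

import virtualmod  # resolved by StringFinder, not the filesystem
print(virtualmod.ANSWER)  # 42
```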

Private Python Package Management With Poetry and Packagr

CHRISTOPHER DAVIES

I Learned Python in a Week and Only Sorta Regret It

MEZEROTM.COM

Why You Want Formal Dependency Injection in Python Too

“In other languages, e.g., Java, explicit dependency injection is part of daily business. Python projects however very rarely make use of this technique. I’d like to make a case for why it might be useful to rethink this approach.”
GITHUB.COM/DOBIASD

Understanding and Improving Conda’s Performance

Update from the Conda team regarding Conda’s speed, what they’re working on, and what performance improvements are coming down the pike.
ANACONDA.COM

Sentence Similarity in Python Using Doc2Vec

Using Python to estimate the similarity of two text documents with the Doc2Vec module.
KANOKI.ORG

Iterating with Simplicity: Evolving a Django app with Intercooler.js

ADAM STEPINSKI

Projects & Code

ctyped: Build Ctypes Interfaces for Shared Libraries With Type Hinting

GITHUB.COM/IDLESIGN • Shared by Juan Rodriguez

iodide-project/pyodide: Run CPython on WASM in the browser

And not just that, it’s a full Python scientific stack, compiled to WebAssembly for running in the browser.
GITHUB.COM/IODIDE-PROJECT

Pyckitup: Python Game Engine That Runs on WebAssembly

PICKITUP247.COM

PEP 8 Speaks: GitHub Integration for Python Code Style

A GitHub app to automatically review Python code style over Pull Requests.
PEP8SPEAKS.COM

ArchiveBox: Open Source Self-Hosted Web Archive

GITHUB.COM/PIRATE

minik: Web Framework for the Serverless World

GITHUB.COM/EABGLOBAL • Shared by PythonistaCafe

Events

Python Atlanta

March 14, 2019
MEETUP.COM

Karlsruhe Python User Group (KaPy)

March 15, 2019
BL0RG.NET

Django Girls Rivers 2019 Workshop

March 15 to March 17, 2019
DJANGOGIRLS.ORG

PyCon Odessa

March 16 to March 17, 2019
PYCONODESSA.COM

PyCon SK 2019

March 22 to March 25, 2019
PYCON.SK


Happy Pythoning!
This was PyCoder’s Weekly Issue #359.


[ Subscribe to 🐍 PyCoder’s Weekly 💌 – Get the best Python news, articles, and tutorials delivered to your inbox once a week >> Click here to learn more ]


x-dream-distribution and partners at CabSat 2019

Following our success at CabSat 2018, we decided to bring our innovative products and services to the show once again. Join x-dream-distribution GmbH in Dubai from 12-14 March 2019 at CabSat, on the Bavarian booth in Hall 3.

x-dream-distribution GmbH presents innovative ingest, Social ingest andoutgestsolutions by Woody Technologies and transcoding and live broadcasting software by Capella Systems, as well as ingest and playout software by Libero Systems, and microservices toolkit for broadcaster by Squared Paper.

All the latest features presented at BVE2019, and even more, will be available to our visitors at CabSat in Dubai, the Middle East & Africa's only event for content, broadcast, satellite, media & entertainment industry professionals looking to create inspiration, action & reaction.

Partner products and news at CabSat 2019

Capella Systems (USA)

Cambria FTC // Cambria Cluster

Now in their 4th generation, Cambria FTC and Cambria Cluster are innovative transcoding products. The most recent standard features:

•    SD / HD / UHD and up to 8K
•    XAVC, ProRes, DNxHD, JPEG2000
•    H.264 & H.265
•    HDR support
•    DASH, HLS, MSS
•    Dolby E & Dolby Vision
•    S3 read & write

Cambria Live // Cambria Broadcast Manager // Live Edit

New release of Cambria Live Series v4.1, a software-based production suite for professional live streaming broadcast production. This all-in-one system handles live switching, production functions, encoding, and distribution.

•    MPEG-DASH and CMAF support for Akamai
•    Failover (backup stream) support for DASH/HLS with Akamai
•    Ad Pre-fetch request to Yospace HLS/DASH targets
•    Software-based cue tone trigger feature
•    Embed splice_event_id from SCTE into Ad Preset

Flow Works (Germany)

Distributed MAM with distributed workflow support for media processing.
•    FlowCenter - highly integrated, complete workflow and asset management solution.
•    Flow ANT - micro media management appliance with GPU acceleration
•    All-new Flow Archive GUI (Editorial GUI).

Libero (Turkey)

•    Libero Playout is a software-based playout automation system which provides powerful, flexible and user-friendly broadcasting solutions via a client-server architecture.
•    Libero Ingest is a flexible multi-channel ingest, transcoding and encoding software with powerful and user-friendly features.

Metaliquid (Italy)

Customized on-premises and cloud state-of-the-art AI recognition and classification services to meet specific industry needs. Metaliquid has developed a proprietary deep learning framework and neural network architectures.

•    Face recognition
•    Shot and setting recognition
•    Sensitive content detection
•    Opening and closing credits detection
•    Sport actions classification
•    Content type, audio and language classification

Squared Paper (United Kingdom)

The Busby Enterprise Service Bus & microservices toolkit is specially designed for the broadcast industry:

•    monitoring of hardware and software systems and applications
•    workflow orchestration, from small to large and complex
•    event recording for SLA reporting and later analysis
•    control of external devices and services, etc.

Teamium (France)

Feature-rich, simple-to-use resource scheduling and collaboration management solution designed exclusively for video production.
•    Cloud-based production management
•    Resource planning and scheduling
•    User-defined business processes
•    Consumer-grade user experience
•    Real-time financial dashboard

Woody Technologies (France)

•    Version 3.1 of all Woody software will be released, bringing several major enhancements.
•    New Woody in2it Server, a unique client-server ingest tool for all media formats, with web-based intuitive UX and strong workflow control features, streamlining local and remote ingest workflows.

•    Woody in2it Go, the ultimate tool for reporters in the field to encode and transfer their footage or stories to the broadcaster facility and notify it of delivery.
•    Woody Social, ingest from any social network directly to your production environment.
•    Woody in2it Server, Woody Ingest, Woody Outgest and Woody Social can now be deployed in a scalable architecture containing multiple nodes. This brings two major improvements, redundancy and load balancing, for large Woody deployments.

x-dream-media (Germany)

Software integrator fully committed to media IT, developing its own software products for file-based workflows and asset management.
 
•    Signiant Managers + Agents and XDM WFM – workflow manager with integrations to many 3rd-party file processing and publishing products
•    OneGUI – job, workflow and farm monitoring & reporting, search & filtering, multi-tenant, with support for various 3rd-party systems (e.g. Harmonic, Telestream, Capella, MOG, Interra)
•    Ingest Browser – media browsing, previewing, trimming and workflow start, watchfolder, storage indexing, file search
•    MFP – multi format player with frame accurate positioning, side-by-side view, audio leveling, SDI output and playlist support
•    SERVUS node – software-only videoserver, recorder and IP-streamer

 


(USA-MD-Bethesda) Research Staff Member
**Job Description**

At IBM Research, we invent things that matter to the world. Today, we are pioneering the most promising and disruptive technologies that will transform industries and society, including the future of AI, Blockchain and Quantum Computing. We are driven to discover. With more than 3,000 researchers in 12 labs located across six continents, IBM Research is one of the world’s largest and most influential corporate research labs.

We are seeking Research Staff Member candidates with demonstrated publication records in one or more of our focus areas and technical leadership potential. As part of the IBM Research team, you will conduct world-class research on innovative technologies and solutions, and publish in top-tier conferences and journals. You will also have the opportunity to contribute to the commercialization of the resulting assets. Demonstrated communication skills and ability to work independently, as well as in a team, are highly desired traits.

You should have one or more of the following skills:

- Cloud computing infrastructure, including cloud platform and programming models; cloud infrastructure services; APIs; containers; DevOps techniques; microservices
- Computing and data services, including emerging platforms (Spark, SQL/NoSQL, Blockchain)
- Artificial intelligence computing infrastructure, including deep learning platform and services, accelerated cognitive systems, accelerators interface architecture, compilers and programming models, co-designed hardware/software
- Distributed computing theory and application, systems; services technology, including service management, analytics, automation and orchestration, cognitive services
- Memory controller architecture, memory performance modeling, memory standards and interfaces, cache design, non-volatile memory, NAND flash
- Quantum Computing

You must be willing to work in any of the following locations: Albany, NY; Almaden, CA; Austin, TX; Cambridge, MA; Yorktown Heights, NY. Ph.D. degree in Engineering, Computer Science, Physics or a related area is required.

The World is Our Laboratory: No matter where discovery takes place, IBM researchers push the boundaries of science, technology and business to make the world work better. IBM Research is a global community of forward-thinkers working towards a common goal: progress.

**Required Technical and Professional Expertise**

- Ability to work in a team environment, as well as independently
- Demonstrated communication skills

**Preferred Tech and Prof Experience**

Advanced knowledge in one or more of the following areas:

- Cloud computing infrastructure, including cloud platform and programming models; cloud infrastructure services; APIs; containers; DevOps techniques; microservices
- Computing and data services, including emerging platforms (Spark, SQL/NoSQL, Blockchain)
- Artificial intelligence computing infrastructure, including deep learning platform and services, accelerated cognitive systems, accelerators interface architecture, compilers and programming models, co-designed hardware/software
- Distributed computing theory and application, systems; services technology, including service management, analytics, automation and orchestration, cognitive services
- Memory controller architecture, memory performance modeling, memory standards and interfaces, cache design, non-volatile memory, NAND flash
- Quantum Computing

**EO Statement**

IBM is committed to creating a diverse environment and is proud to be an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, genetics, disability, age, or veteran status. IBM is also committed to compliance with all fair employment practices regarding citizenship and immigration status.
          (USA-CA-Los Angeles) Research Staff Member      Cache   Translate Page      
**Job Description** At IBM Research, we invent things that matter to the world. Today, we are pioneering the most promising and disruptive technologies that will transform industries and society, including the future of AI, Blockchain and Quantum Computing. We are driven to discover. With more than 3,000 researchers in 12 labs located across six continents, IBM Research is one of the world’s largest and most influential corporate research labs. We are seeking Research Staff Member candidates with demonstrated publication records in one or more of our focus areas and technical leadership potential. As part of the IBM Research team, you will conduct world-class research on innovative technologies and solutions, and publish in top-tier conferences and journals. You will also have the opportunity to contribute to the commercialization of the resulting assets. Demonstrated communication skills and ability to work independently, as well as in a team, are highly desired traits. You should have one or more of the following skills: -Cloud computing infrastructure, including cloud platform and programming models; cloud infrastructure services; APIs; containers; DevOps techniques; microservices -Computing and data services, including emerging platforms (Spark, SQL/NoSQL, Blockchain) -Artificial intelligence computing infrastructure, including deep learning platform and services, accelerated cognitive systems, accelerators interface architecture, compilers and programming models, co-designed hardware/software -Distributed computing theory and application, systems; services technology, including service management, analytics, automation and orchestration, cognitive services -Memory controller architecture, memory performance modeling, memory standards and interfaces, cache design, non-volatile memory, NAND flash -Quantum Computing You must be willing to work in any of the following locations: Albany, NY; Almaden, CA; Austin, TX; Cambridge, MA; Yorktown Heights, NY. Ph.D. 
degree in Engineering, Computer Science, Physics or a related area is required. The World is Our Laboratory: No matter where discovery takes place, IBM researchers push the boundaries of science, technology and business to make the world work better. IBM Research is a global community of forward-thinkers working towards a common goal: progress. **Required Technical and Professional Expertise** + Ability to work in a team environment, as well as independently + Demonstrated communication skills **Preferred Tech and Prof Experience** Advanced knowledge in one or more of the following areas: + Cloud computing infrastructure, including cloud platform and programming models; cloud infrastructure services; APIs; containers; DevOps techniques; microservices + Computing and data services, including emerging platforms (Spark, SQL/NoSQL, Blockchain) + Artificial intelligence computing infrastructure, including deep learning platform and services, accelerated cognitive systems, accelerators interface architecture, compilers and programming models, co-designed hardware/software + Distributed computing theory and application, systems; services technology, including service management, analytics, automation and orchestration, cognitive services + Memory controller architecture, memory performance modeling, memory standards and interfaces, cache design, non-volatile memory, NAND flash + Quantum Computing **EO Statement** IBM is committed to creating a diverse environment and is proud to be an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, genetics, disability, age, or veteran status. IBM is also committed to compliance with all fair employment practices regarding citizenship and immigration status.
          (USA-NC-Raleigh) Research Staff Member      Cache   Translate Page      
**Job Description** At IBM Research, we invent things that matter to the world. Today, we are pioneering the most promising and disruptive technologies that will transform industries and society, including the future of AI, Blockchain and Quantum Computing. We are driven to discover. With more than 3,000 researchers in 12 labs located across six continents, IBM Research is one of the world’s largest and most influential corporate research labs. We are seeking Research Staff Member candidates with demonstrated publication records in one or more of our focus areas and technical leadership potential. As part of the IBM Research team, you will conduct world-class research on innovative technologies and solutions, and publish in top-tier conferences and journals. You will also have the opportunity to contribute to the commercialization of the resulting assets. Demonstrated communication skills and ability to work independently, as well as in a team, are highly desired traits. You should have one or more of the following skills: -Cloud computing infrastructure, including cloud platform and programming models; cloud infrastructure services; APIs; containers; DevOps techniques; microservices -Computing and data services, including emerging platforms (Spark, SQL/NoSQL, Blockchain) -Artificial intelligence computing infrastructure, including deep learning platform and services, accelerated cognitive systems, accelerators interface architecture, compilers and programming models, co-designed hardware/software -Distributed computing theory and application, systems; services technology, including service management, analytics, automation and orchestration, cognitive services -Memory controller architecture, memory performance modeling, memory standards and interfaces, cache design, non-volatile memory, NAND flash -Quantum Computing You must be willing to work in any of the following locations: Albany, NY; Almaden, CA; Austin, TX; Cambridge, MA; Yorktown Heights, NY. Ph.D. 
degree in Engineering, Computer Science, Physics or a related area is required. The World is Our Laboratory: No matter where discovery takes place, IBM researchers push the boundaries of science, technology and business to make the world work better. IBM Research is a global community of forward-thinkers working towards a common goal: progress. **Required Technical and Professional Expertise** + Ability to work in a team environment, as well as independently + Demonstrated communication skills **Preferred Tech and Prof Experience** Advanced knowledge in one or more of the following areas: + Cloud computing infrastructure, including cloud platform and programming models; cloud infrastructure services; APIs; containers; DevOps techniques; microservices + Computing and data services, including emerging platforms (Spark, SQL/NoSQL, Blockchain) + Artificial intelligence computing infrastructure, including deep learning platform and services, accelerated cognitive systems, accelerators interface architecture, compilers and programming models, co-designed hardware/software + Distributed computing theory and application, systems; services technology, including service management, analytics, automation and orchestration, cognitive services + Memory controller architecture, memory performance modeling, memory standards and interfaces, cache design, non-volatile memory, NAND flash + Quantum Computing **EO Statement** IBM is committed to creating a diverse environment and is proud to be an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, genetics, disability, age, or veteran status. IBM is also committed to compliance with all fair employment practices regarding citizenship and immigration status.
          (USA-PA-Philadelphia) Research Staff Member      Cache   Translate Page      
**Job Description** At IBM Research, we invent things that matter to the world. Today, we are pioneering the most promising and disruptive technologies that will transform industries and society, including the future of AI, Blockchain and Quantum Computing. We are driven to discover. With more than 3,000 researchers in 12 labs located across six continents, IBM Research is one of the world’s largest and most influential corporate research labs. We are seeking Research Staff Member candidates with demonstrated publication records in one or more of our focus areas and technical leadership potential. As part of the IBM Research team, you will conduct world-class research on innovative technologies and solutions, and publish in top-tier conferences and journals. You will also have the opportunity to contribute to the commercialization of the resulting assets. Demonstrated communication skills and ability to work independently, as well as in a team, are highly desired traits. You should have one or more of the following skills: -Cloud computing infrastructure, including cloud platform and programming models; cloud infrastructure services; APIs; containers; DevOps techniques; microservices -Computing and data services, including emerging platforms (Spark, SQL/NoSQL, Blockchain) -Artificial intelligence computing infrastructure, including deep learning platform and services, accelerated cognitive systems, accelerators interface architecture, compilers and programming models, co-designed hardware/software -Distributed computing theory and application, systems; services technology, including service management, analytics, automation and orchestration, cognitive services -Memory controller architecture, memory performance modeling, memory standards and interfaces, cache design, non-volatile memory, NAND flash -Quantum Computing You must be willing to work in any of the following locations: Albany, NY; Almaden, CA; Austin, TX; Cambridge, MA; Yorktown Heights, NY. Ph.D. 
degree in Engineering, Computer Science, Physics or a related area is required. The World is Our Laboratory: No matter where discovery takes place, IBM researchers push the boundaries of science, technology and business to make the world work better. IBM Research is a global community of forward-thinkers working towards a common goal: progress. **Required Technical and Professional Expertise** + Ability to work in a team environment, as well as independently + Demonstrated communication skills **Preferred Tech and Prof Experience** Advanced knowledge in one or more of the following areas: + Cloud computing infrastructure, including cloud platform and programming models; cloud infrastructure services; APIs; containers; DevOps techniques; microservices + Computing and data services, including emerging platforms (Spark, SQL/NoSQL, Blockchain) + Artificial intelligence computing infrastructure, including deep learning platform and services, accelerated cognitive systems, accelerators interface architecture, compilers and programming models, co-designed hardware/software + Distributed computing theory and application, systems; services technology, including service management, analytics, automation and orchestration, cognitive services + Memory controller architecture, memory performance modeling, memory standards and interfaces, cache design, non-volatile memory, NAND flash + Quantum Computing **EO Statement** IBM is committed to creating a diverse environment and is proud to be an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, genetics, disability, age, or veteran status. IBM is also committed to compliance with all fair employment practices regarding citizenship and immigration status.
          (USA-NC-Research Triangle Park) Research Staff Member      Cache   Translate Page      
**Job Description**

At IBM Research, we invent things that matter to the world. Today, we are pioneering the most promising and disruptive technologies that will transform industries and society, including the future of AI, Blockchain and Quantum Computing. We are driven to discover. With more than 3,000 researchers in 12 labs located across six continents, IBM Research is one of the world’s largest and most influential corporate research labs.

We are seeking Research Staff Member candidates with demonstrated publication records in one or more of our focus areas and technical leadership potential. As part of the IBM Research team, you will conduct world-class research on innovative technologies and solutions, and publish in top-tier conferences and journals. You will also have the opportunity to contribute to the commercialization of the resulting assets. Demonstrated communication skills and the ability to work independently, as well as in a team, are highly desired traits.

You should have one or more of the following skills:

- Cloud computing infrastructure, including cloud platform and programming models; cloud infrastructure services; APIs; containers; DevOps techniques; microservices
- Computing and data services, including emerging platforms (Spark, SQL/NoSQL, Blockchain)
- Artificial intelligence computing infrastructure, including deep learning platform and services, accelerated cognitive systems, accelerator interface architecture, compilers and programming models, co-designed hardware/software
- Distributed computing theory, application, and systems; services technology, including service management, analytics, automation and orchestration, cognitive services
- Memory controller architecture, memory performance modeling, memory standards and interfaces, cache design, non-volatile memory, NAND flash
- Quantum Computing

You must be willing to work in any of the following locations: Albany, NY; Almaden, CA; Austin, TX; Cambridge, MA; Yorktown Heights, NY. A Ph.D. degree in Engineering, Computer Science, Physics or a related area is required.

The World is Our Laboratory: No matter where discovery takes place, IBM researchers push the boundaries of science, technology and business to make the world work better. IBM Research is a global community of forward-thinkers working towards a common goal: progress.

**Required Technical and Professional Expertise**

+ Ability to work in a team environment, as well as independently
+ Demonstrated communication skills

**Preferred Tech and Prof Experience**

Advanced knowledge in one or more of the following areas:

+ Cloud computing infrastructure, including cloud platform and programming models; cloud infrastructure services; APIs; containers; DevOps techniques; microservices
+ Computing and data services, including emerging platforms (Spark, SQL/NoSQL, Blockchain)
+ Artificial intelligence computing infrastructure, including deep learning platform and services, accelerated cognitive systems, accelerator interface architecture, compilers and programming models, co-designed hardware/software
+ Distributed computing theory, application, and systems; services technology, including service management, analytics, automation and orchestration, cognitive services
+ Memory controller architecture, memory performance modeling, memory standards and interfaces, cache design, non-volatile memory, NAND flash
+ Quantum Computing

**EO Statement**

IBM is committed to creating a diverse environment and is proud to be an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, genetics, disability, age, or veteran status. IBM is also committed to compliance with all fair employment practices regarding citizenship and immigration status.
          (USA-TX-Austin) Research Staff Member      Cache   Translate Page      
**Job Description** At IBM Research, we invent things that matter to the world. Today, we are pioneering the most promising and disruptive technologies that will transform industries and society, including the future of AI, Blockchain and Quantum Computing. We are driven to discover. With more than 3,000 researchers in 12 labs located across six continents, IBM Research is one of the world’s largest and most influential corporate research labs. We are seeking Research Staff Member candidates with demonstrated publication records in one or more of our focus areas and technical leadership potential. As part of the IBM Research team, you will conduct world-class research on innovative technologies and solutions, and publish in top-tier conferences and journals. You will also have the opportunity to contribute to the commercialization of the resulting assets. Demonstrated communication skills and ability to work independently, as well as in a team, are highly desired traits. You should have one or more of the following skills: -Cloud computing infrastructure, including cloud platform and programming models; cloud infrastructure services; APIs; containers; DevOps techniques; microservices -Computing and data services, including emerging platforms (Spark, SQL/NoSQL, Blockchain) -Artificial intelligence computing infrastructure, including deep learning platform and services, accelerated cognitive systems, accelerators interface architecture, compilers and programming models, co-designed hardware/software -Distributed computing theory and application, systems; services technology, including service management, analytics, automation and orchestration, cognitive services -Memory controller architecture, memory performance modeling, memory standards and interfaces, cache design, non-volatile memory, NAND flash -Quantum Computing You must be willing to work in any of the following locations: Albany, NY; Almaden, CA; Austin, TX; Cambridge, MA; Yorktown Heights, NY. Ph.D. 
degree in Engineering, Computer Science, Physics or a related area is required. The World is Our Laboratory: No matter where discovery takes place, IBM researchers push the boundaries of science, technology and business to make the world work better. IBM Research is a global community of forward-thinkers working towards a common goal: progress. **Required Technical and Professional Expertise** + Ability to work in a team environment, as well as independently + Demonstrated communication skills **Preferred Tech and Prof Experience** Advanced knowledge in one or more of the following areas: + Cloud computing infrastructure, including cloud platform and programming models; cloud infrastructure services; APIs; containers; DevOps techniques; microservices + Computing and data services, including emerging platforms (Spark, SQL/NoSQL, Blockchain) + Artificial intelligence computing infrastructure, including deep learning platform and services, accelerated cognitive systems, accelerators interface architecture, compilers and programming models, co-designed hardware/software + Distributed computing theory and application, systems; services technology, including service management, analytics, automation and orchestration, cognitive services + Memory controller architecture, memory performance modeling, memory standards and interfaces, cache design, non-volatile memory, NAND flash + Quantum Computing **EO Statement** IBM is committed to creating a diverse environment and is proud to be an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, genetics, disability, age, or veteran status. IBM is also committed to compliance with all fair employment practices regarding citizenship and immigration status.
          (USA-CA-San Jose) Research Staff Member      Cache   Translate Page      
**Job Description** At IBM Research, we invent things that matter to the world. Today, we are pioneering the most promising and disruptive technologies that will transform industries and society, including the future of AI, Blockchain and Quantum Computing. We are driven to discover. With more than 3,000 researchers in 12 labs located across six continents, IBM Research is one of the world’s largest and most influential corporate research labs. We are seeking Research Staff Member candidates with demonstrated publication records in one or more of our focus areas and technical leadership potential. As part of the IBM Research team, you will conduct world-class research on innovative technologies and solutions, and publish in top-tier conferences and journals. You will also have the opportunity to contribute to the commercialization of the resulting assets. Demonstrated communication skills and ability to work independently, as well as in a team, are highly desired traits. You should have one or more of the following skills: -Cloud computing infrastructure, including cloud platform and programming models; cloud infrastructure services; APIs; containers; DevOps techniques; microservices -Computing and data services, including emerging platforms (Spark, SQL/NoSQL, Blockchain) -Artificial intelligence computing infrastructure, including deep learning platform and services, accelerated cognitive systems, accelerators interface architecture, compilers and programming models, co-designed hardware/software -Distributed computing theory and application, systems; services technology, including service management, analytics, automation and orchestration, cognitive services -Memory controller architecture, memory performance modeling, memory standards and interfaces, cache design, non-volatile memory, NAND flash -Quantum Computing You must be willing to work in any of the following locations: Albany, NY; Almaden, CA; Austin, TX; Cambridge, MA; Yorktown Heights, NY. Ph.D. 
degree in Engineering, Computer Science, Physics or a related area is required. The World is Our Laboratory: No matter where discovery takes place, IBM researchers push the boundaries of science, technology and business to make the world work better. IBM Research is a global community of forward-thinkers working towards a common goal: progress. **Required Technical and Professional Expertise** + Ability to work in a team environment, as well as independently + Demonstrated communication skills **Preferred Tech and Prof Experience** Advanced knowledge in one or more of the following areas: + Cloud computing infrastructure, including cloud platform and programming models; cloud infrastructure services; APIs; containers; DevOps techniques; microservices + Computing and data services, including emerging platforms (Spark, SQL/NoSQL, Blockchain) + Artificial intelligence computing infrastructure, including deep learning platform and services, accelerated cognitive systems, accelerators interface architecture, compilers and programming models, co-designed hardware/software + Distributed computing theory and application, systems; services technology, including service management, analytics, automation and orchestration, cognitive services + Memory controller architecture, memory performance modeling, memory standards and interfaces, cache design, non-volatile memory, NAND flash + Quantum Computing **EO Statement** IBM is committed to creating a diverse environment and is proud to be an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, genetics, disability, age, or veteran status. IBM is also committed to compliance with all fair employment practices regarding citizenship and immigration status.
          (USA-GA-Atlanta) Research Staff Member      Cache   Translate Page      
**Job Description** At IBM Research, we invent things that matter to the world. Today, we are pioneering the most promising and disruptive technologies that will transform industries and society, including the future of AI, Blockchain and Quantum Computing. We are driven to discover. With more than 3,000 researchers in 12 labs located across six continents, IBM Research is one of the world’s largest and most influential corporate research labs. We are seeking Research Staff Member candidates with demonstrated publication records in one or more of our focus areas and technical leadership potential. As part of the IBM Research team, you will conduct world-class research on innovative technologies and solutions, and publish in top-tier conferences and journals. You will also have the opportunity to contribute to the commercialization of the resulting assets. Demonstrated communication skills and ability to work independently, as well as in a team, are highly desired traits. You should have one or more of the following skills: -Cloud computing infrastructure, including cloud platform and programming models; cloud infrastructure services; APIs; containers; DevOps techniques; microservices -Computing and data services, including emerging platforms (Spark, SQL/NoSQL, Blockchain) -Artificial intelligence computing infrastructure, including deep learning platform and services, accelerated cognitive systems, accelerators interface architecture, compilers and programming models, co-designed hardware/software -Distributed computing theory and application, systems; services technology, including service management, analytics, automation and orchestration, cognitive services -Memory controller architecture, memory performance modeling, memory standards and interfaces, cache design, non-volatile memory, NAND flash -Quantum Computing You must be willing to work in any of the following locations: Albany, NY; Almaden, CA; Austin, TX; Cambridge, MA; Yorktown Heights, NY. Ph.D. 
degree in Engineering, Computer Science, Physics or a related area is required. The World is Our Laboratory: No matter where discovery takes place, IBM researchers push the boundaries of science, technology and business to make the world work better. IBM Research is a global community of forward-thinkers working towards a common goal: progress. **Required Technical and Professional Expertise** + Ability to work in a team environment, as well as independently + Demonstrated communication skills **Preferred Tech and Prof Experience** Advanced knowledge in one or more of the following areas: + Cloud computing infrastructure, including cloud platform and programming models; cloud infrastructure services; APIs; containers; DevOps techniques; microservices + Computing and data services, including emerging platforms (Spark, SQL/NoSQL, Blockchain) + Artificial intelligence computing infrastructure, including deep learning platform and services, accelerated cognitive systems, accelerators interface architecture, compilers and programming models, co-designed hardware/software + Distributed computing theory and application, systems; services technology, including service management, analytics, automation and orchestration, cognitive services + Memory controller architecture, memory performance modeling, memory standards and interfaces, cache design, non-volatile memory, NAND flash + Quantum Computing **EO Statement** IBM is committed to creating a diverse environment and is proud to be an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, genetics, disability, age, or veteran status. IBM is also committed to compliance with all fair employment practices regarding citizenship and immigration status.
          (USA-MA-Boston) Research Staff Member      Cache   Translate Page      
**Job Description** At IBM Research, we invent things that matter to the world. Today, we are pioneering the most promising and disruptive technologies that will transform industries and society, including the future of AI, Blockchain and Quantum Computing. We are driven to discover. With more than 3,000 researchers in 12 labs located across six continents, IBM Research is one of the world’s largest and most influential corporate research labs. We are seeking Research Staff Member candidates with demonstrated publication records in one or more of our focus areas and technical leadership potential. As part of the IBM Research team, you will conduct world-class research on innovative technologies and solutions, and publish in top-tier conferences and journals. You will also have the opportunity to contribute to the commercialization of the resulting assets. Demonstrated communication skills and ability to work independently, as well as in a team, are highly desired traits. You should have one or more of the following skills: -Cloud computing infrastructure, including cloud platform and programming models; cloud infrastructure services; APIs; containers; DevOps techniques; microservices -Computing and data services, including emerging platforms (Spark, SQL/NoSQL, Blockchain) -Artificial intelligence computing infrastructure, including deep learning platform and services, accelerated cognitive systems, accelerators interface architecture, compilers and programming models, co-designed hardware/software -Distributed computing theory and application, systems; services technology, including service management, analytics, automation and orchestration, cognitive services -Memory controller architecture, memory performance modeling, memory standards and interfaces, cache design, non-volatile memory, NAND flash -Quantum Computing You must be willing to work in any of the following locations: Albany, NY; Almaden, CA; Austin, TX; Cambridge, MA; Yorktown Heights, NY. Ph.D. 
degree in Engineering, Computer Science, Physics or a related area is required. The World is Our Laboratory: No matter where discovery takes place, IBM researchers push the boundaries of science, technology and business to make the world work better. IBM Research is a global community of forward-thinkers working towards a common goal: progress. **Required Technical and Professional Expertise** + Ability to work in a team environment, as well as independently + Demonstrated communication skills **Preferred Tech and Prof Experience** Advanced knowledge in one or more of the following areas: + Cloud computing infrastructure, including cloud platform and programming models; cloud infrastructure services; APIs; containers; DevOps techniques; microservices + Computing and data services, including emerging platforms (Spark, SQL/NoSQL, Blockchain) + Artificial intelligence computing infrastructure, including deep learning platform and services, accelerated cognitive systems, accelerators interface architecture, compilers and programming models, co-designed hardware/software + Distributed computing theory and application, systems; services technology, including service management, analytics, automation and orchestration, cognitive services + Memory controller architecture, memory performance modeling, memory standards and interfaces, cache design, non-volatile memory, NAND flash + Quantum Computing **EO Statement** IBM is committed to creating a diverse environment and is proud to be an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, genetics, disability, age, or veteran status. IBM is also committed to compliance with all fair employment practices regarding citizenship and immigration status.
          (USA-NY-New York) Research Staff Member      Cache   Translate Page      
**Job Description** At IBM Research, we invent things that matter to the world. Today, we are pioneering the most promising and disruptive technologies that will transform industries and society, including the future of AI, Blockchain and Quantum Computing. We are driven to discover. With more than 3,000 researchers in 12 labs located across six continents, IBM Research is one of the world’s largest and most influential corporate research labs. We are seeking Research Staff Member candidates with demonstrated publication records in one or more of our focus areas and technical leadership potential. As part of the IBM Research team, you will conduct world-class research on innovative technologies and solutions, and publish in top-tier conferences and journals. You will also have the opportunity to contribute to the commercialization of the resulting assets. Demonstrated communication skills and ability to work independently, as well as in a team, are highly desired traits. You should have one or more of the following skills: -Cloud computing infrastructure, including cloud platform and programming models; cloud infrastructure services; APIs; containers; DevOps techniques; microservices -Computing and data services, including emerging platforms (Spark, SQL/NoSQL, Blockchain) -Artificial intelligence computing infrastructure, including deep learning platform and services, accelerated cognitive systems, accelerators interface architecture, compilers and programming models, co-designed hardware/software -Distributed computing theory and application, systems; services technology, including service management, analytics, automation and orchestration, cognitive services -Memory controller architecture, memory performance modeling, memory standards and interfaces, cache design, non-volatile memory, NAND flash -Quantum Computing You must be willing to work in any of the following locations: Albany, NY; Almaden, CA; Austin, TX; Cambridge, MA; Yorktown Heights, NY. Ph.D. 
degree in Engineering, Computer Science, Physics or a related area is required. The World is Our Laboratory: No matter where discovery takes place, IBM researchers push the boundaries of science, technology and business to make the world work better. IBM Research is a global community of forward-thinkers working towards a common goal: progress. **Required Technical and Professional Expertise** + Ability to work in a team environment, as well as independently + Demonstrated communication skills **Preferred Tech and Prof Experience** Advanced knowledge in one or more of the following areas: + Cloud computing infrastructure, including cloud platform and programming models; cloud infrastructure services; APIs; containers; DevOps techniques; microservices + Computing and data services, including emerging platforms (Spark, SQL/NoSQL, Blockchain) + Artificial intelligence computing infrastructure, including deep learning platform and services, accelerated cognitive systems, accelerators interface architecture, compilers and programming models, co-designed hardware/software + Distributed computing theory and application, systems; services technology, including service management, analytics, automation and orchestration, cognitive services + Memory controller architecture, memory performance modeling, memory standards and interfaces, cache design, non-volatile memory, NAND flash + Quantum Computing **EO Statement** IBM is committed to creating a diverse environment and is proud to be an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, genetics, disability, age, or veteran status. IBM is also committed to compliance with all fair employment practices regarding citizenship and immigration status.
          (USA-MN-Minneapolis) Research Staff Member      Cache   Translate Page      
**Job Description** At IBM Research, we invent things that matter to the world. Today, we are pioneering the most promising and disruptive technologies that will transform industries and society, including the future of AI, Blockchain and Quantum Computing. We are driven to discover. With more than 3,000 researchers in 12 labs located across six continents, IBM Research is one of the world’s largest and most influential corporate research labs. We are seeking Research Staff Member candidates with demonstrated publication records in one or more of our focus areas and technical leadership potential. As part of the IBM Research team, you will conduct world-class research on innovative technologies and solutions, and publish in top-tier conferences and journals. You will also have the opportunity to contribute to the commercialization of the resulting assets. Demonstrated communication skills and ability to work independently, as well as in a team, are highly desired traits. You should have one or more of the following skills: -Cloud computing infrastructure, including cloud platform and programming models; cloud infrastructure services; APIs; containers; DevOps techniques; microservices -Computing and data services, including emerging platforms (Spark, SQL/NoSQL, Blockchain) -Artificial intelligence computing infrastructure, including deep learning platform and services, accelerated cognitive systems, accelerators interface architecture, compilers and programming models, co-designed hardware/software -Distributed computing theory and application, systems; services technology, including service management, analytics, automation and orchestration, cognitive services -Memory controller architecture, memory performance modeling, memory standards and interfaces, cache design, non-volatile memory, NAND flash -Quantum Computing You must be willing to work in any of the following locations: Albany, NY; Almaden, CA; Austin, TX; Cambridge, MA; Yorktown Heights, NY. Ph.D. 
degree in Engineering, Computer Science, Physics or a related area is required. The World is Our Laboratory: No matter where discovery takes place, IBM researchers push the boundaries of science, technology and business to make the world work better. IBM Research is a global community of forward-thinkers working towards a common goal: progress. **Required Technical and Professional Expertise** + Ability to work in a team environment, as well as independently + Demonstrated communication skills **Preferred Tech and Prof Experience** Advanced knowledge in one or more of the following areas: + Cloud computing infrastructure, including cloud platform and programming models; cloud infrastructure services; APIs; containers; DevOps techniques; microservices + Computing and data services, including emerging platforms (Spark, SQL/NoSQL, Blockchain) + Artificial intelligence computing infrastructure, including deep learning platform and services, accelerated cognitive systems, accelerators interface architecture, compilers and programming models, co-designed hardware/software + Distributed computing theory and application, systems; services technology, including service management, analytics, automation and orchestration, cognitive services + Memory controller architecture, memory performance modeling, memory standards and interfaces, cache design, non-volatile memory, NAND flash + Quantum Computing **EO Statement** IBM is committed to creating a diverse environment and is proud to be an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, genetics, disability, age, or veteran status. IBM is also committed to compliance with all fair employment practices regarding citizenship and immigration status.
          (USA-TX-Houston) Research Staff Member      Cache   Translate Page      
**Job Description** At IBM Research, we invent things that matter to the world. Today, we are pioneering the most promising and disruptive technologies that will transform industries and society, including the future of AI, Blockchain and Quantum Computing. We are driven to discover. With more than 3,000 researchers in 12 labs located across six continents, IBM Research is one of the world’s largest and most influential corporate research labs. We are seeking Research Staff Member candidates with demonstrated publication records in one or more of our focus areas and technical leadership potential. As part of the IBM Research team, you will conduct world-class research on innovative technologies and solutions, and publish in top-tier conferences and journals. You will also have the opportunity to contribute to the commercialization of the resulting assets. Demonstrated communication skills and ability to work independently, as well as in a team, are highly desired traits. You should have one or more of the following skills: -Cloud computing infrastructure, including cloud platform and programming models; cloud infrastructure services; APIs; containers; DevOps techniques; microservices -Computing and data services, including emerging platforms (Spark, SQL/NoSQL, Blockchain) -Artificial intelligence computing infrastructure, including deep learning platform and services, accelerated cognitive systems, accelerators interface architecture, compilers and programming models, co-designed hardware/software -Distributed computing theory and application, systems; services technology, including service management, analytics, automation and orchestration, cognitive services -Memory controller architecture, memory performance modeling, memory standards and interfaces, cache design, non-volatile memory, NAND flash -Quantum Computing You must be willing to work in any of the following locations: Albany, NY; Almaden, CA; Austin, TX; Cambridge, MA; Yorktown Heights, NY. Ph.D. 
degree in Engineering, Computer Science, Physics or a related area is required. The World is Our Laboratory: No matter where discovery takes place, IBM researchers push the boundaries of science, technology and business to make the world work better. IBM Research is a global community of forward-thinkers working towards a common goal: progress. **Required Technical and Professional Expertise** + Ability to work in a team environment, as well as independently + Demonstrated communication skills **Preferred Tech and Prof Experience** Advanced knowledge in one or more of the following areas: + Cloud computing infrastructure, including cloud platform and programming models; cloud infrastructure services; APIs; containers; DevOps techniques; microservices + Computing and data services, including emerging platforms (Spark, SQL/NoSQL, Blockchain) + Artificial intelligence computing infrastructure, including deep learning platform and services, accelerated cognitive systems, accelerators interface architecture, compilers and programming models, co-designed hardware/software + Distributed computing theory and application, systems; services technology, including service management, analytics, automation and orchestration, cognitive services + Memory controller architecture, memory performance modeling, memory standards and interfaces, cache design, non-volatile memory, NAND flash + Quantum Computing **EO Statement** IBM is committed to creating a diverse environment and is proud to be an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, genetics, disability, age, or veteran status. IBM is also committed to compliance with all fair employment practices regarding citizenship and immigration status.
          (USA-AZ-Phoenix) Research Staff Member      Cache   Translate Page      
**Job Description** At IBM Research, we invent things that matter to the world. Today, we are pioneering the most promising and disruptive technologies that will transform industries and society, including the future of AI, Blockchain and Quantum Computing. We are driven to discover. With more than 3,000 researchers in 12 labs located across six continents, IBM Research is one of the world’s largest and most influential corporate research labs. We are seeking Research Staff Member candidates with demonstrated publication records in one or more of our focus areas and technical leadership potential. As part of the IBM Research team, you will conduct world-class research on innovative technologies and solutions, and publish in top-tier conferences and journals. You will also have the opportunity to contribute to the commercialization of the resulting assets. Demonstrated communication skills and ability to work independently, as well as in a team, are highly desired traits. You should have one or more of the following skills: -Cloud computing infrastructure, including cloud platform and programming models; cloud infrastructure services; APIs; containers; DevOps techniques; microservices -Computing and data services, including emerging platforms (Spark, SQL/NoSQL, Blockchain) -Artificial intelligence computing infrastructure, including deep learning platform and services, accelerated cognitive systems, accelerators interface architecture, compilers and programming models, co-designed hardware/software -Distributed computing theory and application, systems; services technology, including service management, analytics, automation and orchestration, cognitive services -Memory controller architecture, memory performance modeling, memory standards and interfaces, cache design, non-volatile memory, NAND flash -Quantum Computing You must be willing to work in any of the following locations: Albany, NY; Almaden, CA; Austin, TX; Cambridge, MA; Yorktown Heights, NY. Ph.D. 
degree in Engineering, Computer Science, Physics or a related area is required. The World is Our Laboratory: No matter where discovery takes place, IBM researchers push the boundaries of science, technology and business to make the world work better. IBM Research is a global community of forward-thinkers working towards a common goal: progress. **Required Technical and Professional Expertise** + Ability to work in a team environment, as well as independently + Demonstrated communication skills **Preferred Tech and Prof Experience** Advanced knowledge in one or more of the following areas: + Cloud computing infrastructure, including cloud platform and programming models; cloud infrastructure services; APIs; containers; DevOps techniques; microservices + Computing and data services, including emerging platforms (Spark, SQL/NoSQL, Blockchain) + Artificial intelligence computing infrastructure, including deep learning platform and services, accelerated cognitive systems, accelerators interface architecture, compilers and programming models, co-designed hardware/software + Distributed computing theory and application, systems; services technology, including service management, analytics, automation and orchestration, cognitive services + Memory controller architecture, memory performance modeling, memory standards and interfaces, cache design, non-volatile memory, NAND flash + Quantum Computing **EO Statement** IBM is committed to creating a diverse environment and is proud to be an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, genetics, disability, age, or veteran status. IBM is also committed to compliance with all fair employment practices regarding citizenship and immigration status.
          (USA-IL-Chicago) Research Staff Member
          (USA-NC-Charlotte) Research Staff Member
          (USA-MO-St Louis) Research Staff Member
          (USA-DC-Washington) Research Staff Member
          (USA-NY-Armonk) Research Staff Member
          (USA-TX-Dallas) Research Staff Member
          Postdoctoral Fellow in Deep Learning applied to Digital Pathology - Sunnybrook Health Sciences Centre - Toronto, ON
The Martel group is located in the Image Processing Lab at Sunnybrook Research Institute. The medical image analysis research group, led by Dr Anne Martel, is...
From Sunnybrook Health Sciences Centre - Wed, 20 Feb 2019 16:24:57 GMT - View all Toronto, ON jobs
          Optimizing the depth and the direction of prospective planning using information values
by Can Eren Sezener, Amir Dezfouli, Mehdi Keramati Evaluating the future consequences of actions is achievable by simulating a mental search tree into the future. Expanding deep trees, however, is computationally taxing. Therefore, machines and humans use a plan-until-habit scheme that simulates the environment up to a limited depth and then exploits habitual values as … Continue reading Optimizing the depth and the direction of prospective planning using information values
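
The plan-until-habit idea described above can be sketched as a depth-limited search that backs off to cached habitual values once the planning horizon is reached. The toy environment and habit values below are illustrative assumptions, not material from the paper:

```python
# Sketch of a "plan-until-habit" scheme: expand the search tree to a fixed
# depth, then fall back on cached habitual values at the horizon.

def plan_until_habit(state, depth, actions, transition, reward, habit_value):
    """Best value of `state` from depth-limited search with habitual leaves."""
    if depth == 0:
        return habit_value[state]  # horizon reached: use the cached habit value
    return max(
        reward(state, a)
        + plan_until_habit(transition(state, a), depth - 1,
                           actions, transition, reward, habit_value)
        for a in actions(state)
    )

# Toy chain world: advancing one step earns reward 1, staying earns 0.
actions = lambda s: ["advance", "stay"]
transition = lambda s, a: min(s + 1, 2) if a == "advance" else s
reward = lambda s, a: 1 if a == "advance" else 0
habit = {0: 0.0, 1: 0.0, 2: 0.0}
print(plan_until_habit(0, 2, actions, transition, reward, habit))
```

Deepening the tree trades computation for accuracy; shortening it leans more heavily on the habitual values, which is the trade-off the paper optimizes.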
          Spike burst-pause dynamics of Purkinje cells regulate sensorimotor adaptation
by Niceto R. Luque, Francisco Naveros, Richard R. Carrillo, Eduardo Ros, Angelo Arleo Cerebellar Purkinje cells mediate accurate eye movement coordination. However, it remains unclear how oculomotor adaptation depends on the interplay between the characteristic Purkinje cell response patterns, namely tonic, bursting, and spike pauses. Here, a spiking cerebellar model assesses the role of Purkinje … Continue reading Spike burst-pause dynamics of Purkinje cells regulate sensorimotor adaptation
          Individual prognosis at diagnosis in nonmetastatic prostate cancer: Development and external validation of the PREDICT Prostate multivariable model
by David R. Thurtle, David C. Greenberg, Lui S. Lee, Hong H. Huang, Paul D. Pharoah, Vincent J. Gnanapragasam Background Prognostic stratification is the cornerstone of management in nonmetastatic prostate cancer (PCa). However, existing prognostic models are inadequate: often using treatment outcomes rather than survival, stratifying by broad heterogeneous groups and using heavily treated cohorts. To … Continue reading Individual prognosis at diagnosis in nonmetastatic prostate cancer: Development and external validation of the PREDICT Prostate multivariable model
          Age distribution, trends, and forecasts of under-5 mortality in 31 sub-Saharan African countries: A modeling study
by Iván Mejía-Guevara, Wenyun Zuo, Eran Bendavid, Nan Li, Shripad Tuljapurkar Background Despite the sharp decline in global under-5 deaths since 1990, uneven progress has been achieved across and within countries. In sub-Saharan Africa (SSA), the Millennium Development Goals (MDGs) for child mortality were met only by a few countries. Valid concerns exist as to … Continue reading Age distribution, trends, and forecasts of under-5 mortality in 31 sub-Saharan African countries: A modeling study
          Deep learning: engage the world, change the world
Michael Fullan, Joanne Quinn, Joanne McEachen. Thousand Oaks, California: Corwin [2018] -- ÉPC-Biologie: LB 2806 F849 2018
          Data Science: Deep Learning in Python (Updated)
Data Science: Deep Learning in Python (Updated)
.MP4 | Video: 1280x720, 30 fps(r) | Audio: AAC, 48000 Hz, 2ch | 1.43 GB
Duration: 9.5 hours | Genre: eLearning Video | Language: English

The MOST in-depth look at neural network theory, and how to code one with pure Python and TensorFlow.

Learn how Deep Learning REALLY works (not just some diagrams and magical black box code)
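
To give a flavor of the "pure Python" approach the course advertises, here is a minimal one-hidden-layer forward pass; the network shape and weights are illustrative assumptions, not material from the course:

```python
import math

def sigmoid(x):
    # Logistic activation: squashes any real number into (0, 1).
    return 1.0 / (1.0 + math.exp(-x))

def forward(x, w_hidden, w_out):
    """Forward pass: input vector -> hidden activations -> single output."""
    hidden = [sigmoid(sum(w * xi for w, xi in zip(row, x))) for row in w_hidden]
    return sigmoid(sum(w * h for w, h in zip(w_out, hidden)))

# Two inputs, two hidden units, one output; weights chosen for illustration.
w_hidden = [[0.5, -0.5], [0.25, 0.75]]
w_out = [1.0, -1.0]
print(forward([1.0, 0.0], w_hidden, w_out))
```

A framework like TensorFlow expresses the same computation as a graph and adds automatic differentiation, which is what makes training deep networks practical.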


          Experiencing LG's Smart Home in Sydney

Liputan6.com, Sydney - At its Asia Pacific InnoFest 2019 event, LG Electronics put artificial intelligence (AI) technology front and center by setting up a smart home called LG Mansion in Cremorne, Sydney, New South Wales, Australia.

The smart home showcased demos of LG's premium (LG Signature) electronics, all of which embed the company's AI platform, ThinQ, with support from Google Assistant and Amazon.

Tekno Liputan6.com had the opportunity to experience first-hand how these products make everyday activities easier and more enjoyable at the LG smart home on Wednesday (13/3/2019) local time.

Electronic devices, from televisions (TVs), air conditioners, vacuum cleaners, refrigerators, wine coolers, microwaves, and beer-brewing machines to washing machines, can communicate with one another.

This is called the Internet of Things (IoT), and it all runs on high-speed internet connectivity such as 5G. AI, in turn, is what IoT needs to make these devices friendlier to use.

Thanks to AI, the devices also learn the user's habits and recommend or suggest settings and usage so that everything runs smoothly.

To use LG's smart devices, the user simply gives a voice command by saying "Hi LG", provided the device already recognizes the user's voice.

The LG InnoFest 2019 exhibition in Sydney, Australia. Liputan6.com/Iskandar

For example, when putting wine into the wine cooler, the user can simply say, "Hi LG, open the door please." The cooler door then opens by itself, with no risk of dropping the wine bottles held in both hands.

Virtual Assistant

The LG InnoFest 2019 exhibition in Sydney, Australia. Liputan6.com/Iskandar

In addition, when the user turns on an air purifier inside the house, a virtual assistant in the form of a speaker will recommend whether or not the air conditioner should be switched on. The virtual assistant will also report on the air quality outdoors.

Then, when the user cleans the house with a vacuum cleaner, ThinQ will notice and recommend activating the robot cleaner.

The LG InnoFest 2019 exhibition in Sydney, Australia. Liputan6.com/Iskandar

When the user is about to wash clothes, ThinQ will manage the washing machine's operation and alert the user if there is a problem.

The LG InnoFest 2019 exhibition in Sydney, Australia. Liputan6.com/Iskandar

For instance, if the washing machine door is not closed tightly, ThinQ will notify the user of the problem. ThinQ knows all the settings that match our preferences.

Improving the TV's AI Features

The LG InnoFest 2019 exhibition in Sydney, Australia. Liputan6.com/Iskandar

Beyond that, LG's latest smart TVs can also be controlled by voice commands through the microphone on the remote control. The smart TV can even act as a hub for communicating with other smart devices.

LG claims to have improved the TV's AI features, with on-board deep learning directing the processor to improve picture quality.

(Isk/Jek)


          Machine Learning - Al Manal Training Center, United Arab Emirates, Abu Dhabi, Abu Dhabi
Machine Learning Training:

Machine learning is a branch of computer science, and one of the most widely studied courses today because the opportunities are abundant. If you want to take a machine learning course in Abu Dhabi, reach out to Al Manal Training Center.
As new technologies have evolved, machine learning has changed greatly, and our machine learning classes in Abu Dhabi will give you a clear understanding of it.

Below are some of the topics that we will discuss in our machine learning training classes at our institute:

Machine Learning
Data Preprocessing
Introduction to Supervised Learning
Simple and Multiple Linear Regression
Polynomial Regression

Linear Methods for Classification
Logistic Regression
K-Nearest Neighbours
Support Vector Machines
Kernel SVM
Naive Bayes
Decision Tree
Random Forest

Introduction to Unsupervised Learning
Cluster Analysis
K Means Clustering

Reinforcement Learning

Natural Language Processing

Deep Learning
Artificial Neural Networks

Dimensionality Reduction

Model Selection Procedures
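To make the first regression topic above concrete, here is a minimal, self-contained sketch of simple linear regression via ordinary least squares. This is an illustrative example only - the function name and the sample data are invented for this sketch and are not part of the course materials.

```python
import numpy as np

def fit_simple_linear_regression(x, y):
    """Fit y = slope * x + intercept by ordinary least squares."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    x_mean, y_mean = x.mean(), y.mean()
    # Closed-form OLS solution for a single feature.
    slope = ((x - x_mean) * (y - y_mean)).sum() / ((x - x_mean) ** 2).sum()
    intercept = y_mean - slope * x_mean
    return slope, intercept

# Toy data that roughly follows y = 2x.
x = [1, 2, 3, 4, 5]
y = [2.1, 3.9, 6.2, 8.0, 9.9]
slope, intercept = fit_simple_linear_regression(x, y)  # slope ≈ 1.97, intercept ≈ 0.11
```

The same closed-form idea generalizes to multiple linear regression via the normal equations, which is typically the next step in such a course.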

Cost: 3500 AED

Duration: Upto 30 Hours


          Experiencing LG's Smart Home in Sydney

Liputan6.com, Sydney - At the Asia-Pacific edition of InnoFest 2019, LG Electronics put the spotlight on artificial intelligence (AI) by setting up a smart home called LG Mansion in Cremorne, Sydney, New South Wales, Australia.

The smart home showcased demos of LG's premium electronics (LG Signature), which embed the company's AI platform, ThinQ, with support from Google Assistant and Amazon.

Tekno Liputan6.com had the opportunity to experience how these products can make everyday activities easier and more enjoyable at LG's smart home on Wednesday (13/3/2019) local time.

Electronic devices - from the television (TV), air conditioner, vacuum cleaner, refrigerator, wine cooler, and microwave to the beer maker and washing machine - can communicate with one another.

This is the Internet of Things (IoT), which runs on high-speed internet connectivity such as 5G. Here, AI is what IoT needs to make devices friendlier.

Thanks to AI, these devices also learn users' habits and recommend settings and usage patterns so the devices run well.

To use LG's smart devices, users simply issue voice commands by saying "Hi LG" - provided the device already recognizes the user's voice.

Scene from LG InnoFest 2019 in Sydney, Australia. (Liputan6.com/Iskandar)

For example, when putting wine in the wine cooler, the user can simply say, "Hi LG, open the door please." The cooler's door then opens by itself, with no need to worry about dropping the wine bottles held in both hands.

Virtual Assistant

In addition, when the user turns on the air purifier at home, a virtual assistant in the form of a speaker will recommend whether or not the air conditioner should be switched on. The assistant will also report on the air quality outdoors.

Then, when the user cleans the house with a vacuum cleaner, ThinQ will notice and recommend activating the robot cleaner.

When the user is about to do laundry, ThinQ will manage the washing machine's operation and warn the user if there is a problem.

For instance, if the washing machine's door is not closed properly, ThinQ will notify the user. Indeed, ThinQ knows all the settings that match our preferences.

Improving the TV's AI Features

And it does not stop there: LG's latest smart TVs can also be controlled by voice through the mic on the remote control. The smart TV can even act as a hub to communicate with other smart devices.

LG claims to have improved the AI features on its TVs, where built-in deep learning can direct the processor to enhance picture quality.

(Isk/Jek)


          Technical Marketing Engineer, ONTAP AI & Analytics - NetApp - Research Triangle Park, NC
A strong candidate will bring knowledge, experience, and passion for the movement towards NVIDIA GPUs to power all modern Deep Learning software frameworks,...
From NetApp - Mon, 11 Feb 2019 18:15:31 GMT - View all Research Triangle Park, NC jobs
          Technical Marketing Engineer, ONTAP AI & Analytics - NetApp - Sunnyvale, CA
A strong candidate will bring knowledge, experience, and passion for the movement towards NVIDIA GPUs to power all modern Deep Learning software frameworks,...
From NetApp - Mon, 11 Feb 2019 18:15:30 GMT - View all Sunnyvale, CA jobs
          Senior Solutions Architect - Autonomous Driving - NVIDIA - Santa Clara, CA
Be an internal champion for Deep Learning and HPC among the NVIDIA technical community. Our success has been based on laser-focused industry vertical business...
From NVIDIA - Sat, 22 Dec 2018 13:57:43 GMT - View all Santa Clara, CA jobs
          Deep Learning Solution Architect - NVIDIA - Santa Clara, CA
NVIDIA is widely considered to be one of the technology world’s most desirable employers. 5+ years delivering Enterprise Accelerated Computing (HPC, Deep...
From NVIDIA - Tue, 06 Nov 2018 01:54:48 GMT - View all Santa Clara, CA jobs
          Udemy St. Patrick’s Day Sale 🍀

Do beer and AI go together? For the next week, all my Deep Learning and AI courses are available for just $11.99! ($1.00 less than the current sale, woohoo!) For my courses, please use the coupons below (included in the links), or if you want, enter the coupon code: MAR2019. For prerequisite courses (math, stats, Python programming) and […]

The post Udemy St. Patrick’s Day Sale 🍀 appeared first on Lazy Programmer.


          Senior AI/Deep Learning Software Engineer - St Josephs Hospital and Medical Center - Phoenix, AZ
Ability to align business needs to development and machine learning or artificial intelligence solutions. Experience in natural language understanding, computer...
From Dignity Health - Tue, 27 Nov 2018 03:06:49 GMT - View all Phoenix, AZ jobs
          Deep Autoregression
Deep learning and autoregressive models

          SwiftStack Announces World’s First Multi-Cloud AI/ML Data Management Solution

According to a new press release, “SwiftStack, the leader in multi-cloud data storage and management, today announced a new customer-proven edge-to-core-to-cloud solution that supports large-scale Artificial Intelligence/Machine and Deep Learning (AI/ML/DL) workflows. SwiftStack has recently deployed the new solution stack in two autonomous vehicle use cases. SwiftStack’s AI/ML solution delivers massive storage parallelism and throughput, […]

The post SwiftStack Announces World’s First Multi-Cloud AI/ML Data Management Solution appeared first on DATAVERSITY.


          frimcla 0.0.539
Framework for Image Classification using traditional and deep learning techniques
          frimcla 0.0.538
Framework for Image Classification using traditional and deep learning techniques
          frimcla 0.0.537
Framework for Image Classification using traditional and deep learning techniques
          Deep generative models for fast shower simulation in ATLAS
The need for large scale and high fidelity simulated samples for the extensive physics program of the ATLAS experiment at the Large Hadron Collider motivates the development of new simulation techniques. Building on the recent success of deep learning algorithms, Variational Auto-Encoders and Generative Adversarial Networks are investigated for modeling the response of the ATLAS electromagnetic calorimeter for photons in a central calorimeter region over a range of energies. The properties of synthesized showers are compared to showers from a full detector simulation using Geant4. This feasibility study demonstrates the potential of using such algorithms for fast calorimeter simulation for the ATLAS experiment in the future and opens the possibility to complement current simulation techniques. To employ generative models for physics analyses, it is required to incorporate additional particle types and regions of the calorimeter and enhance the quality of the synthesized showers.
          What Is Deep Learning?
AI. Deep Learning. Robots. What is all this stuff about …
          PlayStation Now: which and how many games can be downloaded? - Everyeye Videogiochi
  1. PlayStation Now: which and how many games can be downloaded?  Everyeye Videogiochi
  2. PlayStation Now over a 20 Mbit ADSL connection: strengths and weaknesses  Multiplayer.it
  3. Will PlayStation 5 use deep learning?  Tom's Hardware Italia
  4. PlayStation Now without a credit card: how does the subscription work?  Everyeye Videogiochi
  5. PlayStation Now: Sony details the Italian launch, date, and price  Multiplayer.it

          PS5 would use deep learning to adapt to players, according to a patent
To offer personalized experiences.

Information about PlayStation 5 remains shrouded in a mystery that analysts and industry sources are trying to clear up with their forecasts.

Recently, YouTuber Skullzi TV uncovered a patent that sheds interesting light on how this new platform would work. Based on the patent's description - which you can read here -, PlayStation 5 would make use of deep learning to adapt to players.

Adapting the video game to its users

In this way, the experience a video game offers would adapt to users' play styles and habits. "The personalization learns from the players' historical interactions with the video game and, optionally, with other video games," the patent reads.

"A deep learning neural network is implemented to generate knowledge from the historical interactions" - that is, the system would analyze the set of a user's actions in a video game to offer a personalized experience. "The personalization is established according to the knowledge."

The video game will adapt to users based on their actions and skill

Thus, the video game would change its difficulty depending on how the player handles it and on their skill level. For example, if the system recognizes you as an expert player, it will skip all the tutorials so you can go straight to enjoying the game.

We find that the system "reads, analyzes, and changes" the experience according to the player's own play style or skills, the patent indicates.
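The mechanism described in the patent (infer skill from historical interactions, then adapt difficulty and skip tutorials) can be sketched as follows. This is purely a hypothetical illustration: Sony has published no code, every name here (PlayerProfile, estimate_skill, recommend_settings) and every threshold is invented, and a trivial heuristic stands in for the patent's deep learning network.

```python
from dataclasses import dataclass

@dataclass
class PlayerProfile:
    deaths_per_hour: float
    avg_completion_time: float  # relative to a baseline of 1.0
    sessions_played: int

def estimate_skill(profile: PlayerProfile) -> float:
    """Crude stand-in for a learned model: map interaction history to a skill score in [0, 1]."""
    score = 1.0
    score -= min(profile.deaths_per_hour / 20.0, 0.5)                # dying often lowers the score
    score -= min(max(profile.avg_completion_time - 1.0, 0.0), 0.4)   # slow clears lower it too
    if profile.sessions_played < 5:
        score -= 0.2                                                 # little history: assume novice
    return max(0.0, min(1.0, score))

def recommend_settings(profile: PlayerProfile) -> dict:
    """Turn the skill estimate into the adaptations the patent describes."""
    skill = estimate_skill(profile)
    return {
        "difficulty": "hard" if skill > 0.7 else "normal" if skill > 0.4 else "easy",
        "skip_tutorials": skill > 0.7,  # experts go straight to the game
    }
```

In the patented design, the heuristic would be replaced by a neural network trained on players' interaction histories, but the shape of the decision - history in, personalized settings out - is the same.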

The system "reads, analyzes, and changes" the experience

It should be noted that this information comes from a patent and that, for now, we will have to wait for Sony to confirm whether or not PS5 will ultimately make use of deep learning.

Meanwhile, the latest forecasts pointed to PlayStation 5 arriving between April 2021 and March 2022.

          Design Engineer - (Sr./Mid) SRAM/ADCs/Sigma Deltas
CA-Santa Clara, Located in beautiful Santa Clara, CA, and Orange County, we're developing deep neural network computing chips across several industries. We are very well funded and are growing our team. If you're interested in deep learning, electronics, high-performance computing, and the relevant industries that would use our amazing products (robotics, medical, self-driving cars), join us! If you're passiona…
          BrandPost: AI, This Is the Intelligent and Lossless Data Center Network You Want!

The AI era is accelerating. AI is no longer just a data model in a lab, and the industry is constantly exploring ways to put AI applications into practice. The compound annual growth rate (CAGR) of AI adoption in the government, finance, Internet, new retail, new manufacturing, and healthcare industries will exceed 30% over the next three years. With AI on the way, is the underlying network infrastructure that provides key support for AI development ready?

Algorithms, computing power, and data are the three driving forces of AI development. Today, breakthroughs have been made in deep learning algorithms. However, algorithm-driven intelligence relies heavily on enormous sample datasets and high-performance computing capabilities. Revolutionary changes have taken place in the storage and computing fields to improve AI's data processing efficiency.


          Who Is Mellanox, and Why Did NVIDIA Pay $6.9 Billion for It?

This week we learned of a singular deal: NVIDIA made the largest acquisition in its history, buying the Israeli company Mellanox for $6.9 billion and beating Intel in the bidding for the firm.

What drove NVIDIA to make that move? The reasons matter for a company that diversified long ago. Graphics cards for gaming are no longer all that counts: NVIDIA wants to win the battle for artificial intelligence and data centers, a very lucrative market in which it will now have extraordinary resources.

What does Mellanox do?

The Israeli company is well known in the data center world for its specialized connectivity solutions. It has four product lines, two of which stand out: smart network adapters that act as coprocessors or accelerators for these connections, and InfiniBand technology for the HPC market and for highly specialized segments.

With the ConnectX-6 adapter, Mellanox already reaches rates of 200 Gbps.

Can you guess what one of those highly specialized segments is? That's right: artificial intelligence applications, which feed on brutal computing capacity from machines that need to be interconnected to work together. InfiniBand connections make the potential bottlenecks in these deployments disappear.

Intel was very interested in Mellanox and, according to rumors, bid $6 billion for the company. The reason is that the two compete in the interconnect market for HPC.

Ethernet dominates the supercomputing market, but InfiniBand is gaining ground.

An analysis by STH pointed this out, describing how Intel's solution (Omni-Path, with speeds of 100 Gbps per port) competed with Mellanox's InfiniBand adapter, which also achieves those transfer rates in its ConnectX-5 version and already reaches 200 Gbps with the ConnectX-6.

As that report explained, InfiniBand's relevance is clear, but so is that of Ethernet, which is truly excellent for many environments. Here Intel dominates with its 1 Gbps chips (Realtek does not compete much in servers), and in 10 GbE connections and chips Intel is also the reference.

However, with 40GbE solutions Intel ran into problems with its Fortville family of NICs (Network Interface Cards), which allowed 40GbE solutions from Mellanox (and other manufacturers) to gain ground.

That story has continued with the transition to 50GbE and 100GbE solutions, where Intel has fallen behind: Mellanox suddenly became a leading player in the Ethernet market, the most widespread in high-performance environments, data centers included.

Why does NVIDIA want Mellanox?

The answers here vary, and there are many areas in which NVIDIA could take advantage of Mellanox's solutions and resources. One would be NVIDIA's potential entry into the server market: NVIDIA now has all the components to build machines specialized for HPC environments and data centers, as this last piece of the puzzle gives it an important competitive advantage.

In fact, Mellanox's InfiniBand technology could be used to connect GPU-based servers that could run at full throttle over those high-performance connections.

This is where things take shape, because NVIDIA's division dedicated to artificial intelligence and deep learning will be able to leverage Mellanox's technology to connect those nodes with 8 or 16 GPUs (for example).

That would let it use InfiniBand for RDMA (Remote Direct Memory Access), a mechanism that "bypasses" the CPU when performing memory reads and writes and that is widely used in HPC, Big Data, data centers, and storage systems, among others.

There are other clear benefits to the synergies created by the acquisition - NVIDIA and Mellanox solutions in HPC environments were already well known and appreciated - and this makes things very difficult for Intel in HPC and for its connectivity division, which has apparently fallen behind in innovation and in its capacity to develop competitive products.

Is Intel thinking about abandoning that business segment? Hard to say, but with the purchase of Mellanox, NVIDIA also seems to be securing a good bite of this market for the near future, especially because Mellanox's solutions are also used by the big tech companies in their data centers: Alphabet, Amazon, and Microsoft take advantage of them, and even AMD has worked with Mellanox on solutions in this space.

"Who Is Mellanox, and Why Did NVIDIA Pay $6.9 Billion for It?" was originally published by Javier Pastor.


          Briefing: Facebook AI executive joins Alibaba
Alibaba is looking to enhance its fundamental AI capabilities, such as deep learning, and Jia is well regarded for his key contributions in this field.
          Medtech firm wins Qualcomm challenge for start-ups

Bengaluru: Bengaluru-based medtech firm Artelus India Pvt Ltd took home $100,000 after winning chipset maker Qualcomm's challenge for start-ups in India, the company said on Wednesday.

Artelus India focuses on leveraging cutting edge technologies like deep learning and Artificial Intelligence (AI) to increase the capacity of healthcare providers.

It has developed Diabetic Retinopathy Intelligent Screening System Integrated (DRISTi) -- a deep learning based AI algorithm that reads digital images to detect and identify early signs of diabetic retinopathy that could lead to permanent blindness.

Biometric device maker Mobiusworks Pvt Ltd and medtech firm Chingroo Labs Pvt Ltd -- both based in Bengaluru -- secured first and second runners-up spot, respectively, in the competition called Qualcomm Design in India Challenge 2018.

The first and second runners-up received $75,000 and $50,000, respectively, Qualcomm said.

Launched in 2016, the Qualcomm Design in India Challenge is an incubation programme that encourages start-ups to develop innovative hardware products using Qualcomm's advanced technologies.

With the 2018 edition coming to an end, the programme has supported 39 start-ups with an overall investment of over $12.3 million, the company said.



          Researchers Have Taught Robots Self Awareness of Their Own Bodies

Columbia Engineering's robot learns what it is, with zero prior knowledge of physics, geometry, or motor dynamics. After a period of "babbling," and within about a day of intensive computing, the robot creates a self-simulation, which it can then use to contemplate and adapt to different situations, handling new tasks as well as detecting and repairing damage in its body. (Source: Robert Kwiatkowski/Columbia Engineering) 

What if a robot had the ability to become kinematically self-aware, in essence, developing its own model, based on observation and analysis of its own characteristics?

Researchers at Columbia University have developed a process in which a robot "can auto-generate its own self model," that will accurately simulate its forward kinematics, which can be run at any point in time to update and essentially calibrate the robot as it experiences wear, damage, or reconfiguration – thereby allowing an autonomous robotic control system to achieve the highest accuracy and performance. The same self model can then be used to learn additional tasks.

For robots designed to perform critical tasks, it is essential to have an accurate kinematic model describing the robot's mechanical characteristics. This will allow the controller to project response times, inertial behavior, overshoot, and other characteristics that could potentially lead the robot's response to diverge from an issued command, and compensate for them.

The robotic arm in multiple poses as it was collecting data through random motion. (Image source: Robert Kwiatkowski/Columbia Engineering)

This requirement presents several challenges: First, as robotic mechanisms get more complex, the ability to produce a mathematically accurate model becomes more difficult. This is especially true for soft robotics, which tend to exhibit highly non-linear behavior. Second, once in service, robots can change, either through wear or damage, or simply experience different types of loads while in operation. Finally, the user may choose to reconfigure the robot to perform a different function from the one it was originally deployed for. In each of these cases, the kinematic model embedded in the controller may fail to achieve satisfactory results if not updated.

According to Robert Kwiatkowski, a doctoral student involved in the Columbia University research, a type of "self-aware robot," capable of overcoming these challenges was demonstrated in their laboratory. The team conducted the experiments using a four-degree-of freedom articulated robotic arm. The robot moved randomly through 1,000 trajectories collecting state data at 100 points along each one. The state data was derived from positional encoders on the motor and the end effector and was then fed, along with the corresponding commands, into a deep learning neural network. “Other sensing technology, such as indoor GPS would have likely worked just as well,” according to Kwiatkowski.
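The pipeline described here - random "babbling" motions producing command/state pairs that train a network to act as the robot's self-model - can be sketched in miniature. Everything below is an illustrative stand-in, not the Columbia team's code: a planar two-link arm plays the role of the physical robot, and a single-hidden-layer network plays the role of their deep model.

```python
import numpy as np

rng = np.random.default_rng(0)

def true_forward_kinematics(q):
    """Ground truth for a planar 2-link arm (stand-in for the real robot)."""
    l1, l2 = 1.0, 0.8
    x = l1 * np.cos(q[:, 0]) + l2 * np.cos(q[:, 0] + q[:, 1])
    y = l1 * np.sin(q[:, 0]) + l2 * np.sin(q[:, 0] + q[:, 1])
    return np.stack([x, y], axis=1)

# "Babbling": random joint commands and the observed end-effector positions.
Q = rng.uniform(-np.pi, np.pi, size=(1000, 2))
X = true_forward_kinematics(Q)

# One-hidden-layer network trained by gradient descent on squared error.
W1 = rng.normal(0, 0.5, (2, 64)); b1 = np.zeros(64)
W2 = rng.normal(0, 0.5, (64, 2)); b2 = np.zeros(2)

def predict(q):
    """The learned self-model: joint angles -> predicted end-effector position."""
    h = np.tanh(q @ W1 + b1)
    return h @ W2 + b2

losses, lr = [], 0.01
for step in range(2000):
    h = np.tanh(Q @ W1 + b1)
    pred = h @ W2 + b2
    err = pred - X
    losses.append(float((err ** 2).mean()))
    # Backpropagation (gradients up to a constant factor absorbed by lr).
    dW2 = h.T @ err / len(Q); db2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1 - h ** 2)
    dW1 = Q.T @ dh / len(Q); db1 = dh.mean(axis=0)
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1
```

After training, predict(q) can be queried instead of the physical hardware, which is what lets a controller plan or detect damage; relearning after a link swap amounts to repeating the babbling-and-fit loop.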

One point that Kwiatkowski emphasized was that this model had no prior knowledge of the robot's shape, size, or other characteristics, nor, for that matter, did it know anything about the laws of physics.

Initially, the models were very inaccurate. "The robot had no clue what it was, or how its joints were connected." But after 34 hours of training, the model became consistent with the physical robot to within about four centimeters.

This self-learned model was then installed into a robot and was able to perform pick-and-place operations with a 100% success rate in a closed-loop test. In an open-loop test, which Kwiatkowski said is equivalent to picking up objects with your eyes closed (a task difficult even for humans), it achieved 44% success.

Overall, the robot achieved an error rate comparable to the robot's own re-installed operating system. The self-modeling capability makes the robot far more autonomous, Kwiatkowski said. To further demonstrate this, the researchers replaced one of the robotic linkages with one having different characteristics (weight, stiffness, and shape) and the system updated its model and continued to perform as expected.

This type of capability could be extremely useful for an autonomous vehicle that could continuously update its state model in response to changes due to wear, variable internal loads, and driving conditions.

Clearly more work is required to achieve a model that can converge in seconds rather than hours. From here, the research will proceed to look into more complex systems.

RP Siegel, PE, has a master's degree in mechanical engineering and worked for 20 years in R&D at Xerox Corp. An inventor with 50 patents and now a full-time writer, RP finds his primary interest at the intersection of technology and society. His work has appeared in multiple consumer and industry outlets, and he also co-authored the eco-thriller  Vapor Trails.



          Amy Webb on Artificial Intelligence, Humanity, and the Big Nine

Futurist and author Amy Webb talks about her book, The Big Nine, with EconTalk host Russ Roberts. Webb observes that artificial intelligence is currently evolving in a handful of companies in the United States and China. She worries that innovation in the United States may lead to social changes that we may not ultimately like; in China, innovation may end up serving the geopolitical goals of the Chinese government with some uncomfortable foreign policy implications. Webb’s book is a reminder that artificial intelligence does not evolve in a vacuum–research and progress takes place in an institutional context. This is a wide-ranging conversation about the implications and possible futures of a world where artificial intelligence is increasingly part of our lives.

Podcast Episode Highlights
0:33

Intro. [Recording date: February 12, 2019.]

Russ Roberts: My guest is futurist and author Amy Webb.... Her latest book is The Big Nine: How the Tech Titans and Their Thinking Machines Could Warp Humanity.... Your book is a warning about the challenges we face, that we're going to face dealing with the rise of artificial intelligence. What is special about the book, at least in my experience reading about AI [Artificial Intelligence] and worries about artificial intelligence is that it doesn't talk about AI in the abstract but actually recognizes the reality that AI is mostly being developed within very specific institutional settings in the United States and in China. So, let's start with what you call the Big Nine. Who are they?

Amy Webb: Sure. So, what's important to note is that when it comes to AI, there's a tremendous amount of misplaced optimism and fear. And so, as you rightly point out, we tend to think in the abstract. In reality, there are 9 big tech giants who overwhelmingly are funding the research--building the open-source frameworks, developing the tools and the methodologies, building the data sets, doing the tests, and deploying AI at scale. Six of those companies are in the United States--I call them the G-Mafia for short. They are Google, Microsoft, Amazon, Facebook, IBM [International Business Machines], and Apple. And the other three are collectively known as the BAT. And they are based in China. That's Baidu, Alibaba, and Tencent. Together, those Big Nine tech companies are building the future of AI. And as a result, our helping to make serious plans and determinations, um, for I would argue the future of humanity.

Russ Roberts: And, just out of curiosity: I don't think you say very much in the book at all about Europe. Is there anything happening in Europe, in terms of research?

Amy Webb: Sure. So, the--you know, there's plenty of happening in France. Certainly in Canada. Montreal is one of the global hubs for what's known as Deep Learning. So this is not to say that there's not pockets of development and research elsewhere in the world. And it also isn't to say that there aren't additional large companies that are helping to grow the ecosystem. Certainly Salesforce and Uber are both contributing. However, when we look at the large systems, and the ecosystems and everything that plugs into them, overwhelming these are the 9 companies that we ought to be paying attention to.

3:18

Russ Roberts: So, I want to start with China. I had an episode with Mike Munger on the sharing economy and what he calls in his book Tomorrow 3.0. And, in the course of that conversation, we joked about people getting rated on their social skills and that those would be made public--how nice people were to each other. And we had a nice laugh about that. And I mentioned that I didn't think that that was an ideal situation--that people would be incentivized that way to be good people: despite my general love of incentives, that made me uneasy. And in response to that episode, some people mentioned an episode of Black Mirror[?]--the video series--and also some things that were happening in China. And I thought, 'Yeh, yeh, yeh, whatever.' But, what's happening in China--it's hard to believe. But, tell us about it.

Amy Webb: Sure. And, let me give you a quick example of one manifestation of this trend, and then sort of set that in the broader cultural context. So, there's a province in China where a new sort of global system is being rolled out. And it is continually mining and refining the data of the citizens who live in that area. So, as an example, if you cross the street when there's a red light and you are not able to safely cross the street at that point--if you choose to anyway, as to jay-walk--cameras that are embedded with smart recognition technology will automatically not just recognize that there's a person in the intersection when there's not supposed to be, but will actually recognize that person by name. So they'll use facial recognition technology along with technologies that are capable of recognizing posture and gait. It will recognize who that person is. Their image will be displayed on a nearby digital--not bulletin board; what do you call those--digital billboard. Where their name and other personal information will be displayed. And it will also trigger a social media mention on a network called Weibo. Which is one of the predominant social networks in China. And that person, probably, some of their family members, some of their friends, but also their employer, will know that they have--they have infracted--they have caused an infraction. So, they've crossed the street when they weren't supposed to. And, in some cases, that person may be publicly told--publicly shamed--and publicly told to show up at a nearby police precinct. Now, this is sort of important because it tells us something about the future of recognition technology and data. Which is very much tethered to the future of artificial intelligence. Now, better known as the Social Credit Score, China has been experimenting with this for quite a while; and they are not just tracking people as they cross the street. 
They are also looking at other ways that people behave in society, and that ranges from whether or not bills are paid on time, to how people perform in their social circles, to disciplinary actions that may be taken at work or at school, to what people are searching on--you know, on the Internet. And the idea is to generate some kind of a metric to show people definitively how well they are fitting in to Chinese society as Chinese people. This probably sounds, to the people listening to the show, like a horrible, Twilight Zone episode--

Russ Roberts: It sounds like 1984, is what it sounds like to me. It's not like, 'I wonder if that's a good idea.' It's more like, 'Are you kidding me?'

Amy Webb: Yeah. And so like, when I first heard about this, my initial response was not abject horror. I was curious. I was very curious.

Russ Roberts: [?]

Amy Webb: But like, here's what made me curious: Why bother? I mean, China has 1.4 billion people. And if the idea is to deploy something like this at scale, that is a tremendous amount of data. And you have to stop and say to yourself, 'Well, what's the point?' So, this is where some cultural context comes into play. So, I used to live in China. And I also used to live in Japan. And, they are very different cultures, very different countries. One distinctive feature of China is a community-reporting mechanism that is sort of embedded into society. And going back many thousands of years--you know, China is an enormous--it's a huge piece of land. And you've got people living throughout it; in fact, they are so spread apart, you have, you know, significantly different dialects being spoken. So, one way to sort of maintain control over vast masses of people spread out geographically was to develop a culture--sort of a tattle-tale culture. And so, throughout villages, if you were doing something untoward or breaking some kind of local custom or rule, that would get reported--you would get reported. Sort of in a gossipy way. But, you would get reported; and ultimately the person that heard the information would report that on up to maybe a precinct or a feudal manager of some kind, who would then report that up to whoever was in charge of the village or town; and then you would get into some kind of actual trouble. This was a way of maintaining social control. And so if you talk to people in China today, a lot of people are aware of monitoring. What I find so interesting is that at the moment, the outcry that we see outside of China does not match the outcry--or rather, the lack of outcry--that I have observed in China. Now, there's one other piece of this that's really important: using AI in this way ties in to China's Belt and Road Initiative [BRI]. And you might have heard about the BRI.
This is sort of a master plan--it's a long-term strategy that helps China optimize what used to be the previous Silk Road--trading route. But it's sort of built around infrastructure. What's interesting is that there's also a digital version of this--the sort of digital BRI--where China is partnering with a lot of countries that are in situations where social stability is not a guarantee. And so, they are starting to export this technology into societies and places where there isn't that cultural context in place. And so, you have to stop and wonder and ask yourself, 'What does it mean for 58 pilot countries to have in their hands a technology capable of mining and refining and learning about all of their citizens, and reporting any infractions on up to authorities?' You know, in places like the Philippines, where free speech right now is questionable, this kind of technology, which does not make sense to us as Americans, may make slightly more sense to people in China, becomes a dangerous weapon in the hands of an authoritative, an authoritarian regime elsewhere in the world.

11:14

Russ Roberts: It reminds me, when you talk about the tattle-tale culture--of course, the Soviets did the same thing. They encouraged people to inform on each other--'tattle-tale' makes it sound like a child reporting an insult. It's a monitoring mechanism by which authoritarian governments keep people in line. And you talk about the lack of outcry. Well, one reason is that you are worried that your social score is going to be low. Outcrying is probably not a good idea.

Amy Webb: That's right. That's right.

Russ Roberts: You should mention also, which I got from your book, that it's not just awkward or kind of embarrassing to have a low score. These scores are going to be used--or are already being used?--to determine whether people get credit, whether they can travel. Is that correct?

Amy Webb: Right. So, again. It's China. So, we can't be 100% sure of the information that's coming out, because it's a controlled-information ecosystem. But from what we've been able to gather, in all of the research that I've done, you know--I would suggest that it's already being used. It's certainly being used against ethnic minorities like the Uighurs. But we've seen instances of scoring systems being used to make determinations about the schools that kids are able to get into. You know, kids who, through no fault of their own, may have parents that have run afoul, you know, in some way, and earned demotions and demerits on their social credit scores. So, it would appear as though this is already starting to affect people in China. And, again, my job is to quantify and assess future risk. So, as I was doing all of this research, my mind immediately went to: What are the longer term downstream implications? I think some of them are pretty obvious. Right? Like, some people in China are going to wind up having a miserable life as a result of the social credit score--the social credit score as it grows and is more widely adopted to some extent could lead to better social harmony, I guess; but it also leads to, you know, quashing individual ideas and certain freedoms and expressions of individual thinking. But, the flip side of this is: If it's the case that China has BRI--and it's investing in countries around the world not just in infrastructure but in digital infrastructure like fiber and 5G and communications networks and small cells and all the different technologies, in addition to AI and data--isn't it plausible that some time in the near future, our future trade wars aren't just rhetoric but could wind up in a retaliatory situation where people who don't have a credit score can't participate in the Chinese economy? Or, businesses that don't have credit scores can't do business, can't trade.
Or countries that don't have--if we think about like a Triple A Bond rating, you know, what happens if this credit scoring system evolves and China does business with, only with countries that have a high-enough score? We could quite literally get locked out of part of the global economy. It seems far-fetched, but I would argue that the signals are present now that point to something that could look like that in the near future.

15:03

Russ Roberts: Well, this is going to be a pretty paranoid show--episode--of EconTalk. So, I'm okay with that kind of fear-mongering, because it strikes me as quite worrisome. And I think we have to be, as you hinted at, you have to be open-minded that maybe this will make a better Chinese society, as defined by them. You know, the Soviets wanted to create a new Soviet man--and woman. They failed. But now, with these tools maybe there will be a new Chinese man and woman who will be harmoniously living with their neighbor, never jaywalking, and never gossiping, and smiling more often. Who knows? But, it's not my first, default thought about how this is going to turn out. I think that--

Amy Webb: No, but you kind of--you have to start with--I want to point out that I am not like a dystopian fiction writer. I'm a pragmatist. So, this--I am not studying all of this for the purpose of scaring people. What I would argue is, I have studied all of this, and used data, and modeled out plausible outcomes; and it is scary. It really is. Because you have to, again, connect the dots between all of this and other adjacent areas that are important to note. The CCP [Chinese Communist Party] in China is--

Russ Roberts: the Communist Party--

Amy Webb: yep--is facing some huge opportunities but also big problems. The Chinese economy may technically be slowing, but it's not a slow economy. There's plenty of growth ahead. And, if that holds--and there's no reason why at the moment it wouldn't--you know, Chinese society is about to go through social mobility at a scale never seen before in modern human history. And as that enormous group of people moves up, they are going to want to buy stuff. They are going to want to travel. So, you know, that potentially causes some problems, because the more wealth that is earned, the more agency people feel, the more opinions they start having about how the government ought to be run. And, you know, the CCP effectively made the current President of China, Xi Jinping, effectively President for life. And 2049--which seems far off but in the grand scheme of things isn't really that far into the future--is the 100th anniversary of the founding of the CCP. China is very good at long-term planning. Now, they've not always made good on fulfilling promises. But they are good at planning.

Russ Roberts: Yes, they are.

Amy Webb: Right? So, I don't see all of this as flashes in the pan, and 'AI's kind of a hot buzzy topic right now.' I'm looking at the much longer term and the much bigger picture. That's what makes me kind of concerned.

18:02

Russ Roberts: I think that's absolutely right. One other institutional detail to make clear for listeners is that the Chinese Internet is roped off, to some extent--to quite a large extent. They are developing their own tools and apps. And, talk about the three companies in China that are working on AI and how they work together in a way that American companies are not.

Amy Webb: So, here's another interesting facet of the Big Nine: AI is on a sort of a dual-developmental track. In China, Baidu, Alibaba, and Tencent were all formed sort of in the late 1990s, early 2000s; and their origin stories are not all that different from our big, modern tech giants like Amazon and Google and Apple. The key distinction is that our big tech companies were formed, for the most part, in Seattle, Redmond, and Cupertino--California and San Francisco--where the ecosystem was able to blossom: there was plenty of competition, and there was plenty of talent. California has fairly lenient--in some ways--fairly lenient employer/employee laws, which has made it very easy for talent to move between companies. And, if you are somebody who studies innovation, you know, the sort of lack of--the limited or lack of regulation, the ability for people to move around--

Russ Roberts: letting people make enormous amounts of money when they succeed and losing all of it when they fail--

Amy Webb: Right. Right. Right. But, the lack of safety net, the lack of a central, federal authority, if you will, is partly what enabled these companies to grow. And to grow fast. And to grow big. Which is why we also see a lot of overlap. So, Google, Microsoft, Amazon, and IBM [International Business Machines] own and maintain the world's largest cloud infrastructure. So, if you own a website or you are a business owner or you are making a phone call, at some point you are accessing one of their clouds. You know--we have competing, for the most part, we have competing operating systems for our mobile devices. For the most part, we still have competing email systems. And that's because without a central authority dictating which of the companies was going to do which thing, they all sort of went it alone and built their own things. So, now we have tremendous wealth concentrated among just a few companies who own the lion's share of patents; who are funding most of the research. And, for the most part, Silicon Valley and Washington, D.C. have an antagonistic relationship. That is not the case in China. So, in China, when the big tech companies were being formed there, you don't do anything in China without also in some way creating that business in concert with Beijing--with the government. You've got to pull patents--I'm sorry--you've got to pull permits. You have to abide by various regulations and laws. People are checking in on you. So, while Baidu, Alibaba, and Tencent may be independent financial organizations, in practical terms they are very much working in lockstep with Beijing. Alibaba, for those of you not familiar with the company, is very similar to Amazon. So, it's a retail operation. Tencent is very similar to our social media: so, sort of Twitter meets gaming and chat. And, I'm sorry--and Baidu is sort of search--is the sort of Google-esque company of the bunch.
When China--when the Chinese government decided that AI was going to be a central part of its future plans--and this was decided years ago--it also decided that Tencent was going to focus on health; that Baidu was going to focus on cloud; and that Alibaba was going to focus on various different data aspects. I'm sorry; and Baidu was also going to focus on AI and transportation. So, it's not as though these companies came to these additional areas of research and work on their own. It was centrally coordinated. And that's a really, really important thing to keep in mind. If we've got a central government, a powerful government that is now--that has this long-term vision and is centrally coordinating, what's happening at a top level with the research and the movements of these companies, suddenly you have a streamlined system where you don't have arguments about regulation; you don't have the companies at each other's throats--like we've seen in the United States, Apple suddenly calling for sweeping privacy regulations because, to be fair, it's sort of--they are already far ahead and it gives them a competitive advantage. You don't see all that infighting in China. So, we have some fundamental differences. And the real challenge is that while we're trying to sort all this out in the United States, you have a streamlined central authority with three very powerful companies who are all now collaborating in some way on the future. In addition to a bunch of other top-level government initiatives to repatriate academics; to bring back top AI people; but also to do things like start educating kindergarteners about AI. There is a textbook that is going to roll out this year throughout China teaching kindergarteners the fundamentals of machine learning. 
I mean, you know--whereas in the United States, you know, some of our government officials, you know, up until very recently denied AI's capabilities; and only yesterday--so this is February 11th--President Trump issued an Executive Order to, I guess--I mean, there's a handful of bullet points on what AI ought to be, but it wasn't a policy paper. There's no funding. There's no government structure set up. There's not--I mean--you see what I'm getting at?

25:07

Russ Roberts: Well, yeah--let me push back against that a little bit. You know, China is growing tremendously; as you point out, they are going to, presumably, they are already in one of the greatest transformations in human history from the countryside to the city, from a low standard of living to a much higher standard of living. And most of that's wonderful, and I'm happy about it. We don't know exactly what their ambitions will be or are outside of their own borders, and therefore what the repercussions are for us. As you suggest, they are doing a bunch of stuff. But the fact that they are top down and planning and organized, and we are chaotic and disorganized--so, just to take an example, you know, there's n companies in America, more than 4; I don't know how many there are--working on various aspects of driverless autonomous vehicles. There's Uber; there's Lyft; Apple; Google; there's Waymo. There's a lot going on here. And a lot of that will turn out. That's the nature of creative destruction; and capitalism. Some of those investments won't pan out. It will--the gambles will fail and lose, and people lose all their money. And, in general, historically, that chaotic soup of competition serves the average person and the people who are innovators quite well. The fact that China has, say, Baidu focusing on that and no one else having to worry about it, could be a bug, not a feature. I'm not convinced that China teaching kindergarteners machine learning is going to turn out well. Could be a mistake. Could be an enormous blunder. They are not allowing kind of experimentation, trial and error, that in my view is central to innovation. So, I think it remains to be seen how successful their walled garden with top-down gardening going on from the government's vision of what they want AI to serve, is going to work out. It might. It could. And it could be hard--the outcomes might be really bad for not just the Chinese but for other people. 
But it might just kind of fail. And, I'm not even convinced that their growth path is going to continue the way it has in the past. A lot of people just assume that because they have grown dramatically over the last 25 years they'll keep growing dramatically. There's a lot of ghost cities in China; there's a lot of overbuilding. I'm not so sure they have everything under control. So, I think you have to have that caveat as a footnote to those concerns.

Amy Webb: I completely agree with you. I would say that, for years, especially in the United States, we've been indoctrinated into thinking that China is a copy-paste culture rather than a culture that understands how to innovate, and to some extent I think that that is the result of that heavy-fisted, top-down approach to business. What I'm concerned about is not whether China succeeds financially. Here's what I'm concerned about. The challenge with artificial intelligence is that it's already here. It is not--there's no event horizon. There's no single thing that happens. It's already here. And it's been here for a while. And, in fact, it powers--you know, artificial [?] intelligence now powers our email; it powers the anti-lock brakes in our cars. You know. And essentially, this new Third Era of computing that we are in, if we assume that the First Era was tabulation--so that would have been Ada Lovelace in the late 1800s--and a Second Era was programmable systems, which would have been those early IBM mainframes on up to the, you know, desktop computers that we use today. This next Era is AI. And AI, while we've seen it anthropomorphized in movies like Her and on shows like Westworld, at its heart, AI is simply systems that make decisions on our behalf. And they do that using tools to optimize. So, the challenge is that, right now, systems are capable of making fairly narrow decisions. And the structures of those systems, and which data they were trained on, and how they make decisions and under what circumstances, those decisions were made by a relatively few number of people working at the BAT [Baidu, Alibaba, Tencent] in China and at the G-Mafia here in the United States. And the problem is that these systems aren't static. They continue to learn. And they--you know--they join, literally millions and millions of other algorithms that are all working in service of optimizing things on our behalf. 
Which is why I agree with you that if we are talking about a self-driving future, it's good to have competition, because--for all the usual reasons. Right? We get better form factors[?]; we get better vehicles; we get better price points. But when we are talking about systems that are continuing to evolve, that grow more and more powerful the more data they have access to and the more compute they are given--more computer power. And as we move into the more technical aspects, there are things like Generative Adversarial Networks, which are specifically designed to play tricks, to help systems learn more quickly. We are talking about slowly but surely ceding control over to systems to make these decisions on our behalf. And, that is what concerns me. What concerns me is that we do not have a singular set of guardrails that are global in nature. We don't have norms and standards. I'm not in favor of regulation. On the other hand, we don't have any kind of agreed-upon ideas for who and what to optimize for, under what circumstances. Or even what data sets to use. And China has a vastly different approach than we do in the United States, in part because China has a completely different viewpoint on what details of people's private lives should be mined, refined, and productized. And here in the United States, a lot of these companies have obfuscated when and how they are using our data. And, the challenge is that we all have to live with the repercussions.

32:10

Russ Roberts: Yeah, I'd agree with that. Up to a point. I want to give you a chance to talk about some scary examples. I think the--I'll just say, up front, that for me, underlying this whole problem--there are many different proximate causes and concerns. But there is, it seems to me, a very significant lack of competition. We can talk about how much competition there is in the United States relative to China. But certainly--the concern for me here in the United States is that the Big Six[Big Nine?] here in the United States will stay the Big Six[Big Nine?]. Which will give them leverage to do a bunch of things that you or I might not like. I do want to add that whatever we do to regulate or constrain them, via culture or whatever, allows for the possibility that they don't stay the Big Six[Big Nine?]. And I think one of the challenges of any way to deal with these problems is that, if you're not careful, you are going to end up creating a cartel that--it's de facto right now, but that can change. But if you make it de jure, you're going to end up with much worse outcomes than I think we're going to have. But, to concede your point about concern: I do think the Silicon Valley ethos of ask for forgiveness rather than permission--because right now there's no one you have to ask permission for, generally. Users are not paying much attention. There's very little regulation of how your private data is being used. Obviously something happened on January 1st, 2019 because I get a lot of annoying bars on my websites saying 'Will you accept cookies?' and I stupidly always click 'Yes,' like I'm sure most people do. And now they've complied with whatever required them to do that, and they're moving along. So, you know, I do think that there are some serious issues here. And you give some examples in the book of where these corporations--or China--have done things, and they really pay a price for it. They just keep going. The Facebook/Cambridge Analytica problem. 
The example you give of China pressuring Marriott over the way their website was designed, in terms of territorial recognition of China's sovereignty over various places that are somewhat up in the air. Those are serious issues, I think. And, more importantly, they are just the tip of the iceberg. So, talk about a couple of those things that you are worried about, that I think are alarming. And, normally, the marketplace would punish these folks; but so far it mostly hasn't.

Amy Webb: So, I love what you just said, which is that the market--so it's curious, right? Why has the marketplace not punished the Big Nine? Or at least the G-Mafia, right? Or at least Facebook?

Russ Roberts: They've been punished a little bit. I think their users are down. I'm thinking about deleting my Facebook page. And I'm sure--and I've switched to DuckDuckGo for my searching. It's a really small step. But these are things that maybe people are starting to do in slightly bigger numbers.

Amy Webb: Maybe. But, again, like I don't have access to the whole world's data. Thank God. But--and you--let's just reveal our biases: like, you and I are digitally savvy people.

Russ Roberts: You're kind, Amy.

Amy Webb: Well, but you are. I think the fact that you even know what DuckDuckGo is, that you are somebody who is using it, I think is quite telling. [More to come, 35:37]


          Neural networks predict planet mass
(University of Bern) To find out how planets form, astrophysicists run complicated and time-consuming computer calculations. Members of the NCCR PlanetS at the University of Bern have now developed a totally novel approach to speed up this process dramatically. They use deep learning based on artificial neural networks, a method that is well known in image recognition.
          Backend Engineer (m/f) // Leverton

We provide state-of-the-art machine and deep learning-based data extraction and contract analytics on our SaaS platform. Essentially, clients can upload an NDA, employment contract, real estate lease, or vendor contract, and within minutes our AI engine will automatically extract key pieces of contract data for our customers to consume, explore, and analyse. Your Future Achievements...

Check out all open positions at http://BerlinStartupJobs.com


          Sophos Day 2019 takes place this Thursday in Porto

Sophos, a world leader in security for network and endpoint protection, holds Sophos Day 2019 this Thursday at the Casa da Música in Porto. The event brings together recognized names in cybersecurity, who will have the opportunity to share real-world cases, analyze the latest cyberthreat trends and, together with industry partners, identify the keys to synchronizing technologies and ensuring the best coordination against next-generation attacks.

One of the topics to be covered during the morning at the Casa da Música is targeted ransomware attacks--identified as the main threats facing cybersecurity services in 2019. After the massive bot-driven attacks of recent years, cybercriminals have shifted their strategies toward targeted ransomware, making attacks more personalized and harder to detect. One example is Matrix, a ransomware that demands ransoms of up to 2,500 euros, which was recently detected and analyzed by SophosLabs.

The vulnerability of mobile phones will also be up for debate. A study recently released by SophosLabs, which analyzed more than 10 million Android samples submitted by users, revealed 3.5 million potentially suspicious or malicious applications--and of those, 77% were malware. Since then, this type of threat has grown significantly, as attackers increasingly turn to official platforms such as Google Play to hide malicious applications like cryptomining software or malware that endangers the security of mobile devices.

Throughout the morning, the main trends in innovation and cybersecurity will be presented, and the best strategies and technologies for fighting cybercrime will be debated as well. Faced with an increasingly complex international landscape, in which attackers develop ever more personalized--and therefore harder to detect--threats, Sophos continues to expand the capabilities of its technologies through Synchronized Security. With systems such as Intercept X, which uses deep learning to provide predictive protection against any malware and also includes EDR functionality, and the latest generation of XG Firewall, which incorporates protection against lateral movement, Sophos delivers next-generation solutions to confront today's cyberthreats.

Ricardo Maté, country manager of Sophos Iberia, will explain how the threat landscape in the cybersecurity sector is evolving. Alongside him, Alberto R. Rodas, Sales Engineer Manager at Sophos Iberia, will present the capabilities of Intercept X Advanced, which includes EDR technology and now enables intelligent early detection and response. Iván Mateos, Sales Engineer at Sophos Iberia, will present the new features of XG Firewall version 17.5 and Phish Threat, the user awareness and training tool.

The event closes with a round table bringing together executives from companies across different sectors of the national business landscape who, alongside Sophos, will discuss how well prepared Portuguese companies are against cyberattacks, the importance that roles such as CISO or DPO have within organizations, and the level of awareness that exists in companies in Portugal, in order to gauge the state of cybersecurity.

You can still register for the event and see the detailed agenda here: https://events.sophos.com/sophosdayporto2019 .


          PyTorch Geometric: A Fast PyTorch Library for DL
A new GitHub project, PyTorch Geometric (PyG), is attracting attention across the machine learning community. PyG is a geometric deep learning extension library for PyTorch dedicated to processing irregularly structured input data such as graphs, point clouds, and manifolds.
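PyG's actual API is beyond the scope of this blurb, but the core operation it generalizes--message passing, where each node updates its features from its neighbors' features--can be sketched with no dependencies at all. Everything below (the toy graph, the plain mean-aggregation rule) is illustrative only and does not use PyG itself:

```python
# A minimal, dependency-free sketch of one round of graph message
# passing -- the operation PyTorch Geometric generalizes with
# learnable weights and GPU batching.

def message_passing(edges, features):
    """Average each node's feature with its neighbors' features.

    edges:    list of (src, dst) pairs, treated as undirected
    features: dict mapping node id -> float feature value
    """
    # Collect each node's neighbor list from the edge list.
    neighbors = {n: [] for n in features}
    for src, dst in edges:
        neighbors[src].append(dst)
        neighbors[dst].append(src)

    # New feature = mean over the node itself and its neighborhood.
    return {
        n: (features[n] + sum(features[m] for m in nbrs)) / (1 + len(nbrs))
        for n, nbrs in neighbors.items()
    }

if __name__ == "__main__":
    # Toy graph: a triangle (0-1-2) plus one dangling node (3).
    edges = [(0, 1), (1, 2), (0, 2), (2, 3)]
    feats = {0: 1.0, 1: 0.0, 2: 0.0, 3: 4.0}
    print(message_passing(edges, feats))
```

Real PyG layers replace the plain mean with learnable, batched transformations, but the neighborhood-aggregation structure is the same, which is what lets one library cover graphs, point clouds, and manifolds.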
          Determined AI nabs $11M Series A to democratize AI development

Deep learning involves a highly iterative process where data scientists build models and test them on GPU-powered systems until they get something they can work with. It can be expensive and time-consuming, often taking weeks to fashion the right model. Determined AI, a new startup wants to change that by making the process faster, cheaper […]

The post Determined AI nabs $11M Series A to democratize AI development appeared first on RocketNews | Top News Stories From Around the Globe.


          Computer Vision / Deep Learning Engineer - multiple roles
CA-Burlingame, I am currently working with several companies in the area who are actively hiring in the field of Computer Vision and Deep Learning. AI, and specifically Computer Vision and Deep Learning are my niche market specialty and I only work with companies in this space. I am actively recruiting for multiple levels of seniority and responsibility, from experienced Individual Contributor roles, to Team Lea
          Best Practices for TensorFlow* On Intel® Xeon® Processor-based HPC Infrastructures

AI is bringing new ways to use massive amounts of data to solve problems in business and industry—and in high performance computing (HPC). AI applications increasingly take on day-to-day use cases, HPC practitioners—like their commercial counterparts—are looking to move deep learning training off specialized laboratory hardware and software onto the familiar Intel®-based infrastructure already in [...]


The post Best Practices for TensorFlow* On Intel® Xeon® Processor-based HPC Infrastructures appeared first on Blogs@Intel.


          Associate Partner, Cognitive and Analytics - Financial Services - IBM - United States
Solve business challenges and support business decisions by using advanced analytics techniques and deep learning algorithms....
From IBM - Thu, 24 Jan 2019 11:49:34 GMT - View all United States jobs
          Blog Review: Mar. 13
Deep learning and coverage; Huawei defends security; making fins.
          Deep Learning: When Should You Use It?
By Tom Taulli Deep learning, which is a subset of AI (Artificial Intelligence), has been around since the 1950s. It’s focused on developing systems that mimic the brain’s neural network structure. Yet it was not until the 1980s that deep learning started to show promise, spurred by the pioneering theories of researchers like Geoffrey Hinton, Yoshua Bengio and Yann LeCun. There was also the benefit of accelerating improvements in computer power. Despite all this, there remained lots of
          Review: Sophos Intercept X Stops Threats at the Gate
Review: Sophos Intercept X Stops Threats at the Gate (eli.zimmerman_9856, Tue, 03/12/2019 - 11:59)

Traditional anti-malware products scan both memory and disk for particular threat signatures, which are updated daily (or even more often). But if a new threat appears before the pattern files are updated, these solutions won’t be able to detect or prevent the attack. 

In an effort to keep ahead of hackers, SophosLabs analyzes more than 400,000 new malware samples every day. The challenge is that the vast majority of malware is unique to individual organizations, so updating a pattern file is an inefficient, ineffective block for these attacks.

To fix that, Sophos Intercept X sits on top of traditional security software solutions to augment protection. The software prevents malware before it can be executed and stops threats, such as ransomware, from running. When ransomware does get into the network, the tool provides a root cause analysis to help users understand the forensic details.


Defeat Ransomware with Automatic Monitoring and File Rollbacks

Intercept X uses deep learning to detect new (and previously unseen) malware and unwanted applications. Deep learning is modeled after the human brain, using advanced neural networks that continuously learn as they accumulate more data.

It’s the same kind of machine learning that powers facial recognition, natural language processing and even self-driving cars, all inside an anti-malware program.


Ransomware has grown at a fast clip since the success of the WannaCry malware infection in May 2017. Ransomware installs itself on a computer and then encrypts important files, making them inaccessible to their owner. The owner then receives a message from the attackers saying that, in exchange for currency, they will decrypt the files.

Sophos Intercept X blocks these attacks by monitoring the file system, detecting any rapid encryption of files and terminating the process. It even rolls back the changes to the files, leaving them as if they had never been touched — and denying the cybercriminals a payoff.
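The behavior described above can be sketched as a simple heuristic: encrypted output looks like random noise (entropy near 8 bits per byte), so many high-entropy file rewrites in a short window are a strong ransomware signal. The class and thresholds below are illustrative assumptions, not Sophos internals.

```python
import math
from collections import Counter, deque

def shannon_entropy(data: bytes) -> float:
    """Bits per byte; encrypted or compressed output is close to 8.0."""
    if not data:
        return 0.0
    counts = Counter(data)
    n = len(data)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

class RansomwareHeuristic:
    """Flag a process that rewrites many distinct files with
    noise-like (high-entropy) data. Thresholds are illustrative."""

    def __init__(self, max_writes: int = 10, entropy_floor: float = 7.5):
        self.max_writes = max_writes
        self.entropy_floor = entropy_floor
        self.suspicious_writes = deque(maxlen=max_writes)

    def observe_write(self, path: str, data: bytes) -> bool:
        if shannon_entropy(data) >= self.entropy_floor:
            self.suspicious_writes.append(path)
        # Trip once enough distinct files were rewritten with noise-like data.
        return len(set(self.suspicious_writes)) >= self.max_writes
```

A real agent would hook file-system writes, keep shadow copies of the originals for rollback, and terminate the offending process as soon as `observe_write` trips.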

Integrated Protections Give Admins Better Visibility

The software offers several additional protections. WipeGuard uses the same deep learning features to protect a computer’s Master Boot Record. (Ransomware attacks on the MBR prevent the computer from restarting — even restores from backups are impossible until the cybercriminals get their money.)

Safe Browsing includes policies to monitor a web browser’s encryption, presentation and network interfaces to detect “man in the browser” attacks that are common in many banking Trojan viruses.

Sophos Root Cause Analysis contains a list of infection types that have occurred in the past 90 days. There’s even a Visualize tab that connects devices, browsers and websites to track where the infection occurred and how it spread. 

This doesn’t mean users must take action immediately, but it could help them investigate the chain of events surrounding a malware infection and highlight any necessary security improvements.

One caveat: If users haven’t patched their software (especially Java and Adobe applications), Intercept X may report false positives. Be sure to update all software to the most current versions — always a best practice — to avoid these accidental alerts.


Make Management Easier Through Sophos Central Dashboard

Endpoint protection is wonderful, but managing all those endpoints can be a chore. In addition to the usual laptops and desktops, security managers must pay attention to servers, mobile devices, email and web browsing. The potential threat surface can be overwhelming.

Sophos Central streamlines endpoint management, especially when deployed alongside other Sophos products. From the console, admins can manage Intercept X and endpoint protection either globally or by device. Web protection provides enterprise-grade browsing defense against malicious pop-ups, ads and risky file downloads. The mobile dashboard also shows device compliance, self-service portal registrations, platform versions and management status. 

Server security protects both virtual and physical servers. The Server Lockdown feature reduces the possibility of attack by ensuring that a server can configure and run only known, trusted executables.

Sophos wireless, encryption and email products also tie in to the console, and Sophos Wi-Fi access points can work alongside endpoint and mobile protection clients to provide integrated threat protection. 

That lets admins see what’s happening on wireless networks, APs and connecting clients to get insight into the inappropriate use of resources, including rogue APs. 

The Sophos Encryption dashboard provides centrally managed full-disk encryption using Windows BitLocker or Mac FileVault. Key management becomes a snap with the SafeGuard Management Center, which lets users recover damaged systems. 

Sophos email protection provides a safeguard against spam, phishing attempts and other malicious attacks through the most common user interface of all: email.

Sophos Central isn’t just for admins. Self-service is an important feature today, with user demands and IT budgets in constant conflict. 

Users can log in to the Sophos self-service portal to customize their security status, recover passwords and get notifications. In most IT departments, password recovery is the No. 1 help desk request, and eliminating those calls means technicians can spend more time on complex tasks.

Sophos Intercept X

OS: Windows 7, 8, 8.1 and 10, 32-bit and 64-bit; macOS
Speed: Extracts millions of file features in 20 milliseconds
Storage Requirement: 20MB on the endpoint
Server Requirement: Sophos Central supported on Windows Server 2008 R2 and above

Dr. Jeffrey Sheen currently works as the supervisor of enterprise architecture services for Grange Mutual Casualty Group of Columbus, Ohio.


          GV leads $11 million round in Determined AI deep learning developer platform
Determined AI raised $11 million, led by GV (formerly Google Ventures), to bring new features to its distributed deep learning management platform.
          The TensorFlow ecosystem for beginner and expert Machine Learning programmers: courses, languages and Edge Computing

The TensorFlow ecosystem for beginner and expert Machine Learning programmers: courses, languages and Edge Computing

TensorFlow is Google's key bet for building the ecosystem of the future of Machine Learning, able to run in the cloud, in applications or on hardware devices of all kinds.

Fittingly, the efforts at its latest TensorFlow Dev Summit 2019 focused on making the framework easier and simpler to use, adding more APIs for beginner programmers as well as for the most expert ones. This way, everyone can take advantage of the new improvements to build learning models more easily for a greater number of use cases and deploy them on any device.

They have pushed running algorithms locally on hardware devices with the final release of TensorFlow Lite 1.0, with no need to rely on the cloud or another centralized system for processing. It is a clear sign that Edge Computing is a key part of Google's strategy to bring all the advantages of machine learning to any device, whether IoT or mobile.

In the three years since its launch, TensorFlow has laid the foundations of an end-to-end Machine Learning ecosystem, helping to power the Deep Learning revolution. More and more developers use its algorithms to offer new features to users or to speed up previously tedious tasks such as image classification, document capture and recognition, or speech recognition and natural-language synthesis in virtual assistants (Google Assistant or Alexa).

It is no surprise that TensorFlow is the project with the most contributions on GitHub year after year, with more than 1,800 contributions, accumulating more than 41 million downloads over three years of history and dozens of usage examples on different platforms.

Tensorflow

The road to TensorFlow 2.0

TensorFlow has laid the foundations of an end-to-end Machine Learning ecosystem, helping to power the Deep Learning revolution

TensorFlow 2.0 Alpha has set itself the goal of simplifying its use, broadening its possibilities to be a more open ML platform that can be used by researchers who want to run experiments and studies, by developers ready to automate any kind of task, or by companies that want to improve their users' experience through artificial intelligence.

One of the pillars of TensorFlow 2.0 is a tighter integration with Keras as the high-level API for building and training Deep Learning models. Keras has several advantages:

  • User-focused. Keras has a simpler, more consistent interface, tailored to the most common use cases, providing clearer feedback for understanding implementation errors.
  • More modular and reusable. Keras models can compose more complex structures through layers and optimizers, without needing a specific model to train.
  • Designed for beginners and experts alike. Its fundamental idea takes into account the backgrounds of the different kinds of programmers getting involved in Deep Learning development from the start. Keras provides a much clearer API without requiring years of expert experience.
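As a minimal sketch of what that high-level tf.keras API looks like (a toy model for illustration, not code from the summit):

```python
import tensorflow as tf

# Layers are composed directly through the high-level Keras API,
# with a one-line compile step; no session or graph boilerplate.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(10,)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.summary()
```

From here, `model.fit(x, y)` trains and `model.predict(x)` runs inference, which is the whole point of the "user-focused" design above.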

A broad collection of public datasets ready to use with TensorFlow has also been added. As any developer who has ventured into Machine Learning knows, data is the main ingredient for building models and training the algorithms we will later use, so having that huge amount of data helps a lot.

For migrating from TensorFlow 1.x to 2.0, several tools have been created to convert and migrate models, since the necessary updates have been made so that they are more optimized and can be deployed on more platforms.

The ecosystem keeps growing with numerous libraries for creating a more secure and private working environment. One example is the launch of the TensorFlow Federated library, which aims to decentralize the training process, sharing it among multiple participants who can run it locally and send back the results without necessarily exposing the captured data, sharing only the learning obtained for generating the algorithms. A clear example of this is the on-device learning of virtual keyboards, such as Google's Gboard, which does not expose sensitive data since it learns locally on the device itself.
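The idea can be illustrated without the TFF API at all: each participant computes an update on its own private data, and only model weights (never the raw data) leave the device to be averaged centrally. This is a conceptual pure-Python sketch of federated averaging, not TensorFlow Federated code.

```python
# Federated averaging, conceptually: clients train locally on private
# data; the server only ever sees their updated weights.

def local_update(weights, local_data, lr=0.1):
    """One gradient-descent step of least-squares fitting y = w * x
    on one client's private data."""
    w = weights
    grad = sum(2 * (w * x - y) * x for x, y in local_data) / len(local_data)
    return w - lr * grad

def federated_round(global_w, clients):
    """Average the locally updated weights; raw data never leaves clients."""
    updates = [local_update(global_w, data) for data in clients]
    return sum(updates) / len(updates)
```

Iterating `federated_round` converges toward the weights a centralized trainer would find, which is exactly the trade the Gboard example makes: shared learning, private data.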

Federated Tensor Flow Gboard

Along the same lines, balancing Machine Learning and privacy is a complex task, which is why the TensorFlow Privacy library has been released: it lets you define different scenarios and degrees of protection to safeguard the most sensitive data and anonymize the models' training information.

Python is not alone in TensorFlow: more languages such as Swift or JavaScript join the platform

Python remains a fundamental piece of the Machine Learning ecosystem and has in turn received a great boost as one of its main languages

Obviously, Python remains a fundamental piece of the Machine Learning ecosystem and has in turn received a great boost as one of its main languages, with dozens of its libraries among the most widely used, apart from its great maturity; not only in TensorFlow, but also in other platforms such as PyTorch.

But the TensorFlow ecosystem has opened its doors, incorporating libraries such as TensorFlow.js, which finally reaches version 1.0 with more than 300,000 downloads and 100 contributions. It allows ML projects to run in the browser or on the backend with Node.js, using pre-trained models as well as building training runs.

Companies like Uber and Airbnb are already using it in production environments. There is a wide gallery of examples and use cases using JavaScript with TensorFlow.

Tensor Flow Swift

Another of the big announcements is the progress of the TensorFlow implementation in Swift, now at version 0.2. This brings a general-purpose language like Swift into the ML paradigm, with all the functionality developers need to access all of TensorFlow's operators, all built on the foundations of Jupyter and LLDB.

Running our models locally with TensorFlow Lite: the bet on Edge Computing

Edge Computing means running models and making inferences directly on-device, without depending on the data being sent to the cloud to be analyzed.

The goal of TensorFlow Lite is to give a definitive push to Edge Computing on the millions of devices that today are capable of running TensorFlow. It is the solution for running models and making inferences directly, without depending on data being sent to the cloud for analysis.

It was presented as a preview at the Google I/O developer conference in May 2017. This week it reached its final version, TensorFlow Lite 1.0, which will help implement use cases such as predictive text generation, image classification, object detection, audio recognition or speech synthesis, among many other scenarios.

This brings a considerable performance improvement, both from the conversion to the TensorFlow Lite model format that the tooling enables and from the increased performance of running on each device's GPU, including on Android, for example.
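In code, that conversion step goes through `tf.lite.TFLiteConverter`. A minimal sketch, assuming TensorFlow 2.x and a toy Keras model standing in for a real trained network; the flatbuffer it emits is what ships to the device:

```python
import tensorflow as tf

# A toy model stands in for a real trained network.
model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])

# Convert to a TensorFlow Lite flatbuffer for on-device inference.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_bytes = converter.convert()

with open("model.tflite", "wb") as f:
    f.write(tflite_bytes)
```

On the device, the `.tflite` file is loaded by the TensorFlow Lite interpreter, which is where the GPU-delegate speedups mentioned above apply.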

With this, TensorFlow Mobile starts to be deprecated, unless we really want to run training directly on the device itself. They have confirmed that on-device training is part of this Lite version's roadmap, and they revealed some interesting features such as accelerated learning with weight assignment to improve inference and to incorporate that learning into subsequent runs.

To round off the news, Google Coral was presented: a hardware board that allows deploying models using TensorFlow Lite and all the power of Google's Edge TPU.

Tensor Flow Coral Board

Learning about TensorFlow and Machine Learning keeps getting easier with these courses

In 2016, Udacity launched the first TensorFlow course in collaboration with Google. Since then, more than 400,000 students have enrolled in it. On the occasion of the TensorFlow 2.0 Alpha launch, the course has been completely revamped to make it accessible to any developer without requiring deep knowledge of mathematics. As they put it: "If you can code, you can build AI applications with TensorFlow."

TensorFlow and Deep Learning courses on Udacity

The Udacity course is led by Google's development team; as of today the first two months of the curriculum are available, and more content will be added over the coming weeks. In the first part you can learn the fundamental concepts behind machine learning and how to build your first neural network using TensorFlow. There are numerous exercises and codelabs written by the TensorFlow team itself.

New material has also been added at deeplearning.ai, with an introductory course on AI, ML and DL, part of the "TensorFlow: from Basics to Mastery" career path series on Coursera. Among the instructors is Andrew Ng, one of the most important drivers of Machine Learning since its beginnings.

And another AI-focused training platform, Fast.ai, has added two courses: one on using TensorFlow Lite for mobile developers and another on using Swift in TensorFlow.

Definitely, we have plenty of opportunities to start learning more about the Machine Learning revolution with TensorFlow, one of the most complete end-to-end platforms for this purpose.


-
Originally published (in Spanish) by Txema Rodríguez.


          Survey of Precision-Scalable Multiply-Accumulate Units for Neural-Network Processing
The current trend for deep learning has come with an enormous computational need for billions of Multiply-Accumulate (MAC) operations per inference. Fortunately, reduced precision has demonstrated large benefits with low impact on accuracy, paving the way towards processing in mobile devices and IoT nodes. Precision-scalable MAC architectures optimized for neural networks have recently gained interest thanks to their subword parallel or bit-serial capabilities. Yet, it has been hard to make a fair judgment of their relative benefits as they have been implemented with different technologies and performance targets. In this work, run-time configurable MAC units from ISSCC 2017 and 2018 are implemented and compared objectively under diverse precision scenarios. All circuits are synthesized in a 28nm commercial CMOS process with precision ranging from 2 to 8 bits. This work analyzes the impact of scalability and compares the different MAC units in terms of energy, throughput and area, aiming to understand the optimal architectures to reduce computation costs in neural-network processing.
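As a back-of-the-envelope illustration of what "precision-scalable" means for a MAC unit: operands are quantized to a configurable signed bit width before the multiply, trading accuracy for energy and area. This is a behavioral Python sketch of the operation the survey compares, not any of the surveyed hardware designs.

```python
# Behavioral model of a reduced-precision multiply-accumulate (MAC):
# quantize both operands to `bits` bits, multiply, accumulate.

def quantize(x: float, bits: int, scale: float = 1.0) -> int:
    """Symmetric uniform quantization of x to a signed `bits`-bit integer."""
    qmax = 2 ** (bits - 1) - 1
    q = round(x / scale * qmax)
    return max(-qmax, min(qmax, q))  # clamp to the representable range

def mac(acc: int, a: float, b: float, bits: int) -> int:
    """acc += quant(a) * quant(b), as a precision-scalable MAC would compute."""
    return acc + quantize(a, bits) * quantize(b, bits)
```

Sweeping `bits` from 2 to 8 in such a model shows the accuracy side of the energy/accuracy trade-off that the paper's subword-parallel and bit-serial architectures exploit in hardware.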
          Determined AI, which wants to make AI development easier using its software, exits stealth and announces $11M Series A led by GV (Ron Miller/TechCrunch)

Ron Miller / TechCrunch:
Determined AI, which wants to make AI development easier using its software, exits stealth and announces $11M Series A led by GV  —  Deep learning involves a highly iterative process where data scientists build models and test them on GPU-powered systems until they get something they can work with.


          Take NVIDIA’s new Deep Learning Robotics Workshop at TC Sessions: Robotics + AI
As part of TechCrunch Sessions: Robotics + AI, we are happy to announce a partnership to deliver a brand new course from NVIDIA’s Deep Learning Institute (DLI). Called the NVIDIA Deep Learning for Robotics Workshop, this brand new, never before offered workshop will provide training in a new 3D accelerated remote desktop environment on April […]
          CEVA Computer Vision, Deep Learning and Long Range Communication Technologies Power DJI Drones
MOUNTAIN VIEW, Calif., March 12, 2019 /PRNewswire/ -- CEVA, Inc. (NASDAQ: CEVA), the leading licensor of signal processing platforms and artificial intelligence processors for smarter, connected devices, today announced that the latest generation of Mavic 2 camera drones from DJI, the world's leader in civilian drones and aerial imaging technology, deploy CEVA DSPs and platforms to enable on-device artificial intelligence, advanced computer vision and long-range...


           Comment on Nvidia purchases chip-maker Mellanox for $6.9 billion by HvR
Not at all; I think this is more clever forward thinking, expanding into new markets, since the GPU market is pretty much saturated and therefore very exposed to market fluctuation (currently in a recession) instead of steady growth. I think Nvidia owns more than 80% of discrete market share, so they have AMD by the short and curlies on that front and will be very unwilling to give up market share in a multi-billion dollar market, but probably also because Intel is getting into the discrete GPU playpen. Also, both deep learning and super-high-speed network acceleration need to process hundreds of parallel task-specific small processes, just like graphics processing, so it is easy for them to use the same silicon design on all three fronts.
          Scientifique en apprentissage profond/Deep Learning Scientist - Huawei Canada - Montréal, QC
Located in Hong Kong, Shenzhen, Beijing, London, Paris, Montreal, Toronto and Edmonton, Noah’s Ark Lab is the flagship AI research lab of Huawei Technologies....
From Huawei Canada - Thu, 11 Oct 2018 23:46:40 GMT - View all Montréal, QC jobs
          Business Development Manager, Quantum Computing - Amazon Web Services, Inc. - Seattle, WA
AWS customers are looking for ways to change their business models and solve complex business challenges with machine learning (ML) and deep learning (DL)...
From Amazon.com - Wed, 13 Mar 2019 07:52:29 GMT - View all Seattle, WA jobs

