
          

Data Scientist / Firmware Engineer - G2 Recruitment Solutions - Zurich, Switzerland

 Cache   
Context: I am currently seeking a data scientist / firmware engineer to join my client in Zurich on a permanent basis. The client works within the medical device sector and is seeking an individual who is skilled as both a data scientist and a firmware engineer. In this role you will have two main responsibilities: to use a compiler to translate MATLAB code into embedded C, and to work with signal processing. A successful candidate must have 3 years' experience in signal processing and machine learning 3...
          

Inspur signs strategic partnerships with three leading European distributors

 Cache   
From September 23 to 25, Inspur, a leading global provider of cloud computing IT infrastructure products and solutions, signed separate strategic partnership agreements with three top European distributors: AB Group, ALSO, and Duttenhofer, accelerating the company's internationalization.

These three distribution channels each have their own strengths and hold pivotal positions in the European market. Speaking about signing with Inspur, the three distributors agreed that Europe is an open market and said they are confident about the prospects of cooperating with Inspur, a global IT player from China. Peng Zhen, Senior Vice President of Inspur Group, said that Inspur is stepping up its internationalization strategy; the strategic partnerships with these three top European distributors will further combine the distributors' European localization advantages with Inspur's globally leading product and solution capabilities to jointly develop the European market and achieve win-win growth.

Inspur's business now covers more than 120 countries and regions, with eight global R&D centers, six global production centers, and two global service centers, and it is growing very rapidly in regional markets such as the United States, Europe, South Korea, and Japan.

As a leading global provider of cloud computing IT infrastructure products and solutions, Inspur has kept its server market share among the global top three in recent years, with the fastest growth worldwide. Its JDM model and AI servers have been major contributors to this.

The innovative JDM model is an important driver of Inspur's sustained business growth; six of the world's top 10 CSPs have chosen Inspur servers. Under the JDM model, the requirements, R&D, production, and delivery of servers are no longer a simple process closed within Inspur, but a quality-focused, fast delivery process that starts from customer requirements and is carried out together with customers and partners, using digital, intelligent, and networked methods.

At the same time, Inspur is a global leader in AI servers and a core supplier for many hyperscale internet customers building their AI computing platforms; its share of China's AI infrastructure market has consistently exceeded 50%, and it offers complete solutions spanning computing platforms, management suites, framework optimization, and application acceleration. Inspur has also participated in, and in some cases led, a series of AI product technologies and international testing standards: it took part in drafting the first-generation OAM specification in the OCP community, and within SPEC, the global systems performance benchmarking organization, it was the first to propose establishing a Machine Learning technical committee, for which it serves as the inaugural chair.

On partner strategy, Wang Feng, Vice President of Inspur Group and General Manager of the Channel Management Department, said that Inspur has long been committed to building a new type of partner relationship, upgrading loosely coupled commercial cooperation into a tightly coupled value-cooperation model covering the full business chain, including technology and R&D, to drive win-win growth with partners. While it already has nearly ten thousand partners in China, Inspur will next focus on aligning with international markets, vigorously expanding its overseas partner base and building a partner network covering the globe, bringing Inspur's leading products and solutions to overseas customers, enabling more efficient localized delivery, and making the global experience and service borderless.

          

Oracle Magazine articles

 Cache   
Over the past few weeks I’ve had a couple of articles published with Oracle Magazine and these can be viewed on their website. The first article is titled ‘Quickly Create Charts and Graphs of Your Query Data’, using Oracle Machine Learning Notebooks. The second article is titled ‘REST-Enabling Oracle Machine Learning Models’. Click on the […]
          

Azure News of the Week

 Cache   

This week once again brought plenty of news about Microsoft Azure! As always, here is the overview for you: Disaster recovery for SAP HANA Systems on Azure; Azure Cost Management updates – October 2019; New in Stream Analytics: Machine Learning, online scaling, custom code, and more; Enabling Diagnostic Logging in Azure API for FHIR®...

The post Azure News of the Week first appeared on MOUNTAIN IT - Eric Berg.


          

Ecobee smart thermostats are getting a new suite of energy-conservation features

 Cache   

Ecobee builds some of our favorite smart thermostats, and now the company is rolling out a new collection of features—dubbed Eco+—that promises to make them even smarter and more energy efficient.

This new suite of machine learning and artificial intelligence routines will, among other things, enable homeowners to avoid paying more for heating and cooling in areas where utilities have implemented “time of use” pricing: higher rates when energy is most in demand.


New firmware helps Ecobee smart thermostats respond to the energy demands



          

Other: Data Scientist II - Las Vegas, Nevada

 Cache   
Summary
The Data Scientist turns data into high-value assets in the form of insights and predictive models that contribute to measurable improvements in business process and performance. The Data Scientist II is expected to work independently with minimal oversight as well as lead project initiatives and mentor and assign work to others as needed. This role requires a strong understanding of and experience in Applied Statistics for Data Science, as well as expertise in data wrangling and in building, deploying and maintaining predictive models. You will be responsible for ensuring the rest of the team and stakeholders engage in best practices that ensure statistically sound deliverables. You will document best practices as well as lead peer reviews of Data Science work. You will deliver improvements to existing models as well as lead ongoing complex initiatives for revenue optimization, customer segmentation, media-mix optimization, and churn analysis.

Minimum Requirements
A combination of education and experience will be considered. Must be authorized to work in the US as defined by the Immigration Act of 1986. Must pass a criminal background check. Visa sponsorship is available for this position.
Education: Bachelor's Degree. Education Other: From an accredited college/university in one or more of the following: Computer Science, Statistics, Mathematics, Engineering, Bioinformatics, Econometrics, Physics, Operations Research or a related field.
Years of Experience: Minimum of three (3) years of experience in a technical environment.
Other Requirements: Candidates must be comfortable in conversations and be able to demonstrate the following: understanding of applied statistics, algorithms and modeling techniques as they relate to Data Science as a practice. Technical experience working with data, including sourcing, extracting, validating, exploring, and transforming data with tools like SQL and Python. Expert knowledge of the Data Science process. Initiative, curiosity, and problem-solving skills through personal development projects and ongoing education. Command of Python and/or R with the ability to mentor and train others. Command of SQL with the ability to mentor and train others. Ability to lead, manage and deliver complex Data Science projects with minimal oversight. Command of applied statistics for Data Science. Master's Degree or PhD.
Credit Check: Yes

Preferred Requirements
Master's Degree or PhD. Strong knowledge in one or more of the following fields: statistics, data mining, machine learning, simulation, operations research, econometrics, and/or information retrieval. Strong knowledge of the data science process and practical experience using machine learning algorithms including regression, classification, simulation, scenario analysis, modeling, clustering, and decision trees. Knowledge of airline operations, customer interactions and/or inter-departmental limitations across business units. Strong written and verbal communication skills, with proven presentation skills for all levels of audience. Strong intellect and analytical aptitude, along with the ability to be self-driven. Demonstrated proficiency in Python, R, MATLAB, SQL or other programming languages or packages. Comfortable with a fast-paced, dynamic work environment. Strong computer skills, including but not limited to MS Office products.

Job Duties
Analyze and model airline operations and/or customer data and implement algorithms to support analysis using advanced statistical, engineering, and mathematical methods from physics, machine learning, data mining, econometrics, and operations research. Translate business opportunities into Data Science projects and deliverables. Deliver Data Science solutions and quantify ROI/business impact. Translate advanced analytics problems into technical approaches that yield actionable recommendations in diverse domains such as predictive maintenance, delay prediction/recovery and Allegiant product upselling/cross-selling; communicate results and educate stakeholders through insightful visualizations, reports and presentations. Facilitate conversations for teams to collaborate in removing impediments, empowering teams to self-organize and improve their productivity. Retrieve, prepare, and process a rich variety of data from structured/unstructured cloud and non-cloud sources. Perform exploratory data analysis, generate and test working hypotheses, and uncover interesting trends and relationships. Exercise continuous self-development, education and learning. Act as an analytical mentor to others in the organization. Help establish and sustain a Data Science culture. Leverage available research data to stay informed about industry-related trends, potential disrupters, and competitive capabilities. Document the various approaches and model metrics to seek iterative means of improvement. Provide a cohesive end-to-end solution through understanding the cross-pollination of technology/engineering/commercial verticals and applying both areas of expertise and areas of knowledge. Other duties as assigned.

Physical Requirements
The physical demands and work environment described here are representative of those that must be met by a Team Member to successfully perform the essential functions of the role. Reasonable accommodations may be made to enable individuals with disabilities to perform the essential functions of the role. Office - While performing the duties of this job, the Team Member is regularly required to stand, sit, talk, hear, see, reach, stoop, kneel, and use hands and fingers to operate a computer, keyboard, printer, and phone. May be required to lift, push, pull, or carry up to 20 lbs. May be required to work various shifts/days in a 24-hour operation. Regular attendance is a requirement of the role. Exposure to moderate noise (i.e. a business office with computers, phones, printers, and foot traffic), temperature and light fluctuations. Ability to work in a confined area, as well as the ability to sit at a computer terminal for an extended period of time. Some travel may be a requirement of the role.

EEO Statement
Equal Opportunity Employer: Disability/Veteran. For more information, see Allegiant.com/careers
          

IT / Software / Systems: Computer Vision Engineer - Las Vegas, Nevada

 Cache   
Computer Vision Engineer, PlayVIG - Las Vegas, NV

Role: We are currently looking for an exceptionally skilled Computer Vision Engineer to join our visual recognition team.

Requirements: BS degree in Computer Science or a closely related field, or equivalent knowledge. Experience in development on Windows environments and ideally other environments (Linux, OSX). Adept at programmatic interaction with SQL databases (SQL Server). Experience with architecting, designing and implementing functionality for real-time systems and services. Knowledge of design patterns. Solid understanding of data structures, algorithms, object-oriented design, software engineering principles and the software development process. Desire to live the disruptive start-up life...rapid releases, quick decisions and lots of autonomy. Must be legally authorized to work in the United States or Canada without an employer-sponsored petition for a visa.

Preferred Skills: Knowledge of computer vision and machine learning technologies and libraries, especially OpenCV. Experience with C and C++. Experience with other Windows platforms/technologies a plus (Win32 SDK, Matlab, Python).

Personal Characteristics: You can work productively independently, as well as in a team setting. You grasp high-level product requirements and translate these into running software effectively. You are able to use creative solutions to solve seemingly difficult/impossible tasks. You maintain a strong sense of ownership and will see a project through its lifecycle from development to deployment. You are flexible, adaptable and ambitious. You can switch gears in various situations and will enthusiastically take on new assignments as needed to support the team. You not only accept constructive criticism from team members, you encourage it. You are a results-oriented team player who remains accountable for your performance at all times.

If you are interested in applying, please send your resume and cover letter to ****************. Applicants for employment have rights under Federal Employment Laws: Family and Medical Leave Act (FMLA), Equal Employment Opportunity (EEO), Employee Polygraph Protection Act (EPPA).
          

IMH, Duke-NUS and Neeuro Pilot Home-Based Brain-Training Game to Help Children with ADHD

 Cache   
Nov 06, 2019

• Researchers at IMH, Duke-NUS and A*STAR have developed an advanced brain-computer interface technology that harnesses machine learning to personalise brain-training for children with ADHD.
• Partnering local tech start-up, Neeuro, the researchers are rolling out a pilot home-based intervention programme for children with ADHD undergoing treatment at IMH. The take-home kit comprises a wireless headband and a Samsung tablet with the pre-loaded game.
• Extensive clinical testing through a large-scale randomised clinical trial of the game-based brain-training programme found improvements in the attention span of children with ADHD.

Singapore, 6 November 2019 --( ASIA TODAY )-- A first-of-its-kind personalised, interactive brain-training game will soon be helping children with Attention Deficit Hyperactivity Disorder (ADHD) improve their attention span. The unique selling point of this technology is that children with ADHD can participate in this programme from home. A pilot run for the home-based programme will be launched for 20 children, aged 6-12 years, who are currently receiving treatment for ADHD at the Institute of Mental Health (IMH).

The game, called CogoLand [1], was developed through a decade’s worth of extensive research, utilising Brain-Computer Interface (BCI) technology that incorporates machine-learning algorithms to personalise attention training, with the hope of complementing mainstay ADHD treatment. The use of CogoLand to complement ADHD treatment is the result of a collaboration between IMH, Duke-NUS Medical School and A*STAR’s Institute for Infocomm Research (I2R). Neeuro Pte Ltd, a local tech startup and spinoff from A*STAR, is the current sole licensee of the technology.

This non-invasive ADHD intervention programme was the subject of a large scale randomised clinical trial funded by the National Medical Research Council, involving 172 children with ADHD in Singapore [2]. Associate Professor Lee Tih Shih, from Duke-NUS’ Neuroscience and Behavioural Disorders programme and Principal Investigator of the large scale clinical trial, commented: “Our patented, personalised intervention using advanced BCI technology has shown very promising and robust results, and we hope it can benefit many children with ADHD in the future.”

Furthermore, Functional Magnetic Resonance Imaging (fMRI) scans of a subset of the children, led by Associate Professor Juan Helen Zhou, also from Duke-NUS, showed positive post-training effects observed in brain areas associated with attention and task-orientation [3]. The patented technology was summarised by Professor Guan Cuntai, technical lead of the system and scientific advisor to Neeuro: “Our technology can accurately quantify a person’s attention level in real-time using a machine learning algorithm and, from there, develop a unique patented personalised training programme using a feed-forward concept for cognitive training. Further improvements have been made in recent iterations by capitalising on the latest deep learning approaches with our large dataset.” Professor Guan was also the Principal Scientist who led the BCI research when he was part of A*STAR’s I2R.
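As a generic, hedged illustration of the kind of pipeline the quote describes (quantifying attention from EEG in real time with a machine learning algorithm), the sketch below computes band-power features from simulated EEG windows and fits a simple classifier. This is not the patented CogoLand/SenzeBand method; the sampling rate, frequency bands and data are assumptions for illustration only.

```python
import numpy as np
from scipy.signal import welch
from sklearn.linear_model import LogisticRegression

# Generic illustration only: NOT the patented IMH/Duke-NUS/A*STAR algorithm.
fs = 250                                    # assumed EEG sampling rate (Hz)
rng = np.random.default_rng(0)
windows = rng.normal(size=(200, 2 * fs))    # 200 simulated two-second EEG windows
labels = rng.integers(0, 2, size=200)       # simulated attentive / inattentive labels

def band_power(signal, lo, hi):
    """Average spectral power of `signal` between lo and hi Hz."""
    f, pxx = welch(signal, fs=fs, nperseg=fs)
    return pxx[(f >= lo) & (f < hi)].mean()

# Theta (4-8 Hz) and beta (13-30 Hz) power are classic attention-related features.
features = np.array([[band_power(w, 4, 8), band_power(w, 13, 30)] for w in windows])

clf = LogisticRegression().fit(features, labels)
print("estimated attention probability, first window:",
      clf.predict_proba(features[:1])[0, 1])
```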

Dr Lim Choon Guan, Deputy Chief of the Department of Developmental Psychiatry at IMH said: “While medication and behavioural therapy are effective in treating symptoms of ADHD in children, some parents are also keen to explore other approaches that can help their children to improve their concentration. After a decade of collaborative work, our team is very excited to pilot this home-based brain-training game which parents can use to help their children regulate themselves.” The home-based programme will see the 20 children each receive a take-home kit that includes Neeuro’s brainwave-reading SenzeBand and a Samsung tablet with the preloaded CogoLand game, which they will use following a prescribed regimen for the duration of the programme. This approach is intended to be a complement and/or supplement to conventional ADHD treatment.

According to Dr. Alvin Chan, CEO and Co-Founder of Neeuro, “At Neeuro, our aim is to utilise technology to enable positive change in the neurological agility and fitness of our users. We are privileged to be working with institutions such as IMH, Duke-NUS and A*STAR, in conjunction with our hardware partner Samsung, to explore the use of cutting-edge technology in order to achieve this aim. It is our hope that this trial paves the way to enable the progressive development of new complementary options that will bring about positive outcomes for the millions of children afflicted with ADHD globally, especially those in Singapore.”

Mr Philip Lim, CEO of A*STAR’s innovation and enterprise office A*ccelerate, said: “It is always fulfilling when homegrown technologies are translated into meaningful outcomes. We are proud to be a part of Neeuro’s journey, and A*STAR will continue supporting entrepreneurial companies like them to grow and innovate.”

###

[1] See Annex A for more information on CogoLand.

[2] Lim, C., Poh, X., Fung, S., Guan, C., Bautista, D., & Cheung, Y. et al. (2019). A randomized controlled trial of a brain-computer interface based attention training program for ADHD. PLOS ONE, 14(5), e0216225. DOI: 10.1371/journal.pone.0216225

[3] Qian, X., Loo, B., Castellanos, F., Liu, S., Koh, H., & Poh, X. et al. (2018). Brain-computer-interface-based intervention re-normalizes brain functional network topology in children with attention deficit/hyperactivity disorder. Translational Psychiatry, 8(1). DOI: 10.1038/s41398-018-0213-8

Note: The research study was funded by grants from the National Medical Research Council (NMRC) and National Healthcare Group (NHG). The research team also acknowledges the support received from the Ministry of Education, Singapore.

About Neeuro
Neeuro's core technology, NeeuroOS, is a platform ecosystem that empowers healthcare professionals, researchers and third-party developers with an Artificial Intelligence (AI) driven platform able to analyse users' brain signals, measuring mental states including but not limited to attention, relaxation, mental workload and fatigue. Neeuro's holistic platform, coupled with its other offerings, reveals numerous potential avenues for exploring complementary mental wellness options for children with ADHD, stroke patients, cognitive rehabilitation and many other neurological issues.

For more information, please visit https://www.neeuro.com.
Comms Contacts
Kelly Choo
Neeuro Pte. Ltd.
Tel: +65 6397 5153
Email: contact@neeuro.com

Fiona Foo
Institute of Mental Health
Tel: +65 6389 2868 / +65 8123 8805
Email: Fiona_wy_foo@imh.com.sg

Federico Graciano
Duke-NUS Communications
Tel: +65 6601 3272
Email: f.graciano@duke-nus.edu.sg

Gladys Chung
A*STAR Corporate Communications
Tel: +65 6826 6348
Email: Gladys_chung@hq.a-star.edu.sg

Category: 
Collaboration / Partnership
Medicine & Health Care
Science Research

          

Machine Learning Scientist - Chisel AI - Toronto, ON

 Cache   
Reporting to our Data Science Lead, you will collaborate with our team of Machine Learning Scientists to explore and understand the latest in AI, NLP, and ML;
From Chisel AI - Tue, 15 Oct 2019 17:32:52 GMT - View all Toronto, ON jobs
          

11/3/2019: Business: Insurance company RSA hires big guns to beat fraud

 Cache   
RSA Ireland, the insurance company, has partnered with BAE Systems, a UK-based defence, security and aerospace group, to adopt its NetReveal counter-fraud technology. BAE’s NetReveal technology pairs machine learning and analytics with data scientists...
          

Business Intelligence Engineer, Transportation Execution - Amazon.com Services, Inc. - Bellevue, WA

 Cache   
Machine learning experience a plus. Work with business program owners to build data sets, reports, dashboards, and other products that answer their specific…
From Amazon.com - Thu, 17 Oct 2019 07:54:03 GMT - View all Bellevue, WA jobs
          

How to train artificial intelligence that won’t destroy the environment

 Cache   
The carbon footprint of machine learning is bigger than you think.
          

Go vs. Python: How to choose

 Cache   

When it comes to ease and convenience for the developer and accelerating the speed of development, two programming languages rise above the pack—Python and Go. Today Python is a mainstay of scripting, devops, machine learning, and testing, while Go is powering the new wave of container-based, cloud-native computing.

          

Should you go all-in on cloud native?

 Cache   

We’ve all heard about “cloud native” databases, security, governance, storage, AI, and pretty much anything else that a cloud provider could offer. Here’s my definition of cloud native applications: Applications that leverage systems native to the public cloud they are hosted on.

The general advice is, “Cloud native: good. Non-native lift-and-shift: bad.”

This makes sense. By using native services, we can take advantage of core systems that include native security using native directory services, as well as native provisioning systems and native management and monitoring. Using non-native applications on public clouds is analogous to driving a super car on a gravel road.



          

Google previews site for sharing machine learning experiments

 Cache   

Google has unveiled TensorBoard.dev, an online platform where data scientists, researchers, machine learning practitioners, and software developers can share machine learning experiments and collaborate on machine learning projects. 

Now in a beta release stage, TensorBoard.dev lets users upload machine learning experiments for sharing with anyone. The platform leverages the TensorBoard visualization toolkit, which works with Google’s TensorFlow library for machine learning and deep learning.
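As a rough sketch of the workflow described above, the snippet below writes standard TensorBoard logs during a short training run; under the beta described in the article, such a log directory could then be shared with the separately installed uploader (the exact `tensorboard dev upload --logdir ./logs` command is an assumption from the beta-era tooling, not something stated in the announcement).

```python
import tensorflow as tf

# Minimal sketch: train a tiny model while writing TensorBoard event files.
(x_train, y_train), _ = tf.keras.datasets.mnist.load_data()
x_train, y_train = x_train[:1000] / 255.0, y_train[:1000]

model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

model.fit(x_train, y_train, epochs=2,
          callbacks=[tf.keras.callbacks.TensorBoard(log_dir="./logs")])

# Assumed beta-era sharing step (run from a shell, not Python):
#   tensorboard dev upload --logdir ./logs
```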



          

Google Cloud launches TensorFlow Enterprise

 Cache   

Google Cloud has introduced TensorFlow Enterprise, a cloud-based TensorFlow machine learning service that includes enterprise-grade support and managed services.

Based on Google’s popular, open source TensorFlow machine learning library, TensorFlow Enterprise is positioned to help machine learning researchers accelerate the creation of machine learning and deep learning models and ensure the reliability of AI applications. Workloads in Google Cloud can be scaled and compatibility-tested.



          

Qubole review: Self-service big data analytics

 Cache   

Billed as a cloud-native data platform for analytics, AI, and machine learning, Qubole offers solutions for customer engagement, digital transformation, data-driven products, digital marketing, modernization, and security intelligence. It claims fast time to value, multi-cloud support, 10x administrator productivity, a 1:200 operator-to-user ratio, and lower cloud costs.


What Qubole actually does, based on my brief experience with the platform, is to integrate a number of open-source tools, and a few proprietary tools, to create a cloud-based, self-service big data experience for data analysts, data engineers, and data scientists.

          

MLPerf releases machine learning inference performance results

 Cache   
The MLPerf consortium today released an analysis of machine learning inference for datacenters and the edge to standardize industry benchmarks.
          

Spruce Up wants to do for home decor what Pandora did for music

 Cache   
Home decor startup Spruce Up has launched an updated ecommerce platform that leverages machine learning to recommend goods. 
          

Intelligent search platform Coveo raises $227 million at a valuation of over $1 billion

 Cache   
Coveo, a platform that meshes search, analytics, and machine learning to unlock insights contained within big data for businesses, has raised $227 million.
          

Research Guide: Advanced Loss Functions for Machine Learning Models

 Cache   
This guide explores research centered on a variety of advanced loss functions for machine learning models.
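To make the topic concrete, here is a minimal NumPy sketch of one such loss, binary focal loss (Lin et al., 2017), which down-weights easy examples so training concentrates on hard ones. It is an illustrative implementation, not code from the guide itself.

```python
import numpy as np

def binary_focal_loss(y_true, y_pred, gamma=2.0, alpha=0.25, eps=1e-7):
    """Binary focal loss: (1 - p_t)^gamma scales down well-classified examples."""
    y_pred = np.clip(y_pred, eps, 1.0 - eps)
    p_t = np.where(y_true == 1, y_pred, 1.0 - y_pred)     # prob. of the true class
    alpha_t = np.where(y_true == 1, alpha, 1.0 - alpha)   # class-balance weight
    return float(np.mean(-alpha_t * (1.0 - p_t) ** gamma * np.log(p_t)))

# Confident correct predictions contribute almost nothing to the loss.
y_true = np.array([1, 0, 1, 0])
y_pred = np.array([0.95, 0.05, 0.40, 0.60])
print(binary_focal_loss(y_true, y_pred))
```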
          

KDnuggets™ News 19:n42, Nov 6: 5 Statistical Traps Data Scientists Should Avoid; 10 Free Must-Read Books on AI

 Cache   
Learn about statistical fallacies Data Scientists should avoid; New and quite amazing Deep Learning capabilities FB has been quietly open-sourcing; Top Machine Learning tools for Developers; How to build a Neural Network from scratch and more.
          

Probability Learning: Maximum Likelihood

 Cache   
The maths behind Bayes will be better understood if we first cover the theory and maths underlying another fundamental method of probabilistic machine learning: Maximum Likelihood. This post will be dedicated to explaining it.
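As a small worked example of the idea before reading the post: for data assumed to be Gaussian, maximum likelihood chooses the mean and variance that make the observed sample most probable, which works out to the sample mean and the biased (divide-by-N) sample variance. The data below are simulated purely for illustration.

```python
import numpy as np

# Simulated sample from a Gaussian with "unknown" parameters.
x = np.random.default_rng(0).normal(loc=5.0, scale=2.0, size=1_000)

mu_mle = x.mean()                      # closed-form MLE of the mean
var_mle = ((x - mu_mle) ** 2).mean()   # MLE of the variance (divides by N, not N - 1)

print("MLE mean:", mu_mle, "MLE variance:", var_mle)
```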
          

Top Stories, Oct 28 – Nov 3: 5 Statistical Traps Data Scientists Should Avoid; Top Machine Learning Software Tools for Developers

 Cache   
Also: Why is Machine Learning Deployment Hard?; Data Sources 101; 5 Statistical Traps Data Scientists Should Avoid; Everything a Data Scientist Should Know About Data Management; How to Become a (Good) Data Scientist — Beginner Guide
          

Microsoft Ignite 2019: artificial intelligence dominates

 Cache   

The Ignite 2019 event has just wrapped up, during which Microsoft unveiled several new tools and services powered by artificial intelligence.

In addition, all of Microsoft's new offerings comply with the most recent security and privacy regulations, to meet the compliance needs of every organization.

Among the new offerings is Azure Synapse Analytics, a new service that combines the capabilities of Azure SQL Data Warehouse with new features that will help users analyze data from different sources more quickly and securely.

Also arriving in preview is Azure Arc, which lets customers use Azure services from other clouds or infrastructures, including those of Amazon and Google.

Microsoft Flow, meanwhile, is changing its name to Power Automate and gains a new robotic process automation capability called UI flows, available in public preview, which turns manual tasks into automated workflows by recording and replaying human-guided actions in software that does not support API automation.

Project Cortex is the first major addition to Microsoft 365 since the launch of Teams. The new tool leverages artificial intelligence to analyze company data and automatically organize it by topic, making sure information is delivered to the right people within the organization.

There are also new Microsoft 365 experiences announced at Ignite 2019. Play My Emails in Outlook for iOS is available (for now) in the United States: Cortana uses speech and natural-language recognition to read new emails aloud and handle calendar changes, keeping you up to date even when your hands are not free.

New Stream features use machine learning algorithms to detect and remove unwanted background noise from videos with a single click.

With the new Teams Chat button in Outlook, long email threads can be turned into Teams chats to continue the discussion in a simpler, more collaborative way.

There are also new versions of Microsoft Edge and Microsoft Bing, enhanced with innovative features such as the ability to combine the Internet with the corporate intranet through Microsoft Search in Bing, along with advanced default settings for privacy protection and the ability to export information directly from a web search into Microsoft Office applications.

Finally, Yammer has been completely redesigned with dozens of new features that improve connection, community building and information sharing within companies. The new version of Yammer offers an intelligent, seamless experience across devices and introduces new integration options with Teams, SharePoint and Outlook.

The article Microsoft Ignite 2019: artificial intelligence dominates is original content from 01net.


          

FileMaker Cloud, the new Platform as a Service offering

 Cache   
Claris has unveiled the new FileMaker Cloud, the cloud-hosted edition of its database and custom-app platform for companies and work teams.

The FileMaker lineup therefore now includes FileMaker Pro Advanced, the single-user version aimed at developers, who can use the software to create custom apps for their clients or their own companies, to run on mobile devices, in the cloud and on premises.

Multi-user licenses come in two editions, FileMaker Server and FileMaker Cloud. The Server edition lets a company host its apps on on-premises servers, while with the Cloud edition hosting is on the cloud service, with the advantage of not having to worry about managing the server infrastructure and with simplified administration (though, of course, with different licensing costs).

Claris's new service thus gives companies a foundation ready for digital transformation, for a new generation of low-code apps powered by third-party cloud services and able to meet modern needs for orchestration, scalability and intelligence.

This is clearly not the first step into the cloud for the platform developed by the Apple subsidiary, which last summer, at its 24th annual DevCon, opened a new chapter in its history under the name Claris International. A new chapter that is also, in a way, a return to the past: Claris was the original name, in 1986, of the Apple affiliate, which was later renamed FileMaker after the company's flagship product.

The first generation of cloud-hosting service was FileMaker Cloud for AWS, delivered through the Amazon Web Services (AWS) Marketplace. The company announced the "deprecated" status of FileMaker Cloud for AWS in October 2018. Now comes the new Platform as a Service offered by Claris, which nonetheless still runs on Amazon Web Services infrastructure.

In short, FileMaker Cloud is the new cloud service offered directly by Claris International, which lets customers access custom apps hosted in the cloud using FileMaker Pro Advanced, FileMaker Go and FileMaker WebDirect.

Claris's cloud, the company stresses, is designed to bring the latest emerging technologies to developers and to be ready for artificial intelligence and machine learning, intelligent assistants, IoT, augmented and virtual reality, and more.

A valuable resource for FileMaker developers is the Claris community, where support for troubleshooting is available, backed by a team of world-class experts, including CloudOps, SecOps and DevSecOps staff, who constantly monitor performance and security using artificial intelligence.

Likewise, FileMaker Cloud offers developers 24/7 support; in addition, FileMaker Cloud was designed with privacy at its core by default, and provides end-to-end encryption with HSM key management and SSO with MFA.

More information is available on the Claris website.

The article FileMaker Cloud, the new Platform as a Service offering is original content from 01net.


          

Artificial intelligence will not replace the security operations center

 Cache   
How many people have assumed that artificial intelligence will soon replace the security operations center? After all, someone has calculated what it costs to run a SOC 24/7/365, and what it means financially to meet the demands of a CISO asking for more resources, in terms of people, technology and funding, to counter new cyber threats.

Sergej Epp, Chief Security Officer for the Central European region at Palo Alto Networks, instead advises against relying exclusively on technology to protect the organization, and recommends assessing how artificial intelligence can complement the security operations center.

Sergej Epp, Chief Security Officer for the Central European Region at Palo Alto Networks

To help us understand why artificial intelligence will never replace the security operations center, Epp offers an example from the world of chess.

A security lesson from chess

In 1997, Epp recalls, chess grandmaster Garry Kasparov played and lost against Deep Blue, IBM's artificial intelligence system. What is less well known is that Kasparov was winning when Deep Blue made what was considered an unusual move, confusing Kasparov to the point that he lost his rhythm and, ultimately, the game.

Deep Blue's move, however, was not intended to trip up the grandmaster: it was later discovered that Deep Blue had a bug and had made a random move rather than a calmly reasoned one.

Even though Deep Blue's victory was considered a milestone in the evolution of artificial intelligence, the 'bug' that influenced the outcome of the game should teach us not to bet everything on a single horse. Sometimes you have to think and act outside the box, and that is especially true when it comes to cybersecurity.

How cybersecurity benefits from artificial intelligence

Artificial intelligence and machine learning have demonstrated the ability to automate many activities previously handled by the security operations center or by earlier generations of tools. But even though it is useful for automating many cybersecurity-related decision processes, it can never replace human intelligence in a constantly evolving area like threat identification and management.

Because threats and their impact are often unknown before they occur, it is impossible to configure a machine to recognize entirely unknown patterns.

Security teams are constantly looking for ways to counter threats by drawing on the enormous amounts of public data coming from threat intelligence services and other surveillance methodologies.

Analyzing recent incidents, taking part in discussion groups, setting up honeypots or devising red-team exercises helps, and can form the basis for fine-tuning an AI-driven defense. But training machines with data is extremely difficult.

Machines have to be taught

What machines are really good at is identifying patterns from an input and learning from humans. We can teach them to recognize a chair by showing them billions of different shapes, colors and sizes. But what happens to machine learning when someone designs a totally new form of chair?

In those cases the human brain links the never-before-seen shape to its function, while a machine will not understand that it can be sat on unless it looks like a chair.

We still need our security operations center analysts to teach the algorithms to recognize it as a chair – just as they would teach the AI system to recognize a new piece of malware as a threat.

So even though artificial intelligence and machine learning will not replace the security operations center, these are technologies that will play an increasingly important role in decision-automation processes.

Is the organization ready for artificial intelligence?

Companies often forget that, before they can take advantage of artificial intelligence, they need to transform their cybersecurity technologies and the security operations center itself.

The success of artificial intelligence lies in the level of automation and integration of security controls. The solutions that block malicious traffic, quarantine a machine, fix a problem or apply a patch must be put in place in advance; the advantage of rapid decision-making by artificial intelligence is useless if it cannot act immediately.

Artificial intelligence will enrich the role of security operations center analysts, enabling them to become data scientists and security architects, roles in which they will focus on re-architecting operational processes, ensuring that the right, high-quality data is collected, and identifying creative new techniques to pinpoint problems specific to particular markets, companies and functions.

It is therefore important to understand that artificial intelligence will reduce risk, but it will also transform the people in the security operations center.

The article Artificial intelligence will not replace the security operations center is original content from 01net.


          

More free online CS courses from Stanford

 Cache   
Following the free Stanford online course offerings in the Fall 2011 quarter mentioned in this previous post, there are course offerings again this winter, including: probabilistic graphical models and natural language processing, another installment of machine learning, among other courses (scroll to the bottom of any of the linked courses to peruse more).
          

Stanford machine learning course

 Cache   
For those of you interested in learning more about machine learning, here’s an interesting opportunity. Andrew Ng at Stanford is offering his annual machine learning class in an online, open access format: Machine Learning. Interested folks will be able to sign up online, watch video lectures and notes, and get feedback on their progress. The class […]
          

The Morning Brew #2868

 Cache   
Software Announcing TypeScript 3.7 – Daniel Rosenwasser Join the Visual Studio for Mac ASP.NET Core Challenge – Jordan Matthiesen Azure Machine Learning – ML for all skill levels – Venky Veeraraghavan Now available: Azure DevOps Server 2019 Update 1.1 RC – Erin Dormier Released: Microsoft.Data.SqlClient 1.1 Preview 2 – David-Engel The November 2019 release of […]
          

Keynote Speakers to Present at HTNG's Insight Summit in Park City, UT

 Cache   

CHICAGO (July 10, 2019) – Caroline Y. Chan of Intel Corporation and Dan Cockerell, former VP of Disney’s Magic Kingdom will give keynote presentations at the 2019 HTNG Insight Summit meeting on August 5-7 at The Chateaux, Deer Valley in Park City, UT.

Caroline Y. Chan is Intel Corporation's vice president of business incubation and general manager of the Data Center Group. She is responsible for driving new services running across the network infrastructure, working closely with network service providers, cloud service providers and enterprises.

Chan and her team will lead pathfinding of advanced technology solutions that are enabled and accelerated by 5G capabilities such as AI, machine learning, blockchain, data analytics, immersive media, cloud gaming, and others.

Dan Cockerell first joined Disney in 1989 as a participant of the Walt Disney World’s College Program, where he worked in resort parking and front desk guest services at hotels. Upon graduation from Boston University, he was selected into Disney’s Management Trainee Program and joined the task force to open Disneyland Paris.

After five years in France, Cockerell returned to Walt Disney World Florida where he held successive executive positions in both resort hotels and theme parks. These roles grew from General Manager of the All Star Resort to later becoming VP of Disney’s Epcot Center theme park, followed by serving as VP of Disney’s Hollywood Studios theme park, and culminating in becoming VP of the Magic Kingdom theme park. Responsible for daily operations of the largest theme park in the world, Cockerell oversaw the experiences of 12,000 cast members and over 20 million annual visitors.

In addition to general sessions, a number of HTNG workgroups will also meet to continue their work on topics of 5G, business analytics, fiber optics, guest data, integration, IoT, staff alert technology, payments, Wi-Fi and more.

For more information on the HTNG Insight Summit event, please visit: https://www.htng.org/page/ISNA_2019


***

 

About Hospitality Technology Next Generation (HTNG)

 

The premier technology solutions association in the hospitality industry, HTNG is a self-funded, nonprofit organization with members from hospitality companies, technology vendors to hospitality, consultants, media and academic experts. HTNG's members participate in focused workgroups to bring to market open solution sets addressing specific business problems. HTNG fosters the selection and adoption of existing open standards and also develops new open standards to meet the needs of the global hospitality industry.

 

Currently more than 400 corporate and individual members from across this spectrum, including world leading hospitality companies and technology vendors, are active HTNG participants. HTNG's Board of Governors, consisting of 24 top IT leaders from hospitality companies around the world, itself has technology responsibility for over 3 million guest rooms and world-leading venues. HTNG publishes workgroup proceedings, drafts and specifications for all HTNG members as soon as they are created, encouraging rapid and broad adoption. HTNG releases specifications into the public domain as soon as they are ratified by the workgroups. For more information, visit www.htng.org.


          

GuestMagic.AI by InnSpire Becomes the 2019 HTNG TechOvation Winner

 Cache   

NEW ORLEANS (April 10, 2019) – Hospitality Technology Next Generation (HTNG) crowns GuestMagic.AI by InnSpire the 2019 HTNG TechOvation Award Winner at the HT-NEXT Awards Program in New Orleans, LA on April 10.

GuestMagic.AI is an online AI-driven platform for hoteliers that uses machine learning to anticipate guests' next steps and deliver the right service at the right time. It is device agnostic, from smartphone to tablet to TV to voice and beyond. Using the digital guest journey as a footprint to enhance every technical touch-point in the guest's path, the aim is for the experience to flow like magic.

“I am extremely pleased that the industry sees the value in what we are trying to create; a fully customized and personal guest experience, that engages the guest from when they book until they book again – while using automation and AI to scale that experience,” said Martin Chevalley, CEO & Co-founder of InnSpire. “Combined with easy to use, relevant, proven and innovative technologies, our partner hotels are utilizing the digital guest journey, big data, social media, and their hotel CRM systems to ensure that what they are offering is personal, relevant and instantly accessible."

Data Laundry by dailypoint was a finalist in this year's TechOvation Award. The patent-pending and fully automated cleaning method, Data Laundry, reviews key data points collected by dailypoint, merges duplicates, corrects mistakes from human errors and ultimately creates a single profile for each guest. The clean data is then pushed back to the PMS and stored in dailypoint so all data is up-to-date across sources, allowing hotels to have a single source of truth for all guest data.

TraknProtect’s Safety Button was also a finalist in this year's TechOvation Award. This safety button is designed to protect guest room attendants in the hospitality industry, where instances of sexual harassment and assault have become too common. When feeling threatened or uncomfortable, a housekeeper can press the safety button. Once pressed, designated security personnel are alerted and given a precise location in order to provide immediate assistance in a situation where every second counts.

Datatrend Technologies served as the HT-NEXT Awards Program sponsor for the third year in a row. "Datatrend consistently strives to help organizations leverage technology to drive business outcomes and we are honored to sponsor these awards. We are highly invested in elevating solutions in the hospitality industry, as continuing to improve the guest experience is essential to enduring growth and success,” said Vice President Rob Graves of Datatrend Technologies. “This year’s TechOvation Award Winner, InnSpire, blew us away! We are impressed by the innovation and creativity displayed by the 2019 participants and look forward to advancing hospitality technology together. Congratulations to all on this huge accomplishment!”

 

***

 

About Hospitality Technology Next Generation (HTNG)

 

The premier technology solutions association in the hospitality industry, HTNG is a self-funded, nonprofit organization with members from hospitality companies, technology vendors to hospitality, consultants, media and academic experts. HTNG's members participate in focused workgroups to bring to market open solution sets addressing specific business problems. HTNG fosters the selection and adoption of existing open standards and also develops new open standards to meet the needs of the global hospitality industry.

 

Currently more than 400 corporate and individual members from across this spectrum, including world leading hospitality companies and technology vendors, are active HTNG participants. HTNG's Board of Governors, consisting of 24 top IT leaders from hospitality companies around the world, itself has technology responsibility for over 3 million guest rooms and world-leading venues. HTNG publishes workgroup proceedings, drafts and specifications for all HTNG members as soon as they are created, encouraging rapid and broad adoption. HTNG releases specifications into the public domain as soon as they are ratified by the workgroups. For more information, visit www.htng.org.


          

Lead Data Infrastructure Engineer - Doxel - Redwood City, CA

 Cache   
Ability to manage and communicate data warehouse project plans to internal clients. Work closely with our Machine Learning Lead to build data pipelines that…
From Doxel - Tue, 04 Jun 2019 14:25:28 GMT - View all Redwood City, CA jobs
          

Senior AI/Deep Learning Software Engineer - St Josephs Hospital and Medical Center - Phoenix, AZ

 Cache   
Ability to align business needs to development and machine learning or artificial intelligence solutions. Experience in natural language understanding, computer…
From Dignity Health - Tue, 27 Nov 2018 03:06:49 GMT - View all Phoenix, AZ jobs
          

ByT_TR19/806 - 4G/5G Radio System Design Engineer

 Cache   
Job family: TELECOMMUNICATIONS
Contract: Permanent (CDI)
Job description:
With the aim of providing its customers with the best mobile network in France, Bouygues Telecom continually conducts studies to anticipate how its network will need to evolve. Within the Radio Engineering Department, you will be in charge of studying network evolutions from a functional point of view. You will take part in presentations to management and in building the radio access evolution strategy. Your main tasks will relate to the study of new radio technologies and features: - Tracking, analysing and steering the roadmaps of radio vendors. - Opportunity studies for new features. - Setting up and analysing pilots. Study topics notably include the new services associated with 5G (Network Slicing, IoT, …) as well as the arrival of Artificial Intelligence and Machine Learning technologies in radio networks.

With an engineering degree or equivalent in the field of radio access (4G, 5G), you have 3 to 5 years of successful initial experience in the role. You are comfortable with 3GPP standards and have an end-to-end view of radio networks that allows you to interface with the core and transport engineering teams. Desired profile: - Rigour, a methodical approach and the ability to synthesise. - Autonomy. - Proactive and able to make proposals.
Location: 13 AVENUE DU MARECHAL JUIN, 92360 MEUDON

          

The Digital Disruption in Mechanical Engineering

 Cache   

The Fourth Industrial Revolution is fundamentally changing the world of work for which we are preparing our students and where mechanical engineers are applying their trade. At the same time the students who enter university programmes are much better prepared for the Digital World than they were in the past, except for those students, in the South African context, who come from disadvantaged environments. 

Universities tend to be slow to react to changes in the environment and therefore all these factors put together result in a significant challenge for the development and implementation of Engineering Programmes. 

Over the last three decades most universities were quick to introduce computer programming in their programmes, as engineers had a strong vested interest in this field and always had a significant  requirement for fast and accurate computing.

The integration of fast computing, big data and machine learning enable engineers to be significantly more productive than in the past by speeding up and integrating processes, from design to manufacture, implementation and commissioning. This new approach is also blurring the boundaries between disciplines forcing mechanical engineers to work collectively in multi-disciplinary teams with other professionals. It also poses new challenges such as mastering software suites and manipulating complex digital models of physical systems.

Digital models
“Multiphysics” refers to digital models that can simultaneously solve multiple physical phenomena. These models speed up the design processes and deliver large amounts of data that need to be analysed. It is now possible to simultaneously model and compute the fluid-dynamics over the wing of an aircraft as well as the forces and deflections (stresses and strains) the varying pressure profile will induce in the structure.

This is of course a very powerful “tool” that can be used to optimise the aerodynamics and structural elements of the wing in a very short time. 
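Very loosely, the coupling described above can be sketched as a partitioned fixed-point iteration between a fluid solver and a structural solver. The two functions below are placeholder formulas, not real physics; the point is only the alternating loop that a multiphysics package automates internally.

```python
# Conceptual sketch of partitioned fluid-structure coupling (placeholder physics).
def fluid_solve(deflection):
    # Hypothetical: pressure load on the wing for a given deflection.
    return 1.0 / (1.0 + deflection)

def structure_solve(pressure, stiffness=4.0):
    # Hypothetical: deflection produced by that pressure load.
    return pressure / stiffness

deflection = 0.0
for iteration in range(50):
    pressure = fluid_solve(deflection)
    new_deflection = structure_solve(pressure)
    if abs(new_deflection - deflection) < 1e-9:   # coupled solution has converged
        break
    deflection = new_deflection

print(f"converged after {iteration} iterations: deflection={deflection:.6f}")
```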

Big data

Where we may not have been at the forefront is in the use of Big Data. These very large data sets have been available for many years in the financial and health sectors. Colleagues working in the maintenance field, and especially in the condition monitoring of mechanical and electrical plant, have had access to larger data sets but have mostly used deterministic and statistical models to analyse the data.

The challenge we face going forward is that modern technology, including the Internet of Things, will make large data sets more readily available and we will need to understand how to handle and analyse the data. Data needs to be prepared by cleaning it up, verifying and calibrating it, collating it from different sources, and then storing it in a format accessible to the various algorithms that can be used to discover the embedded knowledge.
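A minimal sketch of that preparation sequence, using pandas with hypothetical file and column names (vibration readings from plant sensors), might look like the following.

```python
import pandas as pd

raw = pd.read_csv("vibration_readings.csv")        # assumed raw sensor export

clean = (
    raw.drop_duplicates()
       .dropna(subset=["timestamp", "vibration_mm_s"])            # clean up
       .assign(timestamp=lambda d: pd.to_datetime(d["timestamp"]))
)

# Verify and calibrate: discard impossible readings, apply an assumed calibration factor.
clean = clean[clean["vibration_mm_s"].between(0, 100)]
clean["vibration_mm_s"] = clean["vibration_mm_s"] * 1.02

# Collate with a second source and store in an analysis-friendly format.
assets = pd.read_csv("asset_register.csv")          # assumed asset metadata
prepared = clean.merge(assets, on="sensor_id", how="left")
prepared.to_parquet("prepared_readings.parquet")    # needs pyarrow or fastparquet
```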

This new approach is also blurring the boundaries between disciplines forcing mechanical engineers to work collectively in multi-disciplinary teams with other professionals.

There are a host of methods available to analyse the data, extract information and discover the knowledge. Many of the new methods make use of artificial intelligence and machine learning, where the algorithms, with minimal human input, can analyse data and discover new phenomena that were not previously known.
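As one illustration of such minimally supervised discovery in a condition-monitoring setting, the sketch below runs an off-the-shelf isolation forest over simulated sensor readings and flags the unusual ones without being told what "unusual" means. It is a generic example, not a method from the article.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)
normal = rng.normal(0.0, 1.0, size=(1000, 3))   # simulated healthy machine readings
faulty = rng.normal(4.0, 1.0, size=(10, 3))     # a handful of abnormal readings
readings = np.vstack([normal, faulty])

model = IsolationForest(contamination=0.01, random_state=0).fit(readings)
flags = model.predict(readings)                 # -1 marks suspected anomalies
print((flags == -1).sum(), "readings flagged for engineering review")
```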

Reality check
The old saying “garbage in – garbage out” still holds, and we will always need the fast, multi-processing skills of the human brain to look at the outcome and do a “reality check.” In recent experience on the highly automated Tesla assembly lines, the lack of humans on the line was identified as a key contributor to Tesla not achieving the volumes and level of quality it desired.

Therefore, digital disruption in the world of mechanical engineering will indeed bring additional challenges to our fraternity. We will have to equip our new as well as experienced engineers with the necessary skills and understanding of modern data science but at the same time we must always ensure that these mechanical engineers have the required fundamental knowledge and experience to ensure that the new methods provide useful and technically valid results.

Yours in Mechanical Engineering,

Prof Wikus van Niekerk
SAIMechE Council Member


          

Online Education

 Cache   

Online education is growing quickly. Gone are the days where universities were the sole custodians of knowledge. Today we have unprecedented access to information and we are free to learn in near arbitrary depth in almost every imaginable field. A quick scan through Wikipedia can confirm how inertia is calculated in a moving reference frame, and YouTube will teach you how to replace a light bulb on your car. Most of us think that the learning stops there but the Internet can provide us with so much more.

As a group we are curious and enjoy learning. In our professional lives we are required to hone existing skills and develop new ones, but we are plagued by extensive time commitments and a rapidly changing schedule that often prevents us from committing to the limited number of short courses presented locally. 

Online platforms offer a wider variety of courses with significantly more flexibility in content, timing and mode of participation. Modern online courses are truly massive and benefit from very strong community interaction. It is not uncommon to be enrolled in a course with 60 000 other students, most of whom are happy to communicate via the forums.

Experts
There are many strong online institutions, but three organisations stand out: Coursera.org, Udemy.com and Edx.org. Each of these organisations affords anyone the opportunity to participate in courses presented by experts from well-established universities, including familiar institutions such as Harvard, Stanford, MIT and TU Delft. 

Over the past few years I have participated in courses ranging from statistical modeling presented by Johns Hopkins to geographical information systems presented by the US Army Academy. The courses range from 4 to 12 weeks and require a commitment of between 4 and 12 hours a week.

Video lectures and course materials are provided, with graded assessments and an active mentor community. The courses range from introductory courses to advanced postgraduate level. In some cases, the courses even bear credit at their host university.

Although courses are available on a wide range of topics, most fields are limited to a digital footprint, and you are not likely to get your hands dirty. Most will provide you with the theory and rely on the participants to create their own applications. With this in mind, each of the three organisations listed makes capstone modules available where the participant can engage in an extended application of the theory in a project setting with supervision. These are typically bundled into a mini-diploma style collection or specialisation. In some cases these can extend to full degree programmes.

The University of Illinois, for instance, has shifted its 2-year Master's degree in Machine Learning to the Coursera platform, and whether you are a resident student or an online participant you will have access to the same resources. Though some of the courses can be pricey, most will be credit bearing and provide a course certificate for around $15 - $100. Almost all will allow you to audit content and participate in the online forum for free.

Scalability
Although this style of online education is not likely to replace a conventional engineering degree in South Africa any time soon, it is likely that we will be seeing similar courses make their way into the existing university curriculum as an efficient teaching tool that scales well to large groups.

For those of you with your degree under your belt, there is an opportunity to up-skill yourself and your employees with some confidence without taking on the burden of creating your own programmes or relying on local 3rd party providers.

With a small time investment, these flexible courses will allow you to develop up-to-date technical skills in new fields or refine skills from years past. They might be a practical way to transition from one field to another or provide you with an edge in your current organisation.

Dr Martin Venter
SAIMechE Western Cape Branch Chairman


          

Meridian Unveils Latest Edition of Leading Learning Management System

 Cache   
New features utilize machine learning and extend functionality beyond enterprise employees. Reston, VA – November 5, 2019 – Meridian Knowledge Solutions today announced the release of the latest version of its award-winning learning management system, Meridian LMS Fall 2019.

Brought to you by: eLearning Learning
          

Noninvasive Histopathological Imaging of Brain and Prostate Cancer

 Cache   

Assessing tumor histopathology is routine clinical practice for most cancer types and is critical for cancer diagnosis and prognosis. Histological review by a clinical pathologist, based on tissue from biopsy or surgical resection, remains the only definitive diagnosis of tumor pathologies. However, biopsy or surgical resection is invasive, with potential adverse side-effects, making it urgent to develop noninvasive imaging techniques for assessing tumor histopathology. Diffusion MRI has been shown to be sensitive for cancer detection in several types of cancer. Yet current diffusion MRI methods are not specific enough to assess tumor histopathology, especially for cancers like glioblastoma (GBM) and prostate cancer, most of which have a complicated tumor micro-environment. To address this challenge, we employ a novel Diffusion Histology Imaging (DHI) approach, combining diffusion basis spectrum imaging (DBSI) and machine learning/deep learning, to accurately and non-invasively assess tumor histopathology. We apply DHI in imaging patients with GBM to reveal potential viable tumor and necrosis regions that the current clinical imaging gold standard is not able to detect. For validation, we examined twenty surgical resection specimens from thirteen GBM patients and demonstrated that the DBSI-derived restricted isotropic diffusion fraction significantly correlated with GBM tumor cellularity. The results further indicated that DHI predicted high-cellularity tumor, tumor necrosis, and tumor infiltration with accuracy rates of 91.9%, 93.7%, and 87.8%, respectively. This suggests that DHI might serve as a favorable alternative to current neuroimaging techniques in guiding biopsy or surgery as well as in monitoring therapeutic response in the treatment of glioblastomas. Similarly, we applied DHI to prostatectomy specimens and prostate cancer patients, and it was highly accurate not only in distinguishing prostate cancer from benign prostatic histology and structures, but also in classifying various prostate cancer grades (grade 1: 88%; grade 2: 94%; grade 3: 92%; grade 4: 88%; grade 5: 95%). We demonstrated that, by evaluating and profiling various histopathological structures in prostate cancer, DHI could increase the accuracy of tumor detection, staging and grading.
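As a rough, non-authoritative illustration of the classification step implied above (a supervised model mapping DBSI-derived diffusion metrics to histology classes), a minimal Python/scikit-learn sketch might look as follows; the feature set, class labels and synthetic data are hypothetical placeholders, not the study's actual pipeline, model or results.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# Hypothetical stand-in for per-voxel DBSI metrics (e.g. restricted and
# hindered isotropic diffusion fractions, anisotropic fraction); a real
# study would use measured DBSI maps and pathologist-confirmed labels.
rng = np.random.default_rng(42)
classes = ["high_cellularity_tumor", "necrosis", "infiltration"]
centers = [[0.7, 0.2, 0.1], [0.2, 0.6, 0.2], [0.4, 0.3, 0.3]]
X = np.vstack([rng.normal(loc=c, scale=0.1, size=(200, 3)) for c in centers])
y = np.repeat(np.arange(len(classes)), 200)

# Hold out a test split, fit a standard classifier, report per-class metrics.
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te), target_names=classes))

Any real DHI implementation would differ in model family (the abstract mentions deep learning), feature extraction and validation protocol; the sketch only shows the general train-and-evaluate shape of such a classifier.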


          

Automating Active Learning for Gaussian Processes

 Cache   

In many problems in science, technology, and engineering, unlabeled data is abundant but acquiring labeled observations is expensive: it requires a human annotator, a costly laboratory experiment, or a time-consuming computer simulation. Active learning is a machine learning paradigm designed to minimize the cost of obtaining labeled data by carefully selecting which new data should be gathered next. However, excessive machine learning expertise is often required to effectively apply these techniques in their current form. In this dissertation, we propose solutions that further automate active learning. Our core contributions are active learning algorithms that are easy for non-experts to use but that deliver results competitive with or better than human-expert solutions. We begin by introducing a novel active search algorithm that automatically and dynamically balances exploration against exploitation, without relying on a parameter to control this tradeoff. We also provide a theoretical investigation of the hardness of this problem, proving that no polynomial-time policy can achieve a constant-factor approximation ratio for the expected utility of the optimal policy. Next, we introduce a novel information-theoretic approach for active model selection. Our method is based on maximizing the mutual information between the output variable and the model class. This is the first active-model-selection approach that does not require updating each model for every candidate point. As a result, we successfully developed an automated audiometry test for rapid screening of noise-induced hearing loss, a widespread disability that is preventable if diagnosed early. We proceed by introducing a novel model selection algorithm for fixed-size datasets, called Bayesian optimization for model selection (BOMS). Our proposed model search method is based on Bayesian optimization in model space, where we reason about the model evidence as a function to be maximized. BOMS is capable of finding a model that explains the dataset well without any human assistance. Finally, we extend BOMS to active learning, creating a fully automatic active learning framework. We apply this framework to Bayesian optimization, creating a sample-efficient automated system for black-box optimization. Crucially, we account for the uncertainty in the choice of model; our method uses multiple and carefully selected models to represent its current belief about the latent objective function. Our algorithms are completely general and can be extended to any class of probabilistic models. In this dissertation, however, we mainly use the powerful class of Gaussian process models to perform inference. Extensive experimental evidence is provided to demonstrate that all proposed algorithms outperform previously developed solutions to these problems.
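To make the basic active-learning loop concrete, here is a generic uncertainty-sampling sketch with a Gaussian process in Python. This is not the dissertation's automated policies (its exploration/exploitation-balancing active search or information-theoretic model selection); it only shows the plain "query the most uncertain unlabeled point next" idea, with an invented objective function standing in for an expensive labeling step.

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)

def expensive_label(x):
    # Stand-in for a costly experiment, simulation or human annotation.
    return np.sin(3 * x) + 0.1 * rng.normal()

X_pool = np.linspace(0.0, 2.0 * np.pi, 200).reshape(-1, 1)   # unlabeled candidates
labeled = list(rng.choice(len(X_pool), size=3, replace=False))  # small seed set
y_labeled = [expensive_label(X_pool[i, 0]) for i in labeled]

gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), alpha=1e-2)

for _ in range(10):                        # fixed query budget
    gp.fit(X_pool[labeled], np.asarray(y_labeled))
    _, std = gp.predict(X_pool, return_std=True)
    std[labeled] = -np.inf                 # never re-query a labeled point
    nxt = int(np.argmax(std))              # most uncertain candidate
    labeled.append(nxt)
    y_labeled.append(expensive_label(X_pool[nxt, 0]))

print(f"Labeled {len(labeled)} of {len(X_pool)} points; learned kernel: {gp.kernel_}")

The dissertation's contribution lies precisely in replacing the hand-chosen pieces of such a loop (the acquisition rule, the kernel and model choice) with automated, theoretically grounded procedures.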


          

Blog Google: Understanding searches better than ever before

 Cache   
« (…) With the latest advancements from our research team in the science of language understanding–made possible by machine learning–we're making … » Continue reading « Blog Google: Understanding searches better than ever before »
          

Github tops 40 million developers as Python, data science, machine learning popularity surges

 Cache   
Github, owned by Microsoft, said it had more than 10 million new users, 44 million repositories created and 87 million pull requests in the last 12 months.
          

A.M. Best: Insurance Has an Imperative to Address AI and the “Art of the Possible”

 Cache   

A.M. Best recently hosted a webinar on the Insurance AI Imperative, featuring expert speakers and sponsored by Cognizant. Host Jon Weber of A.M. Best framed the discussion as an opportunity for some insurers to get ahead of the pack, while others could fall behind. The panelists were Jennifer Herz and Mike Clifton of Cognizant, a multinational IT company. Mike Clifton kicked things off in response to a question from Weber regarding whether AI is more “sizzle than steak” by explaining that AI is a vast area of technology. “Being an early adopter fits depending on your business profile and business acumen,” he said. “If you look at the spectrum of AI, there’s people who think of AI as purely just the ability for a machine to make a decision.” However, according to Clifton, the excitement in the industry is coming from the maturity of the technology - machine models and deep learning. Ultimately, he said, modern insurers must “digitally pivot and AI should be part of that strategy.” For the timeline of a fully mature AI timeline, Clifton again emphasized that it depends on the company’s business model and efficiency. Herz added that, “it’s a moving target, what we think is fully mature today, likely 24 months from now is going to feel completely different,” but that deriving the most benefit from AI initiatives through a carrier's business priorities is the key. Weber then shifted the discussion to the “disruptions” that AI is due to create, other than competition from insurtechs and start-ups, which has been much reported on throughout the industry. For Clifton, it all comes down to the insurtechs providing focus. “They give us the ability to laser in on a particular set of problems, like claims or catastrophe management,” he said. “That helps because it makes the value of that interaction well-known. The disruption elements are still occurring from an insurance industry perspective in that you’re seeing a lot of the easier risk appetites of the products being looked at as very targeted spaces to automate and use AI.” Clifton cited warranty coverage and rental coverage as examples. With respect to AI trends within the insurance industry at large, Herz said that incorporating different technology skill sets (especially among millennial workers) is something she hears frequently from the companies she works with as talent leaves the industry and new talent enters. Further, she noted that there is an emphasis on “shoring up the data infrastructure of companies” so that they can integrate data analysis, third party data and, thus, AI technology. These efforts marry with the “evolution of the customer experience” so that as technological innovation continuously accelerates, so do the needs of the customer. This is what Cognizant refers to as the “art of the possible” in adding with AI to the entire value stream of insurance.           Making the case that a “wait and see” approach is insufficient, Herz said that it’s all about where you want to drive growth. “Most companies can’t invest everywhere and have to pick where are the places they can get the most value. It’s a matter of how do you test and learn quickly and then how do you scale so that you can continue to drive benefit as you move through the [AI] maturity curve,” she explained. At the conclusion of the webinar, Weber invited the panelists to voice their essential takeaways from the presentation. For Clifton, it was all about the core of machine learning and models in actuarial science transferring to the frontlines. 
“AI will be foundational to how we implement insurance in the future, especially property and casualty,” he said. Herz added to the point, saying that she wanted registrants to remember that “understanding where you are and what your business strategy is and how the AI ecosystem can enable that strategy” will help provide focus. This A.M. Best webinar is available on demand.
          

Robotic Process Automation: How can a BA prepare to come into the RPA world?

 Cache   
Robotic Process Automation makes it feel like the field of robotics is at an inflection point, destined to disrupt business models in all manner of industries. First, identify more opportunities to deploy robots; then, by infusing intelligence into RPA and combining machine learning capabilities with process automation, you can design […]
          

Principal Technical Product Manager - Telecom OSS/BSS Applications and Services - Amazon.com Services, Inc. - Bellevue, WA

 Cache   
Machine Learning and Deep Learning applicability to Telecom services. Strong understanding of business flows and integrated upstream & downstream applications.
From Amazon.com - Fri, 09 Aug 2019 07:52:12 GMT - View all Bellevue, WA jobs
          

Software Engineer - Engineering Data Tools - Blue Origin - Kent, WA

 Cache   
Demonstrated ability to utilize Machine Learning to solve complex problems. An internal drive to deliver results with the ability to seek out requirements and…
From Blue Origin - Wed, 17 Jul 2019 23:30:30 GMT - View all Kent, WA jobs
          

Director level, Finance Technology Global Operations Lead

 Cache   
  • As a key partner for the VP, FTS, drives the operations of both the run and project deployment efforts within the FTS organization
  • Oversees development, implementation and adoption of next-gen finance tech solutions throughout the J&J finance organization alongside key Finance and Technology leaders; coordinates innovation efforts through major project deployments such as Central Finance and Enterprise Performance Management
  • Supports cross-functional implementation efforts of new tech solutions within finance technology to improve reporting, planning, and analysis capabilities and ensures seamless integration into the business (processes, systems, people, etc.)
  • Acts as primary point of contact for the Vice President of Finance Technology Solutions for all cross-functional initiatives
  • Partners closely with Global Process Owners to ensure tech solution architecture aligns appropriately with business process designs and with the FS&T Project Management Organization to govern and manage ongoing tech project intake
  • Supports the development of the technology solutions roadmap, working closely with the lead Strategic Solutions Architect
  • A minimum of a Bachelor's degree is required
  • 10+ years of business operations experience required, preferably with end-to-end technology solutions architecture experience within a large-scale, transformational business environment
  • Experience translating business needs into technology solutions and managing cross-functional priorities
  • Expertise in industry best-practices for next-gen operational finance solutions (e.g. RPA, AI / Machine Learning, Blockchain, etc.)
  • Experience in developing run governance models so that technology solutions can effectively manage change, and keep pace with changing business environments
  • Experience in complex multi-team delivery models and practical application of technical methods and procedures
  • Deep knowledge of organizational systems, models, and interdependencies needed to align the organization to the FS&T agenda
  • Excellent at building strong relationships with peers and with other senior-level stakeholders
  • Up to 20% travel may be required
  • Skills to influence others and move toward a common vision
  • Flexible, adaptable, and able to thrive in ambiguous situations
  • Experience with large-scale transformation and process change efforts
  • Team-oriented attitude and ability to work collaboratively with and through others
          

Health Data Lead

 Cache   
Overview Are you ready to join an organization where you can make an extraordinary impact every day? Imagine all Americans enjoying ideal cardiovascular health free of heart disease and stroke. At the American Heart Association and American Stroke Association, we get to work toward that goal every day. Is it easy? No. Is it worthwhile? Absolutely. This is satisfying and challenging work that makes a real difference in people's lives. We are where you can achieve professional growth with personal fulfillment. We are where you can connect people to making a lifesaving impact. We are where you can partner with individuals, schools, lawmakers, healthcare providers and others to ensure everyone has access to healthier lifestyle choices and proper healthcare. The American Heart Association is where you can make an extraordinary impact. Responsibilities The Lead Health Data Science Assets is a new exciting role that offers a unique opportunity to lead data strategy across the organization! This role will work with our Emerging Strategies and Ventures Team and serve as a critical liaison between Emerging Health and Business strategies, the AHA's Mission Aligned Business and Health Solutions teams and other business segments. This role is vital to American Heart Association's efforts that bring to bear healthcare and science data as a core asset and growth driver, crafting an outstanding organization and capability that will support quantifiable outcomes, enable identification of new product opportunities and deliver unrivaled data partnerships that fuel creativity. We are looking for someone who is highly motivated, who is an expert data innovator, and who can share tangible results from their strategies, leadership actions. We also need someone who has current experience in large growth organizations where data is a core capability for creating outcomes. Essential Job Duties Develop and implement solutions built on a scalable and flexible architecture that will allow AHA to handle and use health data as an enterprise business asset Define and implement standard operating practices for health data collection, ingestion, storage, transformation, distribution, integration, and consumption within AHA's Health solutions portfolio. Lead all aspects of data access and distribution. Lead the design and delivery of Data Business Intelligence AI and automation solutions advisory engagements involving strategy, roadmap and longer-term operating models. Support the delivery of a broad range of data assets and analytics. Identify and demonstrate approaches, appropriate tools and methodologies. Run health data quality and security. Define data standards, policies and procedures ensuring effective and efficient data management across the company. Provide expertise and leadership in the disciplines of data governance, data quality and master data integration and architecture. Establish a data governance framework. Maintain and share data definitions, data integrity, security and classifications. Direct the continued design, build, and operations of our Big Data Platforms and Solutions. Help identify and understand data from internal and external sources for competitive, scenario and performance analyses, and financial modeling to gain insight into new and existing processes and business opportunities. Work with business teams on commercial and non-commercial opportunities. 
Advise on fair market value data value propositions Actively contribute to proposal development of transformation engagements focused on DataAnalytics AI and automation. Demonstrate thought leadership to advise teams on DataAnalytics, AI and automation strategy and detailed use cases development by industry. Possess deep understanding of trends and strategies for identifying solutions to meet objectives. Monitor technology trends and raise awareness of capabilities and innovations in selected domains of expertise Empower Data Architecture team to create optimized data pipelines, data storage and data transformation. Support the practice with depth of experience and expertise in the following domains: automation, machine learning, deep learning, advanced analytics, data science, data aggregation & visualization Qualifications Bachelors' degree from a globally recognized institution of higher learning is required, with an advanced degree (MS, PhD, equivalent) strongly preferred. 10 years experience in a company known for data innovation and excellence with responsibility for a comparably- sized analytics business 10 years experience in Health data, real world evidence analytics andor health informatics with the capability to design data strategies and source key health data, gain acceptability for methods, and build analyses for benefit-risk justifications, development and other regulatory needs Experience implementing and using cutting edge analytic tools and capabilities, including B2B, B2C and cross-channel integration tools A passion for and experience with big data-driven decision-making processes across business functions Demonstrated ability to work with technical team of product and data engineers, as well as data scientistsPhDs Consistent track record of successfully delivering top and bottom-line results individually and as part of a high-performance team. Outstanding oralwritten communication and presentation skills, especially with respect to clearly communicating complex data-driven topics to both technical and non-technical audiences. Strategic thinker, leadership, communication, people management skills and innovator with ability to work across segments to support tactical planning and deliver on the objectives of organization. Knowledge of Technology and Healthcare, Life sciences and Health-tech Industry trends Creative, collaborative thinker with an ability to learn new things, assess problems and identify proactive solutions quickly Self-starter, comfortable leading change and getting things done. Travel is required (at least 10%), including overnights. Location: Dallas, Texas is preferred. At American Heart Association - American Stroke Association, diversity, inclusion, and equal opportunity applies to both our workforce and the communities we serve as it relates to heart health and stroke prevention. Be sure to follow us on Twitter to see what it is like to work for the American Heart Association and why so many people enjoy #TheAHALife EOE MinoritiesFemalesProtected VeteransPersons with Disabilities Requisition ID 2018-3066 Job Family Group Business Operations Job Category Science & Research Additional Locations US-Anywhere US-Anywhere Location: Charleston,WV
          

Analytics Consultant 4 Senior Data Analyst Auto Control Analytics Job posting

 Cache   
Job Description At Wells Fargo, we want to satisfy our customers' financial needs and help them succeed financially. We're looking for talented people who will put our customers at the center of everything we do. Join our diverse and inclusive team where you'll feel valued and inspired to contribute your unique skills and experience. Help us build a better Wells Fargo. It all begins with outstanding talent. It all begins with you. Consumer Banking is an industry leader in supporting homeowners and consumers, in addition to operating one of the most extensive banking franchises in the country. We serve mass market, affluent, and small business customers; as well as provide home and personal lending. Our focus is on delivering an exceptional experience for our customers through financial advice and guidance coupled with providing the products and services that will help them realize their financial hopes and dreams. We've built our team of top professionals by rewarding their accomplishments and ensuring they have what's needed to succeed. As a senior-level individual contributor, you will be responsible for partnering with, and developing analytic solutions for the Wells Fargo Auto business within Consumer Banking. You will come to the table with a significant analytic toolkit that you will use to identify, measure and monitor risks arising in the business Control environment. Your work will be used to demonstrate the effectiveness of controls over key business processes and mitigate known and emerging risks. Your expertise will contribute to a strong analytical foundation that consolidates information across interactions, customers, team members and products. This foundation will, in turn, lead to the development of key metrics that will enhance accountability for risk management. In this role, you will be part of an immediate team of Analytic Consultants who also develop analytic solutions for the Wells Fargo Auto business Control environment, and part of a broader team that supports other businesses within Consumer Banking. You will report into the Analytic Manager with responsibility for Control Analytics covering this business area. Responsibilities include: - Collaborate with colleagues across your team, the Control team, and Front Line Operations to implement a program for routine reviews of data-driven monitoring, communication and continuous refinement of metrics. Relevant topics will include operational risk, conduct risk and sales practice risk, as applicable. - Conduct analysis to support the implementation of a data-driven, consistent process of surveillance and monitoring for the within the Wells Fargo Auto business Control environment. Participate and offer input in process reviews, collaborate with Control teams and Front Line Operations to define metrics, ensure robust data sourcing, build a data visualization layer, and facilitate ongoing reviews. - Collaborate with your colleagues in other Consumer Banking businesses to optimize and develop the ongoing risk reporting necessary to support first line accountability for Self-Assurance Activity monitoring and other mission-critical activities. - Design and implement Control analytics and metrics to supplement Front Line Quality Assurance (QA) and Quality Control (QC) programs. - Partner with Consumer Banking colleagues to research, implement and monitor workflow and real-time alerts capabilities. 
- Leverage business knowledge, analytic expertise, and a variety of data platforms to provide insight into risk trends and emerging risks. - Build an analytic framework to ensure timely resolution of Issues, in partnership with Control team and Front Line Operations leaders. Candidate: You will have demonstrated experience in developing analytics in a large, distributed, diverse organization. Ideal applications of analytics include self-assurance, monitoring and surveillance. To be successful, you will be able to prove that you can conduct research into business and related data processes, and that you can develop actionable analytics to inform, influence, and drive business outcomes. The ability to cultivate relationships with stakeholders and credibility will be key factors in your effectiveness. You will differentiate yourself with articulate communication, demonstrated leadership capability, and effective interpersonal interactions. Your strong understanding of business drivers and processes, ability to influence, and offer credible challenge when needed, will position you for success. Preferred locations: 301 South Tryon St, Charlotte, NC; 7001 Westown Pkwy, West Des Moines, IA; 550 South 4th St, Minneapolis, MN; 250 East John Carpenter Fwy, Irving, TX; or any WF footprint location Required Qualifications - 6+ years of experience in one or a combination of the following: reporting, analytics, or modeling; or a Masters degree or higher in a quantitative field such as applied math, statistics, engineering, physics, accounting, finance, economics, econometrics, computer sciences, or business/social and behavioral sciences with a quantitative emphasis and 4+ years of experience in one or a combination of the following: reporting, analytics, or modeling Desired Qualifications - Extensive knowledge and understanding of research and analysis - Strong analytical skills with high attention to detail and accuracy - Excellent verbal, written, and interpersonal communication skills Other Desired Qualifications - Deep knowledge of Consumer Lending business processes, products, data and systems, preferably with a focus on Wells Fargo Auto. - Demonstrated ability to drive projects forward in a complex, resource-constrained environment with conflicting needs across stakeholders. - Prior experience with different analytic approaches, including trend analysis, regression analysis, Natural Language Processing (NLP), and Machine Learning techniques. - Excellent analytical, critical thinking and problem-solving skills. - Leadership effectiveness; ability to drive successful execution of strategic plans; prioritize and set goals. Must possess strong collaboration skills. - Demonstrated ability to execute effectively in a matrixed organization, develop partnerships with many business and functional areas. - Ability to communicate powerfully and prolifically to senior and executive leaders and simplify the complex. - Ability to influence without direct authority, create and manage (while achieving results) large-scale change and influence people at all levels of the organization. - Ability to lead through adversity and adjust to changing priorities. - Experience preparing presentations and analysis for third-party and regulatory audiences. - Familiarity with the following systems and related data environments: AFS, ITOP, CRS, CARS, Auto IMS, iREPO, ACAPS, EXS, SHAW, and ECAR. - Demonstrated expertise with analytic tools such as SAS, SQL, Python or R. 
- Demonstrated experience with Big Data capabilities, such as Teradata Aster, Oracle Exadata, and Hadoop. Disclaimer All offers for employment with Wells Fargo are contingent upon the candidate having successfully completed a criminal background check. Wells Fargo will consider qualified candidates with criminal histories in a manner consistent with the requirements of applicable local, state and Federal law, including Section 19 of the Federal Deposit Insurance Act. Relevant military experience is considered for veterans and transitioning service men and women. Wells Fargo is an Affirmative Action and Equal Opportunity Employer, Minority/Female/Disabled/Veteran/Gender Identity/Sexual Orientation.
          

Staff Fellow

 Cache   
STAFF FELLOW AT THE FDA OFFICE OF WOMEN'S HEALTH
OFFICE OF WOMEN'S HEALTH, OFFICE OF THE COMMISSIONER
FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES

The Office of Women's Health (OWH), Office of the Commissioner (OC), Food and Drug Administration is recruiting a full-time Staff Fellow to work within the Office of Translational Sciences, Center for Drug Evaluation and Research (CDER). Successful candidates will be engaged in mining social media networks extensively to gather all sources of information that are not traditionally gathered as part of patient history but could provide important insights into risk factors for opioid and substance use disorder in pregnant women and relapse. This data will be collected with the intention of identifying interventions for prevention and treatment as well as rehabilitation post pregnancy. The work of the selected expert will include but not be restricted to:
- Building predictive models of user patterns of opioid usage using machine learning with an aim to develop AI algorithms to help engage in preventive intervention in "at-risk" populations
- Development of scientific manuscripts, authoring scientific posters, manuscripts and other scholarly dissemination related to opioid overuse disorder in pregnant women
- Coordinating with other federal agencies, academic and medical institutions for the collection of data and development of standardized algorithms and SOPs for mining social media or "unstructured" data
- Coordination of scientific conferences including development of background materials and subsequent white papers as necessary
- Collaboration with experts to develop presentations for national meetings
- Development of educational materials as necessary (e.g., webinars, PowerPoint presentations)
- Data aggregation and visualization from existing databases and statistical data analysis
- Participation in synergistic activities across FDA, the federal landscape and external stakeholders regarding opioid overuse in pregnant women
- Searching, reviewing, and communicating the scientific literature on relevant topics via written and verbal communication

Attention to the health of women has been an integral part of FDA's mission. The Agency has a role in protecting and promoting the health of the American people, specifically in approving drugs, devices and other products to assure they are safe and effective. The FDA also has a public health mandate to ensure that consumers use these regulated tools to maximize health benefits. The Agency accomplishes this mission by providing leadership, assistance and support and by taking an active role in communicating its science-based information to the consumers it serves, frequently turning to community organizations, women's advocacy groups, professional associations and national health education entities.

The mission of OWH is to:
- Serve as the principal advisor to the Commissioner and other key Agency officials on scientific, ethical, and policy issues relating to women's health
- Provide leadership and policy direction for the Agency regarding issues of women's health and coordinate efforts to establish and advance a women's health agenda for the Agency
- Promote the inclusion of women in clinical trials and the implementation of guidelines concerning the representation of women in clinical trials and the completion of sex/gender analyses
- Identify and monitor the progress of crosscutting and multidisciplinary women's health initiatives including changing needs, areas that require study, and new challenges to the health of women as they relate to FDA's mission
- Serve as the Agency's liaison with other agencies, industry, professional associations and advocacy groups with regards to the health of women.

Additional information about the FDA OWH can be obtained at:

The position will be filled through OWH's Research Fellowship program. The initial 2-year appointment will be funded through an OWH grant awarded and managed by CDER/OTS. Further consideration for extension of term may be considered by the Office of Translational Sciences. Applications will be accepted from U.S. citizens or Lawful Permanent Residents (green card holders) only. No previous Federal experience is required. Appointment does not confer any entitlement to a position in the competitive service, and there is no entitlement to Merit Systems Protection Board (MSPB) appeals rights. A one-year probationary period may apply.

QUALIFICATIONS: A PhD in statistics, computer science, mathematics, epidemiology with an emphasis on biological sciences, public health, health sciences, behavioral sciences, or a related field, with scientific writing and scientific literature research capabilities. Expertise in statistical analysis of real-world data such as patient reported outcomes and patient preferences would strengthen the applicant's potential for selection. The preferred candidate should have a comprehensive understanding of the latest big data science and big data analysis methods. The preferred candidate should not only understand the science in theory, but also have at least 2 years of hands-on experience applying different AI/ML/NLP modeling technologies for big data analysis. The candidate should be able to apply different AI modeling methods to a set of data, and be able to distinguish the pros, cons, and outcomes of the analysis. The candidate should have experience using different AI libraries or packages to design or prototype solutions. The candidate should also have experience defining and using training data sets, using structured and unstructured data sets, using multiple data sources, and performing data visualization. A minimum of two years' experience in areas reflecting the above listed professional activities is preferred. In addition, the candidate must have strong collaborative skills, excellent written and oral communication skills, and evidence of leadership potential.

HIGHLY DESIRABLE TECHNICAL QUALIFICATIONS:
- Knowledge of and programming experience in Python, to include the following considerations:
  - Topic modeling using Gensim in Python. Topic models are a suite of algorithms that uncover the hidden thematic structure in document collections
  - Clustering: automatic grouping of similar objects into sets
  - Classification: identifying to which category an object belongs
  - Regression
- Knowledge of and familiarity with extraction, mapping, and alignment of drug names and adverse event terms to standard systematized terminologies (such as RxNorm, MedDRA, MeSH, and ATC)
- Knowledge of social media data extraction, NLP for social media mining, and supervised ML classification methods
- Experience with the full Software Development Life Cycle (SDLC)
- Agile/Scrum methodology experience
- Understanding of NLP techniques for text representation, semantic extraction techniques, data structures and modeling
- Experience with machine learning frameworks (like Keras or PyTorch) and libraries (like scikit-learn)

SALARY: Salary commensurate with education/experience.
LOCATION: 10903 New Hampshire Avenue, Silver Spring, MD 20993
HOW TO APPLY: Please submit a statement of interest for the Staff Fellow position, resume or curriculum vitae (CV) detailing relevant experience, transcripts, and the names/contact information for three references to:
          

Application Owner - Executive Director - Cloud Identity Ac...

 Cache   
SunIRef:it Application Owner - Executive Director - Cloud Identity & Access Management (IAM) JP Morgan Chase 21,577 reviews - Jersey City, NJ 07310 JP Morgan Chase 21,577 reviews Read what people are saying about working here. Cybersecurity Technology Controls (CTC) delivers streamlined and consistent solutions supporting JPMorgan Chase's Controls, Access Management and IT Risk agendas, with a focus on stability, delivery, efficiencies and people. The goal of TC is to drive standardization, consistency and simplicity in a JPMorgan Chase architecture that fosters long-term productivity, quality and innovation across the entire enterprise. The disciplines within this organization are Oversight & Controls Technology, Identity & Access Management, IT Risk & Controls, and Third Party Risk Management. The Global Identity and Access Management (GIAM) organization within CTC provides access control governance and Identity Services for all lines of business (LOBs) globally, providing the right access to the right people at the right time for all technology platforms and applications supported by TC, and provides a comprehensive set of applications, tools, and staff to globally implement, monitor and manage technology risk solutions. As an experienced Software Engineer, your mission is to help lead our team of innovators and technologists toward creating next-level solutions that improve the way our business is run. Your deep knowledge of design, analytics, development, coding, testing and application programming will help your team raise their game, meeting your standards, as well as satisfying both business and functional requirements. Your expertise in various technology domains will be counted on to set strategic direction and solve complex and mission critical problems, internally and externally. Your quest to embracing leading-edge technologies and methodologies inspires your team to follow suit. And best of all, you'll be able to harness massive amounts of brainpower through our global network of technologists from around the world. GIAM is seeking an Application Owner/Delivery Manager to run the engineering team that devises and maintains the firm's Cloud identity solutions. This solution enables other teams to leverage our products when building their cloud native solutions. The role entails R & D, engineering, integration and support, and as such require extensive experience in designing and delivering enterprise grade IT solutions. As an experienced Software Engineer, your mission is to help lead our team of innovators and technologists toward creating next-level solutions that improve the way our business is run. Your deep knowledge of design, analytics, development, coding, testing and application programming will help your team raise their game, meeting your standards, as well as satisfying both business and functional requirements. Your expertise in various technology domains will be counted on to set strategic direction and solve complex and mission critical problems, internally and externally. Your quest to embracing leading-edge technologies and methodologies inspires your team to follow suit. And best of all, you'll be able to harness massive amounts of brainpower through our global network of technologists from around the world. The ideal candidate will have a proven track record of leading software development or product engineering teams in delivering enterprise grade solutions. They will have experience in delivering outcomes through agile software delivery and DevOps. 
The candidate should have excellent communications skills and be able to build strong relationships with senior leaders. Knowledge of Identity & Access Management (IAM) concepts including Cloud Identity approaches is a must. Responsibilities: Responsible for the technical integrity of the team's delivery. Provide analysis and estimation of future work impacting our team. Provide technical oversight to scrum masters/project managers during implementation and enhancement cycles Work closely with the risk control teams, delivery leads and vendors during the risk assessment activities for the third party solutions. Serve as a primary liaison with the third party vendors for ongoing technical tasks related to the solution e.g. breaks, defects, patches & upgrades. Mentor team members to progress their technical and professional skills. Build and maintain relationships with internal (business and technology team members) and third party vendors. Triage technical issues and lead teams toward solving problems. Plan team capacity to accommodate demands. Qualifications Bachelor's degree in Computer Science, Software Engineering, or equivalent 10+ years of technology experience, including 4 years of technical product delivery and management Experience with leading projects through all phases of a software development lifecycle. Candidates must be self-motivated and confident in ambiguous circumstances. Exceptional written and verbal communication skills, including experience with executive level communication. Ability to build strong internal (client) and external (vendor) relationships Leadership by example, coaching and creating an environment for continuous improvement and technical excellence Extensive knowledge and experience working in an Agile environment. (JIRA, Confluence, Git, etc) Familiarity with modern software engineering methodologies - DevOps, TDD, CI/CD Technologies Proficient at one or more programming languages: Java a plus Proficient with the internals of distributed operating systems: Unix/Linux, Windows, Z/OS Experienced in one or more scripting languages: Python, PowerShell APIs and Microservices Excellent understanding of compute infrastructures, computing services, operating systems, applications, databases, middleware, and management systems. Familiarity with IT control processes around risk and compliance Candidates with the following skills will have an added advantage Security domain concepts related to Authentication, Authorization, SAML, OAuth, Kerberos, Digital Certificates Experience with Privileged Access solutions such as CyberArk a plus When you work at JPMorgan Chase & Co., you're not just working at a global financial institution. You're an integral part of one of the world's biggest tech companies. In 14 technology hubs worldwide, our team of 40,000+ technologists design, build and deploy everything from enterprise technology initiatives to big data and mobile solutions, as well as innovations in electronic payments, cybersecurity, machine learning, and cloud development. Our $9. 5B+ annual investment in technology enables us to hire people to create innovative solutions that will not only transform the financial services industry, but also change the world. At JPMorgan Chase & Co. we value the unique skills of every employee, and we're building a technology organization that thrives on diversity. We encourage professional growth and career development, and offer competitive benefits and compensation. 
If you're looking to build your career as part of a global technology team tackling big challenges that impact the lives of people and companies all around the world, we want to meet you. Ready to use your expertise and experience to drive change?
          

Sr NLP Data Scientist

 Cache   
# of Openings

1

Category (Portal Searching)

Information Technology

OVERVIEW

Who we are.

Ciox Health, a health technology company, is dedicated to improving U.S. health outcomes by transforming clinical data into actionable insights. With an unmatched network offering ubiquitous access, Ciox Health can release, acquire, enhance and deliver medical record and discrete clinical data from anywhere across the United States.

What we offer.

At Ciox Health we offer all employees a place to grow and expand their current skills so that they can not only help build Ciox Health into the greatest health technology company but create a career that you can be proud of. We offer you complete training and long-term career goals. Our environment is what most of our employees are the proudest of and our IT Group is comprised of some of the brightest and talented individuals. Give us just a few moments to explain why we need you and hope you will help us change how the health Industry manages its medical records.



Be a part of transforming the exchange of clinical data using the most advanced technology available. Ciox Health is on a mission to simplify the exchange of medical information. By partnering with the healthcare providers who hold health data and those who are requesting it Ciox is uniquely positioned to access, facilitate and improve the management and exchange of protected health information. Data Scientists are pioneers in leveraging data with Natural Language Processing (NLP) and Machine Learning (ML) algorithms to drive better clinical actions and outcomes. By partnering with business leaders at Ciox Health, this role will validate varying hypothesis to support business strategy into reality and make a very visible impact to consumers of data and improve the bottom line performance of Ciox Health. This Data Scientist Role requires minimum of 6 years of Data Scientist experience outside of Education.

RESPONSIBILITIES

* Data exploration and discovery of new uses for existing data sources

* Partner with management and business units on innovative ways to successfully utilize data and related AI/ML/NLP tools to advance business objectives and develop efficiencies

* Provide oversight to the application engineering team so that they can interpret and monitor usage of ML models and continuously measure & tune their accuracy

* Work with product / business team to identify possible avenues to apply AI/ML

* Develop hypothesis and evaluate the performance of various NLP and AI/ML algorithms to address the business opportunity

* Perform analyses using statistical packages / languages such as Python or Spark

* Provide guidance to the application engineering team so that they can build, deploy and support AI/ML models in production

* Develop subject matter expertise on source systems data and metadata

* Gain and master a comprehensive understanding of operations, processes, and business objectives and utilize that knowledge for data analysis and business insight

QUALIFICATIONS

* Master's degree or higher in a quantitative or relevant field (Statistics, Math, Economics, Engineering, Computer Science, Business Analytics, Data Science)

* 3 or more years of work experience in practicing NLP and data science in business, with more than 10 years of overall IT experience

* Experience setting up a Data Scientist Group/Process

* Experience in leading large-scale data science projects and delivering from end to end

* Strong proficiency in Python & scripting in general.

* Strong experience in data management and analysis with relational and NoSQL database

* Excellent problem solving and critical thinking capabilities.

* Experience with NLP technology

* Experience with Python (sklearn et al), Spark, Scala, or Java

* Strong foundational quantitative knowledge and skills

* Strong experience in SQL and database management

Ciox provides equal employment opportunities to all associates and applicants for employment without regard to as race, color, national origin, genetic information, religion or religious creed, sex (including pregnancy, childbirth and related medical conditions), gender, gender identity, gender expression, sexual orientation, age, marital status, physical or mental disability, citizenship status, ancestry, military and veteran status, or any other characteristic as protected by state or federal law. Equal employment opportunity applies to all terms and conditions of employment, including hiring, placement, promotion, termination, layoff, recall, transfer, leave of absence, compensation, benefits, leaves of absence, and training.
          

Data Scientist

 Cache   
SunIRef:it Data Scientist Verizon 25,481 reviews - Alpharetta, GA 30004 Verizon 25,481 reviews Read what people are saying about working here. What you'll be doing. Be a part of the team that identifies opportunities for using data analysis to enhance the Verizon Internal Audit team's role. The team is designed to add value and improve operations within Verizon's Internal Audit department to provide data analytics, data mining, and continuous auditing strategies and tactics. You along with your team members will provide both advisory and analytical support by identifying, developing, documenting, or executing analytics during all relevant stages of an audit. Bringing a systematic and disciplined approach to evaluating and improving the effectiveness of the overall control environment, risk management, and governance processes. Conducting stand-alone projects, including reporting dashboards, process automations, continuous auditing/ monitoring and risk assessment models. Gaining increasing levels of responsibilities and presenting to senior management. Conducting audit assist and risk modeling. What we're looking for. You'll need to have: Bachelor's degree or four or more years of work experience. Four or more years of relevant work experience. Willingness to travel. Even better if you have: Bachelor's degree in Management Information Systems, Computer Science, Accounting or any other related discipline. Business analytical skills; ability to apply business logic to design and implement data mining techniques on large data sets. Knowledge of predictive and prescriptive analytics, data mining and machine learning (Python and R preferred). Projects experience of creative and critical thinking. Experience in the use of Teradata SQL, MS SQL server, and Oracle SQL. Experience with data visualization, particularly creating dashboards and executive reporting (Tableau or other). Experience designing, developing, implementing and maintaining a database and programs to manage data analysis efforts. Experience with data warehousing or analytics in a cloud environment such as AWS. Knowledge of working with self-serve analytics tools for business users. Knowledge of the tools, technologies and practices needed to perform in-depth analysis of both structured transactional data, and semi-structured or unstructured data. Ability to work independently and within a team in a fast changing environment with changing priorities and changing time constraints. Strong interpersonal skills and ability to multi-task. Ability to interpret business requests and communicate findings in an intelligible manner. Ability to communicate technical findings to non-technical audiences. Knowledge of risk management methodology and factors. When you join Verizon. You'll have the power to go beyond - doing the work that's transforming how people, businesses and things connect with each other. Not only do we provide the fastest and most reliable network for our customers, but we were first to 5G - a quantum leap in connectivity. Our connected solutions are making communities stronger and enabling energy efficiency. Here, you'll have the ability to make an impact and create positive change. Whether you think in code, words, pictures or numbers, join our team of the best and brightest. We offer great pay, amazing benefits and opportunity to learn and grow in every role. Together we'll go far. 
Equal Employment Opportunity We're proud to be an equal opportunity employer - and celebrate our employees' differences, including race, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, and Veteran status. Different makes us better.
          

Health Data Lead

 Cache   
Overview Are you ready to join an organization where you can make an extraordinary impact every day? Imagine all Americans enjoying ideal cardiovascular health free of heart disease and stroke. At the American Heart Association and American Stroke Association, we get to work toward that goal every day. Is it easy? No. Is it worthwhile? Absolutely. This is satisfying and challenging work that makes a real difference in people's lives. We are where you can achieve professional growth with personal fulfillment. We are where you can connect people to making a lifesaving impact. We are where you can partner with individuals, schools, lawmakers, healthcare providers and others to ensure everyone has access to healthier lifestyle choices and proper healthcare. The American Heart Association is where you can make an extraordinary impact. Responsibilities The Lead Health Data Science Assets is a new exciting role that offers a unique opportunity to lead data strategy across the organization! This role will work with our Emerging Strategies and Ventures Team and serve as a critical liaison between Emerging Health and Business strategies, the AHA's Mission Aligned Business and Health Solutions teams and other business segments. This role is vital to American Heart Association's efforts that bring to bear healthcare and science data as a core asset and growth driver, crafting an outstanding organization and capability that will support quantifiable outcomes, enable identification of new product opportunities and deliver unrivaled data partnerships that fuel creativity. We are looking for someone who is highly motivated, who is an expert data innovator, and who can share tangible results from their strategies, leadership actions. We also need someone who has current experience in large growth organizations where data is a core capability for creating outcomes. Essential Job Duties Develop and implement solutions built on a scalable and flexible architecture that will allow AHA to handle and use health data as an enterprise business asset Define and implement standard operating practices for health data collection, ingestion, storage, transformation, distribution, integration, and consumption within AHA's Health solutions portfolio. Lead all aspects of data access and distribution. Lead the design and delivery of Data Business Intelligence AI and automation solutions advisory engagements involving strategy, roadmap and longer-term operating models. Support the delivery of a broad range of data assets and analytics. Identify and demonstrate approaches, appropriate tools and methodologies. Run health data quality and security. Define data standards, policies and procedures ensuring effective and efficient data management across the company. Provide expertise and leadership in the disciplines of data governance, data quality and master data integration and architecture. Establish a data governance framework. Maintain and share data definitions, data integrity, security and classifications. Direct the continued design, build, and operations of our Big Data Platforms and Solutions. Help identify and understand data from internal and external sources for competitive, scenario and performance analyses, and financial modeling to gain insight into new and existing processes and business opportunities. Work with business teams on commercial and non-commercial opportunities. 
Advise on fair market value data value propositions Actively contribute to proposal development of transformation engagements focused on DataAnalytics AI and automation. Demonstrate thought leadership to advise teams on DataAnalytics, AI and automation strategy and detailed use cases development by industry. Possess deep understanding of trends and strategies for identifying solutions to meet objectives. Monitor technology trends and raise awareness of capabilities and innovations in selected domains of expertise Empower Data Architecture team to create optimized data pipelines, data storage and data transformation. Support the practice with depth of experience and expertise in the following domains: automation, machine learning, deep learning, advanced analytics, data science, data aggregation & visualization Qualifications Bachelors' degree from a globally recognized institution of higher learning is required, with an advanced degree (MS, PhD, equivalent) strongly preferred. 10 years experience in a company known for data innovation and excellence with responsibility for a comparably- sized analytics business 10 years experience in Health data, real world evidence analytics andor health informatics with the capability to design data strategies and source key health data, gain acceptability for methods, and build analyses for benefit-risk justifications, development and other regulatory needs Experience implementing and using cutting edge analytic tools and capabilities, including B2B, B2C and cross-channel integration tools A passion for and experience with big data-driven decision-making processes across business functions Demonstrated ability to work with technical team of product and data engineers, as well as data scientistsPhDs Consistent track record of successfully delivering top and bottom-line results individually and as part of a high-performance team. Outstanding oralwritten communication and presentation skills, especially with respect to clearly communicating complex data-driven topics to both technical and non-technical audiences. Strategic thinker, leadership, communication, people management skills and innovator with ability to work across segments to support tactical planning and deliver on the objectives of organization. Knowledge of Technology and Healthcare, Life sciences and Health-tech Industry trends Creative, collaborative thinker with an ability to learn new things, assess problems and identify proactive solutions quickly Self-starter, comfortable leading change and getting things done. Travel is required (at least 10%), including overnights. Location: Dallas, Texas is preferred. At American Heart Association - American Stroke Association, diversity, inclusion, and equal opportunity applies to both our workforce and the communities we serve as it relates to heart health and stroke prevention. Be sure to follow us on Twitter to see what it is like to work for the American Heart Association and why so many people enjoy #TheAHALife EOE MinoritiesFemalesProtected VeteransPersons with Disabilities Requisition ID 2018-3066 Job Family Group Business Operations Job Category Science & Research Additional Locations US-Anywhere US-Anywhere Location: El Paso,TX
          

Lead Platform Engineer - Machine Learning | Parks, Experiences and Products

 Cache   
Lake Buena Vista, Florida. Responsibilities: Lead a team of engineers to design and develop production-grade frameworks for feature engineering, model architecture selection, model training, model interpretability, A/B test
          

Data Scientist

 Cache   
The data scientist position is part of the DuPont core Science & Innovation competency in data discovery and data analytics. As Data Strategist, you will be a consultant to multiple business Research and Development (R&D) groups to help them understand and develop comprehensive data strategies and analytics solutions. You will be part of a team that works closely with internal stakeholders from emerging technologies, R&D, engineering, business and marketing, and other data/informatics teams in business units, to turn data into critical information and knowledge that can be used to make sound business decisions. You will be a critical competency in a fast-growing team that contributes to decisions that impact the company's growth and innovation. The role must understand the R&D process, data pipelines, and data analytics methods that are used to collect, structure and analyze data. You will work with stakeholders to understand business objectives, define key performance indicators, and provide relevant analysis, insights and recommended actions.

The Role & Responsibilities

Your key responsibilities:
* Play a key role in the development of data integration and analytics strategies for various groups to help shape the future of what data-centric R&D looks like.
* Evaluate business requirements and work closely with stakeholders to identify key business needs and translate them into a clear data solution.
* Focus on helping R&D organizations achieve transformational change by designing, developing, and executing data solutions and work streams, or enabling these organizations to do so more effectively.
* Define and implement the processes, mindsets, technologies, and expertise required to convert data to action.
* Establish a data roadmap, data management processes, and analytical platforms.
* Continuously strive to improve the effectiveness of the data management strategy by identifying new opportunities for data and analytics to advance R&D needs, as well as identifying and conveying data quality and gaps.
* Summarize and convey data findings to both a technical and non-technical audience.
* Remain current with emerging technologies and industry best practices; guide others on major strategies and methodologies.
* Prepare and deliver presentations and/or workshops to educate organizational leaders, colleagues, and other business departments.
* Stay current/relevant in and update job knowledge by participating in educational opportunities, reading professional publications, maintaining personal networks and participating in professional organizations.
* Other duties as assigned.

Skills
* Ability to conceive and portray overall needs and construct an overall solution architecture.
* Ability to work well under pressure and within tight deadlines.
* Ability to communicate effectively (especially highly technical data to people without a technical background), drive consensus, and influence relationships at all levels.
* Strong analytical and problem-solving skills; sound judgment and demonstrated leadership skills.
* Eager to learn and support the business strategy, and desire to work on strategic projects.

Your qualification profile:
* Graduate degree in a science or engineering field. Minimum of 3 years of experience in data science and data architecture (including PhD research).
* Experience working with cross-functional teams (business, data science, and IT) to ensure meaningful data collection or connections, with responsibility for results, including costs and methods.
* Passionate and skilled with technology, including:
  * Statistics, Machine Learning, and AI in R and/or Python.
  * Data Engineering experience with deep understanding of various data management, integration and visualization technologies (RDBMS, NoSQL, Spark, PowerBI, Spotfire, etc.).
  * Experience with text data platforms and analysis methods (NLP, ontology, data linkage, MarkLogic, etc.).
  * Computer programming (XML, JSON, Angular, Java, etc.).
  * Logical and physical data modeling.
  * Other preferred skills: Cloud and PaaS, IoT, Computational Modeling, Image Analysis.
* Outstanding people and communication skills (i.e., the ability to structure and synthesize within your communication).
* Demonstrated experience with forming and implementing data strategy on management, governance, architecture, and analytics approaches.
* Well versed in the ingestion, transformation, movement, sharing, stewardship and quality of data and information.
* Experience in a fast-paced agile development environment, and an ability to execute against aggressive timelines.

At DuPont, we have an unbridled commitment to deliver essential innovations that enrich people's lives, enable sustainable development and foster human potential for generations to come. Innovations developed from highly engineered products and naturally sourced ingredients continue to shape industries and everyday life. From smarter homes to more efficient cars, from better ways of digitally connecting to new tools that enable active and healthy lifestyles - in all these areas and many more, we're working with customers to transform their ideas into real world answers that help humanity thrive. Coupled with our core values and excellent compensation & benefits, together we're turning possibilities into real world answers that help humanity prosper! Come realize how you can make an impact, act like an owner and partner with customers in our journey. Please access the following link to better understand & appreciate DuPont's Journey.

EOE/DIVERSITY STATEMENT
DuPont is an equal opportunity employer. Qualified applicants will be considered without regard to race, color, religion, creed, sex, sexual orientation, gender identity, marital status, national origin, age, veteran status, disability or any other protected class. If you need a reasonable accommodation to search or apply for a position, please visit our Accessibility Page for Contact Information. For US Applicants: See the Equal Employment Opportunity is the Law poster at http //*********************************************************. For our U.S. Affirmative Action Policy, click here.

Job: Research & Development
Primary Location: NA-United States-Delaware-Wilmington
Organization: Corporate
Education Level: Doctorate Degree (over 19 years)
Schedule: Full-time
Employee Status: Regular
Job Type: Experienced
          

Senior Software Engineer - Innovation Center

 Cache   
About Clearwater Analytics

Clearwater Analytics is a global SaaS solution for automated investment data aggregation, reconciliation, accounting, and reporting. Clearwater helps thousands of organizations make the most of investment portfolio data with cloud-native software and client-centric servicing. Every day, investment professionals worldwide trust Clearwater to deliver timely, validated investment data and in-depth reporting.

Clearwater aggregates, reconciles, and reports on more than $3 trillion in assets across thousands of accounts daily for our Fortune 500 clients.

DESCRIPTION

The Innovation Center at Clearwater Analytics solves significant problems with new technology and techniques. The Innovation Center explores and uses machine learning, RPA, blockchain and any other technology that creates step-change for our clients, markets and employees. Clearwater's system is used by some of the world's largest technology firms, fixed income asset managers, and custodian banks. These firms rely on Clearwater's ability to solve difficult, seemingly impossible problems. Clearwater's Innovation is a key driver of those solutions.

Clearwater is looking for talented individuals who thrive on solving problems and developing new skills. We offer a competitive compensation package, exposure to cutting-edge financial market issues & information, business casual workplace, beautiful surroundings and work-life balance.

Responsibilities


  • Developing a solution to a problem in a way that hasn't been done before that has had dramatic positive results
  • Changing the technical direction of a team through persuasion, leadership, and force of will onto a better path
  • Leading a community of interest in a technology or domain that is not a standard part of the enterprise, but whose adoption would significantly impact the company for the better
  • Demonstrating the ability to decompose problems to their root causes and then follow an engineered approach to finding appropriate solutions

    REQUIREMENTS


    • 5+ years architecting and engineering critical systems
    • Proven innovation track record
    • Deep curiosity

    • Fluent with functional, imperative and object-oriented languages; knowledge of Java, Clojure, or JavaScript would be especially useful
    • Experience building complex web systems that have been successfully delivered to customers
    • Experience in communicating with users, other technical teams and management to collect requirements, identify tasks, provide estimates and meet production deadlines
    • Experience implementing and consuming large scale web services
    • Eagerness and willingness to learn new technologies

      Desired experience and skills:


      • Bachelor's degree in Computer Science or related field
      • Experience creating and consuming web services
      • Experience developing and designing a public facing API
      • Experience with database scaling and design

        What we offer:


        • Headquarters in the heart of downtown Boise
        • Business casual atmosphere in a flexible working environment
        • Team focused culture that promotes innovation and ownership
        • Access to cutting edge investment reporting technology and expertise
        • Continual learning, professional development and growth opportunities
        • Competitive salary and benefits package; including health, vision and dental
        • Additional benefits including PTO, 401(k) with 4% employer match

          

Senior Cyber Security Network Planner

 Cache   
What you'll be doing...

Global Network & Technology (GN & T) Security Planning is seeking a Cyber Security Network Planner to lead development of end-to-end security architectures and roadmaps for Verizon's internal and external networks. The security network planner will work to ensure security is built in from the beginning of their programs instead of being bolted on after the fact. The network security planner will use their system-level knowledge of network segmentation to ensure Verizon has managed all significant security risks and eliminated redundant security capabilities. The network security planner will also work with academic and research institutions to identify and mitigate long-term security threats affecting our networks.

As a member of the System SME team, you will work in a fast-paced environment focused on planning and managing security risk for Verizon's most critical systems. You will interact with the engineers operating Verizon's networks, security engineering and network operations teams, and the Verizon CISO organization to ensure your recommendations address operational considerations. You will leverage the Domain SME team in GN & T Security Planning to ensure your network deliverables account for all security domain considerations (e.g., security engineering, IAM, network/asset/data security, software development, assessment, testing, and operations). You will collaborate with vendors and the broader network and security communities to stay up to speed on the latest security developments and ensure their future capabilities align with Verizon's needs.

Define objectives, technical work, and timeline for developing network security architectures, roadmaps, and requirements. Build relationships with program, engineering, operations, security, and CISO teams to understand how to develop plans that effectively manage Verizon's security risks. Communicate progress and findings, and ensure successful handoff of deliverables to program and operational teams. Build domain knowledge of Verizon's environment to understand long-term risk areas that will develop as the systems evolve. Provide thought leadership by participating in network and security forums and collaborating with academic and research institutions.

What we're looking for...

You'll need to have:
Bachelor's degree or four or more years of work experience.
Six or more years of relevant work experience.
Experience with routing, switching, and firewall architecture and policy design.

Even better if you have:
A degree in a STEM field (Computer Science, Electrical Engineering, or Computer Engineering).
Six or more years of experience related to computer or network security.
Knowledge of routing (BGP, OSPF, EIGRP, IS-IS) and switching protocols (VLAN, VXLAN).
Experience in Linux administration.
VNF/CNF experience.
Knowledge of Cisco and Juniper network infrastructures.
Ability to work independently on multiple high-priority projects.
Ability to manage multiple high-visibility, complex technical projects.
Strong problem-solving skills.
Strong written and oral communication skills.
Experience with other key Verizon system areas such as LTE, 5G, IoT, big data, artificial intelligence, machine learning, cloud computing, etc.
Experience building security architectures, roadmaps, and program requirements.
Willingness to travel up to 25%.

22CyberARCH 22CyberNET

When you join Verizon...

You'll have the power to go beyond - doing the work that's transforming how people, businesses and things connect with each other. Not only do we provide the fastest and most reliable network for our customers, but we were first to 5G - a quantum leap in connectivity. Our connected solutions are making communities stronger and enabling energy efficiency. Here, you'll have the ability to make an impact and create positive change. Whether you think in code, words, pictures or numbers, join our team of the best and brightest. We offer great pay, amazing benefits and opportunity to learn and grow in every role. Together we'll go far.

Equal Employment Opportunity

We're proud to be an equal opportunity employer - and celebrate our employees' differences, including race, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, and Veteran status. Different makes us better.
          

Data Scientist

 Cache   
AZ-Phoenix. Solugenix has been delivering comprehensive, managed-services IT solutions known for their innovation, value and dependability for the past 50 years. With a strong emphasis on innovation, Solugenix has now created an independently branded Data Science and AI-based incubation center for building next-generation solutions which include new and emerging technologies around Machine Learning, Artificial
          

Public Cloud Product Owner

 Cache   
Education: Bachelor's Degree
Skills: AWS, Organizational Skills, Product Management, Agile
Benefits: Health Insurance

The Chief Technology Office (CTO) provides solutions to guide technology across the firm globally, removing inefficiencies and streamlining how we deliver quality solutions. We're continuing to evolve from building next-gen platforms to guiding architectures that unlock their capabilities, automating how we take code from inception to production. We're focused on optimizing how apps are designed for the future, targeting solutions that are portable across multi-cloud platforms to stay resilient, scalable and maintainable. The CTO will drive Modern Engineering Practices across the firm and provide the pathway for technologists to improve their speed, quality and application development practices.

Global Technology's Cloud Transformation Program defines and implements JPMorgan Chase's enterprise cloud strategy. This strategy enables a shift from specialized, dedicated infrastructure to elastic, self-provisioned public and private cloud infrastructure.

The Apollo Product Area Owner is a member of the Public Cloud Product Team and is responsible for one or more products that contribute to a product initiative. The Apollo Product Area Owner is accountable for meeting customer and stakeholder needs within the goals and timescales established by the Public Cloud Product Team. The Apollo Product Area Owner is also a member of the agile team responsible for defining user stories and acceptance criteria. The Apollo Product Area Owner prioritizes the backlog to streamline the execution of program priorities while maintaining the integrity of the product.

Responsibilities
- Engages with and represents the stakeholders and customers for the product
- Ensures the product delivers value to the target applications and associated development teams
- Ensures the product delivers value to other stakeholders (for example, SRE and Cyber)
- Is an Apollo and AWS product subject matter expert (SME) / specialist
- Manages the product backlog, ensuring:
  - It is visible and understandable to all parties (stakeholders and developers)
  - It is expressed in terms of customer value but also has sufficient technology definitions
  - It has appropriate acceptance criteria that help define the scope of the user story
- Ensures collaboration and alignment is reached on the goals and timescales for the product within the Public Cloud Product Team
- Works with the Public Cloud Product Team to define measures for the ROI of the initiative
- Definition of Ready: clearly expressing Product Backlog items and ensuring the Development Team understands items in the Product Backlog to the level needed
- Defines the scope of the product with phased deliveries where appropriate
- Identifies and manages product risks and dependencies, escalating to the Public Cloud Product Team where necessary
- Collaborates with all Product Area Owners and the Public Cloud Product Team to help manage priorities and initiative scope
- Collaborates with the Product Area Owners to define metrics from the product delivery teams to help track progress and inter-team dependencies
- Collaborates with the Tech Leads to gather metrics, inputs, and impediments from the feature teams to help track progress on initiatives
- Develops a roadmap of product releases that support customer needs

This role requires a wide range of strengths and capabilities, including:
- BS/BA degree or equivalent experience
- Expert knowledge in product management processes across an entire line of business, as well as expertise in other lines of business and technology disciplines
- Experience working with high-performing teams in complex program execution
- A strong understanding of Agile methods, stakeholder management, risk management and operations
- Ability to create and maintain relationships with a wide range of stakeholders throughout the firm
- Cloud knowledge with the goal of becoming a subject matter expert in a specific cloud technology area
- Excellent written and oral communications skills
- Strong leadership and organizational skills

When you work at JPMorgan Chase & Co., you're not just working at a global financial institution. You're an integral part of one of the world's biggest tech companies. In 20 technology centers worldwide, our team of 50,000 technologists design, build and deploy everything from enterprise technology initiatives to big data and mobile solutions, as well as innovations in electronic payments, cybersecurity, machine learning, and cloud development. Our $10B+ annual investment in technology enables us to hire people to create innovative solutions that are transforming the financial services industry. At JPMorgan Chase & Co. we value the unique skills of every employee, and we're building a technology organization that thrives on diversity. We encourage professional growth and career development, and offer competitive benefits and compensation. If you're looking to build your career as part of a global technology team tackling big challenges that impact the lives of people and companies all around the world, we want to meet you.
          

Machine Learning Engineer

 Cache   
Join Hired and find your dream job as a Machine Learning Engineer at one of 10,000+ companies looking for candidates just like you. Companies on Hired apply to you, not the other way around. You'll receive salary and compensation details upfront - before the interview - and be able to choose from a variety of industries you're interested in, to find a job you'll love in less than 2 weeks. We're looking for a talented AI expert to join our team.

Responsibilities
Engaging in data modeling and evaluation
Developing new software and systems
Designing trials and tests to measure the success of software and systems
Working with teams and alone to design and implement AI models

Skills
An aptitude for statistics and calculating probability
Familiarity with Machine Learning frameworks, such as Scikit-learn, PyTorch, Keras and TensorFlow
An eagerness to learn
Determination - even when experiments fail, the ability to try again is key
A desire to design AI technology that better serves humanity

These Would Also Be Nice
Good communication - even with those who do not understand AI
Creative and critical thinking skills
A willingness to continuously take on new projects
Understanding the needs of the company
Being results-driven

Requirements:

Hired
          

Python, Hadoop, and Machine Learning Software Engineer

 Cache   
Python, Hadoop, and Machine Learning Software Engineer - JP Morgan Chase - Wilmington, DE 19803

As a member of our Software Engineering Group we look first and foremost for people who are passionate about solving business problems through innovation & engineering practices. You will be required to apply your depth of knowledge and expertise to all aspects of the software development lifecycle, as well as partner continuously with your many stakeholders on a daily basis to stay focused on common goals. We embrace a culture of experimentation and constantly strive for improvement and learning. You'll work in a collaborative, trusting, thought-provoking environment - one that encourages diversity of thought and creative solutions that are in the best interests of our customers globally.

This role requires a wide variety of strengths and capabilities, including:
BS/BA degree or equivalent experience
Advanced knowledge of application, data and infrastructure architecture disciplines
Understanding of architecture and design across all systems
Working proficiency in developmental toolsets
Knowledge of industry-wide technology trends and best practices
Ability to work in large, collaborative teams to achieve organizational goals, and passionate about building an innovative culture
Proficiency in one or more modern programming languages, including Python and Hadoop
Understanding of software skills such as business analysis, development, maintenance and software improvement
Experience with Machine Learning, Deep Learning, Data Mining, and/or Statistical Analysis tools
Strong hands-on experience with developing and deploying machine learning based models, statistical models, data mining, and business rules
Background in basic machine learning techniques including supervised, unsupervised, reinforcement and deep learning
Experience with machine learning tools such as Scikit-learn, Pandas, TensorFlow, SparkML, SAS, R, H2O, Keras, Caffe, Theano, etc.
At least 5 years of hands-on experience with various programming models such as Spring, Java, Python, or C/C++

Our Consumer & Community Banking Group depends on innovators like you to serve nearly 66 million consumers and over 4 million small businesses, municipalities and non-profits. You'll support the delivery of award-winning tools and services that cover everything from personal and small business banking as well as lending, mortgages, credit cards, payments, auto finance and investment advice. This group is also focused on developing and delivering cutting-edge mobile applications, digital experiences and next generation banking technology solutions to better serve our clients and customers.

When you work at JPMorgan Chase & Co., you're not just working at a global financial institution. You're an integral part of one of the world's biggest tech organizations. In our global technology centers, our team of 50,000 technologists design, build and deploy everything from enterprise technology initiatives to big data and mobile solutions, as well as innovations in electronic payments, cybersecurity, machine learning, and cloud development. Our $11B annual investment in technology enables us to hire people to create innovative solutions that are transforming the financial services industry. At JPMorgan Chase & Co. we value the unique skills of every employee, and we're building a technology organization that thrives on diversity. We encourage professional growth and career development, and offer competitive benefits and compensation. If you're looking to build your career as part of a global technology team tackling big challenges that impact the lives of people and companies all around the world, we want to meet you.
          

Data Scientist

 Cache   
OVERVIEW

Are you a problem solver, explorer, and knowledge seeker always asking, What if?



If so, you may be the new team member we're looking for. Because at SAS, your curiosity matters whether you're developing algorithms, creating customer experiences, or answering critical questions. Curiosity is our code, and the opportunities here are endless.



What we do

We're the leader in analytics. Through our software and services, we inspire customers around the world to transform data into intelligence. Our curiosity fuels innovation, pushing boundaries, challenging the status quo and changing the way we live.



What you'll do

As a Data Scientist at SAS and a member of the analytics team, you will analyze customer data and build high-end analytical models for solving high-value business problems, such as credit and debit card fraud, online banking fraud, credit risk, network security, and other intriguing problems.



You will:

* Process and analyze large volumes of (customer) data.

* Build predictive models with advanced machine learning algorithms such as Neural Networks, Decision Trees, Boosting/Ensemble methods, Clustering, and Online learning (a brief illustrative sketch follows this list).

* Interact with customers from the data analysis stage to the final report presentation.

* Assist in technical sales support as needed.

* Constantly innovate by building new variables; improve modeling techniques to boost model performance; maintain and refine the processes and procedures for building high-end analytic modeling solutions.

* Write coherent reports and make presentations on high-end analytical projects.
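
To make the modeling bullet above concrete, here is a minimal, purely illustrative Python sketch of the kind of boosted-ensemble classifier such a fraud project might start from. The file name, column names, and parameters are hypothetical placeholders, not part of this posting or of any SAS product.

import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Hypothetical transaction data: engineered features plus a 0/1 fraud label.
df = pd.read_csv("transactions.csv")
X = df.drop(columns=["is_fraud"])
y = df["is_fraud"]

# Hold out a stratified test set so the rare fraud class is represented.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)

# A boosted ensemble of shallow trees is a common starting point for fraud scoring.
model = GradientBoostingClassifier(n_estimators=200, learning_rate=0.05, max_depth=3)
model.fit(X_train, y_train)

# Score the held-out transactions and report ranking quality (AUC).
scores = model.predict_proba(X_test)[:, 1]
print("Test AUC:", roc_auc_score(y_test, scores))

In practice, most of the effort goes into the new variables mentioned above (feature engineering) and into handling heavy class imbalance, not into the few lines of model-fitting code.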



What we're looking for

* You're curious, passionate, authentic, and accountable. These are our values and influence everything we do.

* You have a master's degree in statistics, mathematics, computer science, engineering, the physical sciences, or any other quantitative field.

* 2+ years related experience such as analyzing data and/or building analytical models; in either an academic or professional setting.

* Knowledge of multiple operating systems (e.g. Windows, Unix/Linux).

* Proficiency with 1 or more of the following Programming or Scripting languages: R, SAS, Bash, Perl, Python, MATLAB.

* Thorough knowledge of at least some supervised and unsupervised modeling techniques such as Logistic/Linear Regression, SVMs, Neural Networks / Deep Networks, Boosting/Ensemble methods, Decision Trees, and/or Clustering.

* Ability to manage very large amounts of data.



The nice to haves

* Ph.D. in applied statistics, mathematics, computer science, engineering, or the physical sciences.

* Industry experience in mathematical/statistical modeling, pattern recognition, or data mining/data analysis.

* Extensive experience specifying and building advanced analytic solutions for the financial services and related industries with large-scale transaction data.

* Extensive experience in data management, deployment and product support for advanced analytic solutions.

* Excellent programming skills and knowledge of SAS and scripting languages.

* Ability to translate model performance to financial benefit for the business by incorporating knowledge of customer business practices.



Other knowledge, skills, and abilities

* Excellent written and verbal communication skills.

* Ability to think analytically, write and edit technical material, and relate statistical concepts and applications to technical and business users.

* Ability to work both independently and in a team environment.

* Ability to travel as business requirements dictate.



Why SAS

* We love living the #SASlife and believe that happy, healthy people have a passion for life, and bring that energy to work. No matter what your specialty or where you are in the world, your unique contributions will make a difference.

* Our multi-dimensional culture blends our different backgrounds, experiences, and perspectives. Here, it isn't about fitting into our culture, it's about adding to it - and we can't wait to see what you'll bring.

#LI-TP1



SAS looks not only for the right skills, but also a fit to our core values. We seek colleagues who will contribute to the unique values that makes SAS such a great place to work. We look for the total candidate: technical skills, values fit, relationship skills, problem solvers, good communicators and, of course, innovators. Candidates must be ready to make an impact.



Additional Information:

To qualify, applicants must be legally authorized to work in the United States, and should not require, now or in the future, sponsorship for employment visa status. SAS is an equal opportunity employer. All qualified applicants are considered for employment without regard to race, color, religion, gender, sexual orientation, gender identity, age, national origin, disability status, protected veteran status or any other characteristic protected by law. Read more: Equal Employment Opportunity is the Law. Also view the supplement EEO is the Law, and the notice Pay Transparency



Equivalent combination of education, training and experience may be considered in place of the above qualifications. The level of this position will be determined based on the applicant's education, skills and experience. Resumes may be considered in the order they are received. SAS employees performing certain job functions may require access to technology or software subject to export or import regulations. To comply with these regulations, SAS may obtain nationality or citizenship information from applicants for employment. SAS collects this information solely for trade law compliance purposes and does not use it to discriminate unfairly in the hiring process.



Want to stay up to date with life at SAS, products and jobs? Follow us on LinkedIn
          

Sr. Java Software Engineer for FX e-Commerce

 Cache   
Sr. Java Software Engineer for FX e-Commerce - JP Morgan Chase - Jersey City, NJ 07310

Our Corporate & Investment Bank relies on innovators like you to build and maintain the technology that helps us safely service the world's important corporations, governments and institutions. You'll develop solutions for a bank entrusted with holding $18 trillion of assets and $393 billion in deposits. CIB provides strategic advice, raises capital, manages risk, and extends liquidity in markets spanning over 100 countries around the world.

As an experienced Software Engineer, your mission is to help lead our team of innovators and technologists toward creating next-level solutions that improve the way our business is run. Your deep knowledge of design, analytics, development, coding, testing and application programming will help your team raise their game, meeting your standards, as well as satisfying both business and functional requirements. Your expertise in various technology domains will be counted on to set strategic direction and solve complex and mission-critical problems, internally and externally. Your quest to embrace leading-edge technologies and methodologies inspires your team to follow suit. And best of all, you'll be able to harness massive amounts of brainpower through our global network of technologists from around the world.

You will be working in the FX e-Commerce Market Making and Distribution team in New York. This is a global team that develops and maintains the systems responsible for price differentiation, distribution and all aspects of execution (deal acceptance and trade booking). The team covers all FX Cash products (Spot, Forwards, Swaps) for G10 and Emerging Markets across a wide variety of distribution channels (Proprietary Single Dealer Platform, FIX APIs and multi-dealer platforms). You will be part of a global team with presence in London, New York and ASPAC, and there is a high level of direct interaction with our Front Office e-Trading partners.

This role requires a wide variety of strengths and capabilities, including:
BS/BA degree or equivalent experience
Partnering with trading and quantitative researchers to capture requirements and propose design and implementation
Implementation delivery, primarily server-side Java development with strong emphasis on non-functional requirements involving performance optimization of multi-threaded applications and resiliency in a distributed and co-located environment
Expertise in application, data and infrastructure architecture disciplines
Advanced knowledge of architecture, design and business processes
Being responsible for seeing your changes through the whole SDLC life cycle
Providing level 3 support to the trading desk, helping investigation and analysis
Ability to work collaboratively in teams and develop meaningful relationships to achieve common goals

When you work at JPMorgan Chase & Co., you're not just working at a global financial institution. You're an integral part of one of the world's biggest tech companies. In 14 technology hubs worldwide, our team of 40,000+ technologists design, build and deploy everything from enterprise technology initiatives to big data and mobile solutions, as well as innovations in electronic payments, cybersecurity, machine learning, and cloud development. Our $9.5B+ annual investment in technology enables us to hire people to create innovative solutions that will not only transform the financial services industry, but also change the world. At JPMorgan Chase & Co. we value the unique skills of every employee, and we're building a technology organization that thrives on diversity. We encourage professional growth and career development, and offer competitive benefits and compensation. If you're looking to build your career as part of a global technology team tackling big challenges that impact the lives of people and companies all around the world, we want to meet you.

©2017 JPMorgan Chase & Co. JPMorgan Chase is an equal opportunity and affirmative action employer Disability/Veteran.
          

Workday gets serious about spend, acquires source-to-pay vendor Scout RFP

 Cache   
Workday gets serious about spend, acquires source-to-pay vendor Scout RFP Phil Wainewright Wed, 11/06/2019 - 15:15
Summary:
Workday's acquisition of Scout RFP shows it's serious about spend management but what's its strategy and how does that impact spend pureplay Coupa?


Is spend management an add-on to finance or a separate category? Workday is putting that question to the test with its acquisition of up-and-coming source-to-pay vendor Scout RFP, announced this week. It plants Workday on the same turf as Coupa — whose CEO told diginomica today he welcomes the move — as well as broadening its competitive front against ERP giants SAP and Oracle.

Workday has agreed to pay $540 million in an all-cash transaction to acquire Scout RFP. It expects to complete the deal before the close of its current financial year at the end of January. Through Workday Ventures, it has been an investor in the company since late 2018.

Boosting Workday's source-to-pay appeal

As its name suggests, Scout RFP aims to streamline the process of sourcing suppliers, traditionally initiated by issuing a 'request for proposal' (RFP). Founded five years ago, the San Francisco-based SaaS company has grown fast thanks to the engaging user experience of its automated sourcing and auction platform. It has over 240 customers globally and claims to manage more than $38.5 billion in project spend.

The product already offers close integration with Workday's existing procurement solution. Workday Procurement has itself been growing fast, with more than 650 customers signed up, over half of whom are live. Workday says it does not intend to re-platform Scout RFP but will converge the data model, security model and user experience with its own products over time.

According to Michael Lamoureux, lead analyst in sourcing technology at independent sourcing specialist website Spend Matters, the acquisition is set to boost Workday's appeal in the rapidly expanding source-to-pay (S2P) market:

ScoutRFP opens up more of the extremely fast growing S2P space to Workday (which some value at $50 billion as a total addressable market), and Workday benefits from a product known more for its usability and adoption than just about any other sourcing platform on the global market.

Applying machine learning to procurement

It seems likely that Workday will also be eager to apply its machine learning capabilities to more of the source-to-pay process. At its recent Workday Rising conference, Barbara Larson, General Manager, Workday Financial Management, showed off how it is moving two procurement processes "from labour intensive to frictionless," explaining:

Starting with one of the more manual process flows, procure-to-pay — when accounts payable receives an invoice, machine learning will upload and scan the invoice, and based on past patterns, automatically route it to the most qualified person. They no longer need to manually input and route the invoices. The entire workflow gets faster, saving time and reducing cost.

Then there's contract to cash, where accounts receivable manually matches payments. Instead of having to maintain complex rules to process payments, machine learning will recommend the invoices to match. And when there isn't enough detail, the machine presents the most likely match. Over time, the machine gets smarter and matching becomes more accurate, reducing the amount of manual work.

Integrating these and other automated source-to-pay processes into the core Workday Financials product will no doubt be an important selling point for both products.
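
As a rough illustration of the matching step Larson describes (and emphatically not Workday's actual implementation), the core idea is to score each open invoice against an incoming payment and surface the most likely match. Below is a minimal Python sketch with entirely hypothetical fields and hand-tuned weights standing in for what a real system would learn from historical matches:

from dataclasses import dataclass
from datetime import date
from typing import List

@dataclass
class Invoice:
    number: str
    amount: float
    due: date

@dataclass
class Payment:
    reference: str
    amount: float
    received: date

def match_score(inv: Invoice, pay: Payment) -> float:
    # Penalise amount and date differences; reward an invoice number quoted in the remittance text.
    amount_gap = abs(inv.amount - pay.amount) / max(inv.amount, 1.0)
    date_gap = abs((pay.received - inv.due).days) / 30.0
    ref_bonus = 1.0 if inv.number in pay.reference else 0.0
    return ref_bonus - amount_gap - 0.1 * date_gap

def best_match(pay: Payment, open_invoices: List[Invoice]) -> Invoice:
    # Present the most likely match; a production system would also report a confidence.
    return max(open_invoices, key=lambda inv: match_score(inv, pay))

open_invoices = [
    Invoice("INV-1001", 1200.00, date(2019, 11, 1)),
    Invoice("INV-1002", 450.50, date(2019, 11, 15)),
]
payment = Payment("ACME INV-1002 OCT", 450.50, date(2019, 11, 12))
print(best_match(payment, open_invoices).number)  # prints INV-1002

Replacing the hand-tuned weights with a model trained on an organisation's past matches is where the "gets smarter over time" behaviour in the quote would come from.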

New competition for Coupa

The acquisition creates a significant new competitor for cloud spend management vendor Coupa, but its CEO Rob Bernshteyn, who is in London for its annual EMEA customer event, told me today that he welcomes Workday's entry into the market. He believes it brings more focus on the spend management sector and Coupa's own product offering:

In general, I think it's a very good thing to see more enthusiasm, more engagement, in this broader, large addressable market ...

There'll be more people — in that case and many other cases — thinking through how to address the problems around business spend management. We're trying to galvanise more focus on this area, because there's so much opportunity.

My take

Anyone who wondered if Workday had lost its appetite for acquisitions after spending $1.5 billion on Adaptive Insights last year has their answer. Dropping a cool half-billion on spend management sends a clear signal, but perhaps this should not surprise given the growth of the existing procurement offering and the attention it was given at Rising.

I've highlighted the focus on automating spend processes at Rising because my sense is that part of the thinking behind this acquisition comes out of Workday CEO Aneel Bhusri's belief that enterprises must harness the power of machine learning to survive. Sourcing and procurement is an area where there's huge scope for applying machine learning — as Coupa identified several years ago. Couple that with an obvious synergy between procurement and finance, and you can see a big opportunity opening up for Workday.

I get the feeling though that Coupa's CEO relishes the challenge from Workday. It spurs Coupa to continue to differentiate itself with a far more comprehensive offering for enterprises that see spend management not just as an automation opportunity but also as a strategic play. I'll have more on that in the next few days out of Coupa's EMEA event, including an interview with Bernshteyn, some news and customer stories.

Image credit - Business man at desk holds large burlap money bag with US dollar sign © nito - shutterstock

Disclosure - Coupa, Oracle and Workday are diginomica premier partners at time of writing


          


Health Data Lead

 Cache   
Overview

Are you ready to join an organization where you can make an extraordinary impact every day? Imagine all Americans enjoying ideal cardiovascular health free of heart disease and stroke. At the American Heart Association and American Stroke Association, we get to work toward that goal every day. Is it easy? No. Is it worthwhile? Absolutely. This is satisfying and challenging work that makes a real difference in people's lives. We are where you can achieve professional growth with personal fulfillment. We are where you can connect people to making a lifesaving impact. We are where you can partner with individuals, schools, lawmakers, healthcare providers and others to ensure everyone has access to healthier lifestyle choices and proper healthcare. The American Heart Association is where you can make an extraordinary impact.

Responsibilities

The Lead, Health Data Science Assets is a new, exciting role that offers a unique opportunity to lead data strategy across the organization. This role will work with our Emerging Strategies and Ventures team and serve as a critical liaison between Emerging Health and Business Strategies, the AHA's Mission Aligned Business and Health Solutions teams, and other business segments. This role is vital to the American Heart Association's efforts to bring to bear healthcare and science data as a core asset and growth driver, crafting an outstanding organization and capability that will support quantifiable outcomes, enable identification of new product opportunities, and deliver unrivaled data partnerships that fuel creativity. We are looking for someone who is highly motivated, who is an expert data innovator, and who can share tangible results from their strategies and leadership actions. We also need someone who has current experience in large growth organizations where data is a core capability for creating outcomes.

Essential Job Duties

Develop and implement solutions built on a scalable and flexible architecture that will allow the AHA to handle and use health data as an enterprise business asset.
Define and implement standard operating practices for health data collection, ingestion, storage, transformation, distribution, integration, and consumption within the AHA's Health Solutions portfolio.
Lead all aspects of data access and distribution.
Lead the design and delivery of data, business intelligence, AI and automation solutions advisory engagements involving strategy, roadmap and longer-term operating models.
Support the delivery of a broad range of data assets and analytics. Identify and demonstrate approaches, appropriate tools and methodologies.
Run health data quality and security. Define data standards, policies and procedures ensuring effective and efficient data management across the company.
Provide expertise and leadership in the disciplines of data governance, data quality and master data integration and architecture. Establish a data governance framework. Maintain and share data definitions, data integrity, security and classifications.
Direct the continued design, build, and operations of our Big Data platforms and solutions.
Help identify and understand data from internal and external sources for competitive, scenario and performance analyses, and financial modeling to gain insight into new and existing processes and business opportunities. Work with business teams on commercial and non-commercial opportunities.
Advise on fair market value data value propositions.
Actively contribute to proposal development of transformation engagements focused on data/analytics, AI and automation.
Demonstrate thought leadership to advise teams on data/analytics, AI and automation strategy and detailed use-case development by industry.
Possess a deep understanding of trends and strategies for identifying solutions to meet objectives. Monitor technology trends and raise awareness of capabilities and innovations in selected domains of expertise.
Empower the Data Architecture team to create optimized data pipelines, data storage and data transformation.
Support the practice with depth of experience and expertise in the following domains: automation, machine learning, deep learning, advanced analytics, data science, and data aggregation & visualization.

Qualifications

Bachelor's degree from a globally recognized institution of higher learning is required, with an advanced degree (MS, PhD, or equivalent) strongly preferred.
10 years of experience in a company known for data innovation and excellence, with responsibility for a comparably sized analytics business.
10 years of experience in health data, real-world evidence analytics and/or health informatics, with the capability to design data strategies and source key health data, gain acceptability for methods, and build analyses for benefit-risk justifications, development and other regulatory needs.
Experience implementing and using cutting-edge analytic tools and capabilities, including B2B, B2C and cross-channel integration tools.
A passion for and experience with big-data-driven decision-making processes across business functions.
Demonstrated ability to work with a technical team of product and data engineers, as well as data scientists/PhDs.
Consistent track record of successfully delivering top- and bottom-line results individually and as part of a high-performance team.
Outstanding oral/written communication and presentation skills, especially with respect to clearly communicating complex data-driven topics to both technical and non-technical audiences.
Strategic thinker, leader, communicator and innovator with people-management skills and the ability to work across segments to support tactical planning and deliver on the objectives of the organization.
Knowledge of technology and of healthcare, life sciences and health-tech industry trends.
Creative, collaborative thinker with an ability to learn new things, assess problems and identify proactive solutions quickly.
Self-starter, comfortable leading change and getting things done.
Travel is required (at least 10%), including overnights.
Location: Dallas, Texas is preferred.

At the American Heart Association / American Stroke Association, diversity, inclusion, and equal opportunity apply to both our workforce and the communities we serve as it relates to heart health and stroke prevention. Be sure to follow us on Twitter to see what it is like to work for the American Heart Association and why so many people enjoy #TheAHALife.

EOE Minorities/Females/Protected Veterans/Persons with Disabilities
Requisition ID: 2018-3066
Job Family Group: Business Operations
Job Category: Science & Research
Additional Locations: US-Anywhere
Location: Birmingham, AL
          

Scientist - Ancestry Research & Development

 Cache   
At 23andMe, we work with the richest database of genotypes and phenotypes ever assembled. Our Ancestry Research & Development team publishes primary research and develops methods and algorithms to drive 23andMe's Ancestry Product. This work requires both a keen interest in human history and a penchant for effective statistical and computational methods.

We seek a candidate with experience conducting population genetics research. You will join a team of Ph.D. population geneticists excited to glean insights from the genetic data of more than ten million 23andMe customers. You should have very strong coding skills, experience analyzing large genetic datasets, and a passion for interpreting patterns of human genetic variation.

We strongly suggest submitting a cover letter. We may consider a superlative candidate with genetics research experience, albeit not specifically population genetics, given a cover letter explaining their interest in and qualifications for this position.
Who we are

Since 2006, 23andMe's mission has been to help people access, understand, and benefit from the human genome. We are a group of passionate individuals pushing the boundaries of what's possible to help turn genetic insight into better health and personal understanding.

A list of 23andMe's recent scientific publications is available here: https://www.23andme.com/for/scientists/

What you'll do


  • Perform analyses that will advance understanding of human genetics and shape 23andMe's consumer product.
  • Leverage existing methods and tools to analyze large amounts of data.
  • Work collaboratively with the Research, Engineering, and Product teams to provide scientific support for a variety of teams across the company.

    What you'll bring


    • Ph.D. in Human Genetics or a related field (e.g., Biology, Bioinformatics, Computer Science, Statistics).
    • Expertise in Python, R, and/or C/C++, in a Linux environment.
    • Substantial experience working with large genetic datasets.
    • Excellent written and verbal communication skills.
    • Strong background in statistics and/or machine learning.
    • Ability to work collaboratively, effectively, and efficiently in a cross-functional team.
    • Excellent organizational skills to drive project success.

      Pluses


      • Experience communicating complex scientific concepts to a consumer audience.
      • Experience analyzing whole-genome sequence data.
      • Experience analyzing ancient DNA sequence data.

        About Us

        23andMe, Inc. is the leading consumer genetics and research company. Our mission is to help people access, understand and benefit from the human genome. The company was named by MIT Technology Review to its "50 Smartest Companies, 2017" list, and named one of Fast Company's "25 Brands That Matter Now, 2017". 23andMe has over 5 million customers worldwide, with ~85 percent of customers consenting to participate in research. 23andMe is located in Sunnyvale, CA. More information is available at www.23andMe.com.

        At 23andMe, we value a diverse, inclusive workforce and we provide equal employment opportunity for all applicants and employees. All qualified applicants for employment will be considered without regard to an individual's race, color, sex, gender identity, gender expression, religion, age, national origin or ancestry, citizenship, physical or mental disability, medical condition, family care status, marital status, domestic partner status, sexual orientation, genetic information, military or veteran status, or any other basis protected by federal, state or local laws. If you are unable to submit your application because of incompatible assistive technology or a disability, please contact us at accommodations-ext@23andme.com. 23andMe will reasonably accommodate qualified individuals with disabilities to the extent required by applicable law.

        Please note: 23andMe does not accept agency resumes and we are not responsible for any fees related to unsolicited resumes. Thank you.
          

Lead Data Scientist

 Cache   
Lead Data Scientist - Indiana University - Bloomington, IN

The Bloomington Assessment and Research (BAR) office is searching for an innovative and energetic team player who will contribute to the decision support system for campus initiatives. Perform analyses and disseminate results to campus leaders using student data largely from the institution's data warehouse. Respond to campus requests with visually compelling presentations of information that communicate insights to stakeholders, as well as providing research findings, including innovative solutions that explore complex data through the use of analytics (data mining, machine learning techniques) and/or interactive platforms (Tableau). Explore and lead new areas of inquiry that aim to enhance institutional effectiveness. Communicate information about large, complex, and detailed analyses in a variety of forms, including short or lengthy narratives, or through group or individual presentations. Remaining up-to-date about new and emerging technologies is essential to be successful in this position. Establishing collegial relationships with counterparts in other areas and institutions will contribute to advancing the work. Completion of training on storing, accessing, and releasing information in compliance with federal, state, and university policies is required for this role - training will be provided.

Required Qualifications
Bachelor's degree in Computer Science, Computer Systems Technology, or a closely related discipline and five years of experience working with complex electronic applications, databases, and electronic workflow. Combinations of education and related experience may be considered. Ability to effectively communicate and exchange information in complex environments and handle multiple complex tasks simultaneously. Demonstrated ability to self-monitor quality control and accuracy.

Preferred Qualifications
Master's degree in Data Science, Statistical Analysis, or another analytical field. Familiarity with higher education issues (diversity and academic support). Specific experience with SQL, IUIE, Oracle relational databases, and Customer Relations software. Experience working with a technical / research team. Knowledge of federal, state, and IU policies regarding the use of data, information, and systems.

Work Location: Bloomington, Indiana
Job Classification: Salary Plan: PAE; Salary Grade: 4IT; FLSA: Exempt; Job Function: Information Technology

Posting Disclaimer
This posting is scheduled to close at 12:01am EST on the advertised Close Date. This posting may be closed at any time at the discretion of the University, but it will remain open for a minimum of 5 business days. To guarantee full consideration, please submit your application within 5 business days of the Posted Date.

Equal Employment Opportunity
Indiana University is an equal employment and affirmative action employer and a provider of ADA services. All qualified applicants will receive consideration for employment without regard to age, ethnicity, color, race, religion, sex, sexual orientation, gender identity or expression, genetic information, marital status, national origin, disability status or protected veteran status. Indiana University does not discriminate on the basis of sex in its educational programs and activities, including employment and admission, as required by Title IX. Questions or complaints regarding Title IX may be referred to the U.S. Department of Education Office for Civil Rights or the university Title IX Coordinator. See Indiana University's Notice of Non-Discrimination, which includes contact information.

Campus Safety and Security
The Annual Security and Fire Safety Report, containing policy statements, crime and fire statistics for all Indiana University campuses, is available online. You may also request a physical copy by emailing IU Public Safety at *********** or by visiting IUPD.

Contact Us
Request Support. Telephone: ************
          

Companies like Google and Microsoft are making big investments in startups looking to disrupt healthcare. Here's where 5 top tech giants are placing their bets.

 Cache   

Google CEO Sundar Pichai

Tech giants like Google's parent company Alphabet and Microsoft have taken their time figuring out how to approach the massive $3.5 trillion US healthcare industry. 

But the big tech players haven't been shy about investing in companies looking to disrupt healthcare. Their investments hint at how the tech giants could ultimately succeed in healthcare. 

In October, CB Insights pulled together a report analyzing the investments tech companies had made in digital health startups over the last five years. In particular, Alphabet, Microsoft, and Chinese tech giant Tencent accounted for about 70% of the deals, the report found. 

Read more: Tech giants like Google and Amazon are beefing up their healthcare strategies. Here's how 7 tech titans plan to tackle the $3.5 trillion industry.

The report highlights that most of the deals have been in data management and analytics, with 40 deals in the space. That finding isn't surprising for He Wang, a healthcare analyst at CB Insights. 

"A common thread across all ten companies is data. Every tech company is leveraging some sort of data in all verticals they're going after, and that's no different in healthcare," Wang told Business Insider. "They're playing to their strengths and that's reflected in their investments." 

Here are the top 5 companies with the most investments in digital health and where they're placing their bets, from fewest investments to most, as determined by the CB Insights report. 

Samsung - 15 companies

Samsung is primarily focused on chronic disease management. 

The South Korean company, known for Galaxy phones and headsets, wants to expand into wearables and other tech monitoring systems that can track people's health. In total, it's invested in 15 companies. 

Samsung bought Neurologica, a medical imaging company, and the US healthcare equipment maker Nexus.

"Samsung essentially wants to help people manage their health, using their wide adoption of hardware to distribute software that has healthcare applications," Wang said.  

 



Intel - 16 companies

A big area of focus for Intel is AI or machine learning in medical imaging, drug discovery and drug diagnostics, Wang said. In total, Intel has made investments in 16 digital health companies, according to the report. 

This can be seen with its investment of $30 million in startups working on cloud computing software innovation. The company, along with Microsoft, invested in CognitiveScale, a data analytics product for healthcare providers like hospital systems. 

Wang noted that Intel's healthcare investments align with the company's broader investments in AI across all verticals.  

"Intel is one of the most technologically advanced companies. They're still a worldwide provider of chips," Wang said. "So at the end of the day they're just trying to make a broader market place with chips." 



Tencent - 40 companies

Tencent has also been prolific in its dealmaking, investing in 40 companies. 

When delving into healthcare, the company is drawing on two main strengths.

The first is Tencent's data on its users. The company has some of the most-used messaging apps and software, with over one billion active users on its WeChat platform alone. Tencent is leveraging this vast user base to help it monetize and make investments, Wang said. 

The second asset is its investment in medical content marketing, allowing Tencent to analyze what users want in healthcare. An example of this is its investment in SoYoung, the China-based medical aesthetics company, which went public in the US earlier this year. 



Microsoft - 42 companies

As the second-most-active investor on the list, Microsoft made 42 investments in the space. But the company has a different approach from Google: the majority of Microsoft's investments come from its accelerator and incubator programs.

These programs have actively worked with digital health companies at the earliest stage of investment, for startups like SWORD Health, Genoox, KenSci and SigTuple.

"Microsoft is playing a little bit of a catch up in terms of services for cloud providers," Wang said. "They're using this incubator to build on this digital health startup ecosystem, to not only impact some of the most interesting companies out there but to also market or encourage people to use for their cloud computing capabilities." 

But Microsoft also has a long-term strategy with its venture arm, M12, which invests in bigger startups, such as the chronic disease management company Livongo Health.

The majority of its investments since 2016 have been in data management and analytics, and genomics companies. 

"Microsoft has made it very clear that cloud is the most important strategy they're going after," Wang said.  

 



Alphabet - 57 companies

Alphabet is the biggest and most active investor in the healthcare space. The Google parent company has backed 57 companies, making 70% of its health investments through corporate funds like Google Ventures, CapitalG, and Gradient Ventures. 

Google's accelerator and incubator programs have invested in 17 digital health companies primarily in genomics, clinical research, insurance and benefits. 

When it comes to genomics, Google's venture arm GV has invested in companies like 23andMe, Foundation Medicine, and Flatiron Health, while Alphabet's life sciences arm Verily has invested in Freenome.

The companies have collected clinical and genetic data, with the hope of finding new ways to use that information and keep people healthier. Google doesn't have access to the data these companies collect, but can draw insights from how the companies approach working with large amounts of information.

"They can use that data and tie it into advanced technology to drive better drug delivery, or diagnostics," Wang said. "Google is focused on healthcare data assets and how to use advanced technology to drive insights from there." 

Google has also invested in some of the biggest health insurance startups like Oscar Health and Clover Health, which collect information on how people use and navigate the healthcare system. 

 




          

Software Engineer (Python)

 Cache   
Summary

The Data Scientist will be part of the data science R&D team responsible for developing and managing a variety of data solutions and machine learning projects across all Appriss verticals.

Duties and Responsibilities

* Designing and implementing solutions related to machine learning and data mining on large data sets using statistical models, graph models, text mining and other modern methods.
* Conduct analysis, modeling, and analytics research for clients in retail, healthcare, public safety.
* Work with AWS, Azure and on-premise environments.
* Manipulate data from various data sources such as Netezza, Greenplum, SQL Server, raw files, and real time streaming data (SQS, Kinesis, Kafka).
* Experience building and deploying machine learning models and APIs (see the sketch after this list).
* Testing, QA, and implementation of models and other predictive tools.
* Help support production applications and provide Tier 3 support.
* Collaborate with clients and internal teams to determine analysis specifications, product needs, and modeling initiatives and provide regular feedback.
* Prepare presentations and present results to internal and external clients, and potentially conferences.
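
As a rough illustration of the "building and deploying machine learning models and APIs" item above, here is a minimal sketch in Python. It assumes scikit-learn and Flask are available; the endpoint path and feature layout are hypothetical examples, not Appriss specifics.

```python
# Minimal sketch: train a small scikit-learn model and expose it as a JSON API.
# Assumes scikit-learn and Flask are installed; the /predict route and the
# request format are hypothetical, not taken from any Appriss system.
from flask import Flask, request, jsonify
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

# Train a toy model at startup (a real service would load a persisted model).
X, y = load_iris(return_X_y=True)
model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

app = Flask(__name__)

@app.route("/predict", methods=["POST"])
def predict():
    # Expect a JSON body like {"features": [[5.1, 3.5, 1.4, 0.2]]}
    payload = request.get_json(force=True)
    preds = model.predict(payload["features"]).tolist()
    return jsonify({"predictions": preds})

if __name__ == "__main__":
    app.run(port=8080)
```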

Minimum Requirements

* Advanced degree in computer science or mathematics related fields
* 3+ years of experience with Python
* 3+ years of experience with SQL
* 2+ years as a Data Scientist/Analyst or Data Engineer
* Experience with linux, web APIs, and distributed systems
* Experience with AWS and/or Azure.
* Experience working with large amounts of structured or unstructured data.

Preferred Skills and Experience:
* Experience with libraries like Pandas, Tensorflow, Scikit-Learn, NetworkX
* ML skills in modern cloud environments such as AWS or Azure.
* Experience with Tableau or PowerBI or QuickSight
* Retail, Healthcare, or Criminal justice experience a plus.

Knowledge, Skills, Abilities, Experience, or Characteristics
* Demonstrated ability to apply statistical knowledge to analyze data to identify trends, outliers, develop/evaluate predictive models, create reports, automate processes
* Strong time management skills and project management skills
* Good verbal and written communication skills.
* Proficiency in PowerPoint & Excel.
* Ability and willingness to work with a team

Physical and Mental Requirements

Job is physically comfortable; individual has discretion about walking, standing, etc.

Job requires a very high level of judgment, exceptional analytical ability and creativity in investigating major problems that require original and highly innovative solutions. Reasonable accommodations may be made to enable individuals with disabilities to perform essential functions.

Other

Some travel may be involved for both training and customer facing issues.

Disclaimer

The preceding job description has been designed to indicate the general nature and level of work performed by employees within this classification. It is not designed to contain or be interpreted as a comprehensive inventory of all duties, responsibilities, and qualifications required of employees assigned to this job.

Equal Opportunity Employer - M/F/V/H

Equal Opportunity Employer/Protected Veterans/Individuals with Disabilities. The contractor will not discharge or in any other manner discriminate against employees or applicants because they have inquired about, discussed, or disclosed their own pay or the pay of another employee or applicant. However, employees who have access to the compensation information of other employees or applicants as a part of their essential job functions cannot disclose the pay of other employees or applicants to individuals who do not otherwise have access to compensation information, unless the disclosure is (a) in response to a formal complaint or charge, (b) in furtherance of an investigation, proceeding, hearing, or action, including an investigation conducted by the employer, or (c) consistent with the contractor's legal duty to furnish information. 41 CFR 60-1.35(c)
          

Data Engineer (AWS) Intern

 Cache   
Asurion's internship program is a 12-week internship to help rising seniors get a sneak peek into the product and technology world. Assigned to a team, interns will have their own projects to complete and present to leadership at the end of the summer. The program will provide the intern with a unique strategic perspective, professional and personal development along the way, and experiences to enhance their academic learnings. Our goal is to allow for the intern to make contributions throughout the summer through project work, presentations and networking.

Asurion's Internship:
Open to rising seniors currently enrolled in undergrad pursuing a degree related to internship duties or major below.
Duration of internship is 12 weeks from May 18th - August 7, 2020.
Continuous learning and tailored on-the-job training in technology.
Exposure to senior leadership including but not limited to onboarding, lunch and learn sessions, and team business case presentations.
Build an intern community through peer intern groups, mentors, direct intern leaders, and senior leadership throughout the summer.

The Team
Asurion's Enterprise Data Services (EDS) team is building an enterprise data platform (named ATLAS) leveraging the latest and greatest data technologies available. Built exclusively in the AWS cloud, the ATLAS platform utilizes technologies such as Informatica, Redshift, S3, Denodo, Spotfire, Presto and HIVE among many others. As THE enterprise data platform for Asurion, ATLAS will serve a variety of data needs, spanning core functionality like data cleansing, data standardization and KPI generation to reach functionality such as data discovery, data visualization and customer recommendation engines. On a day-to-day basis, team members are challenged to think creatively and leverage their data experience to solve tough data and analytics problems in ways that will scale to meet the broad scope of the Asurion environment.

Preferred Majors:
Pursuing a Bachelor's Degree in Computer Science, Data Analytics, Mathematics, Engineering or related field, with a graduation date between August 2020 - May 2021

Requirements:
Good written and verbal communication skills and ability to provide deliverables in time-sensitive projects.
Proficient in one or more data/programming languages, i.e. SQL/Linux shell scripting/Python/Java/C#/C++.
Knowledge of designing and developing data movement and transformation using data integration tools.
Knowledge/experience in some of the following preferred: Software Development & Analysis - Java, Scala, Hive, Spark, HBase, Storm, Redshift, R, Kinesis, S3, and EMR.
Understanding and knowledge of ETL, data warehousing/data mart concepts.
Knowledge and experience with machine learning.
Knowledge/experience in one or more of the following areas: NoSQL technologies (Cassandra, HBase, DynamoDB), real-time streaming (Apache Storm, Apache Spark), big data batch processing (Hive, SparkSQL), cloud technologies (Kinesis, S3, EMR) (see the sketch at the end of this posting).
Shows a strong attention to development detail, produces high-quality algorithms/code.
Excellent problem solving and analytical skills with excellent verbal and written communication skills.
Must have strong internal customer service skills, ability to use tact and diplomacy, and to work effectively within a team (positive, process oriented).

Responsibilities:
Develops effective, maintainable code in a timely fashion.
Follows established coding standards and techniques, assists with establishing standards.
Develops proficiency in the application and use of systems, tools, and processes within the department's scope.
Develops proficiency in the business processes that drive the applications within the department's scope.
Develops a working knowledge of Asurion's applications and system integration.
Assists with the compilation of status notifications for business stakeholders and Client Relations.
Ensures code complies with security policies and guidelines.

PRO01492 - Sterling - Virginia - US - 2019/09/06
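
To make the batch-processing and KPI-generation items above concrete, here is a minimal PySpark sketch of a cleansing and aggregation job. It assumes a working pyspark installation; the bucket paths and column names are hypothetical and are not details of the actual ATLAS platform.

```python
# Minimal sketch of a batch cleansing/KPI job in PySpark.
# Assumes pyspark is installed; the input/output paths and column names
# (claim_id, state, amount) are hypothetical, not real ATLAS details.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("atlas-style-kpi-sketch").getOrCreate()

# Data cleansing: read raw CSV, drop incomplete rows, standardize columns.
raw = spark.read.option("header", True).csv("s3://example-bucket/raw/claims/")
clean = (raw
         .dropna(subset=["claim_id", "amount"])
         .withColumn("state", F.upper(F.col("state")))
         .withColumn("amount", F.col("amount").cast("double")))

# KPI generation: claim counts and average claim amount per state.
kpis = clean.groupBy("state").agg(
    F.count("*").alias("claim_count"),
    F.round(F.avg("amount"), 2).alias("avg_amount"))

# Write the result as Parquet for downstream discovery/visualization tools.
kpis.write.mode("overwrite").parquet("s3://example-bucket/curated/claim_kpis/")
spark.stop()
```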
          

Software Development Engineer - Relocation Available - 838926-3 (Alpharetta,GA)

 Cache   
DESCRIPTION

The AWS Well-Architected Tool team is hiring Software Developers!!

Imagine if you could help shape the future of architecture, and go on a journey where few have tread before. AWS Well-Architected aims to help our customers develop technical expertise in AWS services, learn how to architect their cloud applications, and provide a great experience for customers and partners.

AWS is one of Amazon's fastest growing businesses. More than a million active customers, from Airbnb to SAP, use AWS Cloud solutions to deliver flexibility, scalability, and reliability. As a Software Development Engineer at Amazon, your code is held to a high-standard and you are expected to stay up-to-date on the latest technologies. In this role, you will be building a set of tools from the ground up, joining a new team with high potential. We have a huge amount of data at our fingertips, but we require strong engineers working alongside machine learning experts to unlock its potential for advanced, automated customer interactions.

BASIC QUALIFICATIONS

3+ years of non-internship professional software development experience.
Programming experience with at least one modern language such as Java, C++, or C# including object-oriented design.
1+ years of experience contributing to the architecture and design (architecture, design patterns, reliability and scaling) of new and current systems

PREFERRED QUALIFICATIONS

Knowledge of professional software engineering practices & best practices for the full software development life cycle, including coding standards, code reviews, source control management, build processes, testing, and operations
Experience with No-SQL and RDBMS
Experience with HTML, XML and CSS
Distributed systems experience
Meets/exceeds Amazon's leadership principles requirements for this role
Meets/exceeds Amazon's functional/technical depth and complexity for this role
Amazon is an Equal Opportunity-Affirmative Action Employer - Minority / Female / Disability / Veteran / Gender Identity / Sexual Orientation

Job details

New York Area, NY

AWS Bill Generation

Software Development
          

Principal Data Scientist

 Cache   
What you'll be doing.

We are looking for a Principal Data Scientist who will be focused on delivering Customer Intelligence as part of the System of Insights. You will drive profitable growth and business innovation by applying cutting-edge machine learning techniques and AI technology. You will lead data science projects that drive customer intelligence, product personalization, marketing effectiveness, channel optimization, better customer experience, and operational efficiency. You will have to be adept at using large data sets to find opportunities for product and process optimization and using models to test the effectiveness of different courses of action. You must have strong experience using a variety of data mining/data analysis methods, using a variety of data tools, building and implementing models, using/creating algorithms and creating/running simulations. You must also have a proven ability to drive business results with your data-based insights. You should have a passion for discovering solutions hidden in large data sets and working with stakeholders to improve business outcomes.

Work on Advanced Analytics using Big Data, Data Warehousing, Cognitive and Heuristic platforms.
Research, design, implement, and oversee high-end analytical/technology processes and solutions with a focus on leveraging advanced machine learning, artificial intelligence and cognitive methods.
Work with the business to understand the requirements of the digital challenges, heuristic, machine and cognitive analysis, and communicate back the results.
Build analytical solutions and models by manipulating large data sets and integrating diverse data sources.
Perform ad-hoc analysis and develop reproducible analytical approaches to meet business requirements.
Perform exploratory and targeted data analyses using descriptive statistics and other methods.
Apply machine learning and statistical techniques to large data sets to find actionable insights.
Use complex algorithms to develop systems and applications that deliver business functions or architectural components.
Present results and recommendations to senior management and business users.
Responsible for providing line of sight to data quality and gaps where issues need to be addressed.
Communicate the business value of technical solutions.
Discover mutually beneficial solutions across customers while recognizing different styles.

What we're looking for.

You are a master at analyzing big data. You thrive in an environment where enormous volumes of data are generated at rapid speed. You're a creative thinker who likes to explore and uncover the issues. You are decisive. You are great at influencing up, down, and across groups, and you take satisfaction in mentoring others; communicating what you've uncovered in a way that can be easily understood by others is one of your strengths.

You'll need to have:
Bachelor's degree or four or more years of work experience.
Six or more years of relevant work experience.
Experience using statistical computer languages (Python, Scala, PySpark, Java, SQL, etc.) to manipulate data and draw insights from large data sets.

Even better if you have:
A degree in mathematics, statistics, physics, engineering, computer science, economics, or a relevant field.
Experience with Tableau or a similar visual analysis tool, optimization, analytics and large data sets, project management, and developing visually compelling interactive dashboards.
Strong knowledge of database concepts (Oracle, MS SQL, generic SQL, etc.).
Strong knowledge of data warehouse and data lake technology (Teradata, Hadoop).
Strong knowledge of third-party analytic tools.
Working experience with general-purpose programming languages (Java, .Net, Python, Perl, etc.).
Experience with shell scripting tools in Windows, Linux/Unix.
Experience with data aggregating tools such as SPLUNK.
Experience working with and creating data architectures.
Experience creating and using advanced machine learning algorithms and statistics: regression, simulation, scenario analysis, modeling, clustering, decision trees, neural networks, XGBoost, Genetic Algorithms, etc. (see the sketch at the end of this posting).
Strong knowledge of advanced statistical techniques and concepts (regression, properties of distributions, statistical tests and proper usage, etc.) and experience with applications.
Knowledge and experience in statistical and data mining techniques: GLM/Regression, Random Forest, Boosting, Trees, State Space, text mining, social network analysis, etc.
Experience with distributed data/computing tools: Hadoop, Tez, Map/Reduce, Hive, Spark, PySpark, Scala, etc.
Experience building semantic and feature engineering pipelines.
Experience in ad-hoc analysis and developing reproducible analytical approaches to meet business requirements.

When you join Verizon.

You'll have the power to go beyond - doing the work that's transforming how people, businesses and things connect with each other. Not only do we provide the fastest and most reliable network for our customers, but we were first to 5G - a quantum leap in connectivity. Our connected solutions are making communities stronger and enabling energy efficiency. Here, you'll have the ability to make an impact and create positive change. Whether you think in code, words, pictures or numbers, join our team of the best and brightest. We offer great pay, amazing benefits and opportunity to learn and grow in every role. Together we'll go far.

Equal Employment Opportunity

We're proud to be an equal opportunity employer - and celebrate our employees' differences, including race, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, and Veteran status. Different makes us better.
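
As a purely illustrative sketch of the boosting and regression techniques named in the qualifications above, the Python example below fits and evaluates a simple propensity-style classifier on synthetic data. It uses scikit-learn's gradient boosting estimator as a stand-in for the XGBoost-style models mentioned; nothing here is Verizon code or data.

```python
# Minimal sketch: fit and evaluate a boosted-tree propensity model on
# synthetic data. Stands in for the regression/boosting techniques named
# in the posting; the features and labels are synthetic, not real data.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic "customer" data: 5,000 rows, 20 features, binary outcome.
X, y = make_classification(n_samples=5000, n_features=20,
                           n_informative=8, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42)

model = GradientBoostingClassifier(n_estimators=200, learning_rate=0.05,
                                   max_depth=3, random_state=42)
model.fit(X_train, y_train)

# Evaluate discrimination with ROC AUC, a typical propensity-model metric.
scores = model.predict_proba(X_test)[:, 1]
print(f"Test ROC AUC: {roc_auc_score(y_test, scores):.3f}")
```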
          

Full Stack Software Engineer

 Cache   
Company: JPMorgan Chase - Location: Wilmington, United States, Delaware - Salary: negotiable / monthly - Job type: Full-Time - Posted: 1 week ago - Category: General

As a member of our Software Engineering Group we look first and foremost for people who are passionate around solving business problems through innovation & engineering practices. You will be required to apply your depth of knowledge and expertise to all aspects of the software development lifecycle, as well as partner continuously with your many stakeholders on a daily basis to stay focused on common goals. We embrace a culture of experimentation and constantly strive for improvement and learning. You'll work in a collaborative, trusting, thought-provoking environment, one that encourages diversity of thought and creative solutions that are in the best interests of our customers globally.

This role supports a team responsible for delivering a firm-wide, strategic reporting analytics solution for data-driven decision making to help make large and complex data more accessible, understandable, and usable, while developing intuitive and attractive static and interactive data visualizations with analytics and visualization tools. The successful candidate will build automated and interactive solutions that can replace manual reports with dynamic dashboards (see the sketch at the end of this posting) while working closely with data architects to provide requirements to improve data analytics, and potentially leveraging machine learning (ML) via natural language processing (NLP) technologies. Being able to juggle and review numbers, trends, and data to come to new conclusions based on the findings is essential. Primary responsibilities include full stack software engineering, working throughout all stages of the product lifecycle (including support), and communicating with all stakeholders.

This role requires a wide variety of strengths and capabilities, including:
BS/BA degree or equivalent experience
Advanced knowledge of application, data and infrastructure architecture disciplines
Understanding of architecture and design across all systems
Working proficiency in developmental toolsets
Knowledge of industry-wide technology trends and best practices
Ability to work in large, collaborative teams to achieve organizational goals, and passionate about building an innovative culture
Proficiency in one or more modern programming languages such as Python, Java, C#, Node.js
Understanding of software skills such as business analysis, development, maintenance and software improvement
Demonstrated full stack software engineering experience including some front-end exposure with Javascript, REACT, jQuery or D3.js, etc., is essential.
Experience or willingness to support all stages of the SDLC is required.
Knowledge or experience developing products end to end, including gathering requirements, product development and design, creation of automated unit testing using a TDD approach, release and production support, is strongly preferred.
Strong knowledge of managing large amounts of data, preferably with knowledge of different data sources (SQL, flat files, spreadsheets, non-SQL, etc.), is required.
Experience working in a data warehouse environment is helpful.
Preferred qualifications include knowledge or experience with Machine Learning, DW/BI experience, and experience with cloud-based services or Qlik Sense.

Our Corporate Technology team relies on smart, driven people like you to develop applications and provide tech support for all our corporate functions across our network. Your efforts will touch lives all over the financial spectrum and across all our divisions: Global Finance, Corporate Treasury, Risk Management, Human Resources, Compliance, Legal, and within the Corporate Administrative Office. You'll be part of a team specifically built to meet and exceed our evolving technology needs, as well as our technology controls agenda.

When you work at JPMorgan Chase & Co., you're not just working at a global financial institution. You're an integral part of one of the world's biggest tech companies. In 15 technology centers worldwide, our team of 50,000 technologists design, build and deploy everything from enterprise technology initiatives to big data and mobile solutions, as well as innovations in electronic payments, cybersecurity, machine learning, and cloud development. Our $11B annual investment in technology enables us to hire people to create innovative solutions that are transforming the financial services industry.

At JPMorgan Chase & Co. we value the unique skills of every employee, and we're building a technology organization that thrives on diversity. We encourage professional growth and career development, and offer competitive benefits and compensation. If you're looking to build your career as part of a global technology team tackling big challenges that impact the lives of people and companies all around the world, we want to meet you.
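
As a rough, hypothetical illustration of the reporting-dashboard work referenced above, the sketch below pulls rows from a SQL source with pandas and emits JSON that a React/D3-style front end could consume. The database, table, and column names are invented for the example and are not part of any real system.

```python
# Minimal sketch: aggregate report data from a SQL source with pandas and
# emit JSON suitable for a dashboard front end (React/D3, etc.).
# The table and column names are hypothetical examples.
import json
import sqlite3

import pandas as pd

# In-memory stand-in for the real data source so the sketch runs end to end.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE daily_positions (business_unit TEXT, report_date TEXT, amount REAL)")
conn.executemany("INSERT INTO daily_positions VALUES (?, ?, ?)",
                 [("Treasury", "2019-11-01", 120.5),
                  ("Treasury", "2019-11-02", 98.0),
                  ("Risk", "2019-11-01", 42.0)])

df = pd.read_sql_query(
    "SELECT business_unit, report_date, amount FROM daily_positions", conn)

# Roll daily rows up to one figure per business unit, as a manual report would.
summary = (df.groupby("business_unit", as_index=False)["amount"]
             .sum()
             .rename(columns={"amount": "total_amount"}))

# Serialize for an interactive dashboard instead of a static spreadsheet.
print(json.dumps(summary.to_dict(orient="records")))
conn.close()
```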
          

Assistant or Associate Professor

 Cache   
Employer: Utah State University
Location: Logan, Utah
Salary: Commensurate with experience
Posted: Oct 21, 2019
Closes: Nov 20, 2019
Discipline: Interdisciplinary/Other
Career Level: Experienced
Education Level: PhD
Relocation Cost: No Relocation
Sector Type: Academia
Requisition ID: 2019-2005
# of Openings: 3
Category (Portal Searching): Faculty
Position Type: Regular Full-Time
City: Logan
Job Classification: Faculty
College: College of Engineering
Department: Civil & Environmental Engineering
Advertised Salary: Commensurate with experience, plus excellent benefits

Overview
The Department of Civil and Environmental Engineering (CEE) and the Utah Water Research Laboratory (UWRL) at Utah State University (USU) invite applications for a cluster hire of three positions at the Assistant Tenure Track or Associate Professor levels. We seek candidates whose convergent research will generate data-driven, collaborative, sustainable, and holistic solutions to water quantity, quality, and management problems in the face of growing population, aging infrastructure, changing climate, and changing societal values. These are academic-year (9 month) appointments with an anticipated start date of August 2020.

We seek candidates who can significantly contribute to scholarship and teaching in the broad topical areas of water resources engineering, environmental engineering, and/or hydraulic engineering. Successful candidates must articulate a vision for and show how their research and teaching in one or more of the areas below will: 1) yield data-driven, collaborative, sustainable, and holistic solutions; 2) leverage existing strengths throughout USU in hydrology, hydraulics, hydro informatics, water resources management, irrigation, water quality, and environmental engineering; and 3) improve CEE and USU capabilities to solve interdisciplinary water problems now and in the future.

Areas of work may include:
Development of a new generation of hydrologic models that integrate biological, chemical, and physical processes across natural, built, and agricultural landscapes.
Ground and surface water hydrologic research focused on quantity, quality, and conjunctive management. This may include storm water management, wastewater reuse, and fate of environmental toxins and emerging contaminants.
Development of data and information systems that advance data-intensive approaches to solving water problems, including data science and machine learning applications in water resources management, smart infrastructure, and sensing systems.
Quantifying and forecasting water usage in food, energy, transportation, and other systems that consider changes in population, land use, and societal values.
Water-environment sustainability, including current and future impacts to water infrastructure due to changing climate; infrastructure hydraulic monitoring, risk analyses, failure modes, rehabilitation, and design; urbanization; and data science of hydraulic infrastructure.
Societal risk and hazard mitigation of water systems, including natural and man-made risks, hazards, and disasters with a focus on sustainable water-environment development strategies and community livability.
Managing hydropower, irrigation, water rights, or water supply in the Intermountain West.

To promote transdisciplinary and convergent research, we encourage team applications from groups of up to three candidates, which may include dual career partners. Each applicant must apply individually but articulate in their individual 3-page research statement how some or all of their research integrates with the other team members.

Inquiries about this announcement may be made to:
Jeff Horsburgh, PhD
Search Committee Chair
**********************

A review of applications will begin in early November and will continue until all three positions are filled.

Responsibilities
Successful candidates will:
Secure extramural funding adequate to support their research program.
Recruit and mentor graduate students to advance the research program.
Teach undergraduate and graduate engineering courses in the theme area.
Develop, lead, and be a member of convergent research teams both within and outside the CEE Department and UWRL.
Participate in department, university, and professional service and outreach.
Become nationally recognized for their work.

Qualifications
Minimum Qualifications:
PhD in Civil, Environmental, Hydraulic, and/or Irrigation Engineering, Hydrology, Earth Sciences, or a closely related water field.
Established record of research excellence in a water-related field appropriate for the candidate's rank.
Ability to teach relevant undergraduate and graduate engineering courses. For information about USU courses, see: *************************************************************** ***********************************************************

Preferred Qualifications:
Prior teaching experience.
Record of success in obtaining external funding.
A professional engineering license or the ability to obtain one.

To be considered at the Associate Professor level, the following additional qualifications must be met:
An established research agenda with a strong publication record that demonstrates excellence and leadership in interdisciplinary, convergent research.
A strong record of extramural funding.
A record of success in research, teaching, and mentoring junior faculty and graduate students.

Required Documents
Along with the online application, please attach your Curriculum Vitae to be uploaded in the candidate profile under 'Resume'. In addition, the following documents are requested to be submitted in a single, combined PDF document in the candidate profile under 'Other documents':
Cover letter summarizing motivation, qualifications, experience, and career goals.
Names and contact information of at least three professional references (we will only solicit reference letters from final candidates).
A concise statement of up to three pages that conveys your vision of how your proposed research program will: 1) address specific topical areas in the context of water sustainability challenges, 2) complement existing CEE Department and UWRL research, and 3) integrate with other interdisciplinary research/faculty collaborators at USU. For candidates applying as part of a team, the research statement must also identify proposed team members, describe how candidates' expertise is cross-cutting and complementary, and how they would work collaboratively and synergistically to address the identified topical areas.
A two-page teaching statement that explains your approach to teaching and briefly describes existing or new CEE courses you would like to teach or develop.
In addition to the above required documents, candidates must upload two of their most relevant peer-reviewed journal article publications. Each of these should be uploaded as separate PDF files. **Document size may not exceed 10 MB.**
          

Software Engineer

 Cache   
Job Description:
  • Our Aerospace client is in need of a Software Engineer to support the development of NASA flight Software Systems. This candidate will be involved with the design, development, integration, test, and delivery of software systems for advanced space systems, atmospheric flight vehicles, science instruments, and ground support systems. The ideal candidate will be capable of supporting efforts within an integrated development environment at all phases of the project software life cycle.
    Qualifications:
    • --- Development of the flight software and other supporting software systems in the C/C++ programming language.
      --- Interfacing with both actual hardware and simulated hardware modules.
      --- Experience developing software requirements, operational concepts, system interfaces, test plans and procedures.
      --- Experience with software version control systems desired (i.e. GIT, Subversion, etc.)
      --- Utilizes SDKs, custom tools and COTs software in the overall development of software systems
      --- Specialized knowledge in areas critical to machine learning for autonomous systems is desired (i.e. neural networks, genetic algorithms, etc.)
      --- Other areas of knowledge such as human-machine interaction, computer vision and image processing techniques, and robust decision making under uncertainty in an aerospace context is helpful.
      --- C/C++ programming skills.
      --- Python
      --- LabWindows
      --- Java Scripting
      --- Code development in both Linux (i.e. Redhat7) and Windows operating systems
      --- Code development for real-time operating systems helpful (VxWorks, FreeRTOS, etc.)
      --- Familiar with embedded system/single board computing (i.e. BeagleBone, Raspberry Pi, etc.)
      --- BS degree or higher in Computer Science/Engineering or equivalent.
          

Machine Learning Engineer

 Cache   
Join Hired and find your dream job as a Machine Learning Engineer at one of 10,000+ companies looking for candidates just like you. Companies on Hired apply to you, not the other way around. You'll receive salary and compensation details upfront - before the interview - and be able to choose from a variety of industries you're interested in, to find a job you'll love in less than 2 weeks.

We're looking for a talented AI expert to join our team.

Responsibilities
Engaging in data modeling and evaluation (see the sketch at the end of this posting)
Developing new software and systems
Designing trials and tests to measure the success of software and systems
Working with teams and alone to design and implement AI models

Skills
An aptitude for statistics and calculating probability
Familiarity with machine learning frameworks, such as scikit-learn, PyTorch, and Keras/TensorFlow
An eagerness to learn
Determination - even when experiments fail, the ability to try again is key
A desire to design AI technology that better serves humanity

These Would Also Be Nice
Good communication - even with those who do not understand AI
Creative and critical thinking skills
A willingness to continuously take on new projects
Understanding the needs of the company
Being results-driven

Requirements:

Hired
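
To ground the "data modeling and evaluation" responsibility referenced above, here is a minimal scikit-learn sketch of a cross-validated evaluation loop on a bundled dataset. It is illustrative only and is not tied to any particular employer's stack.

```python
# Minimal sketch: evaluate a model with cross-validation plus a held-out test
# split, the kind of data modeling and evaluation loop described above.
# Uses a bundled scikit-learn dataset purely for illustration.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score, train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)

# Scale features, then fit a simple baseline classifier.
pipeline = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))

# 5-fold cross-validation on the training split estimates generalization.
cv_scores = cross_val_score(pipeline, X_train, y_train, cv=5, scoring="accuracy")
print(f"CV accuracy: {cv_scores.mean():.3f} +/- {cv_scores.std():.3f}")

# Final check on the untouched test split.
pipeline.fit(X_train, y_train)
print(f"Held-out accuracy: {pipeline.score(X_test, y_test):.3f}")
```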
          

Software Development Engineer in Test (SDET) Graph Database Engineer and Network Analysis

 Cache   
Job ID

2019-11715

OVERVIEW

Are you a computer scientist or data scientist who has a passion or desire for building great high-quality commercial software? Esri is looking for individuals to join our team with a dedication to quality and software engineering to help advance Esri's cutting-edge ArcGIS software.

Your work will involve discovering innovative ways to improve the products we deliver to our customers worldwide, imagining ways to stress our code, implementing new tests, and even developing new test frameworks.

This challenging opportunity allows you to leverage your skills to design and build new and innovative software product capabilities. As a member of the ArcGIS Desktop, ArcGIS Pro, and ArcGIS Enterprise teams, you will work with a diverse group of engineers and developers to implement creative solutions to complex problems for managing and sharing information. You'll also have the opportunity to learn best practices from individuals that have decades of combined experience building ArcGIS, a premier GIS platform.



Responsibilities:

* Design, develop, implement, and maintain test automation frameworks to be used by development and test engineers using C# as a primary programming language

* Work with a team of dedicated software engineers and product engineers to design and author test cases for unit, functional, performance, scalability, and durability testing based on user requirements

* Collaborate with software engineers, product engineers, and other stakeholders to build and test ArcGIS Pro functionality related to content management and sharing capabilities

* Assist in determining product quality and release readiness

REQUIREMENTS

* 1+ years of software testing experience

* 1+ years of experience using an application development language, such as C++, C#, or Java

* A self-motivated team player with an interest in continuous learning

* Bachelor's or master's in engineering, computer science, data science, machine learning, artificial intelligence, or a related field, depending on position level

Recommended Qualifications:

* Familiarity with Esri ArcGIS technologies

* Experience with graph and relational databases

* Experience with network or link analysis workflows

* 1+ years of experience using web technologies such as JSON, REST, or Java Script

* 1+ years of experience with software testing tools such as CodedUI, TestNG, Selenium, Cucumber, or related tools

#LI-RF1



THE COMPANY

Our passion for improving quality of life through geography is at the heart of everything we do. Esri's geographic information system (GIS) technology inspires and enables governments, universities, and businesses worldwide to save money, lives, and our environment through a deeper understanding of the changing world around them.



Carefully managed growth and zero debt give Esri stability that is uncommon in today's volatile business world. Privately held, we offer exceptional benefits, competitive salaries, 401(k) and profit-sharing programs, opportunities for personal and professional growth, and much more.



Esri is an equal opportunity employer (EOE) and all qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability status, protected veteran status, or any other characteristic protected by law.



If you need a reasonable accommodation for any part of the employment process, please email humanresources@esri.com and let us know the nature of your request and your contact information. Please note that only those inquiries concerning a request for reasonable accommodation will be responded to from this e-mail address.
          

Agency Owner - South Puget Sound Area, Washington

 Cache   
What we do at American Family Insurance makes a difference in people's lives. And the way we're doing it is changing the way people think about insurance. Help us make a difference, and find a rewarding career along the way. Consider becoming an agency owner or a member of an agency team.

Quick Stats:

Job ID: R13792 Agency Owner - South Puget Sound Area, Washington (Open)

Job Summary:

Business owner. Community leader. Protector of dreams. That's what makes an American Family Insurance agency owner. It's a highly rewarding opportunity that allows you to create financial stability while making a positive impact on our customers' lives. So if you're looking for an opportunity to build a business and own your future - we're interested in you!

Responsibilities:

Additional Job Information:

Job Description:

At American Family, we're seeking highly motivated individuals with a demonstrated track record of success and eagerness to accomplish something that takes time, energy and commitment. Do you possess a strong work ethic and have an inner drive that makes you hungry for success? Have you failed but learned from those mistakes in order to overcome the challenges you've faced?

Our agency owners operate as independent contractors, representing American Family and its products exclusively. As an agency owner, you'll be responsible for your agency's overall management, sales and growth. You'll also hire your own staff and work with your team to meet the strategic business goals you set.

And when you choose to become an American Family agency owner, you'll be partnering with an industry leader that's driven by our customers and committed to your success. Here are just a few more reasons why you should become an American Family agency owner:
  • Financially Fit: With nearly $8 billion in policyholder equity, American Family has the financial security to protect the dreams of your policyholders.
  • Market Smart: American Family agency owners sell the American Family brand of products along with Brokerage and Alliance products. The enterprise operates other companies including The General, Homesite, HomeGauge, Moonrise and Networked Insights.
  • Invested in Innovation: American Family stays in touch with and ahead of the most innovative technology and trends like Artificial Intelligence, Machine Learning and Robotic Process Automation.
    You'll be in control of your future and have the opportunity to create financial stability within your business. You'll also be rewarded for your hard work through various programs that award our most successful agency owners with travel and networking opportunities.

    The journey to becoming an agency owner begins with the introduction of our company, culture and the greater details of this opportunity and by getting to know you and your aspirations through several meetings, interviews and other interactions. During this time, we'll also complete a background check, plus you'll want to get your Property, Casualty, Life and Health insurance licenses.

    Bottom line, as an agency owner, you'll be a trusted, caring advisor, working hard to inspire, protect and restore the dreams of the people around you. If you're looking to be part of something bigger, we're looking for you!

    Stay Connected: Join our Talent Community!

          

Vice Chair for the Department of Biomedical Informatics

 Cache   
Vice Chair of Biomedical Informatics

Opportunity:
The University of Arkansas for Medical Sciences (UAMS) is recruiting a Vice-Chair for Clinical and Translational Research in the Department of Biomedical Informatics. This person would also serve as Director of the CTSA Comprehensive Informatics Resource Center (CIRC). The position will hold an academic appointment at the rank of Associate Professor or Professor on the tenure-eligible track and requires a PhD in biomedical informatics, medical informatics, data science, or other analytical science fields, or an MD with board certification in clinical informatics. All candidates should have at least four years of research experience in clinical research informatics, and significant experience with clinical trial operations. The duties include directing a core that emphasizes clinical trial operations, as well as doing scientific and scholarly research, publishing, and teaching in the biomedical informatics graduate and clinical informatics fellowship training programs. All inquiries, nominations and applications for the position of the vice-chair for Clinical and Translational Research are welcome.

About the Department of Biomedical Informatics (DBMI):
"Big Data" has come to hospitals, and for most hospitals the problem is how to deal with this explosion of information. Founded in 2015, the Department of Biomedical Informatics (DBMI), in the College of Medicine at the University of Arkansas for Medical Sciences, develops computational tools to assess and manage medical and public health information for patient care and research programs. The DBMI focuses on advanced medical information technologies, and provides highly competitive training and research, with faculty experts in several areas of medical informatics. This includes analysis of the UAMS Electronic Health Records (EHRs), numerous state data resources, external collaborations, and many collaborations with the Center for Translational Science Award (CTSA) activities. DBMI also has experts focusing on advancing clinical trial operations and clinical effectiveness research. Within DBMI are faculty members building on cancer informatics with extensions into neuroimaging informatics in collaboration with the Departments of Radiology, the Winthrop P. Rockefeller Cancer Institute, the Donald W. Reynolds Institute on Aging, the Brain Research Imaging Center and the Psychiatric Research Institute. DBMI faculty include several bioinformaticians, working closely with the Cancer Institute and also in developing key tools and technologies for 'omics' research across UAMS, including third-generation sequencing of DNA and RNA as well as proteomics, metabolomics, and microbiome research. The department works to cross-link these research activities with fundamental research in a variety of standards in clinical and imaging informatics and genomics, in ontology development and in the use of machine learning methods and advanced analytics in high performance computing. A high-performance computer is managed by DBMI in support of the UAMS research community and allows for routine analysis of terabytes of data, with a large 4.2 petabyte storage.

About the UAMS Translational Research Institute (TRI):
The UAMS Translational Research Institute (TRI) provides services and resources to ensure the swift translation of research into health care advances. This support is available to all UAMS researchers at the UAMS campus in Little Rock, the UAMS Northwest Regional Campus in Fayetteville, the Arkansas Children's Hospital in Little Rock (including the Arkansas Children's Research Institute and the Arkansas Children's Nutrition Center (ACNC)), and the Central Arkansas Veteran's Healthcare System in Little Rock and North Little Rock. TRI is supported by a Clinical and Translational Science Award (CTSA) from the National Institutes of Health's National Center for Advancing Translational Sciences (NCATS). Translational research is often classified by which stage of translation (from beginning research to societal application and impact) it falls into. The T Spectrum (Translational Spectrum) illustrates the different stages of translational research, ranging from Basic Science (T0) to Translation to Humans (T1), Patients (T2), Practice (T3) and finally to Translation to the Community (T4). TRI is dedicated to advancing the use of cutting-edge informatics, providing researchers with the tools, expertise and procedures for clinical and translational research. This is done through the CIRC (Comprehensive Informatics Resource Center), a partnership with the Department of Biomedical Informatics in the UAMS College of Medicine. TRI, as a member of the CTSA network, is part of the national effort to study and innovate the process of conducting clinical trials using informatics approaches, for both single and multi-site trials. UAMS is a member of the Southeast SHRINE network and the ACTS consortium.

About the University:
The University of Arkansas for Medical Sciences (UAMS) is the state's only academic health sciences center, comprised of five health professions colleges (Medicine, Nursing, Pharmacy, Health Professions, and Public Health), a graduate school, six institutes, eight Regional Centers (six of which include family medicine practices and residency programs), and a comprehensive Medical Center. Its College of Medicine has held a unique and vital role in Arkansas for more than 130 years. UAMS is the largest public employer in the state of Arkansas with more than 11,000 employees. UAMS and its clinical affiliates, Arkansas Children's and the VA Medical Center, are an economic engine for the state with an annual economic impact of $3.92 billion. Centrally located within the state, UAMS's Little Rock campus is a tertiary referral center and the only Level 1 adult Trauma Center and Comprehensive Stroke Center for Arkansas.

Role & Responsibilities:
Direct a core that emphasizes clinical trial operations and innovative informatics approaches to address rural health and healthcare disparities.
Foster and leverage relationships across the CTSA informatics community.
Demonstrate the leadership, management ability, and administrative experience to take the CIRC to the next level of achievement.
Be supportive of the educational needs and contributions of students, residents, faculty, and staff in the Department.
Develop a research portfolio that stresses collaboration and maximizes opportunity for synergistic productivity across the University's campuses.
Play a lead role in the Clinical Research Informatics component of the DBMI graduate education program, including helping students and staff successfully publish papers, write grants and present at professional conferences.
Play a leading role in the emerging clinical informatics fellowship training program.
          

Client Partner

 Cache   
We are Infrrd - the Enterprise AI company that uses AI and Machine Learning technologies to help our customers automate human tasks. We are looking for a client partner to help us build the strategy and grow existing business with our customers.


Like any job, it has its pros and cons. Let's talk about the Cons first:


1. We are growing fast, so you will need to keep up with the pace and hit the ground running.
2. We are a young company that gives the large AI companies a run for their money when it comes to solving enterprise automation problems. What this means for you is that you need to be prepared to deal with some of the best competitors on the planet and win. It's not necessarily a con but it can get pretty intense. Let's just say this is not a job for the faint hearted.


With the cons out of the way, lets talk about the good stuff.

1. We are a team of about 275 people, so while we are young - we are not a start-up.
2. The work is challenging, and you will work with the VP of Customer Advocacy team.
3. There are three levels of people in our team - those who do the work, those who can fix things when they are not working and the ones that can own outcomes. This job belongs in the third category - you will have complete freedom in how you build and grow your team, as long as you own the quarterly targets. You will get to make your own decisions and live to see their outcomes.
4. Our team is spread in 3 parts of the globe - US, Europe and India. You need to be comfortable working with remote colleagues.

If you have read this far, then you may want to know what background we would like our client partner to have. We have made a list, here it is:
  • 10+ years of overall experience with 5+ years of experience in a similar role.
  • Understanding of advanced technologies such as Machine Learning and AI and their role in Enterprises
  • Excellent oral and written communication skills.
  • Good understanding of Global Delivery model in Enterprise software delivery
  • Strong leadership skills and ability to motivate peers through a collaborative effort in a highly dynamic environment
  • Track record to demonstrate ability to develop innovative strategies and effectively execute them
  • Strong written and verbal communication skills
    As a Client Partner at Infrrd, your job responsibilities will include:
    -- Working with our customers as a true client partner, understand the business challenges and come up with innovative solutions to help nurture the business-- Own up revenue growth targets and strategize a plan for execution-- Generate Opportunities with existing customers and execute a plan to take the same to closure.-- Build strategic relationships with key client personnel to help build a mutually beneficial business model.


    We are all about building next generation enterprise applications through the use of AI and ML. While we will help you get familiar with our platform and technology, it would be nice to have someone who already understands the basics of Machine Learning and AI.


    In short, your job is to help us grow our existing business. We aspire to be a big brand and we have put in 9 years of grueling work to get to where we have reached. We are looking for someone to join us in our journey to create a company that their future generations can be proud of. As a client partner, we would like you to pay attention to detail in all of our work. In fact, to validate that you are not just mass mailing your resume to every job you see, we would like you to send an email with the subject 'I am your client partner' along with your profile to anoop at infrrd dot ai. This is our little validation trick to separate the people who apply to every job they see from people like you.


    Looking forward to meeting you.
          

Senior Data Architect

 Cache   
Job description: Need for a Data Architect with strong data architecture experience to assist in creating the data strategy for this potential loan platform migration. The task is to carry out an assessment of the current state, target state, and approach for data strategy and migration.

Should possess at least 12 years of experience in the enterprise data space.
Must have great experience and knowledge of data architectures.
Should be able to handle and analyze large data.
Preferred to have Hadoop skill/experience.
Strong knowledge/experience in programming languages and the latest technologies such as C#.NET, Elastic, all types of Javascript frameworks, HTML5, CSS, RESTful services, Spark, Python, Linux, Hive, Kafka, Redis, Cloudera, etc. (see the sketch below).
Requires knowledge and experience with the latest data technologies and frameworks such as Hadoop, MapReduce, Pig, Hive, HBase, Oozie, Flume, ZooKeeper, MongoDB, NoSQL and Cassandra.
Should possess knowledge of cloud computing and preferably possess experience in working with various cloud environments.
Strong decision-making skills in terms of data analysis and must have the ability to architect large data.
Machine learning is a desired skill for this position. Knowledge of pattern recognition, text mining, and clustering can be an added advantage.
Agile and Scrum methodologies are a must to know for this job.
Experience with data warehousing and data mining is a must.
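
As referenced in the skills list above, here is a minimal sketch of the Kafka-plus-Spark streaming pattern those technologies are typically combined for. It assumes pyspark with the spark-sql-kafka connector on the classpath; the broker address, topic, and field names are hypothetical.

```python
# Minimal sketch: consume a Kafka topic with Spark Structured Streaming and
# aggregate events per key. Assumes pyspark plus the spark-sql-kafka package;
# the broker address and topic name are hypothetical examples.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("kafka-streaming-sketch").getOrCreate()

events = (spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "broker:9092")
          .option("subscribe", "loan-events")
          .load())

# Kafka delivers bytes; cast key/value to strings before aggregating.
counts = (events
          .selectExpr("CAST(key AS STRING) AS loan_id",
                      "CAST(value AS STRING) AS payload")
          .groupBy("loan_id")
          .count())

# Write running counts to the console; a real job would target Hive, HBase, etc.
query = (counts.writeStream
         .outputMode("complete")
         .format("console")
         .start())
query.awaitTermination()
```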
          

Chat with Scott about Software Development Engineer II - Relocation Available - 4545848-0

 Cache   
I'm Scott and I'm a sourcing recruiter with AWS! Interested or have questions? Start a chat with me today! All chats are text-based and I'm based on the East Coast (9-5pm ET). I may not respond right away but you can expect a response from me within 24 hours of receiving your message (except weekends).

JOB ID: 874966

Amazon Web Services (AWS) is the world leader in providing a highly reliable, scalable, low-cost infrastructure platform in the cloud that powers hundreds of thousands of businesses in 190 countries around the world!

The Team: We're a small, independent team inside AWS working on green-field services to improve operational tooling and automation across the most popular AWS services.

We need Developers who move fast, are capable of breaking down and solving complex problems, and have a strong will to get things done. Developers at Amazon work on real-world problems on a global scale, own their systems end to end, and influence the direction of our technology that impacts hundreds of millions of customers around the world.

Join a team of super smart, customer obsessed Developers that like to have fun in a start-up like environment.

BASIC QUALIFICATIONS

3+ years of non-internship professional software development experience
Programming experience with at least one modern language such as Java, C++, or C# including object-oriented design
1+ years of experience contributing to the architecture and design (architecture, design patterns, reliability and scaling) of new and current systems.

PREFERRED QUALIFICATIONS

Experience building new products and services from the ground up.
Experience developing systems that query large datasets
Some Machine Learning experience
Intermediate to advanced knowledge of computer networking and information security.
Strong communication skills; you will be required to proactively engage fellow Amazonians both inside and outside of your team.
Experience with distributed (multi-tiered) systems, algorithms, and relational databases.
Ability to effectively articulate technical challenges and solutions.
Deal well with ambiguous/undefined problems; ability to think abstractly.
Ability to synthesize requirements underlying feature requests, recommend alternative technical and business approaches, and facilitate engineering efforts to meet aggressive timelines.
Expertise in software processes, web services, multi-tiered systems, and enterprise application integration.
Meets/exceeds Amazon's leadership principles requirements for this role
Meets/exceeds Amazon's functional/technical depth and complexity for this role

*Please email AWS Sourcing Recruiter, Scott Korkowski (...@amazon.com) if you have questions.

Amazon is an Equal Opportunity - Affirmative Action Employer - Minority / Female / Disability / Veteran / Gender Identity / Sexual Orientation / Age.

This role will sit in our new headquarters in Northern Virginia, where Amazon will invest $2.5 billion dollars, occupy 4 million square feet of energy efficient office space, and create at least 25,000 new full-time jobs. Our employees and the neighboring community will also benefit from the associated investments from the Commonwealth including infrastructure updates, public transportation improvements, and new access to Reagan National Airport.

By working together on behalf of our customers, we are building the future one innovative product, service, and idea at a time. Are you ready to embrace the challenge? Come build the future with us.
          

Machine Learning Engineer

 Cache   
Join Hired and find your dream job as a Machine Learning Engineer at one of 10,000+ companies looking for candidates just like you. Companies on Hired apply to you, not the other way around. You'll receive salary and compensation details upfront, before the interview, and be able to choose from a variety of industries you're interested in, to find a job you'll love in less than 2 weeks. We're looking for a talented AI expert to join our team.

Responsibilities
- Engaging in data modeling and evaluation
- Developing new software and systems
- Designing trials and tests to measure the success of software and systems
- Working with teams and alone to design and implement AI models

Skills
- An aptitude for statistics and calculating probability
- Familiarity with Machine Learning frameworks such as scikit-learn, PyTorch, and Keras/TensorFlow
- An eagerness to learn
- Determination - even when experiments fail, the ability to try again is key
- A desire to design AI technology that better serves humanity

These Would Also Be Nice
- Good communication - even with those who do not understand AI
- Creative and critical thinking skills
- A willingness to continuously take on new projects
- Understanding the needs of the company
- Being results-driven
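As a rough illustration of the "data modeling and evaluation" responsibility using scikit-learn, one of the frameworks named above, here is a minimal sketch; the bundled toy dataset, model choice, and metrics are ours, not the employer's.

```python
# Minimal sketch of "data modeling and evaluation": fit a classifier, then
# measure it on held-out data and with cross-validation.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.metrics import classification_report

X, y = load_breast_cancer(return_X_y=True)   # toy dataset, for illustration only
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y
)

model = RandomForestClassifier(n_estimators=200, random_state=42)
model.fit(X_train, y_train)

# Held-out evaluation ("designing trials and tests to measure success").
print(classification_report(y_test, model.predict(X_test)))

# Cross-validation gives a less optimistic estimate than a single split.
scores = cross_val_score(model, X_train, y_train, cv=5, scoring="f1")
print(f"5-fold F1: {scores.mean():.3f} +/- {scores.std():.3f}")
```

The same evaluate-before-and-after-a-split pattern carries over to PyTorch or Keras models; only the fitting code changes.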

Requirements:

Hired
          

Data Engineer for ML/AI Job posting in #Cedar Rapids #ITjobs

 Cache   
Date Posted: 2019-08-20-07:00 Country: United States of America Location: HIA32: Cedar Rapids, IA, 400 Collins Rd NE, Cedar Rapids, IA, 52498-0505 USA

At Collins Aerospace, we're dedicated to relentlessly tackling the toughest challenges in our industry, all to redefine aerospace. Created in 2018 through the combination of two leading companies, Rockwell Collins and United Technologies Aerospace Systems, we're driving the industry forward through technologically advanced and intelligent solutions for global aerospace and defense. Every day we imagine ways to make the skies and the spaces we touch smarter, safer and more amazing than ever. Together we chart new journeys, reunite families, protect nations and save lives. And we do it all with some of the greatest talent this industry has to offer. We are Collins Aerospace and we hope you join us as we REDEFINE AEROSPACE.

Do you want to be part of a new, exciting initiative to combine foundational IT with new digital technologies? Our Digital Technology team is driving business efficiencies and a better customer experience by connecting technologies, people, information and processes. From making aircraft more electric, intelligent and integrated to building new software platforms such as Internet of Things, big data, artificial intelligence, and blockchain, there's no better place to be right now than in digital. If you're an agile thinker who enjoys utilizing modern technology to make big improvements, then you're a perfect fit for this team. Join Collins Aerospace to help us revolutionize the aerospace industry today!

Primary Responsibilities:
- Stakeholders include a team of data analysts and data scientists. Enable the team with data acquisition, performance tuning and data processing.
- Administers the ML/AI data platform. Designs and develops data services for AI. Provides data to the team and enterprise toward the end goal of adopting AI.
- Builds data pipelines focusing on data ingestion, integration, modeling, optimization, and quality for AI processing. Architects and launches new data models (marts, lakes, stores, hubs). Optimizes costs, storage, processing and access to integrated data.
- Explores data for pattern detection prior to algorithm development. Prepares and pre-processes data utilized in algorithms. Ensures data is formatted and cleansed for AI processing.
- Drives use and adoption of new data sources and data partnerships.
- Deploys production algorithms created by the AI Engineering team. Governs and monitors production algorithms.
- Maintains and utilizes the data catalog and algorithm catalog for the AI space.
- Develops and deploys APIs for data movement; manages the flow of information and system integration between applications.
- Monitors production processes using enterprise scheduling tools and troubleshoots incidents surrounding supported solutions, including after-hours escalations of major incidents.

Qualifications:

Basic Qualifications:
- 3+ years of experience working in data, analytics or machine learning
- 3+ years of experience with the delivery of big data solutions and applications
- 3+ years of experience in designing, developing, building and providing ongoing support of data integration services
- Experience with performance tuning of complex solutions (both ETL queries and database structures)
- Experience with multiple database platforms, cloud and on-premise hosting models, and modern programming languages

Education:
- This position requires a Bachelor's degree in the appropriate discipline and 8 years of relevant experience, or an advanced degree in the appropriate discipline and 5 years of relevant experience. In the absence of a degree, 12 years of relevant experience is required.

At Collins, the paths we pave together lead to limitless possibility. And the bonds we form with our customers and with each other propel us all higher, again and again.

Some of our competitive benefits package includes:
- Medical, dental, and vision insurance
- Three weeks of vacation for newly hired employees
- Generous 401(k) plan that includes employer matching funds and separate employer retirement contribution
- Tuition reimbursement
- Life insurance and disability coverage
- And more

Apply now and be part of the team that's redefining aerospace, every day.

United Technologies Corporation is an Equal Opportunity/Affirmative Action Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability or veteran status, age or any other federally protected class. Privacy Policy and Terms: Click on this link to read the Policy and Terms
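As a loose illustration of the "ingest, cleanse, and format data for AI processing" duties described above, here is a minimal pandas sketch; the file name, columns, and clipping bounds are hypothetical and not taken from the posting.

```python
# Minimal sketch of a cleanse/feature-prep step before AI processing.
# File name and column names are hypothetical.
import pandas as pd

raw = pd.read_csv("sensor_readings.csv", parse_dates=["timestamp"])

clean = (
    raw
    .drop_duplicates(subset=["unit_id", "timestamp"])   # remove duplicate ingests
    .dropna(subset=["unit_id"])                          # records must identify a unit
    .assign(
        temperature_c=lambda d: d["temperature_c"].clip(lower=-60, upper=200),
        hour=lambda d: d["timestamp"].dt.hour,           # simple engineered feature
    )
)

# Fill remaining numeric gaps with per-unit medians so models see no NaNs.
numeric_cols = clean.select_dtypes("number").columns
clean[numeric_cols] = clean.groupby("unit_id")[numeric_cols].transform(
    lambda s: s.fillna(s.median())
)

# Hand off in a columnar format suited to downstream training jobs
# (to_parquet needs a parquet engine such as pyarrow installed).
clean.to_parquet("sensor_readings_clean.parquet", index=False)
```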
          

MTS 1, Software Engineer

 Cache   
The eBay Marketing Tech CRM team is looking for a strong server side engineer with a passion for providing innovative and scalable solutions for business applications. He/she will focus on building infrastructure and middle tier services and support mobile app teams leveraging platform services. Write server-side code (services) for mobile & web-based applications, create robust high-volume production applications, and develop prototypes quickly. One should also have a strong understanding of, and practical experience with, Java web application development:

  • Build our platforms and systems infrastructure using your strong background in distributed systems, network system design, and large scale database systems.
  • Research, analyze, design, develop and test the solutions that are appropriate for the business and technology strategies
  • Participate in design discussions, code reviews and project related team meetings.
  • Work with other engineers, Architecture, Product Management, and Operations teams to develop innovative solutions that meet business needs with respect to functionality, performance, scalability, reliability, realistic implementation schedules and adherence to development principles and quality goals.
  • Develop technical & domain expertise and apply to solving product challenges.

    Requirements

    • 7+ years of hands-on product development experience in Java, SOA services, XML and Web technologies after BSCS or MSCS or other relevant engineering discipline.
    • Experience in Database driven application development (Oracle, NoSQL Mongo, Cassandra, Couchbase), SQL and schema design.
    • Experience in web front end UI development such as JSPs in J2EE environments.
    • Experience in building a live e-commerce product that has scaled to large number of users is a plus.
    • Knowledge of Windows and UNIX development environment and associated tools like source code management, bug tracking etc.
    • A solid foundation in computer science, with strong competencies in data structures, algorithms and software design.
    • Extensive programming experience in Java.
    • Experience in other languages such as Scala, Node.js etc. is a plus.
    • Quality champion with a commitment to writing code and tests to maintain high quality.
    • Strong software design, problem solving and troubleshooting skills.
    • Experience in Machine Learning, Information Retrieval, Recommendation Systems, as well as BigData (Hadoop / Spark / Hive) is a plus
    • Experience using machine learning software and libraries (R/Python) is a plus.


      eBay Inc. is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, national origin, sex, sexual orientation, gender identity, veteran status, and disability, or other legally protected status. If you are unable to submit an application because of incompatible assistive technology or a disability, please contact us at talent@ebay.com. We will make every effort to respond to your request for disability assistance as soon as possible.

      For more information see:

      EEO is the Law Poster

      EEO is the Law Poster Supplement
          

Data Architect

 Cache   
Data Architect - Hackensack Meridian Health - Hackensack, NJ 07601

Description: The Data Architect will design and implement a Department of Quality data store in collaboration with the analytics teams within Quality Measurement to establish a robust reporting infrastructure that helps drive clinical quality and performance improvement. The Data Architect will create ETL processes with built-in data quality monitoring that combine both internal and external data into a single, cohesive environment. S/he will model data from clinical, operational, and administrative data systems to support reporting on patient experience, CMS quality and performance measures, hospital-acquired conditions, and other domains. In addition, s/he will participate in data governance activities and help analytic teams build more efficient and robust reporting processes.

Responsibilities:
1. Build and maintain a Department of Quality data store for the reporting and analytic needs of clinical, operational, and administrative analytic and data science teams.
2. Design logical and physical models for clinical and quality data across the network that are clear and concise for consumption by data analysts.
3. Develop and manage ETLs that integrate multiple sources into the departmental data warehouse using SQL Server Data Tools.
4. Translate metric definitions into technical specifications in building data marts that fulfill clinical reporting needs.
5. Develop and implement data quality monitoring procedures, create and provide data quality reports including data profiling, and provide support to data governance activities.
6. Interpret statistical analyses and machine learning outputs to determine where data quality issues exist; design and implement processes to limit such issues.
7. Develop processes to improve reporting efficiency to assist other analytic teams.
8. Work with other team members to create and maintain a comprehensive data dictionary.
9. Provide feedback to other team members by participating in peer review and educate the organization on data warehousing and data engineering.
10. Communicate progress (both written and verbally) to diverse stakeholders and present to senior leadership.

Qualifications:
1. Advanced degree (M.S. or Ph.D.) in a quantitative field (statistics, engineering, physics, epidemiology or STEM) and 3 years of experience in ETL, data warehousing, and data modeling required; or an equivalent combination of education and/or experience.
2. Strong experience with structured data and relational databases, and familiarity with unstructured data.
3. Five years of experience with SQL and three years of experience with ETL tools.
4. Ability to investigate, organize, and merge data from different sources into a single data structure.
5. Experience with Microsoft SSMS, SSIS, SSRS, SSDS highly preferred.
6. Experience in an acute health system preferred.
7. Experience with visualization/BI tools and automated reporting tools preferred (Crystal Reports, Tableau, Power BI, etc.).
8. Familiarity with programming and/or scripting languages (Python, C, C++, Java, etc.) preferred.
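As a loose illustration of the data-profiling and data-quality reporting duties described above, here is a minimal pandas sketch; the extract name, columns, and thresholds are hypothetical and not taken from the posting.

```python
# Minimal sketch of a data-quality profiling pass over a reporting extract.
# The input file, its columns (e.g., encounter_id), and the 5% rule are hypothetical.
import pandas as pd

extract = pd.read_csv("patient_experience_extract.csv")

# Per-column profile: type, percent missing, and distinct-value count.
profile = pd.DataFrame({
    "dtype": extract.dtypes.astype(str),
    "null_pct": extract.isna().mean().round(4) * 100,
    "distinct": extract.nunique(),
})

# Simple rule-based checks that would feed a data-quality report.
issues = []
high_null = profile.index[profile["null_pct"] > 5]
if len(high_null):
    issues.append("columns exceed 5% missing: " + ", ".join(high_null))
if extract.duplicated(subset=["encounter_id"]).any():
    issues.append("duplicate encounter_id values found")

print(profile)
print("\n".join(issues) if issues else "no data-quality issues flagged")
```

In a production ETL this kind of check would typically run inside the load job (for example as a step in an SSIS package or scheduled script) rather than ad hoc.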
          

Health Data Lead

 Cache   
Overview: Are you ready to join an organization where you can make an extraordinary impact every day? Imagine all Americans enjoying ideal cardiovascular health free of heart disease and stroke. At the American Heart Association and American Stroke Association, we get to work toward that goal every day. Is it easy? No. Is it worthwhile? Absolutely. This is satisfying and challenging work that makes a real difference in people's lives. We are where you can achieve professional growth with personal fulfillment. We are where you can connect people to making a lifesaving impact. We are where you can partner with individuals, schools, lawmakers, healthcare providers and others to ensure everyone has access to healthier lifestyle choices and proper healthcare. The American Heart Association is where you can make an extraordinary impact.

Responsibilities: The Lead, Health Data Science Assets is a new, exciting role that offers a unique opportunity to lead data strategy across the organization! This role will work with our Emerging Strategies and Ventures team and serve as a critical liaison between Emerging Health and Business Strategies, the AHA's Mission Aligned Business and Health Solutions teams, and other business segments. This role is vital to the American Heart Association's efforts to leverage healthcare and science data as a core asset and growth driver, crafting an outstanding organization and capability that will support quantifiable outcomes, enable identification of new product opportunities, and deliver unrivaled data partnerships that fuel creativity. We are looking for someone who is highly motivated, who is an expert data innovator, and who can share tangible results from their strategies and leadership actions. We also need someone who has current experience in large growth organizations where data is a core capability for creating outcomes.

Essential Job Duties:
- Develop and implement solutions built on a scalable and flexible architecture that will allow AHA to handle and use health data as an enterprise business asset.
- Define and implement standard operating practices for health data collection, ingestion, storage, transformation, distribution, integration, and consumption within AHA's Health Solutions portfolio.
- Lead all aspects of data access and distribution.
- Lead the design and delivery of data, business intelligence, AI and automation solutions advisory engagements involving strategy, roadmap and longer-term operating models.
- Support the delivery of a broad range of data assets and analytics. Identify and demonstrate approaches, appropriate tools and methodologies.
- Manage health data quality and security. Define data standards, policies and procedures ensuring effective and efficient data management across the company.
- Provide expertise and leadership in the disciplines of data governance, data quality and master data integration and architecture. Establish a data governance framework. Maintain and share data definitions, data integrity, security and classifications.
- Direct the continued design, build, and operation of our big data platforms and solutions.
- Help identify and understand data from internal and external sources for competitive, scenario and performance analyses, and financial modeling to gain insight into new and existing processes and business opportunities.
- Work with business teams on commercial and non-commercial opportunities. Advise on fair-market-value data value propositions.
- Actively contribute to proposal development of transformation engagements focused on data/analytics, AI and automation.
- Demonstrate thought leadership to advise teams on data/analytics, AI and automation strategy and detailed use-case development by industry. Possess a deep understanding of trends and strategies for identifying solutions to meet objectives. Monitor technology trends and raise awareness of capabilities and innovations in selected domains of expertise.
- Empower the Data Architecture team to create optimized data pipelines, data storage and data transformation.
- Support the practice with depth of experience and expertise in the following domains: automation, machine learning, deep learning, advanced analytics, data science, data aggregation & visualization.

Qualifications:
- Bachelor's degree from a globally recognized institution of higher learning is required, with an advanced degree (MS, PhD, or equivalent) strongly preferred.
- 10 years of experience in a company known for data innovation and excellence, with responsibility for a comparably-sized analytics business.
- 10 years of experience in health data, real-world evidence analytics and/or health informatics, with the capability to design data strategies and source key health data, gain acceptability for methods, and build analyses for benefit-risk justifications, development and other regulatory needs.
- Experience implementing and using cutting-edge analytic tools and capabilities, including B2B, B2C and cross-channel integration tools.
- A passion for and experience with big-data-driven decision-making processes across business functions.
- Demonstrated ability to work with a technical team of product and data engineers, as well as data scientists/PhDs.
- Consistent track record of successfully delivering top- and bottom-line results individually and as part of a high-performance team.
- Outstanding oral/written communication and presentation skills, especially with respect to clearly communicating complex data-driven topics to both technical and non-technical audiences.
- Strategic thinker, leader, communicator, and innovator with people management skills and the ability to work across segments to support tactical planning and deliver on the objectives of the organization.
- Knowledge of technology and healthcare, life sciences and health-tech industry trends.
- Creative, collaborative thinker with an ability to learn new things, assess problems and identify proactive solutions quickly.
- Self-starter, comfortable leading change and getting things done.
- Travel is required (at least 10%), including overnights.

Location: Dallas, Texas is preferred. At the American Heart Association - American Stroke Association, diversity, inclusion, and equal opportunity apply to both our workforce and the communities we serve as it relates to heart health and stroke prevention. Be sure to follow us on Twitter to see what it is like to work for the American Heart Association and why so many people enjoy #TheAHALife. EOE Minorities/Females/Protected Veterans/Persons with Disabilities. Requisition ID 2018-3066. Job Family Group: Business Operations. Job Category: Science & Research. Additional Locations: US-Anywhere. Location: St. Louis, MO
          

Multi - Disciplinary Algorithm Developer 3D Data

 Cache   
Applied Research Associates, Inc. (ARA) is actively seeking a highly qualified scientist / engineer for the development of advanced 3D data analysis algorithms for the intelligence and defense communities. Applications include geolocation, navigation, image analysis, machine learning and point cloud analysis. The scientist / engineer will join a multi-disciplined collaborative team of engineers and scientists. This position is located in the Intelligence, Surveillance and Reconnaissance (ISR) Directorate at the ARA Southeast Division in Raleigh, NC.

The ideal candidate will have an active interest in applying math/statistics/physics/engineering concepts to solve multi-disciplinary problems. The candidate should be familiar with improving/optimizing/tuning existing algorithms as well as developing new algorithms from scratch. This will include software design, software development, and debugging / issue resolution. The candidate should demonstrate a hands-on approach to problem solving and must be willing to actively participate in evaluation of algorithm and system performance. Other responsibilities include assisting in the preparation of oral and written reports, supporting R&D business acquisition and customer briefings, presenting results of research at scientific / engineering conferences, and publishing in technical journals.
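Purely as an illustrative sketch of the kind of point-cloud analysis mentioned above, and not anything ARA has specified, here is a minimal Python example; the synthetic cloud, neighbor count, and outlier threshold are all invented for illustration.

```python
# Minimal point-cloud sketch: estimate local point spacing with a k-d tree
# and flag unusually isolated points as candidate outliers.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
cloud = rng.uniform(0.0, 100.0, size=(10_000, 3))   # stand-in for LiDAR points (x, y, z)

tree = cKDTree(cloud)
k = 8
# Query k + 1 neighbors because each point is its own nearest neighbor (distance 0).
dists, _ = tree.query(cloud, k=k + 1)
mean_spacing = dists[:, 1:].mean(axis=1)

# Crude outlier screen before any further geometric analysis.
threshold = np.percentile(mean_spacing, 99)
outliers = np.flatnonzero(mean_spacing > threshold)
print(f"{outliers.size} candidate outliers out of {cloud.shape[0]} points")
```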

Scientists / engineers who are passionate about applying their expertise to solve problems of national importance, who have a strong entrepreneurial spirit, and who are seeking opportunities for personal and professional growth in a stable environment are strongly encouraged to apply.

Required Qualifications:

* MS degree in Mathematics / Physics / Engineering along with 5-7 years of experience, or a PhD with 3-5 years of experience.

* Strong foundation in software development (i.e., experience with version control, at least 1 higher level language like Python, and at least 1 lower level language like C++).

* Firm understanding of 3D geometry and geospatial concepts.

* Team player with excellent presentation and written / oral communication skills.

* Hands-on approach to problem solving.

* US Citizenship (selected applicants will undergo a security investigation and must meet eligibility requirements at the time of employment).

* Ability to obtain a Secret Security Clearance.

Additional Desirable Qualifications:

* Experience in the use of MATLAB and/or Python.

* Experience in Android mobile app development.

* Background in image analysis.

* Background in machine learning (e.g., Convolutional Neural Networks).

* Background in cloud-based computing.

* Background in analysis of point clouds from LiDAR and other sources.

* Experience working on intelligence and DoD programs.

* Work in real-time, parallel and distributed computing (e.g., CUDA or OpenCL).

* Prior / existing security clearance.

About Us:

Applied Research Associates, Inc. is an employee-owned international research and engineering company recognized for providing technically superior solutions to complex and challenging problems in the physical sciences. The company, founded in Albuquerque, NM, in 1979, currently employs over 1,100 professionals and continues to grow. ARA offices throughout the United States and Canada provide a broad range of technical expertise in defense technologies, civil technologies, computer software and simulation, systems analysis, environmental technologies, and testing and measurement. The corporation also provides sophisticated technical products for environmental site characterization, pavement analysis, and robotics.

While this is all of the Year One and Beyond stuff, Day One is highly impressive too. These are things like our competitive salary (DOE), Employee Stock Ownership Plan (ESOP), benefits package, relocation opportunities, and a challenging culture where innovation & experimentation are the norm. At ARA, employees are our greatest assets so we give our employees the tools, training, and opportunities to take active roles as owners. The motto, "Engineering and Science for Fun and Profit" sums up the ARA experience. The corporation realizes that employee ownership spawns greater creativity and initiative along with higher performance and customer satisfaction levels.

ARA is passionate about inclusion and diversity in our workplace, in 2018 40% of our new employees voluntarily self-identified as protected veterans. (Source-AAP EOY 2018 Veterans Data Collection Report). Additionally, the Southeast Division looks not only for the right skills, but also for a cultural fit. We seek colleagues who will contribute to the unique culture that makes ARA such a great place to work. Some of the social impact aspects we have implemented at our division include monthly get-togethers, team outings to local baseball games in the summer, board game lunches, holiday party, corn hole tournaments, chili cook-offs and so on. We are also very proud of our Women's Initiative Network (WIN) whose purpose is to motivate, support, and encourage professional career development for women in order to maximize career and professional accomplishments. For additional information and an opportunity to join this unique workplace, please apply at careers.ara.com.

EqualOpportunityEmployerDescription

Equal Opportunity Employer/Protected Veterans/Individuals with Disabilities

PayTransparencyPolicyStatement

The contractor will not discharge or in any other manner discriminate against employees or applicants because they have inquired about, discussed, or disclosed their own pay or the pay of another employee or applicant. However, employees who have access to the compensation information of other employees or applicants as a part of their essential job functions cannot disclose the pay of other employees or applicants to individuals who do not otherwise have access to compensation information, unless the disclosure is (a) in response to a formal complaint or charge, (b) in furtherance of an investigation, proceeding, hearing, or action, including an investigation conducted by the employer, or (c) consistent with the contractor's legal duty to furnish information. 41 CFR 60-1.35(c)

DegreeName: Doctorate

Education: Mathematics

MinimumRequiredYears: 5

MaximumRequiredYears: 7

Description: Relevant Work Experience (i.e. thesis, published research, industry)
          

Data Science Infrastructure Engineer

 Cache   
At ARA, we strive to hire valuable colleagues with not only the right skills, but also demonstrate our core values of passion, freedom, service and growth. As a Data Science Infrastructure Engineer you will help define and build the machine learning infrastructure systems for our team of data scientists, machine learning / artificial intelligence engineers.

As a valued team contributor you will work with a multidisciplinary team including cyber subject matter experts, engineers, scientists, and software developers, to deliver end to end solutions that address customer requirements. Tasks will include designing, deploying, and maintaining an 'internal cloud' to support the growing and evolving data collection and analysis needs of your team. You will also design, recommend, procure, install, and maintain hardware and software systems (Linux, Windows, etc.), networking, and file storage (SAN/NAS) for the data analysis system components. A wide degree of creativity and latitude is expected for the perfect person in this role.

This role requires frequent travel to our Aberdeen/Southern Pines, NC office and various facilities to interact with the government team

Data Science Infrastructure Engineer Security Clearance Requirement:

* US citizenship is required

* Ability to get a TS clearance is required (TS/SCI preferred)

Data Science Infrastructure Engineer Required Experience:

* Bachelor's degree in Computer Science, Information Systems, Engineering, or other related scientific or technical discipline along with 7-9 years of relevant experience, or 13-15 years of relevant experience in lieu of a degree

* Infrastructure (cloud-like) design and deployment. Blade server implementations, fiber switching

* Operating systems management: Linux, Windows

Preferred Experience & Skills as a Data Science Infrastructure Engineer:

Above all, we value passion, a desire to learn, and teamwork. We are confident that if you possess the right attitude, work ethic, and skill set, you could succeed in the role. In addition to the experience and skills above, if you have any of the following you will be able to accelerate your effectiveness and impact.

* Networking: TCP/IP, IPSEC, VPN, NAT, Routing Protocols, Firewalls and Routers/switch administration (e.g., CCNA)

* Virtualization (VMs) and containers

* Scripting (bash, Python, or similar)

* 5+ years of pure system administration experience and knowledge of a modern programming language

About Us:

Applied Research Associates, Inc. is an employee-owned international research and engineering company recognized for providing technically superior solutions to complex and challenging problems in the physical sciences. The company, founded in Albuquerque, NM, in 1979, currently employs over 1,100 professionals and continues to grow. ARA offices throughout the United States and Canada provide a broad range of technical expertise in defense technologies, civil technologies, computer software and simulation, systems analysis, environmental technologies, and testing and measurement. The corporation also provides sophisticated technical products for environmental site characterization, pavement analysis, and robotics.

While this is all of the Year One and Beyond stuff, Day One is highly impressive too. These are things like our competitive salary (DOE), Employee Stock Ownership Plan (ESOP), benefits package, relocation opportunities, and a challenging culture where innovation & experimentation are the norm. At ARA, employees are our greatest assets so we give our employees the tools, training, and opportunities to take active roles as owners. The motto, "Engineering and Science for Fun and Profit" sums up the ARA experience. The corporation realizes that employee ownership spawns greater creativity and initiative along with higher performance and customer satisfaction levels.

ARA is passionate about inclusion and diversity in our workplace, in 2018 40% of our new employees voluntarily self-identified as protected veterans. (Source-AAP EOY 2018 Veterans Data Collection Report). Additionally, the Southeast Division looks not only for the right skills, but also for a cultural fit. We seek colleagues who will contribute to the unique culture that makes ARA such a great place to work. Some of the social impact aspects we have implemented at our division include monthly get-togethers, team outings to local baseball games in the summer, board game lunches, holiday party, corn hole tournaments, chili cook-offs and so on. We are also very proud of our Women's Initiative Network (WIN) whose purpose is to motivate, support, and encourage professional career development for women in order to maximize career and professional accomplishments. For additional information and an opportunity to join this unique workplace, please apply at careers.ara.com.

EqualOpportunityEmployerDescription

Equal Opportunity Employer/Protected Veterans/Individuals with Disabilities

PayTransparencyPolicyStatement

The contractor will not discharge or in any other manner discriminate against employees or applicants because they have inquired about, discussed, or disclosed their own pay or the pay of another employee or applicant. However, employees who have access to the compensation information of other employees or applicants as a part of their essential job functions cannot disclose the pay of other employees or applicants to individuals who do not otherwise have access to compensation information, unless the disclosure is (a) in response to a formal complaint or charge, (b) in furtherance of an investigation, proceeding, hearing, or action, including an investigation conducted by the employer, or (c) consistent with the contractor's legal duty to furnish information. 41 CFR 60-1.35(c)

DegreeName: Bachelors

Education: Computer Science

MinimumRequiredYears: 5

Description: Extensive experience with SQL, Database Management and Operating Systems.

Preferred Exp

MinimumRequiredYears: 7

MaximumRequiredYears: 9

Description: Relevant Work Experience

Licenses & Certifications

LicenseAndCertificationName: Security Clear Top Secret
          

Product Manager

 Cache   
Responsibilities
- Collaborate with internal/external stakeholders in order to create functional requirements for the product roadmap
- Lead product design meetings with various stakeholders to ensure requirements are properly vetted
- Demonstrated ability to bring early-stage products to market through creative problem solving, ruthless focus on minimum viable product, and an ability to foster engagement with early adopters
- Design and bring to market ML/AI-powered population health insights and personalized engagement strategies
- Experience with machine learning frameworks, libraries, and technologies
- Create a culture of cross-functional collaboration with key stakeholders
- Communicate with key stakeholders and become a subject matter expert in all areas of the product
- Demo the product to internal and external stakeholders
- Work with the product team to develop meaningful key performance indicators
- Strive to deliver product updates on time and within budget
- Identify areas for improvement in all areas related to product management and delivery

Qualifications
- Bachelor's degree in Computer Science or a Data Science degree
- Experience with AI/machine learning
- Previous working experience as a Product Manager
- In-depth knowledge of Agile process and principles
- Outstanding communication, presentation and leadership skills
- Excellent organizational and time management skills
- Sharp analytical and problem-solving skills
- Creative thinker with a vision
- Strong attention to detail
          

Data Analyst II

 Cache   
Cotiviti is a leading solutions and analytics company that leverages unparalleled clinical and financial datasets to deliver deep insight into the performance of the healthcare system. These insights uncover new opportunities for healthcare organizations to collaborate to improve their financial performance, reduce inefficiency, and improve healthcare quality.

Within Cotiviti, the Advanced Solutions Group (ASG) is a fast-paced team focused on generating new market-driven solutions that help industry stakeholders make higher quality decisions and reduce cost while improving health and the experience of healthcare. ASG is looking for a seasoned data analyst to join our team.

The Data Analyst II position works with various healthcare datasets, like claims and medical records, and various reference data sources to create analytical models and outputs for new healthcare solutions. This position requires you to be a data guru and a self-starter who can work through the entire analytics process. You will immediately be able to apply your experience to link disparate data sources together, conduct exploratory analysis, engineer new features, test the effectiveness of different models, and code all of the above for optimal throughput. Our analytics team uses Databricks or AzureML depending on the project, so your proficiency is key. You will become a subject matter expert on the data, model(s), and business needs and methods of the projects you work on, so throughout all phases of work, you will develop and maintain documentation and consistently communicate with other data analysts, developers, and business stakeholders.
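As a rough illustration of the Databricks (pyspark) work described above, linking disparate sources and engineering features for downstream models, here is a minimal sketch; the table names, columns, and the ratio feature are hypothetical and not Cotiviti's actual data model.

```python
# Minimal PySpark sketch (Databricks-style): join claims to reference data
# and engineer simple member-level features. Names are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# On Databricks a SparkSession named `spark` already exists; created here
# so the sketch is self-contained.
spark = SparkSession.builder.appName("claims-feature-prep").getOrCreate()

claims = spark.table("analytics.claims")           # one row per claim line
providers = spark.table("reference.providers")     # provider reference data

features = (
    claims.join(providers, on="provider_id", how="left")
    .withColumn("paid_to_billed_ratio",
                F.col("paid_amount") / F.col("billed_amount"))
    .groupBy("member_id")
    .agg(
        F.count("*").alias("claim_lines"),
        F.avg("paid_to_billed_ratio").alias("avg_paid_to_billed"),
        F.countDistinct("specialty").alias("distinct_specialties"),
    )
)

# Persist for model training and validation downstream.
features.write.mode("overwrite").saveAsTable("analytics.member_features")
```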

Principal Responsibilities and Essential Duties:

* Prepare data for analytical models using various data cleansing methods.

* Develop and deploy machine learning models.

* Validate models and ensure outputs accurately meet requirements in downstream processes.

* Meet and communicate with users, team members, and subject matter experts throughout each project.

* Identify and implement quality measures and innovative analytic methods to monitor and improve the speed and/or quality of processes.

* Contribute ideas for the development of new capabilities and/or improvements that will increase the value for customers.

* Develop and actively maintain high quality, consistent documentation throughout all phases of work.

Requirements:

* Minimum of 5-8 years of experience manipulating and analyzing data.

* Experience with Databricks (pyspark), AzureML, and data mining tools.

* Experience with data manipulation using spreadsheets and database applications, including extraction and querying skills.

* Experience analyzing raw data, with ability to think logically and process sequentially with a high level of detailed accuracy.

* Problem solver, resourceful, quick learner.

* Strong written and verbal communication skills to interact with a diverse group of stakeholders including executives, managers, clients and subject matter experts.

* Ability to prioritize projects and tasks to meet deadlines.

* Master's degree in Analytics/Informatics, Computer Science, Programming or equivalent work experience.

Equal Opportunity Employer/Protected Veterans/Individuals with Disabilities

The contractor will not discharge or in any other manner discriminate against employees or applicants because they have inquired about, discussed, or disclosed their own pay or the pay of another employee or applicant. However, employees who have access to the compensation information of other employees or applicants as a part of their essential job functions cannot disclose the pay of other employees or applicants to individuals who do not otherwise have access to compensation information, unless the disclosure is (a) in response to a formal complaint or charge, (b) in furtherance of an investigation, proceeding, hearing, or action, including an investigation conducted by the employer, or (c) consistent with the contractor's legal duty to furnish information. 41 CFR 60-1.35(c)
          

Professor (All Ranks) in Distributed Data and Computing

 Cache   
- Job Type Employee - Job Status Full Time The Ira A. Fulton Schools of Engineering at Arizona State University (ASU) seek applicants for a tenure-track/tenured faculty position in Distributed Data and Computing in the School of Computing, Informatics, and Decision Systems Engineering (CIDSE). This search will target scientists and engineers with research into designing distributed systems for acquiring, storing, and processing real-time, large-scale, and multi-modal data and developing distributed machine learning and consensus tools to convert data into actionable information and knowledge. Areas of interest include applied and theoretical innovations in distributed data management and analysis, distributed/decentralized consensus, and distributed operating and networking systems. Candidates with application interest in one or more of our key research thrust areas of BlockChain, IoT, Health, and Sustainability are particularly encouraged to apply. CIDSE currently houses several ASU Centers - including Center for Assured and Scalable Engineering (CASCADE) ************************ , Center for Accelerating Operational Efficiency (CAOE) ********************* , Center for Cybersecurity and Digital Forensics (CDF) ************************************************************************* , Center for Embedded Systems (CES) ********************************************************* , and Center for Biocomputing, Security and Society (CBSS) *********************************************************** - and have a large number of faculty working on a variety of relevant topics that include data management, distributed algorithms and systems, cloud and high performance computing, cybersecurity, network algorithms and optimization, self-organizing and self-stabilizing distributed systems, bio-inspired collective algorithms, survivable networks, IoT, blockchain, machine learning, and AI. The current openings are intended to broaden and strengthen this expertise, which is crucial to university initiatives and velocity. We seek applicants who will contribute to our programs and expand collaborations with existing faculty at ASU. Located in Tempe with easy access to the outdoors and urban amenities, ASU's vibrant and innovative approaches to research and teaching are charting new paths in education and research in the public interest. Faculty members are expected to develop an internationally recognized and externally funded research program, develop and teach graduate and undergraduate courses, advise and mentor graduate and undergraduate students, and undertake service activities. ASU strongly encourages transdisciplinary collaboration and use-inspired, socially relevant research. Successful candidates will be encouraged to expand expertise and collaborations in these areas. Although the tenure home may be in any of the Ira A. Fulton Schools of Engineering, the School of Computing, Informatics, and Decision Systems Engineering is currently the most involved in the interest areas of this research. Appointments will be at the Assistant, Associate, or Full Professor rank commensurate with the candidate's experience and accomplishments, beginning August 2020. Application reviews will begin on December 16, 2019. Applications will continue to be accepted on a rolling basis for a reserve pool. Applications in the reserve pool may then be reviewed in the order in which they were received until the position is filled. Apply at *********************************** . 
Candidates will be asked to submit the following through their Interfolio Dossier:
- Cover letter
- Current CV
- Statement describing research interests
- Statement describing teaching interests
- (Optional) A short diversity statement
- Contact information for at least three references

For further information or questions about this position please contact Professor K. Selcuk Candan at (**************). Arizona State University is a VEVRAA Federal Contractor and an Equal Opportunity/Affirmative Action Employer. All qualified applicants will be considered without regard to race, color, sex, religion, national origin, disability, protected veteran status, or any other basis protected by law. See ASU's full non-discrimination statement (ACD 401) at https:// *************************************** and the Title IX statement at https:// ******************** In compliance with federal law, ASU prepares an annual report on campus security and fire safety programs and resources. ASU's Annual Security and Fire Safety Report is available online at **************************************************** You may request a hard copy of the report by contacting the ASU Police Department at ************.

Requirements
Required qualifications: Earned doctorate or equivalent in computer science, computer engineering, or a closely related field by the time of appointment and demonstrated evidence of excellence in research and teaching as appropriate to the candidate's rank.
Desired qualifications: Commitment to teaching at both the graduate and the undergraduate levels, evidence of commitment to a diverse academic environment, and potential (for junior applicants) or evidence (for senior applicants) for establishing an externally funded research program, as appropriate to the candidate's rank.

Categories
- Computer Engineering
- Faculty
- Research
          

Business Operations Analyst

 Cache   
Business Operations Analyst - Job ID #: 17457 - Job Category: Finance & Accounting - Employment Type: Experienced Professionals - Division: Research - Department: Research - Primary Country: USA - Primary Location: Austin (TX)

We are an Equal Opportunity Employer and do not discriminate against any employee or applicant for employment because of race, color, sex, age, national origin, religion, sexual orientation, gender identity, status as a veteran, disability, or any other federal, state or local protected class.

Job Description
Arm Research is growing to tackle the world's leading technological challenges. Arm is looking for a highly motivated and innovative business operations analyst to join the Research Operations team to drive the financial management of engagements with standards organizations, governments, research agencies, industry partners, industry associations, and universities around the world. You will be key in supporting Research management, the Arm Finance Business Partner, and the wider finance team with accounts payable queries, PO processing, supplier set-up, invoice creation, and sub-budget review and analysis. Supporting the business operations of a wide range of engagements, this role will be key to ensuring progress toward the Research collaborations team objectives, namely, accelerating Arm's strategic research agenda and establishing Arm technologies as the preferred choice for those conducting academic research. Working closely on a day-to-day basis with senior research management, the finance team, and procurement, this role has the opportunity to accelerate research projects and future product development across technical areas such as Machine Learning, Silicon Technology, Robotics, Emerging Technologies, Software, Security/Biometrics, High Performance Computing, and Internet of Things. Arm Research's open culture ensures you'll be exposed to the many areas in which Research is active.

What will I be accountable for?
You will work closely with our senior management team and Finance Business Partner supporting financial operational activities across the portfolio of Arm Research projects, including expense tracking, sub-budget guidance, and reporting key information using Excel, Tableau, SAP, Ariba, Jira, Confluence and other tools. You will report on a variety of business information, offering guidance where variances arise. Reporting requirements include automated, customized, and ad-hoc reports, ensuring that business activities execute on time and within budget. You will have responsibility for weekly presentation of reports/KPI metrics.

Job Requirements
What skills, experience and qualifications do I need?
- Bachelor's or Master's degree in Business Administration or equivalent.
- Minimum of 3 years' experience in a dynamic business supporting all functions, with an understanding of finance and its impact on business decisions.
- Experience working with all levels of management, project managers, and engineers.
- Experience authoring financial reports using Excel, PowerPoint, and/or Tableau.
- Expert at Excel pivot table design and reports. Expert-level proficiency with Ariba procurement purchase order processing and invoice processing.
- Provide regular reporting and updates tracking financial actuals vs. budget.
- Experience in tracking payments.
- Fully proficient with Microsoft Office applications including Word, PowerPoint, SharePoint.
- Strong analytical and problem-solving abilities.
- Strong attention to detail; process oriented, goal driven, and organized.
- Lateral thinker and problem solver.
- Effective handling multi-site technical projects, communications, and presentations.
- You are hardworking, ambitious, flexible, and have excellent written and verbal interpersonal skills.
- You are an active listener, inquisitive and passionate about technology.
- A tenacious, resilient and positive personality with high levels of energy and enthusiasm.
- You're self-organized and can work at pace, often in an environment of ambiguity.
- You act expertly and remain objective and fair when faced with negativity or challenge.

What are the desired behaviors for this role?
At Arm, we are guided by our core beliefs that reflect our unique culture and guide our decisions, defining how we work together to defy ordinary and shape extraordinary:

We not I
- Take daily responsibility to make the Global Arm community thrive
- No individual owns the right answer. Brilliance is collective
- Information is important, share it
- Realize that we win when we collaborate and that everyone misses out when we don't

Passion for Progress
- Our differences are our strength. Widen and mix up the pool of people you connect with
- Difficult things can take unexpected directions. Stick with it
- Make feedback positive and expansive, not negative and narrow
- The essence of progress is that it can't stop. Grow with it and own your own progress

Be your Brilliant Self
- Be quirky not egocentric
- Recognize the power in saying 'I don't know'
- Make trust our default position
- Hold strong opinions lightly

Benefits
Your particular benefits package will depend on position and type of employment and may be subject to change. Your package will be confirmed on offer of employment. Arm's benefits program provides permanent employees with the opportunity to stay innovative and healthy, ensure the wellness of their families, and create a positive working environment.
- Annual Bonus Plan
- Discretionary Cash Awards
- 401(k), 100% matching on first 6% eligible earnings
- Medical, Dental & Vision, 100% coverage for employee only, shared cost for dependents
- Basic Life and Accidental Death and Dismemberment Insurance (AD&D)
- Short Term (STD) and Long Term (LTD) Disability Insurance
- Vacation, 20 days per year with option to buy 5 more
- Holidays, 13 days per year
- Sabbatical, 20 paid days every four years of service
- Sick Leave, 7 days per year
- Volunteering, four hours per month (TeamARM)
- Office location dependent: café on site, fitness facilities, team and social events
- Additional benefits include: Flexible Spending Accounts for health and dependent care, EAP, Health Advocate, Business Travel Accident Program & Commuter programs.

ARM, Inc. (USA) participates in E-Verify. For more information, please refer to ********************

About Arm
Arm technology is at the heart of a computing and connectivity revolution that is transforming the way people live and businesses operate. From the unmissable to the invisible, our advanced, energy-efficient processor designs are enabling the intelligence in 86 billion silicon chips and securely powering products from the sensor to the smartphone to the supercomputer. With more than 1,000 technology partners including the world's most famous business and consumer brands, we are driving Arm innovation into all areas where compute is happening: inside the chip, the network and the cloud.
With offices around the world, Arm is a diverse community of dedicated, innovative and highly talented professionals. By enabling an inclusive, meritocratic and open workplace where all our people can grow and succeed, we encourage our people to share their unique contributions to Arm's success in the global marketplace.
          

Data Scientist I (Mid Level)

 Cache   
PURPOSE OF JOB

Uses advanced techniques that integrate traditional and non-traditional datasets and methods to enable analytical solutions; applies predictive analytics, machine learning, simulation, and optimization techniques to generate management insights and enable customer-facing applications; participates in building analytical solutions leveraging internal and external applications to deliver value and create competitive advantage; translates complex analytical and technical concepts to non-technical employees.

JOB REQUIREMENTS

* Partners with other analysts across the organization to fully define business problems and research questions; supports SMEs on cross-functional matrixed teams to solve highly complex problems critical to the organization.

* Integrates and extracts relevant information from large amounts of both structured and unstructured data (internal and external) to enable analytical solutions.

* Conducts advanced analytics leveraging predictive modeling, machine learning, simulation, optimization and other techniques to deliver insights or develop analytical solutions to achieve business objectives.

* Supports Subject Matter Experts (SME's) on efforts to develop scalable, efficient, automated solutions for large scale data analyses, model development, model validation and model implementation.

* Works with IT to research architecture for new products, services, and features.

* Develops algorithms and supporting code such that research efforts are based on the highest quality data.

* Translates complex analytical and technical concepts to non-technical employees to enable understanding and drive informed business decisions.

MINIMUM REQUIREMENTS

* Master's degree in Computer Science, Applied Mathematics, Quantitative Economics, Statistics, or related field. 6 additional years of related experience beyond the minimum required may be substituted in lieu of a degree.

* 4 or more years of related experience and accountability for complex tasks and/or projects required.

* Proficient knowledge of the function/discipline and demonstrated application of knowledge, skills and abilities towards work products required.

* Proficient level of business acumen in the areas of the business operations, industry practices and emerging trends required.

Must complete 12 months in current position (from date of hire or date of placement), or must have manager's approval prior to posting.

*Qualifications may warrant placement in a different job level*

PREFERRED

* Expertise in experimental design, advanced statistical analysis, and modeling to discover key relationships in data and applying that information to predict likely future outcomes; fluent in regression, classification, tree-based models, clustering methods, text mining, and neural networks.

* Proven ability to enrich (add new information to) data, advise on appropriate course(s) of action to take based on results, summarize complex technical analysis for non-technical executive audiences, succinctly present visualizations of high dimensional data, and explain & justify the results of the analysis conducted.

* Highly competent at data wrangling and data engineering in SQL and SAS as well as advanced machine learning (ML) techniques using Python; comfortable in cloud computing environments (Azure, GCP, AWS).

* Hands-on experience developing products that utilize advanced machine learning techniques like deep learning in areas such as computer vision, Natural Language Processing (NLP), sensor data from the Internet of Things (IoT), and recommender systems; along with transitioning those solutions from the development environment into the production environment for full-time use.

* PhD in Computer Science, Applied Mathematics, Quantitative Economics, Operations Research, Statistics, or related field with coursework in advanced Machine Learning techniques (Natural Language Processing, Deep Neural Networks, etc).

* Fluent in deep learning frameworks and libraries (TensorFlow, Keras, PyTorch, etc).

* Highly skilled in handling Big Data (Hadoop, Hive, Spark, Kafka, etc).

* Experience in reinforcement learning, knowledge graphs and graph databases, Generative Adversarial Networks (GANs), semi-supervised learning, multi-task learning is a plus.

* Experience in publishing at top ML, computer vision, NLP, or AI conferences and/or contributing to ML/AI-related open source projects and/or converting ML/AI papers into code is a plus.

* Background in Property insurance operations with an understanding of claims, underwriting, and insurance pricing a plus.

* Additional Skills: Ability to translate business problems and requirements into technical solutions by building quick prototypes or proofs of concept with business and technical stakeholders.

* Ability to convert proofs of concept into scalable production solutions.

* Ability to lead teams by following best practices in development, automation, and continuous integration / continuous deployment (CI/CD) methods in an agile work environment.

* Ability to work in and with technical, multidisciplinary teams.

* Willingness to continuously learn and apply new analytical techniques

RELOCATION assistance is AVAILABLE for this position.

The above description reflects the details considered necessary to describe the principal functions of the job and should not be construed as a detailed description of all the work requirements that may be performed in the job.

LAST DAY TO APPLY TO THE OPENING IS 11/06/19 BY 11:59 PM CST TIME.

USAA is an equal opportunity and affirmative action employer and gives consideration for employment to qualified applicants without regard to race, color, religion, sex, national origin, age, disability, genetic information, sexual orientation, gender identity or expression, pregnancy, veteran status or any other legally protected characteristic. If you'd like more information about your EEO rights as an applicant under the law, please click here. For USAA s Affirmative Action and EEO statement, please click here. Furthermore, USAA makes hiring decisions compliant with the Fair Chance Initiative for Hiring Ordinance (LAMC 189.00).

USAA provides equal opportunity to qualified individuals with disabilities and disabled veterans. If you need a reasonable accommodation, please email HumanResources@usaa.com or call 1-800-210-USAA and select option 3 for assistance.
          

Software Development Engineer - SDE Java Angular

 Cache   
Software Development Engineer - SDE - Java & Angular | JP Morgan Chase - Seattle, WA 98101

JP Morgan Chase operates in over 100 markets serving millions of customers, businesses, and clients (corporate, institutional, and government). It holds $18 trillion of assets under custody and manages $393 billion in deposits every day. As a member of the Application Classification and Protection team you will build trust with our customers to innovate and develop next generation solutions to protect our systems and data across the business. You will design and engineer software that will enable our business to meet the changing security standards while setting the strategic direction for how to support the business long term. To be successful you will need to connect with a global network of technologists from around the world to apply your skills to solve mission critical problems while embracing new technologies and methodologies. Along the way you will develop skills with a wide range of technologies including distributed systems, cloud infrastructure, and cybersecurity concepts and methodologies (encryption, tokenization, masking, and other data protection techniques). The world of cybersecurity involves adapting to a constantly changing landscape; as part of this team you will be thinking both about how to solve problems now and about the next generation of threats. JP Morgan Chase invests $9.5B+ annually in technology and you would be one of 40k+ technologists who innovate in how the firm builds initiatives like big data, machine learning, and mobile/cloud development. We want people like yourself to create innovative solutions that will not only transform the financial services industry, but also change the world.

This role requires a wide variety of strengths and capabilities, including:
- Experience developing with Java, C# (or similar Object-Oriented languages), or experience developing with high-level scripting languages such as Python or JavaScript
- Knowledge and experience designing and building large scale and high availability systems
- Experience utilizing operational tools and monitoring solutions that ensure the health and security of our services
- 5+ years' experience developing enterprise software (or a Master's degree in Computer Science or equivalent)

Experience in one or more of the following preferred:
- Working knowledge of Spring Framework (Core, Boot, MVC)
- Working knowledge of RDBMS and NoSQL technologies
- Working within an agile development methodology (Kanban, Scrum, etc.)
- Experience with continuous delivery and deployment
- Experience assessing data protection approaches, requirements, and activities
- Understanding of cryptography, masking, tokenization or other data protection technologies and their impact on the application
- Knowledge of system security vulnerabilities and remediation techniques, including penetration testing and the development of exploits
- Experience developing software using a continuous integration/deployment pipeline that includes vendor solutions
- Experience in next generation platforms such as cloud, PaaS, mobile, and big data

At JPMorgan Chase & Co. we value the unique skills of every employee, and we're building a technology organization that thrives on diversity. We encourage professional growth and career development, and offer competitive benefits and compensation.
If you're looking to build your career as part of a global technology team tackling big challenges that impact the lives of people and companies all around the world, we want to meet you. JPMorgan Chase & Co. is an equal opportunity employer.
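
For readers newer to the data protection techniques the posting mentions, here is a toy sketch contrasting masking (hiding most of a value for display or logging) with tokenization (replacing the value with a stable, keyed surrogate). It is illustrative only, not JPMorgan Chase's design; the hard-coded key is a placeholder for what a real system would pull from a secret manager or a vetted tokenization service.

```python
# Toy illustration of masking vs. tokenization for sensitive values.
# Not a production design: the secret key is hard-coded here only for the sketch.
import hmac
import hashlib

SECRET_KEY = b"demo-key-use-a-secret-manager"  # placeholder

def mask_account(value: str, visible: int = 4) -> str:
    """Masking: hide all but the last few characters for display/logging."""
    return "*" * max(len(value) - visible, 0) + value[-visible:]

def tokenize(value: str) -> str:
    """Tokenization (toy): replace the value with a keyed, irreversible token
    so downstream systems can join on it without seeing the raw data."""
    return hmac.new(SECRET_KEY, value.encode("utf-8"), hashlib.sha256).hexdigest()

account = "4111111111111111"
print(mask_account(account))   # ************1111
print(tokenize(account)[:16])  # same input always yields the same token
```

The point of the contrast: masking is one-way and lossy (good for display), while a keyed token is stable, so analytics can still join records on it without ever handling the raw value.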
          

Director of Advanced Analytics

 Cache   
  • Leveraging extensive, deep analytical knowledge and team leadership skills to drive the development of advanced analytical solutions and implement data-driven recommendations and outcomes.
  • Leading a multi-disciplinary team to develop advanced analytics, including predictive modeling/machine learning algorithms and advanced statistical tools; Function as a senior level coach and mentor to analysts; ensure quality results across the team while scaling our advanced analytics functionality.
  • Leading the initiatives involving exploration and analysis of data across a variety of data platforms.
  • Champion the execution of sophisticated analysis to address specific clinical/business problems determined by consultation with various stakeholders across the organization.
  • Develop sophisticated data products (visualizations, models, insights etc.) for business users.
  • Challenge conventional thinking and traditional ways of operating as you work with stakeholders to identify and improve the status quo.
  • Partnering with data science peers to identify gaps, improve quality, and share advanced modeling techniques and learnings.
  • Serve as the subject matter expert in analytics methodologies and best practices, including outcomes measurement and study design.
  • Act as a subject matter expert in advanced analytics and bring best in class, innovative ideas to test and measure performance and impact programs.
    • Must have computer skills and be proficient with Windows-style applications and keyboard.
    • Effective verbal and written communication skills and the ability to present information clearly and professionally to varying levels of individuals throughout the patient care process.
    • Must have strong analytical, financial and systems skills.
    • Experience in building and managing highly competent and talented advanced analytic teams.
    • Experience working with analytics in the payor/insurance space (fraud, epidemiology/care management, actuarial, marketing/consumer dynamics, or financial) or provider/clinical space (Pharma, health system).
    • Experience with healthcare claims data and/or clinical data from electronic medical information systems.
    • Experience managing large data sets and using quantitative and qualitative analysis to draw meaningful and valid insights.
    • Experience using SQL, SAS/Python/R; Strong understanding of R, Python, or SAS.
    • Strong understanding of TSQL or PL/SQL; Strong understanding of different database environments including cloud-based ones (AWS, Azure).
    • Experience with modern visualization tools (Tableau, PowerBI, Cognos) and/or other data analysis tools.
    • Strong communication skills (both oral and written); Must be able to present results to senior leadership, internal and external stakeholders.
    • Excellent organizational, motivational and interpersonal skills, capable of interfacing well at multiple levels within a large organization.
    • Outstanding analytic and modeling skills, proficient at conceptualizing, implementing, and evaluating highly accurate and scalable advanced analytics solutions to business problems.
    • Knowledge of health-related analytics concepts such as risk stratification, episode groupers, and benchmarks.
    • Professional and positive approach in building relationships and quickly gain credibility with senior executives.
          

Now available in Tableau: View Recommendations, table improvements, Webhooks support, and more

 Cache   

The newest release of Tableau is here! With Tableau 2019.4, we’re continuing to make it easier for you to find, connect to, and analyze your data. Upgrade to take advantage of these new innovations!

Let’s look at the highlights:

  • Discover content faster with View Recommendations for Tableau Server and Tableau Online.
  • Better manage wide tables with support for up to 50 columns.
  • Integrate and extend Tableau Online and Tableau Server with Webhooks support.
  • Plus new data connectors, added security in Tableau Mobile, and more!

Quickly discover relevant vizzes with View Recommendations

Finding the vizzes you care about on Tableau Server and Tableau Online just got easier. View Recommendations are personalized suggestions that instantly connect you to relevant data and content on your site. Powered by machine learning, these recommendations match preferences between users, surfacing content that others like you have found interesting or useful, including what's most popular and recent. Bringing trending views front and center also helps new users to quickly find valuable content. You can find recommendations in a dedicated section on your Tableau homepage, as well as a separate Recommendations page accessible from the left navigation menu.
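
Tableau doesn't publish the model behind these recommendations, but the "others like you" idea is easiest to picture as user-based collaborative filtering over view histories. The sketch below is purely an illustration of that general idea with invented data; it is not Tableau's implementation.

```python
# Illustrative only: user-based collaborative filtering over view histories.
# This is NOT Tableau's algorithm; it just shows the "users like you" idea.
from collections import Counter

# Hypothetical data: which views each user has opened.
view_history = {
    "ana":  {"Sales KPI", "Churn Monitor", "Ops Daily"},
    "ben":  {"Sales KPI", "Churn Monitor", "Pipeline Review"},
    "cara": {"Ops Daily", "HR Headcount"},
}

def jaccard(a: set, b: set) -> float:
    """Similarity between two users' view histories."""
    return len(a & b) / len(a | b) if a | b else 0.0

def recommend(user: str, k: int = 3) -> list:
    """Suggest views that similar users opened but this user has not."""
    mine = view_history[user]
    scores = Counter()
    for other, views in view_history.items():
        if other == user:
            continue
        sim = jaccard(mine, views)
        for v in views - mine:
            scores[v] += sim
    return [v for v, _ in scores.most_common(k)]

print(recommend("ana"))  # ['Pipeline Review', 'HR Headcount']
```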

Better manage large tables with horizontal scrolling, per-pane sorting, and increased column limit

You asked, and we heard you loud and clear! We know that you use tables in your analysis for meeting a variety of business needs. In 2019.4, we’re making it easier to view and edit wide tables across sheets, dashboards, and stories with the following enhancements:

  • Increased column limit — You can now create tables with up to 50 columns. This setting can be changed easily in the Table Options dialog.
  • Horizontal scrolling — You now have the ability to scroll horizontally, making it easy to view and edit list-view tables.
  • Per-pane sorting — For flat tables, you can now sort entire columns by dimensions and discrete measures across multiple panes for a more intuitive sorting experience.

Create automated workflows with Webhooks support

Calling all developers—we’re excited to introduce Webhooks support in 2019.4 to make it easier to integrate Tableau with other applications. With Webhooks, you can now build automated workflows that are triggered by events as they happen in Tableau Server and Online. In other words, a server or site admin can build a workflow that tells Tableau to send a message when a certain event happens. The system that receives the message can then process it and take further action.

For example, when an extract refresh in your Tableau workbook fails, you can trigger filing a ServiceNow ticket automatically. Or, when a workbook is published, trigger a notification or a confetti emoji in your team’s Slack channel. The possibilities are endless! You can build Webhooks off various events in Tableau such as Workbook and Data Source status changes.

Sign up for the Tableau Developer Program and check out the Webhooks documentation and samples to learn more and get started.
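
To make the event-driven pattern concrete, below is a minimal sketch of a receiver that a registered webhook could POST to, which then forwards an alert to a Slack incoming webhook. It assumes you have already created the webhook against your site per the documentation above; the JSON field names (`event_type`, `resource_name`) and the Slack URL are illustrative assumptions, so verify them against the Webhooks docs before relying on them.

```python
# Minimal sketch of a webhook receiver for Tableau event notifications.
# Assumptions (verify against the Webhooks documentation): a webhook is already
# registered to POST JSON here, and the payload carries fields such as
# "event_type" and "resource_name". SLACK_WEBHOOK_URL is a placeholder.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import Request, urlopen

SLACK_WEBHOOK_URL = "https://hooks.slack.com/services/XXX/YYY/ZZZ"  # placeholder

def notify_slack(text: str) -> None:
    """Forward a short message to a Slack incoming webhook."""
    body = json.dumps({"text": text}).encode("utf-8")
    req = Request(SLACK_WEBHOOK_URL, data=body,
                  headers={"Content-Type": "application/json"})
    urlopen(req)  # fire-and-forget for the sketch; add error handling in practice

class TableauWebhookHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length) or b"{}")
        # Field names below are assumptions about the payload shape.
        event = payload.get("event_type", "unknown-event")
        resource = payload.get("resource_name", "unknown-resource")
        if "RefreshFailed" in event:
            notify_slack(f"Tableau alert: {event} on {resource}")
        self.send_response(200)
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), TableauWebhookHandler).serve_forever()
```

The same shape works for the ServiceNow example mentioned earlier: swap the Slack call for a POST to your ticketing endpoint.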

Connect to, prep, and analyze even more data with new Desktop and Prep connectors

In 2019.4, we’ve added more data connectors, so connecting to your data can be a one-click operation. Seamlessly access your LinkedIn Sales Navigator usage data in Tableau to uncover insights that will drive your sales effectiveness. Tableau Online customers can use the new LinkedIn Sales Navigator dashboard starter to jumpstart their analysis.

And for customers who have data in the Alibaba Cloud—with the new Alibaba connectors in 2019.4, you can now natively connect and analyze data in MaxCompute, AnalyticDB, and Data Lake Analytics.

Also accompanying this release is Tableau Prep Builder 2019.4.1. We’re introducing brand-new cloud connectors to Dropbox, Google Drive, OneDrive, and Box. This means you can connect to a new category of input files stored in the cloud and prepare an even more diverse set of data.

Authentication for these cloud connectors is similar to how you authenticate using Tableau Desktop. Only embedding credentials is supported, so when publishing a flow to Tableau Server or Tableau Online, make sure you have your saved credentials set up on your Account Settings page.

Enable app lock in Tableau Mobile for added security

Tableau Mobile uses long-lived authentication tokens allowing users to remain signed in, giving them frictionless access to data. However, admins within organizations might have concerns about this easy access to data via the app. Rather than requiring users to sign in more frequently, admins can now enable app lock to give users a secure, yet simple way to access content.

Using an app lock does not authenticate users; instead, it provides an additional layer of security for users who are already signed in. Admins can enable app lock via a site-level setting (beginning with Tableau version 2019.4), or via an AppConfig setting using an enterprise mobile device management solution, such as Microsoft Intune or BlackBerry Dynamics (for Tableau versions 2019.3 and earlier).

Once the setting is enabled, users who are signed in will be required to set up a method to unlock their device using the supported biometrics (Face ID or Touch ID on iOS and fingerprint on Android) or alternatively, a device passcode. If users fail to unlock the app after a certain number of attempts using a biometric method, or if their devices are not configured for biometrics, they will be prompted to unlock using an alternative method such as a passcode or log out of Tableau.

These are just a few highlights from 2019.4. Check out tableau.com/new-features to learn more.

Thank you, Tableau Community!

We can’t do this without you so thank you for your continued feedback and inspiration. Check out the Ideas forum in the community to see all of the features that have been incorporated thanks to your voices.

We’d also like to extend thanks to the many testers who tried out Tableau 2019.4 in beta. We appreciate your time and energy to help make this release successful.

Get the newest version of Tableau today, and if you’d like to be involved in future beta programs, please sign up to participate!


          

Associate Bioinformatician

 Cache   
Day Zero Diagnostics is a bacterial genomics startup in Boston that is seeking to recruit a highly motivated bioinformatician to join our team. At Day Zero Diagnostics we are modernizing the way infectious diseases are diagnosed and treated by developing a rapid diagnostic that sequences the genomes of pathogenic bacteria, and then uses machine learning methods to identify the cause of the clinical infection. As a bioinformatician, you will work with a senior computational biologist to implement NGS data pipelines and microbial genomic data analysis tools. These tools will be used both to aid internal R&D projects, and to provide lab-based services for customer-facing projects. Candidates will gain experience in a multidisciplinary and fast-paced startup environment, and will have ample opportunities to acquire new skills, work closely with an accomplished team, and communicate results through patents, conference presentations, and peer-reviewed publications while working in a supportive and energetic environment. We value intellectual curiosity and a strong work ethic, and look for candidates who are both excited to contribute their expertise and eager to broaden their skillset to new areas.

Responsibilities
Under the direction of a senior computational biologist, the applicant independently carries out bioinformatics and software engineering tasks, including:
- Implementing analytical tools and reports on hospital outbreaks of bacterial infections
- Maintaining pipelines for NGS data, including Illumina and MinION sequencing data
- Executing genomic-based lab services for clinical samples
- Maintaining organized, tested code and corresponding documentation
- Presenting data within and outside of the company at meetings and symposia
- Writing, editing, and submitting manuscripts/abstracts/grants detailing the results of the project
- Working closely within the group and with outside collaborators
- Maintaining close communications with the team regarding progress

Requirements
- Bachelor's or Master's degree in Computer Science, Bioinformatics, Computational Biology, or equivalent
- Relevant experience in bioinformatics with a strong preference for microbial genomics experience
- Fluency in Python and Linux; familiarity with SQL and git helpful
- Familiarity with NGS data and standard bioinformatics tools (alignment, variant calling, assembly)
- Familiarity with ONT MinION data helpful
- Highly motivated and independent, with the ability to work in a dynamic team environment
- Strong oral and written communication skills
- Excellent organizational skills and attention to detail
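
For a flavor of the NGS pipeline work described above, here is a minimal sketch that summarizes read lengths and quality from a FASTQ file of Illumina or MinION reads. It assumes Biopython is installed and a file named reads.fastq exists; it is an illustration, not part of Day Zero Diagnostics' actual pipeline.

```python
# Minimal sketch: summarize read lengths and quality from a FASTQ file.
# Assumes Biopython is installed and "reads.fastq" exists; illustrative only.
from statistics import mean
from Bio import SeqIO

lengths, mean_quals = [], []
for record in SeqIO.parse("reads.fastq", "fastq"):
    lengths.append(len(record.seq))
    mean_quals.append(mean(record.letter_annotations["phred_quality"]))

print(f"reads: {len(lengths)}")
print(f"mean length: {mean(lengths):.0f} bp")
print(f"mean per-read quality: {mean(mean_quals):.1f}")
```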
          

Micro review from November

 Cache   
I have tried to put some order into all the links I gathered over the last weeks (including about ten days on vacation with near-zero activity). I came across Mathematics for Machine Learning again. I already had an older version in my library of PDFs, but now I have downloaded the latest version available for free, and I decided to take a closer look over the next few weeks.
          

Lead Platform Engineer - Machine Learning | Parks, Experiences and Products

 Cache   
Lake Buena Vista, Florida, Responsibilities: Lead a team of engineers to design and develop production grade frameworks for feature engineering, model architecture selection, model training, model interpretability, A/B test
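
The feed snippet above is truncated, but the kind of feature-engineering and model-training framework it refers to can be pictured with a small scikit-learn sketch. Column names, data, and the model are invented for illustration; this is not the team's actual framework.

```python
# Illustrative scaffold: feature engineering + model training in one pipeline.
# Column names, data, and model choice are invented for the example.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

df = pd.DataFrame({
    "party_size": [2, 4, 1, 3, 5, 2],
    "channel": ["web", "app", "web", "kiosk", "app", "web"],
    "converted": [1, 0, 0, 1, 1, 0],
})

# Feature engineering step: scale numeric columns, one-hot encode categoricals.
features = ColumnTransformer([
    ("num", StandardScaler(), ["party_size"]),
    ("cat", OneHotEncoder(handle_unknown="ignore"), ["channel"]),
])

# Model training step chained onto the same pipeline object.
model = Pipeline([("features", features),
                  ("clf", LogisticRegression(max_iter=1000))])

X_train, X_test, y_train, y_test = train_test_split(
    df[["party_size", "channel"]], df["converted"],
    test_size=0.33, random_state=0, stratify=df["converted"])
model.fit(X_train, y_train)
print("holdout accuracy:", model.score(X_test, y_test))
```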
          

Transportation Execution and Visibility Success

 Cache   

A few weeks ago, I wrote an article that gave a snapshot of the transportation execution and visibility systems market. In that article, I highlighted the evolving nature of technology and how machine learning, IoT, and blockchain are helping to fuel market growth. I also highlighted the factors that are contributing to rapid growth, such as the ROI connected to these systems, capacity fluctuations, and e-commerce. In order for the market to continue to grow, buyers […]

The post Transportation Execution and Visibility Success appeared first on Logistics Viewpoints.


          

(USA-MD-Baltimore) Biostatistician

 Cache   
General summary/purpose: The North American AIDS Cohort Collaboration on Research and Design (NA-ACCORD) study team is looking for a Biostatistician to help us produce rigorous, relevant scientific results as directed by the research aims of our projects. The NA-ACCORD is the largest collaboration of adults living with HIV in the US and Canada, and the study is currently in its 13th year. Observational, longitudinal, individual-level data are pooled using a collaborative study design across >20 interval and clinical cohort studies of adults living with HIV in the US and Canada (www.naaccord.org). The Biostatistician will join a team of data managers, computer programmers, other biostatisticians, and epidemiologists who use data to help execute the study aims and test its hypotheses. Experience with observational (as opposed to clinical trials), longitudinal data is preferred, including survival analysis. The successful applicant will provide critical analytic rigor to answering the scientific questions of interest. Interest or experience in machine learning techniques is advantageous, as are programming skills in R and R Markdown. Collaboration with others on the team, including data managers, biostatisticians, and scientific investigators is critical for this position. Specific duties & responsibilities: The primary duties and responsibilities include performing data analysis, including the data management needed to translate the specified study design into an analytic-ready data set and analysis using a broad range of methods. This includes both cross-sectional and longitudinal analyses (Kaplan-Meier, discrete and continuous time-to-event survival analysis, incidence rate estimation and Poisson regression) using complex, longitudinal data from two different types of longitudinal cohort studies (i.e. interval and clinical cohort studies). The position offers opportunities to present results and co-author peer-reviewed publications, which necessitates excellent oral and written communication skills. The position requires the individual to be self-motivated, self-directed, efficient, and responsible for multiple analytic projects simultaneously.
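
The posting's toolkit is SAS and R, but the analyses named above (Kaplan-Meier curves, time-to-event estimation, incidence rates) can be sketched compactly in Python with the lifelines package. Treat this purely as an illustration on made-up data, assuming lifelines is installed; it is not the NA-ACCORD workflow, and the actual work would follow the team's SAS/R conventions.

```python
# Illustrative only: Kaplan-Meier estimate and a crude incidence rate on toy data.
# The study's actual analyses use SAS/R on cohort data; numbers here are made up.
import pandas as pd
from lifelines import KaplanMeierFitter

toy = pd.DataFrame({
    "years_followed": [1.2, 3.4, 0.8, 5.0, 2.1, 4.3, 0.5, 3.9],
    "event_observed": [1, 0, 1, 0, 1, 1, 0, 0],  # 1 = outcome occurred, 0 = censored
})

kmf = KaplanMeierFitter()
kmf.fit(toy["years_followed"], event_observed=toy["event_observed"], label="toy cohort")
print(kmf.survival_function_.tail())        # estimated survival over follow-up time
print("median survival:", kmf.median_survival_time_)

# Crude incidence rate: events per person-year of follow-up.
rate = toy["event_observed"].sum() / toy["years_followed"].sum()
print(f"incidence rate: {rate:.3f} events per person-year")
```
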
Applicant must be willing to work collaboratively with the NA-ACCORD Epidemiology/Biostatistics Core (EBC) study team in consultation with investigators at the contributing cohorts, as well as external scientific investigators, on: + Formation of study designs specific to the study objective + Drafting statistical analysis plans + Translating the specified study design into an analytic-ready data set, including analysis of cohort-level data to ensure proper participant selection + Analysis of individual-level data to answer the scientific question of interest + Identify potential problems with study data and collaborate with the data managers to resolve issues encountered + Preparing data tables and figures for publications and presentations at scientific meetings + Contribute to technical/scientific writing and reviewing drafts of reports, posters, and manuscripts, including writing up interpretation of analysis results and description of statistical methods used + Documenting decision making during the conduct of studies and archiving code for the purposes of reproducing research findings + Navigating shared files and contributing to shared files to create a transparent research workspace + Attend staff meetings and provide updates on status of projects Minimum qualifications (mandatory): + Master's degree in biostatistics, epidemiology, or related quantitative field. + 1 year related experience. + Mastery of SAS and/or R + Demonstrated ability on significant graduate project or additional doctoral education may substitute for experience to the extent permitted by the JHU equivalency formula. _JHU Equivalency Formula: 30 undergraduate degree credits (semester hours) or 18 graduate degree credits may substitute for one year of experience. Additional related experience may substitute for required education on the same basis. For jobs where equivalency is permitted, up to two years of non-related college course work may be applied towards the total minimum education/experience required for the respective job._ Preferred qualifications: Special knowledge, skills, and abilities: Although not required, the following skills and knowledge are preferred: + Microsoft Office programs (Word, Excel, PowerPoint, Outlook) proficiency. + Experience performing longitudinal data analysis and survival analysis. + Good problem-solving skills, including organizing and investigating possible solutions and presenting them to the team for discussion. + Good organizational, written and verbal communication skills in the preparation and presentation of results. + Good interpersonal skills in dealing with investigators and a “team-oriented” approach with other staff members and investigators. Classified Title: Biostatistician Working Title: Biostatistician ​​​​​ Role/Level/Range: ACRP/04/MD Starting Salary Range: $52,495 - $72,210; Commensurate with experience Employee group: Full Time Schedule: M-F 37.5 hours/week Exempt Status: Exempt Location: 31-MD:JH at 111 Market Place Department name: 10001101-Epidemiology Personnel area: School of Public Health The successful candidate(s) for this position will be subject to a pre-employment background check. If you are interested in applying for employment with The Johns Hopkins University and require special assistance or accommodation during any part of the pre-employment process, please contact the HR Business Services Office at jhurecruitment@jhu.edu . For TTY users, call via Maryland Relay or dial 711. 
**The following additional provisions may apply depending on which campus you will work. Your recruiter will advise accordingly.** During the Influenza ("the flu") season, as a condition of employment, The Johns Hopkins Institutions require all employees who provide ongoing services to patients or work in patient care or clinical care areas to have an annual influenza vaccination or possess an approved medical or religious exception. Failure to meet this requirement may result in termination of employment. The pre-employment physical for positions in clinical areas, laboratories, working with research subjects, or involving community contact requires documentation of immune status against Rubella (German measles), Rubeola (Measles), Mumps, Varicella (chickenpox), Hepatitis B and documentation of having received the Tdap (Tetanus, diphtheria, pertussis) vaccination. This may include documentation of having two (2) MMR vaccines; two (2) Varicella vaccines; or antibody status to these diseases from laboratory testing. Blood tests for immunities to these diseases are ordinarily included in the pre-employment physical exam except for those employees who provide results of blood tests or immunization documentation from their own health care providers. Any vaccinations required for these diseases will be given at no cost in our Occupational Health office. **Equal Opportunity Employer** Note: Job Postings are updated daily and remain online until filled. **EEO is the Law** Learn more: https://www1.eeoc.gov/employers/upload/eeoc_self_print_poster.pdf Important legal information http://hrnt.jhu.edu/legal.cfm Equal Opportunity Employer: Johns Hopkins University is an equal opportunity employer and does not discriminate on the basis of race, color, gender, religion, age, sexual orientation, national or ethnic origin, disability, marital status, veteran status, or any other occupationally irrelevant criteria. The university promotes affirmative action for minorities, women, disabled persons, and veterans.
          

(USA-VA-Chantilly) Jr. Software Engineer

 Cache   
**Please review the job details below.** Maxar is looking for a Jr. Software Developer who is interested in leveraging and learning cutting-edge technologies and languages in support of space-based resources and national security. We are looking for people who are passionate about and experienced with one or more of the following: + Software development projects that focus on integrating and refactoring existing commercial or open source tools, artificial intelligence, machine learning, big data, geospatial imagery, 3d visualization and remote sensing - writing in various languages and scripts from C++ to Go, Python, Java/Groovy, and everything in between. + Ruthless automation” of system and software "DevSecOps" deployment and test activities. This includes regression, system, and integration testing which assist in software development quality assurance efforts. **Responsibilities:** + Work on a fully cross-functional team fully leveraging developer-focused, agile approaches with dedicated Scrum Master and Product Owner + Stay fresh and curious by engaging in marketing demonstrations, customer training and providing meeting/briefing support as needed. **Minimum Requirements:** + Must have a current/active Top Secret and be willing and able to obtain a TS/SCI with CI polygraph. + Requires 4 years of relevant experience + Capable of working independently and as a member of a dedicated team to solve complex problems in a clear and repeatable manner. **Preferred Qualifications:** + TS/SCI with polygraph + Degree preferred + Experience with DevSecOps automation, Infrastructure-as-a-Service,Platform-as-a-Service, Containerization, RESTful APIs and services. + Experience with classified government networks, satellite phenomenology, tool integration and workflows such as NiFi a plus. Experience as part of a scrum/agile team, scrum master, or product owner \#cjpost **MAXAR Technologies offers a generous compensation package including a competitive salary; choice of medical plan; dental, life, and disability insurance; a 401(K) plan with competitive company match; paid holidays and paid time off.** We are a vertically integrated, new space economy story, including segments across the value continuum for every moment leading up to and following launch. We lead in satellite communications (building and operating), ground infrastructure, Earth observation, advanced analytics, insights from machine learning, next-generation propulsion, space robotics, on-orbit servicing, on-orbit assembly, and protection of space assets through cybersecurity and monitoring of space systems. By integrating our leading-edge capabilities, we provide innovative, cost-effective solutions, value for customers, and thus unlock the multiplier effect of our combined businesses. **Maxar Technologies values diversity in the workplace and is an equal** **opportunity/affirmative** **action employer. All qualified applicants will receive consideration for employment without regard to sex, gender identity, sexual orientation, race, color, religion, national origin, disability, protected veteran status, age, or any other characteristic protected by law.**
          

(USA-FL-Melbourne) Jr. Software Engineer

 Cache   
**Please review the job details below.** Maxar is looking for a Jr. Software Developer who is interested in leveraging and learning cutting-edge technologies and languages in support of space-based resources and national security. We are looking for people who are passionate about and experienced with one or more of the following: + Software development projects that focus on integrating and refactoring existing commercial or open source tools, artificial intelligence, machine learning, big data, geospatial imagery, 3d visualization and remote sensing - writing in various languages and scripts from C++ to Go, Python, Java/Groovy, and everything in between. + Ruthless automation” of system and software "DevSecOps" deployment and test activities. This includes regression, system, and integration testing which assist in software development quality assurance efforts. **Responsibilities:** + Work on a fully cross-functional team fully leveraging developer-focused, agile approaches with dedicated Scrum Master and Product Owner + Stay fresh and curious by engaging in marketing demonstrations, customer training and providing meeting/briefing support as needed. **Minimum Requirements:** + Must have a current/active Top Secret and be willing and able to obtain a TS/SCI with CI polygraph. + Requires 4 years of relevant experience + Capable of working independently and as a member of a dedicated team to solve complex problems in a clear and repeatable manner. **Preferred Qualifications:** + TS/SCI with polygraph + Degree preferred + Experience with DevSecOps automation, Infrastructure-as-a-Service,Platform-as-a-Service, Containerization, RESTful APIs and services. + Experience with classified government networks, satellite phenomenology, tool integration and workflows such as NiFi a plus. Experience as part of a scrum/agile team, scrum master, or product owner \#cjpost **MAXAR Technologies offers a generous compensation package including a competitive salary; choice of medical plan; dental, life, and disability insurance; a 401(K) plan with competitive company match; paid holidays and paid time off.** We are a vertically integrated, new space economy story, including segments across the value continuum for every moment leading up to and following launch. We lead in satellite communications (building and operating), ground infrastructure, Earth observation, advanced analytics, insights from machine learning, next-generation propulsion, space robotics, on-orbit servicing, on-orbit assembly, and protection of space assets through cybersecurity and monitoring of space systems. By integrating our leading-edge capabilities, we provide innovative, cost-effective solutions, value for customers, and thus unlock the multiplier effect of our combined businesses. **Maxar Technologies values diversity in the workplace and is an equal** **opportunity/affirmative** **action employer. All qualified applicants will receive consideration for employment without regard to sex, gender identity, sexual orientation, race, color, religion, national origin, disability, protected veteran status, age, or any other characteristic protected by law.**
          

(USA-VA-Arlington) Business Analysts Intern

 Cache   
**Please review the job details below.** Maxar is currently seeking a Business Analyst Intern to join our 2020 Summer Internship Program. In this role, you will work as part of the Agile Intelligence team based in Charlottesville, VA. You'll be responsible for applying modern business analysis methods to document requirements and processes that will support efforts to drive operational innovation and excellence. Additionally, you will contribute by demonstrating an understanding of the Business Analysis Body of Knowledge® (BABOK®), while collaborating with managers, engineers, and stakeholders. **Minimum Requirements:** + Must be a US citizen. + Must be at least a rising Sophomore or higher pursuing a degree in a relevant field. Preference will be given to enrolled students majoring in business, information systems management, customer experience management, project management, or similar. + Familiarity with the Business Analysis Body of Knowledge® (BABOK®). **Desired Qualifications:** + Familiarity with Agile development principles, methods, processes, and tools. + Microsoft Office, Excel. + Ability to quickly grasp technical capabilities and requirements. + Process modeling notations and tools **MAXAR Technologies offers a generous compensation package including a competitive salary; choice of medical plan; dental, life, and disability insurance; a 401(K) plan with competitive company match; paid holidays and paid time off.** We are a vertically integrated, new space economy story, including segments across the value continuum for every moment leading up to and following launch. We lead in satellite communications (building and operating), ground infrastructure, Earth observation, advanced analytics, insights from machine learning, next-generation propulsion, space robotics, on-orbit servicing, on-orbit assembly, and protection of space assets through cybersecurity and monitoring of space systems. By integrating our leading-edge capabilities, we provide innovative, cost-effective solutions, value for customers, and thus unlock the multiplier effect of our combined businesses. **Maxar Technologies values diversity in the workplace and is an equal** **opportunity/affirmative** **action employer. All qualified applicants will receive consideration for employment without regard to sex, gender identity, sexual orientation, race, color, religion, national origin, disability, protected veteran status, age, or any other characteristic protected by law.**
          

(USA-FL-Melbourne) Software Engineer

 Cache   
**Please review the job details below.** Maxar is looking for a Software Engineer who is interested in leveraging and learning cutting-edge technologies and languages in support of space-based resources and national security. We are looking for someone who is passionate about and experienced with one or more of the following: + Software development projects that focus on integrating and refactoring existing commercial or open source tools, artificial intelligence, machine learning, big data, geospatial imagery, 3d visualization and remote sensing - writing in various languages and scripts from C++ to Go, Python, Java/Groovy, and everything in between. + “Ruthless automation” of system and software "DevSecOps" deployment and test activities. This includes regression, system, and integration testing which assist in software development quality assurance efforts. **Responsibilities:** + Work on a fully cross-functional team fully leveraging developer-focused, agile approaches with dedicated Scrum Master and Product Owner + Stay fresh and curious by engaging in marketing demonstrations, customer training and providing meeting/briefing support as needed. **Minimum Requirements:** + Must have a current/active Top Secret and be willing and able to obtain a TS/SCI with CI polygraph. + 8 years of relevant experience. **Preferred Qualifications** + TS/SCI with polygraph + Degree preferred + Experience with DevSecOps automation, Infrastructure-as-a-Service, Platform-as-a-Service, Containerization, RESTful APIs and services. + Experience with classified government networks, satellite phenomenology, tool integration and workflows such as NiFi a plus. Experience as part of a scrum/agile team, scrum master, or product owner \#cjpost **MAXAR Technologies offers a generous compensation package including a competitive salary; choice of medical plan; dental, life, and disability insurance; a 401(K) plan with competitive company match; paid holidays and paid time off.** We are a vertically integrated, new space economy story, including segments across the value continuum for every moment leading up to and following launch. We lead in satellite communications (building and operating), ground infrastructure, Earth observation, advanced analytics, insights from machine learning, next-generation propulsion, space robotics, on-orbit servicing, on-orbit assembly, and protection of space assets through cybersecurity and monitoring of space systems. By integrating our leading-edge capabilities, we provide innovative, cost-effective solutions, value for customers, and thus unlock the multiplier effect of our combined businesses. **Maxar Technologies values diversity in the workplace and is an equal** **opportunity/affirmative** **action employer. All qualified applicants will receive consideration for employment without regard to sex, gender identity, sexual orientation, race, color, religion, national origin, disability, protected veteran status, age, or any other characteristic protected by law.**
          

(USA-VA-Chantilly) Software Engineer

 Cache   
**Please review the job details below.** Maxar is looking for a Software Engineer who is interested in leveraging and learning cutting-edge technologies and languages in support of space-based resources and national security. We are looking for someone who is passionate about and experienced with one or more of the following: + Software development projects that focus on integrating and refactoring existing commercial or open source tools, artificial intelligence, machine learning, big data, geospatial imagery, 3d visualization and remote sensing - writing in various languages and scripts from C++ to Go, Python, Java/Groovy, and everything in between. + “Ruthless automation” of system and software "DevSecOps" deployment and test activities. This includes regression, system, and integration testing which assist in software development quality assurance efforts. **Responsibilities:** + Work on a fully cross-functional team fully leveraging developer-focused, agile approaches with dedicated Scrum Master and Product Owner + Stay fresh and curious by engaging in marketing demonstrations, customer training and providing meeting/briefing support as needed. **Minimum Requirements:** + Must have a current/active Top Secret and be willing and able to obtain a TS/SCI with CI polygraph. + 8 years of relevant experience. **Preferred Qualifications** + TS/SCI with polygraph + Degree preferred + Experience with DevSecOps automation, Infrastructure-as-a-Service, Platform-as-a-Service, Containerization, RESTful APIs and services. + Experience with classified government networks, satellite phenomenology, tool integration and workflows such as NiFi a plus. Experience as part of a scrum/agile team, scrum master, or product owner \#cjpost **MAXAR Technologies offers a generous compensation package including a competitive salary; choice of medical plan; dental, life, and disability insurance; a 401(K) plan with competitive company match; paid holidays and paid time off.** We are a vertically integrated, new space economy story, including segments across the value continuum for every moment leading up to and following launch. We lead in satellite communications (building and operating), ground infrastructure, Earth observation, advanced analytics, insights from machine learning, next-generation propulsion, space robotics, on-orbit servicing, on-orbit assembly, and protection of space assets through cybersecurity and monitoring of space systems. By integrating our leading-edge capabilities, we provide innovative, cost-effective solutions, value for customers, and thus unlock the multiplier effect of our combined businesses. **Maxar Technologies values diversity in the workplace and is an equal** **opportunity/affirmative** **action employer. All qualified applicants will receive consideration for employment without regard to sex, gender identity, sexual orientation, race, color, religion, national origin, disability, protected veteran status, age, or any other characteristic protected by law.**
          

(USA-VA-Arlington) Graphic Design Intern

 Cache   
**Please review the job details below.** Maxar Technologies is looking for a Graphic Design Intern to join our 2020 Summer Internship Program. In this role, you will be a part of a team providing support to multiple projects. **Responsibilities:** + Applying the design process for each project. + Designing identities for new products. + Creating artwork for promotions, events, and brands. + Sitting in on design/project meetings. **Minimum Requirements:** + Must be a US citizen. + Must be at least a rising Senior pursuing a Bachelor's degree in Design or a related discipline + Deep knowledge of the design process. + Knowledge of graphic design, branding, etc. + Ability to balance multiple projects at a time. + Ability to take direction from senior team members. + Ability to work both independently and in a team environment. + Creative, outside-the-box problem solver. **MAXAR Technologies offers a generous compensation package including a competitive salary; choice of medical plan; dental, life, and disability insurance; a 401(K) plan with competitive company match; paid holidays and paid time off.** We are a vertically integrated, new space economy story, including segments across the value continuum for every moment leading up to and following launch. We lead in satellite communications (building and operating), ground infrastructure, Earth observation, advanced analytics, insights from machine learning, next-generation propulsion, space robotics, on-orbit servicing, on-orbit assembly, and protection of space assets through cybersecurity and monitoring of space systems. By integrating our leading-edge capabilities, we provide innovative, cost-effective solutions, value for customers, and thus unlock the multiplier effect of our combined businesses. **Maxar Technologies values diversity in the workplace and is an equal** **opportunity/affirmative** **action employer. All qualified applicants will receive consideration for employment without regard to sex, gender identity, sexual orientation, race, color, religion, national origin, disability, protected veteran status, age, or any other characteristic protected by law.**
          

(USA-VA-Charlottesville) Business Analysts Intern

 Cache   
**Please review the job details below.** Maxar is currently seeking a Business Analyst Intern to join our 2020 Summer Internship Program. In this role, you will work as part of the Agile Intelligence team based in Charlottesville, VA. You'll be responsible for applying modern business analysis methods to document requirements and processes that will support efforts to drive operational innovation and excellence. Additionally, you will contribute by demonstrating an understanding of the Business Analysis Body of Knowledge® (BABOK®), while collaborating with managers, engineers, and stakeholders. **Minimum Requirements:** + Must be a US citizen. + Must be at least a rising Sophomore or higher pursuing a degree in a relevant field. Preference will be given to enrolled students majoring in business, information systems management, customer experience management, project management, or similar. + Familiarity with the Business Analysis Body of Knowledge® (BABOK®). **Desired Qualifications:** + Familiarity with Agile development principles, methods, processes, and tools. + Microsoft Office, Excel. + Ability to quickly grasp technical capabilities and requirements. + Process modeling notations and tools **MAXAR Technologies offers a generous compensation package including a competitive salary; choice of medical plan; dental, life, and disability insurance; a 401(K) plan with competitive company match; paid holidays and paid time off.** We are a vertically integrated, new space economy story, including segments across the value continuum for every moment leading up to and following launch. We lead in satellite communications (building and operating), ground infrastructure, Earth observation, advanced analytics, insights from machine learning, next-generation propulsion, space robotics, on-orbit servicing, on-orbit assembly, and protection of space assets through cybersecurity and monitoring of space systems. By integrating our leading-edge capabilities, we provide innovative, cost-effective solutions, value for customers, and thus unlock the multiplier effect of our combined businesses. **Maxar Technologies values diversity in the workplace and is an equal** **opportunity/affirmative** **action employer. All qualified applicants will receive consideration for employment without regard to sex, gender identity, sexual orientation, race, color, religion, national origin, disability, protected veteran status, age, or any other characteristic protected by law.**
          

(USA-VA-Herndon) Jr. Software Engineer

 Cache   
**Please review the job details below.** Maxar is looking for a Jr. Software Developer who is interested in leveraging and learning cutting-edge technologies and languages in support of space-based resources and national security. We are looking for people who are passionate about and experienced with one or more of the following: + Software development projects that focus on integrating and refactoring existing commercial or open source tools, artificial intelligence, machine learning, big data, geospatial imagery, 3d visualization and remote sensing - writing in various languages and scripts from C++ to Go, Python, Java/Groovy, and everything in between. + Ruthless automation” of system and software "DevSecOps" deployment and test activities. This includes regression, system, and integration testing which assist in software development quality assurance efforts. **Responsibilities:** + Work on a fully cross-functional team fully leveraging developer-focused, agile approaches with dedicated Scrum Master and Product Owner + Stay fresh and curious by engaging in marketing demonstrations, customer training and providing meeting/briefing support as needed. **Minimum Requirements:** + Must have a current/active Top Secret and be willing and able to obtain a TS/SCI with CI polygraph. + Requires 4 years of relevant experience + Capable of working independently and as a member of a dedicated team to solve complex problems in a clear and repeatable manner. **Preferred Qualifications:** + TS/SCI with polygraph + Degree preferred + Experience with DevSecOps automation, Infrastructure-as-a-Service,Platform-as-a-Service, Containerization, RESTful APIs and services. + Experience with classified government networks, satellite phenomenology, tool integration and workflows such as NiFi a plus. Experience as part of a scrum/agile team, scrum master, or product owner \#cjpost **MAXAR Technologies offers a generous compensation package including a competitive salary; choice of medical plan; dental, life, and disability insurance; a 401(K) plan with competitive company match; paid holidays and paid time off.** We are a vertically integrated, new space economy story, including segments across the value continuum for every moment leading up to and following launch. We lead in satellite communications (building and operating), ground infrastructure, Earth observation, advanced analytics, insights from machine learning, next-generation propulsion, space robotics, on-orbit servicing, on-orbit assembly, and protection of space assets through cybersecurity and monitoring of space systems. By integrating our leading-edge capabilities, we provide innovative, cost-effective solutions, value for customers, and thus unlock the multiplier effect of our combined businesses. **Maxar Technologies values diversity in the workplace and is an equal** **opportunity/affirmative** **action employer. All qualified applicants will receive consideration for employment without regard to sex, gender identity, sexual orientation, race, color, religion, national origin, disability, protected veteran status, age, or any other characteristic protected by law.**
          

(USA-CO-westminster) Software Engineering Intern

 Cache   
**Please review the job details below.** Maxar Technologies is currently looking for a Software Engineering Intern to join our 2020 Summer Internship Program. You will join our Tactical Ground Programs team in Westminster, CO and help deliver real-world, actual problem-solving solutions for our customers via direct-downlink stations producing near-real time imagery from various satellites. **Responsibilities:** + Be presented with a problem, examine the outputs and trace through code to identify the disconnect. + Work with the software team to figure out what components we already have, how to make a new component or capability, and how to make this code run fast. + Demonstrate an ability to Use version control (GIT) to bring in multiple repositories, issue tracking (JIRA), and automated builds (Bamboo). + Write code that's platform specific (Linux and Windows). + Start with existing code and contribute to modifications and improvements depending on customer needs. **Minimum Requirements:** + Must be a U.S. citizen + Must be at least a rising Sophomore or higher pursuing a Bachelor’s degree in a software engineering, computer science, information technology, etc. + Must show a passion for innovation, an understanding of software systems and applications, and the ability to learn to effectively manage, grow and evolve software solutions. + Knowledge of C++, Java or Python. **MAXAR Technologies offers a generous compensation package including a competitive salary; choice of medical plan; dental, life, and disability insurance; a 401(K) plan with competitive company match; paid holidays and paid time off.** We are a vertically integrated, new space economy story, including segments across the value continuum for every moment leading up to and following launch. We lead in satellite communications (building and operating), ground infrastructure, Earth observation, advanced analytics, insights from machine learning, next-generation propulsion, space robotics, on-orbit servicing, on-orbit assembly, and protection of space assets through cybersecurity and monitoring of space systems. By integrating our leading-edge capabilities, we provide innovative, cost-effective solutions, value for customers, and thus unlock the multiplier effect of our combined businesses. **Maxar Technologies values diversity in the workplace and is an equal** **opportunity/affirmative** **action employer. All qualified applicants will receive consideration for employment without regard to sex, gender identity, sexual orientation, race, color, religion, national origin, disability, protected veteran status, age, or any other characteristic protected by law.**
          

(USA-VA-Herndon) Software Engineer

 Cache   
**Please review the job details below.** Maxar is looking for a Software Engineer who is interested in leveraging and learning cutting-edge technologies and languages in support of space-based resources and national security. We are looking for someone who is passionate about and experienced with one or more of the following: + Software development projects that focus on integrating and refactoring existing commercial or open source tools, artificial intelligence, machine learning, big data, geospatial imagery, 3d visualization and remote sensing - writing in various languages and scripts from C++ to Go, Python, Java/Groovy, and everything in between. + “Ruthless automation” of system and software "DevSecOps" deployment and test activities. This includes regression, system, and integration testing which assist in software development quality assurance efforts. **Responsibilities:** + Work on a fully cross-functional team fully leveraging developer-focused, agile approaches with dedicated Scrum Master and Product Owner + Stay fresh and curious by engaging in marketing demonstrations, customer training and providing meeting/briefing support as needed. **Minimum Requirements:** + Must have a current/active Top Secret and be willing and able to obtain a TS/SCI with CI polygraph. + 8 years of relevant experience. **Preferred Qualifications** + TS/SCI with polygraph + Degree preferred + Experience with DevSecOps automation, Infrastructure-as-a-Service, Platform-as-a-Service, Containerization, RESTful APIs and services. + Experience with classified government networks, satellite phenomenology, tool integration and workflows such as NiFi a plus. Experience as part of a scrum/agile team, scrum master, or product owner \#cjpost **MAXAR Technologies offers a generous compensation package including a competitive salary; choice of medical plan; dental, life, and disability insurance; a 401(K) plan with competitive company match; paid holidays and paid time off.** We are a vertically integrated, new space economy story, including segments across the value continuum for every moment leading up to and following launch. We lead in satellite communications (building and operating), ground infrastructure, Earth observation, advanced analytics, insights from machine learning, next-generation propulsion, space robotics, on-orbit servicing, on-orbit assembly, and protection of space assets through cybersecurity and monitoring of space systems. By integrating our leading-edge capabilities, we provide innovative, cost-effective solutions, value for customers, and thus unlock the multiplier effect of our combined businesses. **Maxar Technologies values diversity in the workplace and is an equal** **opportunity/affirmative** **action employer. All qualified applicants will receive consideration for employment without regard to sex, gender identity, sexual orientation, race, color, religion, national origin, disability, protected veteran status, age, or any other characteristic protected by law.**
          

Jumio’s investment in AI, OCR and biometrics enhances its identity verification

 Cache   

Jumio, the leading AI-powered trusted identity as a service provider, announced the results of its two-year focus on automation, enabled through a variety of AI, machine learning, OCR and biometric-based investments and innovations. This multi-pronged approach has resulted in dramatically faster and more accurate verifications as well as a more intuitive user experience. Jumio has invested heavily in supervised machine-learning models, fed by massive datasets, which are steadily improving verification speed and helping Jumio spot … More

The post Jumio’s investment in AI, OCR and biometrics enhances its identity verification appeared first on Help Net Security.


          

IT / Software / Systems: Frontend Software Engineer (JavaScript) - Plano, Texas

 Cache   
Our Consumer & Community Banking group is looking for seasoned software engineers for our experimentation platform. As a front-end developer, you will be expected to be a subject matter expert in designing and building appealing UIs for customer delight. As a senior member of a small team, you will be in a position to shape the culture and influence the technology stack. Product development skills and experience with product launches are a big plus. Culture is important to us and we are looking for intellectually curious and honest, passionate, and motivated individuals who would like to expand their skills while working on a new exciting venture for the firm. When you work at JPMorgan Chase & Co., you're not just working at a global financial institution. You're an integral part of one of the world's biggest tech companies. In 14 technology hubs worldwide, our team of 40,000+ technologists design, build and deploy everything from enterprise technology initiatives to big data and mobile solutions, as well as innovations in electronic payments, cybersecurity, machine learning, and cloud development. Our $9.5B+ annual investment in technology enables us to hire people to create innovative solutions that will not only transform the financial services industry, but also change the world. At JPMorgan Chase & Co. we value the unique skills of every employee, and we're building a technology organization that thrives on diversity. We encourage professional growth and career development, and offer competitive benefits and compensation. If you're looking to build your career as part of a global technology team tackling big challenges that impact the lives of people and companies all around the world, we want to meet you. Required Skills / Experience - 7+ years of professional software development in JavaScript, HTML, and CSS, with possibly a second set of programming technologies. - Expertise in or strong knowledge of modern front-end technologies (React/Angular/Vue, ES6/Typescript, Redux or similar state management libraries) with a strong preference for React. - Strong experience in various types of testing approaches and code quality tools: TDD/BDD, unit, integration, end-2-end, contract, linting, etc... - Experience with build tools (webpack, yarn/npm, node) and familiarity with the Node ecosystem. - Familiarity with CSS Preprocessors (SASS/LESS) and CSS organization methodologies (BEM, OOCSS, SMACSS) - General: - Excellent communication skills in English (both written and spoken forms). - Desire to build innovative products using cutting-edge technologies. - A firm grasp of fundamental web/internet technologies. - Knowledge and experience working in an Agile environment. Desired Skills / Experience - Knowledge of accessibility design rules (a11y). - Sharp eye for design and UI/UX. - Experience with data visualization and graphing/charting libraries (D3.js, HighCharts, Google Charts). - Experience with Service Workers and Progressive Web Apps (Workbox, Lighthouse). - Experience with containerization and cloud technologies. - Experience developing on a macOS environment using OSS. JPMorgan Chase is an equal opportunity and affirmative action employer Disability/Veteran. ()
          

Data Scientist - Machine Learning - 6 Month Project

 Cache   
Southern-Brighton, Senior Data Scientist Job Role This is an initial 6 month contract working on-site in Brighton with potential for a long-term engagement. You will be working for an organisation that is looking to revolutionise its industry. They are looking for someone to come on board to support the company with their data science tasks and develop research concepts. You will join a collaborative team looking to
          

IT / Software / Systems: Senior Software Architect - Acton, Massachusetts

 Cache   
Why Work at Capital Advisors Group: Capital Advisors Group, Inc. () is a boutique investment advisor ($11.7 billion in assets-under-management as of 6/30/19) focused on providing innovative investment management and debt finance consulting solutions to venture capital-funded startups, emerging growth companies, and Fortune 100 companies. As an established innovator in the financial services industry, we pride ourselves on fostering an entrepreneurial and creative culture that pushes boundaries and promotes growth. Our team of seasoned professionals embodies these values in all that we do. So, if you're a creative, self-motivated software professional interested in leading the development of web-based financial services solutions at a firm that embraces thinking outside the box, we invite you to apply today. The Role: We are looking for a Senior Software Architect to lead the development of a new web-based financial platform. This is a unique opportunity to lead the entire software development life cycle (SDLC) of an innovative solution that has the potential to disrupt the marketplace and grow a large segment of our business. This will be a highly visible role reporting to senior leadership and located in our Newton, MA headquarters. The role's responsibilities include, but are not limited to, the following: Leading the software development and implementation of new and existing web-based financial services solutions Working closely with Leadership, Business Development, Compliance, Research, Marketing, and external teams to help define product roadmaps and estimate development efforts Overseeing the internal Database Developer and managing external software development teams, ensuring all business and software architecture requirements are met Automating manual processes through software development Applying and encouraging innovative thinking throughout the software development life cycle (SDLC) Maintaining a current understanding of technologies and acting as the primary internal expert for software architecture, applications, and industry standards Providing guidance to internal and external stakeholders on how to solve complex software and application issues Defining, managing , and tracking project budgets Requirements: Bachelor's degree in Computer Science or Mathematics Master's degree in Computer Science is a plus, but not required 5 years of software systems design experience with 3 years technical leadership preferred Working knowledge of C#, SQL, .NET, HTML5, JavaScript, CSS, Office365, Salesforce, and AWS Experience with Python, React, Bootstrap, and/or other languages/frameworks is a plus, but not required Knowledge of Machine Learning and Natural Language Processing is a plus, but not required Having been a key contributor to bringing one or more software products to market Ability to identify and understand complex problems and develop effective solutions Strong understanding of the full SDLC and experience with formal SDLC methodologies Experience establishing software engineering best practices, including code standards, code reviews, source control management, build processes, testing, and operating guidelines Experience with business and technical requirements analysis, business process modeling/mapping, methodology development, and data mapping Demonstrated project and team management abilities Ability to work independently or as part of a group Knowledge of the finance industry and current available technologies in the finance industry is a plus, but not required 
Working knowledge of effective vendor management and IT continuity management Ability to perform market and competitor analysis and business domain analysis Strong oral and written communication skills Strong analytical skills Superb attention to detail and excellent time management and organizational skills Benefits: Capital Advisors Group offers a competitive benefits package that includes: Medical, dental and vision insurance; Life insurance; Short-term and long-term disability insurance; A 401(k) plan with matching contributions; A health care flexible spending account plan; and Education assistance. Benefits are subject to eligibility requirements and other provisions. Capital Advisors Group is an equal opportunity employer. ()
          

IT / Software / Systems: Senior Software Engineer - Search Services - Lexington, Massachusetts

 Cache   
Senior Software Engineer - Search Services. US-MA-Lexington. Title: Senior Software Engineer - Search Services. Job ID: 2019-4378. Type: Permanent Full Time. # of Openings: 1. Category: Engineering and R&D. Lexington, Massachusetts. Overview / About the Position: Archiving and e-Discovery are core offerings of Mimecast which are underpinned by search technology. Our challenge is to scale it to keep pace with a rapidly expanding dataset. Job Aim: We are developing in a number of areas and looking for engineers who want to join us in building: Next generation Indexing and Search platform: completely redesigning the Mime - OS search backend; Graph Search: a distributed graph database to provide insight and intelligence on the data customers store in the Mimecast Platform (this project is a custom-built solution to handle hundreds of billions of objects in a cost-efficient manner); Application Log Search: the focus of the project is the aggregation and indexing / search of metadata generated by Mime - OS applications; Text extraction: we constantly strive to improve the quality and efficiency of our text extraction process to feed our indexes. Key Responsibilities: Writing software components to operate at massive scale; maximising performance of core Java and Lucene technologies; utilising emerging techniques such as graph databases and machine learning to enrich the platform. Due to this role's responsibilities, you must be a U.S. citizen to be considered a candidate. Qualifications / Essential Skills and Experience: Minimum of 5 years' experience developing in one or more of the following languages: C/C++, Java, C#; solid experience with concurrency, multithreading, server architectures, distributed systems and load balancing techniques; expert knowledge developing and debugging distributed applications in a *nix environment; search engine experience, ideally Lucene; good knowledge of storage hardware (HDD, SSD); understanding of various levels of file system caching at operating system level. Desired Skills: Experience with operating system internals, programming language design, Solr, ElasticSearch, REST / SOAP programming; knowledge of TCP/IP and network programming. Personal Skills: Attention to detail; analytical skills; proactivity; efficiency; honesty; follow-through. Reward: We offer a highly competitive rewards and benefits package including private healthcare, dental and life coverage. Mimecast is an entrepreneurial and high-growth company which will provide the right candidate with a wealth of career development opportunities. All Mimecasters strive to be high performers, problem solvers, and team players with passion and integrity. An Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, or protected veteran status and will not be discriminated against on the basis of disability.
          

Other: Machine Learning Engineer - Sharon, Massachusetts

 Cache   
Join Hired and find your dream job as a Machine Learning Engineer at one of 10,000+ companies looking for candidates just like you. Companies on Hired apply to you, not the other way around. You'll receive salary and compensation details upfront - before the interview - and be able to choose from a variety of industries you're interested in, to find a job you'll love in less than 2 weeks. We're looking for a talented AI expert to join our team. Responsibilities: engaging in data modeling and evaluation; developing new software and systems; designing trials and tests to measure the success of software and systems; working with teams and alone to design and implement AI models. Skills: an aptitude for statistics and calculating probability; familiarity with machine learning frameworks such as Scikit-learn, PyTorch and Keras/TensorFlow; an eagerness to learn; determination - even when experiments fail, the ability to try again is key; a desire to design AI technology that better serves humanity. These would also be nice: good communication, even with those who do not understand AI; creative and critical thinking skills; a willingness to continuously take on new projects; understanding the needs of the company; being results-driven.
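
For illustration only (not part of the original listing): a minimal sketch of the kind of "data modeling and evaluation" loop the posting describes, using scikit-learn, which the posting names. The dataset is synthetic and every name in the snippet is a placeholder.

```python
# Illustrative sketch only: a toy "data modeling and evaluation" loop with
# scikit-learn. The dataset is synthetic; every name here is a placeholder.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

# Stand-in for real project data.
X, y = make_classification(n_samples=2000, n_features=20, n_informative=8, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

# Fit a baseline classifier.
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Evaluation: precision/recall/F1 on the held-out split.
print(classification_report(y_test, model.predict(X_test)))
```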
          

Data Analyst, Advanced Analytics

 Cache   
LOCATION: Calgary, Alberta (CA-AB). JOB NUMBER: 32430. Why you should join us: We're experiencing an exciting time at Suncor. We are working to apply digital technologies to accelerate operational excellence, to help us achieve world-class performance, generate value, drive and enhance our competitive advantage, and create the workplace of the future. As part of this evolution, what we call Suncor 4.0, we've started using machine learning, robotics, AI and remote sensing technology, and we know that our journey into the digital world will only accelerate, and we need your help. The next phase in our company's evolution is about unleashing the full potential of our people and our company to work differently, harnessing emerging technology and new digital capabilities and developments that are transforming our world. We have a fantastic role for someone early on in their data analytics career. Join our team during this transformative time and build your career with us. You will be mining data and creating visuals and graphic representations to convey a powerful, insightful message while supporting our organization in making meaningful decisions based on advanced data. You will use your expertise to: - Participate in product development from ideation to full deployment - Perform exploratory data analysis (a small pandas sketch follows this posting) - Work closely with user experience and interface designers to assess implementation efforts and influence solution design - Translate mockups and wireframes into informative visuals in multiple formats (e.g. pbix, D3.js, avi, wmv, psd, interactive notebook and others) - Assist in the creation of advanced analytics models and product development - Design and execute performance tests of front-end applications in a big data context - Work with advanced analytics teams to make smarter products - Maintain quality and ensure responsiveness of front-end applications - Create and maintain documentation for support and future enhancements - Interact with Center of Excellence (COE) teams, Analytics Specialists and Engineers in the creation of models - Apply numerical analysis, exploratory data analytics, engineering principles, mathematical and other data techniques to business problems solvable through data-driven decisions. We'd like to review your application if you have… Must-haves (minimum requirements): - One to three years of experience in a quantitative field with strong knowledge of data and analytics - Proven experience with relational databases and writing SQL code - Experience with analytics scripting languages such as Python and R - A Bachelor's degree with a focus in Computer Science, Analytics or Computer Information Systems - Knowledge of and experience with Microsoft Power BI and other visualization tools - Alignment with our values of safety above all else, respect, raise the bar, commitments matter, and do the right thing. Preference for: - Experience working in an AI startup environment or organizations with an agile culture - Experience and interest in visual and graphic design - An open mind to new approaches and learning - A professional attitude and service orientation; superb team player. Where you'll be working, your work schedule and other important information: - You will work out of our Calgary head office, located in the Suncor Energy Centre at 150 – 6th Ave S.W. - Hours of work are a regular 40-hour work week, Monday to Friday, with the potential for extended work hours based on business needs. Why Suncor? We are Canada's leading integrated energy company with a business portfolio that includes oil sands development and upgrading,
offshore oil and gas production, petroleum refining, and product marketing under the Petro-Canada brand. Our global presence offers rewarding opportunities for you to learn, contribute and grow in a variety of career-building positions. We live by the value of safety above all else – do it safely or don't do it. Our st
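
For illustration only (not part of the original listing): a small pandas sketch of the exploratory-data-analysis and aggregation work described above. The file name and column names (timestamp, site_id, reading) are hypothetical placeholders; the output CSV is the sort of table that could feed a Power BI visual.

```python
# Illustrative sketch only: pandas-based exploratory data analysis plus a simple
# aggregation. File name and columns (timestamp, site_id, reading) are hypothetical.
import pandas as pd

df = pd.read_csv("sensor_readings.csv", parse_dates=["timestamp"])

# Quick structural and statistical overview.
df.info()
print(df.describe(include="all"))

# Average reading per site per day; the kind of tidy table a Power BI visual could use.
daily = (
    df.set_index("timestamp")
      .groupby("site_id")
      .resample("D")["reading"]
      .mean()
      .reset_index()
)
daily.to_csv("daily_site_averages.csv", index=False)
```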
          

Project Manager (f/m) SAP Innovative Business Solutions - SAP - Walldorf

 Cache   
We make innovation real by using the latest technologies around the Internet of Things, blockchain, artificial intelligence / machine learning, and big data and…
Found at SAP - Tue, 29 Oct 2019 18:37:42 GMT - Show all Walldorf jobs
          

Global Program Manager (f/m/d) SAP Innovative Business Solutions - SAP - Walldorf

 Cache   
We make innovation real by using the latest technologies around the Internet of Things, blockchain, artificial intelligence / machine learning, and big data and…
Found at SAP - Thu, 24 Oct 2019 18:37:28 GMT - Show all Walldorf jobs
          

Developer/Senior Developer (m/f/d) SAP S/4 HANA - SAP Innovative Business Solutions - SAP - Walldorf

 Cache   
We make innovation real by using the latest technologies around the Internet of Things, blockchain, artificial intelligence / machine learning, and big data and…
Found at SAP - Fri, 18 Oct 2019 12:37:28 GMT - Show all Walldorf jobs
          

Senior Project Manager (m/f/d) SAP Innovative Business Solutions - SAP - Walldorf

 Cache   
We make innovation real by using the latest technologies around the Internet of Things, blockchain, artificial intelligence / machine learning, and big data and…
Found at SAP - Mon, 14 Oct 2019 18:37:16 GMT - Show all Walldorf jobs
          

Innovations in AI, Blockchain, and Business Intelligence, 2019 Research Report - ResearchAndMarkets.com

 Cache   
This edition of IT, Computing and Communications (ITCC) TechVision Opportunity Engine (TOE) provides a snapshot of the emerging ICT led innovations in artificial intelligence, machine learning, blockchain, IoT analytics, and business intelligence. This issue focuses on the application of information and communication technologies in alleviating the challenges faced across industry sectors in areas such as retail, supply chain, telecom, and industrial sectors.
          

Sixgill HyperLabel™ Developer Now Free With Unlimited Labeling

 Cache   
SANTA MONICA, Calif.--(BUSINESS WIRE)--#AI--HyperLabel Developer – a full-featured desktop application for creating labeled datasets for Machine Learning (ML) quickly, easily and with complete privacy – is now available free, with no label quantity restrictions.
          

Sophos Annual Threat Report Details Top Cyberattacks

 Cache   
Sophos, a global leader in cloud-enabled next-generation cybersecurity, launched its 2020 Threat Report, providing insights into the rapidly evolving cyberthreat landscape. The report, produced by SophosLabs researchers, explores changes in the threat landscape over the past 12 months, uncovering trends likely to impact cybersecurity in 2020.
 
John Shier, senior security advisor at Sophos, said: “The threat landscape continues to evolve – and the speed and extent of that evolution is both accelerating and unpredictable. The only certainty we have is what is happening right now, so in our 2020 Threat Report we look at how current trends might impact the world over the coming year. We highlight how adversaries are becoming ever stealthier, better at exploiting mistakes, hiding their activities and evading detection technologies, and more, in the cloud, through mobile apps and inside networks. The 2020 Threat Report is not so much a map as a series of signposts to help defenders better understand what they could face in the months ahead, and how to prepare.”
 
The SophosLabs 2020 Threat Report, which is also summarised in a SophosLabs Uncut article, focuses on six areas where researchers noted particular developments during this past year. Among those expected to have significant impact on the cyberthreat landscape into 2020 and beyond are the following:
 

  • Ransomware attackers continue to raise the stakes with automated active attacks that turn organisations’ trusted management tools against them, evade security controls and disable back-ups in order to cause maximum impact in the shortest possible time.

 

  • Unwanted apps are edging closer to malware. In a year that brought the subscription-abusing Android Fleeceware apps, and ever more stealthy and aggressive adware, the Threat Report highlights how these and other potentially unwanted apps (PUA), like browser plug-ins, are becoming brokers for delivering and executing malware and fileless attacks. 

 

  • The greatest vulnerability for cloud computing is misconfiguration by operators. As cloud systems become more complex and more flexible, operator error is a growing risk. Combined with a general lack of visibility, this makes cloud computing environments a ready-made target for cyberattackers.

 

  • Machine learning designed to defeat malware finds itself under attack. 2019 was the year when the potential of attacks against machine learning security systems was highlighted. Research showed how machine learning detection models could possibly be tricked, and how machine learning could be applied to offensive activity to generate highly convincing fake content for social engineering. At the same time, defenders are applying machine learning to language as a way to detect malicious emails and URLs. This advanced game of cat and mouse is expected to become more prevalent in the future.

 
Other areas covered in the 2020 Threat Report include the danger of failing to spot cybercriminal reconnaissance hidden in the wider noise of internet scanning, the continuing attack surface of the Remote Desktop Protocol (RDP), and the further advancement of automated active attacks (AAA).


          

Engineering: Senior Machine Learning Engineer - Los Angeles, California

 Cache   
We've partnered on an exclusive basis with a leading FinTech brand based out of Santa Monica. They're on the market for a Senior Machine Learning Engineer to sit in their Data team. This role will require an ML Engineer who can bring ML models into production together with a team of product analysts, data engineers, and product managers. Skills/Qualifications: 3 years of experience with machine learning techniques like classification, regression, anomaly detection, and clustering. Experience with data analysis languages such as Python or Scala. Experience with bringing at least 2 models to production. In addition to extremely competitive cash compensation - which includes a quarterly performance bonus - they offer: a generous stock package, 100% coverage of health benefits (incl. coverage for dependents), 401K match, monthly fitness reimbursement, referral car buying program, unlimited PTO, parental leave, plus 10 paid federal holidays. If you are interested in learning more about this opportunity, please reach out.
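
For illustration only (not part of the original listing): one minimal step of "bringing a model to production" as the posting phrases it, persisting a trained scikit-learn estimator and loading it behind a small predict helper. The data is synthetic and the file name is a placeholder; a real deployment would add versioning, validation and a serving layer.

```python
# Illustrative sketch only: persist a trained model and load it behind a tiny
# predict helper. Synthetic data; file name and feature count are placeholders.
import joblib
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X, y)

# Save the fitted estimator as an artifact a serving process can load.
joblib.dump(model, "model.joblib")

def predict(features):
    """Return the positive-class probability for a single feature vector."""
    loaded = joblib.load("model.joblib")
    return float(loaded.predict_proba(np.asarray(features).reshape(1, -1))[0, 1])

print(predict(X[0]))
```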
          

IT / Software / Systems: Senior Data Engineer - SQL / Redshift / AWS - Premier Ecommerce Publishing Brand - Los Angeles, California

 Cache   
Are you a Senior Data Engineer with a strong SQL, ETL, Redshift and AWS background seeking an opportunity to work with massive amounts of data in a very hip marketplace? Are you a Senior Data Engineer interested in unifying data across various consumer outlets for a very well-funded lifestyle brand in the heart of Santa Monica? Are you an accomplished Senior Data Engineer looking for an opportunity to work in a cutting-edge tech environment consisting of SQL, Redshift, Hadoop, Spark, Kafka and AWS? If yes, please continue reading.... Based in Santa Monica, this thriving lifestyle brand has doubled in size in the last year and keeps on growing. With over $75 million in funding, they work hard to provide their extensive audience with advice and recommendations in all things lifestyle: where to shop, eat, travel, etc. Branching into a number of different services and products over the next 12 months, they are building out their Engineering team. They are looking for a Senior Data Engineer to unify and bring to life massive amounts of data from all areas of the business: ecommerce, retail, content, web, mobile, advertising, marketing, experiential and more. WHAT YOU WILL BE DOING: - Architect new and innovative data systems that will allow individuals to use data in impactful and exciting ways - Design, implement, and optimize Data Lake and Data Warehouses to handle the needs of a growing business - Build solutions that will leverage real-time data and machine learning models - Build and maintain ETLs from 3rd party sources and ensure data quality - Create data models at all levels including conceptual, logical, and physical for both relational and dimensional solutions - Work closely with teams to optimize data delivery and scalability - Design and build complex solutions with an emphasis on performance, scalability, and high reliability - Design and implement new product features and research the next wave of technology. WHAT YOU NEED: - Extensive experience and knowledge of SQL, ETL and Redshift - Experience wrangling large amounts of data - Skilled in Python for scripting - Experience with AWS - Experience with Big Data tools such as Hadoop, Spark, and Kafka is a nice plus - Ability to enhance and maintain a data warehouse including use of ETL tools - Successful track record in building real-time ETL pipelines from scratch - Previous ecommerce or startup experience is a plus - Understanding of data science and machine learning technologies - Strong problem solving capabilities - Strong collaborator and passionate advocate for data - Bachelor's Degree in Computer Science, Engineering, Math or similar. WHAT YOU GET: - Join a team of humble, creative and open-minded Engineers shipping exceptional products consumers love to use - Opportunity to work at an awesome lifestyle brand in growth mode - Brand new office space, open and team-oriented environment - Full Medical, Dental and Vision Benefits - 401k Plan - Unlimited Vacation - Summer vacations / time off - Offices closed during the winter holidays and New Year's - Discounts on products - Other perks. So, if you are a Senior Data Engineer seeking an opportunity to grow with a global lifestyle brand at the cusp of something huge, apply now.
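
For illustration only (not part of the original listing): a bare-bones sketch of the SQL/Redshift ETL load step this role centres on, running a Redshift COPY from files staged in S3 via psycopg2 (Redshift speaks the PostgreSQL wire protocol). The cluster endpoint, credentials, table, bucket and IAM role are all hypothetical placeholders.

```python
# Illustrative sketch only: load files staged in S3 into Redshift with COPY.
# Endpoint, credentials, table, bucket and IAM role are hypothetical placeholders.
import psycopg2

COPY_SQL = """
    COPY analytics.orders
    FROM 's3://example-bucket/exports/orders/'
    IAM_ROLE 'arn:aws:iam::123456789012:role/ExampleRedshiftCopyRole'
    FORMAT AS CSV
    IGNOREHEADER 1
    TIMEFORMAT 'auto';
"""

conn = psycopg2.connect(
    host="example-cluster.abc123.us-west-2.redshift.amazonaws.com",
    port=5439,
    dbname="warehouse",
    user="etl_user",
    password="***",  # placeholder; use a secrets manager in practice
)
with conn, conn.cursor() as cur:
    cur.execute(COPY_SQL)  # Redshift pulls the staged files directly from S3
conn.close()
```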
          

IT Solution Architect

 Cache   
Job Summary: NetApp's Information Technology team is looking for a Solution Architect who can create and influence effective technology solutions for the Customer Support and Services domain. The Solution Architect is responsible for mapping the business requirements to systems/technical requirements to ensure they are in line with the enterprise architectural plan. The Solution Architect helps establish the collaboration framework within IT and with the business stakeholders and provides support to the project management team. Strong organizational skills, technical expertise, and attention to detail are key in this customer-focused role. Requires excellent communication skills to influence and negotiate, while working collaboratively. Specific areas of responsibility of the Solution Architect include:
  - Review, interpret, and respond to business requirements to ensure alignment between customer expectations and current or future systems capability
  - Provide input to the strategic direction of technology investments to assist in the development of the enterprise architecture and maximize the return on technology investment
  - Within the agreed enterprise architecture, define and design technology solutions to assist the business in meeting their business objectives
  - Oversee the development, testing, and implementation of technology solutions and report on delivery commitments to ensure solutions are implemented as expected and to agreed timeframes
Job Requirements
  - Ability to work with other IT teams and/or external solution providers to understand the technology landscape and define and document end-to-end solution architectures
  - Clear ability to turn business and functional requirements into actionable designs, and partner with IT teams to drive implementation
  - Understanding of IT application architecture and cloud development
  - In-depth understanding and outstanding technical skills with core Customer Relationship Management (CRM) platform features (e.g., SAP, Salesforce)
  - Strong preference for exposure to one or more of the following:
    - Salesforce/SAP Einstein/Leonardo Artificial Intelligence features such as Data Discovery, Intent, Bot, etc.
    - Design and build of Azure/AWS based systems (EC2, SNS, S3, API Gateway, etc.)
    - Technologies like Java, Python, Tomcat, Node.js
    - Consumption and provision of RESTful and SOAP based APIs, SAML / OAUTH
    - Knowledge of RDBMS, NoSQL, Bigdata In-Memory database (HANA), ETL/ELT platforms
    - Understanding of Machine Learning, AI, and NLP capabilities and technologies
  - Familiarity with content management tools desired. Knowledge of SEO best practices preferred, as well as experience with Knowledge Management tools
  - Working knowledge of the IT Business System Analyst (BSA) role and project and product management practices in a large IT organization
Education
  - A minimum of 8 years of experience is required, with a minimum of 2 years working experience as a Solution Architect in a large IT organization.
  - A Bachelor of Arts or Science Degree required; Information Technology, computer science, or a related technical field is preferred.
  - Demonstrated ability to have completed multiple, complex technical projects.
Nearest Major Market: Durham
Nearest Secondary Market: Raleigh
          

Azure Chat Bot Architects/Engineers

 Cache   
Azure Chat Bot Architects/Engineers - Allentown PA Location: Allentown, PA Duration: 5 to 7 months Skillset:
  • Chatbot development using Azure Bot Framework
  • Expertise in chat bots using NLP and the Microsoft Bot Framework, QnA Maker, and deploying in Azure PaaS.
  • Proficiency in creating machine learning models using Azure Machine Learning Studio.
  • Experience in automation of products, configuration and automated deployment using PowerShell.
  • Experience in Master bot (Virtual Assistant)
  • Developing chat bots using C#, .NET, the Azure Bot Framework and REST APIs (see the sketch after this posting)
  • Creating conversational flows for NLU agents.
  • Develop PowerShell scripts for Azure Resource Manager to create the platform for the bots. Technologies required:
    • Azure Bot Framework
    • Powershell
    • Azure PaaS
    • C#
    • .Net
    • REST APIs
    • Azure
    • Machine learning - provided by Dice
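
For illustration only (not part of the original listing): the posting targets the C#/.NET Bot Framework SDK; the snippet below shows the equivalent message-handler pattern in the Bot Framework Python SDK (botbuilder-core), since Python is the language used for the sketches added to this document. The class name and the stubbed QnA lookup are hypothetical, and the hosting pieces (adapter, web app, Azure Bot registration) are omitted.

```python
# Illustrative sketch only: the message-handler pattern from the Bot Framework
# Python SDK (botbuilder-core). The class and the QnA lookup are hypothetical
# stubs; adapter/web-app hosting and Azure registration are not shown.
from botbuilder.core import ActivityHandler, MessageFactory, TurnContext


class FaqBot(ActivityHandler):
    """Greets new members and answers messages via a stubbed QnA lookup."""

    async def on_message_activity(self, turn_context: TurnContext):
        question = turn_context.activity.text or ""
        # A real bot would query QnA Maker / a language service here.
        answer = f"You asked: {question!r} (QnA lookup stubbed out)"
        await turn_context.send_activity(MessageFactory.text(answer))

    async def on_members_added_activity(self, members_added, turn_context: TurnContext):
        for member in members_added:
            if member.id != turn_context.activity.recipient.id:
                await turn_context.send_activity(MessageFactory.text("Hi! Ask me a question."))
```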
          

Research Scientist - Flexible Sensors

 Cache   

UES, Inc., has a position available for a highly motivated PhD researcher to join our advanced sensor team working at the Air Force Research Laboratory in Dayton, OH. The scientist will perform research in the areas of flexible electronics, smart sensors and actuators, and real-time materials characterization.
Job functions include sensor materials processing using film casting, evaporation and 3D printing, device fabrication and packaging, sensor operation and performance testing, real-time electrical monitoring, and characterization. Materials of interest include polymers, nanocomposites, two-dimensional materials, carbon nanotubes. Excellent communication and writing skills are required. Experience with implementation of machine learning tools for data analysis and interpretation is desired. Some programming experience is desired. Data analysis, summary and reporting, and presentation of results are required.
This position will work closely with the Associate Scientist position posted concurrently.

Requirements:

  • PhD in Materials Science and Engineering, Biomedical Engineering, Chemical Engineering, Electrical Engineering or a closely related field is required
  • 1-3 years of experience in related flexible electronics and/or sensor research
  • Record of excellent communication skills, to include peer reviewed publications and conference presentations
  • Ability to travel to conferences and/or workshops
  • This position is working on-site at a government facility and will require U.S. citizenship


    Additional Information
    UES, Inc. is an innovative science and technology company providing customers with superior research and development expertise since its inception in 1973. Our long-term success is a direct result of a strong commitment to the success of our employees. We look forward to reviewing your application.
    UES, Inc. is committed to hiring and retaining a diverse workforce. We are proud to be an Equal Opportunity/Affirmative Action Employer, making decisions without regard to race, color, religion, creed, sex, sexual orientation, gender identity, marital status, national origin, age, veteran status, disability, or any other protected class. U.S. Citizenship is required for most positions.
          

Azure Chat Bot Architects/Engineers

 Cache   
Title: Azure Chat Bot Architect/Engineer. Location: Allentown, PA. Type: Contract. Skillset:
  • Chatbot development using Azure Bot Framework
  • Expertise in chat bots using NLP and Microsoft BOT framework, QNA Maker and deploying in Azure PaaS.
  • Proficiency in creating machine learning models using Azure Machine Learning Studio.
  • Experience in automation of products, configuration and automated deployment using PowerShell.
  • Experience in Master bot (Virtual Assistant)
  • Developing chat bots using C#, .NET, the Azure Bot Framework and REST APIs
  • Creating conversational flows for NLU Agents.
  • Develop PowerShell scripts for Azure Resource Manager to create the platform for the bots. Technologies required:
    • Azure Bot Framework
    • Powershell
    • Azure PaaS
    • C#
    • .Net
    • REST APIs
    • Azure
    • Machine learning
Max Trujillo, Technical Recruiter, Ascent, 720-573-5273
If this is not a fit for you or you are not interested, Ascent Services Group offers an excellent Referral Bonus! We look forward to hearing from you! About Ascent: The Ascent Services Group (ASG) is a nationally recognized technology staffing and consulting firm whose fundamental business is providing staffing services to Small, Medium, and Large Enterprise clients in our core market verticals: Financial Services, Healthcare, Technology and Life Sciences. As consultants for ASG, you will have access to many of the top clients within the industries we serve. Our goal is to deliver innovative talent through proven best practices and effective resource optimization. Become one of ASG's candidates and experience the difference! - provided by Dice
          

Quantitative Analytics Specialist - Credit and PPNR Modeling

 Cache   
Important Note: During the application process, ensure your contact information (email and phone number) is up to date and upload your current resume prior to submitting your application for consideration. To participate in some selection activities you will need to respond to an invitation. The invitation can be sent by both email and text message. In order to receive text message invitations, your profile must include a mobile phone number designated as Personal Cell or Cellular in the contact information of your application. At Wells Fargo, we want to satisfy our customers' financial needs and help them succeed financially. We're looking for talented people who will put our customers at the center of everything we do. Join our diverse and inclusive team where you'll feel valued and inspired to contribute your unique skills and experience. Help us build a better Wells Fargo. It all begins with outstanding talent. It all begins with you. Corporate Risk helps all Wells Fargo businesses identify and manage risk. The team focuses on several key risk types, including conduct, credit, financial crimes, information security, interest rate, liquidity, market, model, operational, regulatory compliance, reputation, strategic, and technology risk. The group provides leadership, enhances communications, assists with problem identification and solutions, and shares best practices. In addition, the group provides an enterprise-wide view of risk, assists management and our Board of Directors in identifying and monitoring risks that may affect multiple lines of business, and takes appropriate action when business activities exceed the risk tolerance of the company. The Credit and PPNR Modeling (CaPM) Center of Excellence (CoE) within Corporate Credit is responsible for development and implementation of the following models: * Credit Risk: credit loss estimation models for the entire loan portfolio to support regulatory requirements and internal business needs (a toy illustration follows this posting); and * Pre-Provision Net Revenue (PPNR): models used to forecast revenue and expenses and to support Asset and Liability Management. Position Details: CaPM is seeking five qualified candidates who are graduating with Master's degrees in quantitative fields to join the team. As a Quantitative Analytics Specialist 1, you'll work under the supervision of a senior team member to gain comprehensive professional and industry experience. Team members may be responsible for developing, implementing, calibrating, or monitoring models; educating business leaders in the strengths and weaknesses of models; and for providing risk leaders with analysis to manage their risk. You'll also have the opportunity to interact with Wells Fargo senior leaders and learn about various risk management areas. Team members will begin with a combination of orientation, classroom training, and professional development activities.
Opportunities are available in Atlanta, GA; Charlotte, NC; Des Moines, IA; McLean, VA; Minneapolis, MN. Responsibilities will include (but are not limited to): * Performing statistical and mathematical model development under the direction of more experienced team members * Producing required documentation to evidence model development, validation and/or auditing * Understanding credit and operational processes, work flows and issues to sufficiently document and make recommendations for process improvements * Supporting model implementation, production, and monitoring; performing analytics around model results * Understanding business needs and providing possible solutions through clear verbal and written communications to management and fellow team members * Participating in model risk projects for varying purposes, methodologies and relevant lines of business * Staying current with bank regulatory framework and developments * Bringing closure to issues, questions, and requests * Collaborating as a member of a team to solve problems that arise * Presenting solutions effectively to a variety of audiences. Required Qualifications: * A Master's degree or higher in statistics, mathematics, physics, engineering, computer science, economics, or a quantitative field. Desired Qualifications: * Good verbal, written, and interpersonal communication skills * Ability to prioritize work, meet deadlines, achieve goals, and work under pressure in a dynamic and complex environment * Ability to develop partnerships and collaborate with other business and functional areas. Other Desired Qualifications: * Experience and demonstrated first-hand knowledge in a number of the following areas: data analysis, statistical modeling, machine learning, data management, and computing * Excellent computer programming skills and use of statistical software packages such as Python, R, SAS, C++ and SQL. Street Address: NC-Charlotte: 11625 N Community House Road - Charlotte, NC; MN-Minneapolis: 600 S 4th St - Minneapolis, MN; GA-Atlanta: 171 17th St NW - Atlanta, GA; IA-Des Moines: 800 Walnut St - Des Moines, IA; IA-West Des Moines: 7001 Westown Pkwy - West Des Moines, IA; VA-McLean: 1753 Pinnacle Dr - McLean, VA. Disclaimer: All offers for employment with Wells Fargo are contingent upon the candidate having successfully completed a criminal background check. Wells Fargo will consider qualified candidates with criminal histories in a manner consistent with the requirements of applicable local, state and Federal law, including Section 19 of the Federal Deposit Insurance Act. Relevant military experience is considered for veterans and transitioning service men and women. Wells Fargo is an Affirmative Action and Equal Opportunity Employer, Minority/Female/Disabled/Veteran/Gender Identity/Sexual Orientation. Reference Number *******-6
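
For illustration only (not part of the original posting): a toy, fully synthetic probability-of-default model of the kind the credit-loss bullet above alludes to, fitted with scikit-learn and scored by AUC. Feature names, coefficients and data are fabricated for the sketch and bear no relation to any Wells Fargo model.

```python
# Illustrative sketch only: a toy probability-of-default (PD) model on fully
# synthetic borrower features, evaluated with AUC. Nothing here reflects any
# real portfolio or model.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000
utilization = rng.uniform(0, 1, n)           # revolving utilization
delinquencies = rng.poisson(0.3, n)          # past delinquency count
income = rng.normal(60, 20, n).clip(min=10)  # income in $k

# Fabricated "true" default process for the example.
logit = -4 + 3 * utilization + 0.8 * delinquencies - 0.02 * income
default = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X = np.column_stack([utilization, delinquencies, income])
X_tr, X_te, y_tr, y_te = train_test_split(X, default, test_size=0.3, random_state=0)

pd_model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("AUC:", roc_auc_score(y_te, pd_model.predict_proba(X_te)[:, 1]))
```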
          

Business Information Analyst

 Cache   
Business Information Analyst - Reston, Virginia - United States - Posted 08/20/19. Overview - Note: Client has not provided any sponsorship. Location: PARK5, Reston, VA. 6-11 years experience. General Information: - 6-11 Years Experience - Strong demonstrated analytical skills applied to business software solutions maintenance and/or development. - Knowledge of the software development standards and practices. - Demonstrated ability to find efficiencies and improve processes. - Proven ability to determine systemic process faults and improve overall process performance across organizations. - Understanding of claims adjudication, particularly in a health care environment - Understanding of health industry diagnosis, procedure and revenue codes and their application - Strong business analytical skills. - Strong process analysis skills - Strong process documentation skills - Excellent communication/writing skills - Excellent interpersonal skills - Attentive to detail, highly organized, and able to multi-task - Ability to work independently and meet deadlines. Preferred: - Experience with claims adjudication in the Federal Employee Program Health Benefit Plan - Experience working on a Machine Learning or Data Science project - Experience in a software application development environment in an analytical role - Bachelor's degree (health administration, business administration, software engineering or related degree required). Responsibilities: - Partners with stakeholders, process specialists, and users to elicit and surface the unarticulated need - Elicits requirements using interviews, document analysis, scenarios, business analysis, and task and workflow analysis - Collaborate with the Product Owner to create the initial Backlog, reprioritize requirements, and remove outdated story cards - Break down epics, develop user stories and drive detailed reporting requirements with necessary business rules for BI report development - Define and document business requirements for new metrics and reports - Present the requirements back to stakeholders and receive feedback - Translates non-technical requirements into technical business requirements - Collect and review data (e.g. for claims, provider, utilization data) with Data Analysts for all Product Increment deliverables - Critically evaluates information gathered from multiple sources, reconciles conflicts, abstracts up from low-level information to a general understanding, and works with customers to uncover unmet business needs - Partners with architecture and solution delivery teams to enable BI applications that have long-term value - Uses current knowledge of business and technology to recommend systems and process improvements. Required Skills: - Knowledge of BI tools such as MicroStrategy, BusinessObjects, or Cognos.
- Experience with relational databases and knowledge of query tools is required - Proven ability to quickly learn new applications, processes, and procedures - Proven experience with gathering and documenting information requirements for healthcare projects - Able and willing to collaborate in a team environment and exercise independent judgment - Professional image with the ability to form good partner relationships across functions - Strategic, intellectually curious thinker with focus on outcomes - Experience working on data warehouse deployments - Eight to Ten years of BI, data analysis or related experience in Healthcare Payer Business - Demonstrated experience in writing software requirements and test specifications in agile methodology using tools like HPE-Agile Manager Sundar Kolachina Talent Acquisition Specialist KMM Technologies, Inc. Research Court, Suite 450, Rockville, MD. CMMI Level 2 and ISO 9001:2008 Certified, WOSB, SBA 8(A), NMSDC & VA SWaM Certified Contract Vehicles: GSA Schedule 70 & SeaPort-e Prime Tel: ************ - Fax: ************** E-Mail: ************************** Web Site: ***********************
          

Machine learning: innovative technology within supply chains

 Cache   
Watson is IBM’s suite of artificial intelligence (AI) services, applications and tools. Watson aims to help businesses unlock the value of data in new ways and remove repetitive tasks from employees to shift the focus to high-value work. This is in addition to allowing companies to predict and shape...
          

Senior Computer Vision Researcher

 Cache   
Baidu Research General AI team (GAIT) is looking for an outstanding senior researcher with strong background in computer vision, machine learning, deep learning. Our mission is to research and develop next generation artificial intelligence (AI) technologies for image and video understanding as well as related products in cloud, intelligent cameras and robots. As a senior researcher at Baidu, you will be uniquely positioned in our team to work on different industry problems and to push forward frontiers of AI technologies. Publications in premier conferences or journals are also highly encouraged.

Qualifications:


  • PhD (or master with at least 5 years' working experience) in Computer Science, EE, Applied Mathematics, or related fields.
  • Strong publication record in premier AI-related venues such as CVPR, ICCV, ECCV, NIPS, PAMI, TIP or other related major conferences or journals.
  • Strong analytical and problem-solving skills.
  • Team player with good communication skills.
  • Strong coding skill with Python, CUDA, C/C++ and so on.

    Preferred knowledge/skills:


    • Experience in neural architecture search
    • Experience in neural model compression
    • Experience in human pose understanding and so on

          

Sr IT Developer

 Cache   
Ecolab is the leading global supplier of cleaning and sanitation products, programs, and services to national retail grocery stores and quick service restaurants. Known as "Your Food Safety Experts", our mission is to provide a cleaning and sanitation program that helps ensure our customers have clean, food-safe environments while delivering savings to their bottom line.Ecolab is seeking a Sr IT Developer to assist with high profile projects supporting various business initiatives related to application development, system analysis, design and production support. There will be opportunity to leverage a wide variety of tools and develop professionally through formal and informal training.What you will do:
  • Assist in designing new systems and platforms that are utilized in our food safety programs by internal employees and our customers
  • Collaborate to maintain industry standard design patterns to ensure continuity of application usage
  • Develop applications and unit tests in the following arenas:
    • Xamarin Forms
    • Web API
    • ASP.Net MVC
    • WPF
    • Azure resources: Notifications, Machine Learning, SQL, API
    • Develop proof of concept solutions for cutting edge technologies, such as machine learning and virtual reality
    • Utilize cutting edge technologies as the business determines how to apply them.
    • Maintain existing and legacy code for multiple projects
Minimum Qualifications:
      • 6 years of IT experience
      • 4 years' experience working with C# .NET
      • 4 years' experience working with SQL databases
      • No immigration sponsorship available for this opportunity
Preferred Qualifications:
        • Bachelor's Degree in a related field
        • 4 years' experience working with Entity Framework
        • 4 years' experience working with MVVM
        • 4 years' experience working with database design
        • 4 years' experience working with advanced queries
        • 4 years' experience working with IoC/DI
        • 4 years' experience working with unit tests
        • 2 years' experience working with Xamarin Forms
A trusted partner at nearly three million customer locations, Ecolab (ECL) is the global leader in water, hygiene and energy technologies and services that protect people and vital resources. With annual sales of $15 billion and 49,000 associates, Ecolab delivers comprehensive solutions, data-driven insights and on-site service to promote safe food, maintain clean environments, optimize water and energy use, and improve operational efficiencies for customers in the food, healthcare, energy, hospitality and industrial markets in more than 170 countries around the world. For more Ecolab news and information, visit www.ecolab.com. Follow us on Twitter @ecolab, Facebook at facebook.com/ecolab, LinkedIn at Ecolab or Instagram at Ecolab Inc.
Our Commitment to Diversity and Inclusion: At Ecolab, we believe the best teams are diverse and inclusive, and we are on a journey to create a workplace where every associate can grow and achieve their best. We are committed to fair and equal treatment of associates and applicants. We recruit, hire, promote, transfer and provide opportunities for advancement on the basis of individual qualifications and job performance. In all matters affecting employment, compensation, benefits, working conditions, and opportunities for advancement, we will not discriminate against any associate or applicant for employment because of race, religion, color, creed, national origin, citizenship status, sex, sexual orientation, gender identity and expressions, genetic information, marital status, age, disability, or status as a covered veteran. In addition, we are committed to furthering the principles of Equal Employment Opportunity (EEO) through Affirmative Action (AA). Our goal is to fully utilize minority, female, disabled and covered veteran individuals at all levels of the workforce. Ecolab is a place where you can grow your career, own your future and impact what matters.
          

Data Scientist Engineer

 Cache   
Octo is currently seeking a Platform Security Engineer to join a growing team on an exciting and highly visible project for a DoD customer. The project you will be working on is to define and design the data architecture and taxonomy in preparation for conducting extensive analysis of the data ingested via the Air Force's existing legacy applications as they transition to a more evolvable architecture that can better leverage a cloud environment to deliver better technology, reduce program sustainment costs, and achieve higher system reliability. Our approach is to transform legacy applications to be cloud native and reside on a Platform as a Service (PaaS). Additionally, we modernize current applications by breaking them down into loosely coupled micro-services, and leveraging a continuous integration / continuous delivery pipeline to enable an agile DevOps strategy. Octo Data Scientists on this project will have an opportunity to receive 6+ months of Pivotal Cloud Foundry training as part of the standard on-boarding process for this project. You: As a Data Scientist at Octo, you will be involved in the analysis of unstructured and semi-structured data, including latent semantic indexing (LSI), entity identification and tagging, complex event processing (CEP), and the application of analysis algorithms on distributed, clustered, and cloud-based high-performance infrastructures. Exercises creativity in applying non-traditional approaches to large-scale analysis of unstructured data in support of high-value use cases visualized through multi-dimensional interfaces. Handle processing and index requests against high-volume collections of data and high-velocity data streams. Has the ability to make discoveries in the world of big data. Requires strong technical and computational skills - engineering, physics, mathematics - coupled with the ability to code, design, develop, and deploy sophisticated applications using advanced unstructured and semi-structured data analysis techniques and utilizing high-performance computing environments. Has the ability to utilize advanced tools and computational skills to interpret, connect, predict and make discoveries in complex data and deliver recommendations for business and analytic decisions. Experience with software development, either an open-source enterprise software development stack (Java/Linux/Ruby/Python) or a Windows development stack (.NET, C#, C++). Experience with data transport and transformation APIs and technologies such as JSON, XML, XSLT, JDBC, SOAP and REST. Experience with Cloud-based data analysis tools including Hadoop and Mahout, Accumulo, Hive, Impala, Pig, and similar. Experience with visual analytic tools like Microsoft Pivot, Palantir, or Visual Analytics. Experience with open source textual processing such as Lucene, Sphinx, Nutch or Solr. Experience with entity extraction and conceptual search technologies such as LSI, LDA, etc. (a small LSI sketch follows this posting). Experience with machine learning, algorithm analysis, and data clustering. Us: We were founded as a fresh alternative in the Government Consulting Community and are dedicated to the belief that results are a product of analytical thinking, agile design principles and that solutions are built in collaboration with, not for, our customers. This mantra drives us to succeed and act as true partners in advancing our client's missions. What we'd like to see:
  • Full-stack software development experience with a variety of server-side languages such as Java, C#, PHP, or Javascript (NodeJS)
  • Experience with modern front-end frameworks like React, Vue, or Angular
  • Intimate knowledge of agile and lean philosophies and experience successfully leading software teams in the practice of these philosophies
  • Experience with Continuous Delivery and Continuous Integration techniques using tools like Jenkins or Concourse
  • Experience with test-driven development and automated testing practices
  • Experience with data analytics, data science, or data engineering, MySQL and/or Postgres, GraphQL, Redit, and/or Mongo
  • Experience with building and integrating at the application and database level REST/SOAP APIs and messaging protocols and formats such as Protobuf, gRPC, and/or RabbitMQ
  • Experience with Pivotal Cloud Foundry
  • Experience with Event/Data Streaming services such as Kafka
  • Experience with Enterprise Service Bus and Event Driven Architectures
  • Experience with prototyping front-end visualization with products such as ElasticStack and/or Splunk
  • Strong communication skills and interest in a pair-programming environment
Bonus points if you:
  • Possess at least one of the Agile Development Certifications
    • Certified Scrum Master
    • Agile Certified Practitioner (PMI-ACP)
    • Certified Scrum Professional
  • Have proven experience writing and building applications using a 12-factor application software architecture, microservices, and APIs
  • Are able to clearly communicate and provide positive recommendations of improvements to existing software applications
Years of Experience: 5 years or more
Education: Associates in a Technical Discipline (Computer Science, Mathematics, or equivalent technical degree)
Clearance: SECRET
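As a purely illustrative aside on the latent semantic indexing (LSI) work described above, the sketch below shows one minimal, common way to run LSI with scikit-learn (TF-IDF followed by truncated SVD). The sample documents and component count are invented placeholders, not project data, and this is not Octo's actual pipeline.

    # Illustrative only: a minimal latent semantic indexing (LSI) pass over a few
    # toy documents. Sample text and n_components are placeholders.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.decomposition import TruncatedSVD

    docs = [
        "aircraft maintenance log entry with engine fault codes",
        "supply request for engine spare parts",
        "personnel travel order and lodging receipt",
    ]

    tfidf = TfidfVectorizer(stop_words="english")
    X = tfidf.fit_transform(docs)             # sparse term-document matrix

    svd = TruncatedSVD(n_components=2, random_state=0)
    topics = svd.fit_transform(X)             # low-rank "concept" coordinates per document

    print(topics.round(3))

Each row of the printed matrix places a document in a low-dimensional concept space, which is the property that conceptual-search tools built on LSI/LDA exploit.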
          

Data Scientist

 Cache   
Description WHO WE ARE Oliver Wyman is a global leader in management consulting. With a staff of over 5,000 spread across 30 countries, Oliver Wyman combines deep industry knowledge with specialized expertise in strategy, operations, risk management, and organization transformation. Harbour Advanced Manufacturing & Engineering division is specialized on manufacturing improvement, product development related services, technical risk management and Digital Manufacturing across the industrial domain. Job Summary: Data Scientist at Harbour provides comprehensive technical knowledge in data science (obtain data extracts, build analytics models, create visualizations) combined with business expertise in digital manufacturing, quality management and operations improvement. Data Scientist should have extensive experience in automotive, aerospace and/or manufacturing industry, providing strong technical leadership to ensure valid analytics solutions to solve real world problems in client projects with international clients. Responsibilities: - Leads complex development of analytics solutions to solve real world problems in client projects from idea to concept to implementation in some of the world's largest organizations. - Work in a multi-disciplinary environment with specialists in manufacturing operations, advanced analytics, machine learning and design. - Plan and execute good practice data integration strategies and approaches. - Mentor more junior colleagues in projects and provide guidance where required. - Develop methods, tools and software assets to reinforce the capabilities of the team/company. - Presents concepts and solutions to colleagues and clients. Initiates customer feedback on effectiveness of services and products. - Travel nationally & internationally, as required, to support clientsKnowledge, Skills & Abilities: - Business expert in digital manufacturing, quality management and operations improvement. - Experience in applying advanced analytical and statistical methods in automotive, aerospace and/or manufacturing industry. - Strong track record and expertise of programming languages, with a focus on advanced analytics (such as Python, R, Scala, Java, SQL, Tableau). - Experience working with large data sets and relational databases. - Good team spirit and ability to communicate complex ideas effectively to both colleagues and clients. - Excellent problem-solving skills and the ability to analyze issues, identify causes, and recommend solutions quickly. - Good presentation and communication skills, with the ability to explain complex analytical concepts to people from other fields.Education & Experience: - BSc or MSc level in the field of Computer Science, Machine Learning or Engineering. - Minimum of three years of relevant work experience in automotive, aerospace and/or manufacturing industry with strong data science track recordSuccess Factors: - Team player - Self-directed and willing to take initiative - Resourceful; able to solve problems with minimal supervision - Exhibits attention to detail and is a champion for accuracyIf you like what you've read, we'd love to hear from you. You can submit your CV including a short covering email introducing yourself and what you're looking for. The application process will include both technical testing and team-fit interviews. Oliver Wyman is an equal opportunity employer. Our commitment to diversity is genuine, deep and growing. 
We're not perfect yet, but we're working hard right now to make our teams balanced, representative and diverse. Requisition #: R_******-en
          

Two Ph.D. positions in Water Security and Data Science at the University of Delaware

 Cache   
Employer University of Delaware, Department of Geography and Spatial Sciences Location Newark, DE Salary Competitive stipend, tuition waiver, and subsidized health insurance. Posted Nov 01, 2019 Closes Nov 30, 2019 Discipline Earth and Space Science Informatics, Hydrology, Interdisciplinary/Other, Natural Hazards, Social Sciences Career Level Student / Graduate Education Level PhD Job Type Internship Relocation Cost No Relocation Sector Type Academia
Do you have a passion for water, and an interest in data science? The Water Security group in the Department of Geography and Spatial Sciences and Department of Civil and Environmental Engineering at the University of Delaware is seeking two motivated Ph.D. students to join our program in Fall 2020, studying complex water systems and addressing water security issues through the lens of coupled human and water systems.
Requirements: The ideal candidates have a passion for water, can commit to improving water security in the agricultural, urban or coastal environment, and have a Master's degree in applied mathematics, computer science, engineering, geography, hydrology, physics, statistics, water resources or a related field. It is preferable that the candidates have some knowledge of computer programming (e.g., R, Python, Java, Fortran, C), machine learning, cloud computing, system optimization or computational modeling. Experience in probabilistic graphical models and/or agent-based modeling is a plus. Candidates are expected to 1) have good oral communication and writing skills; 2) work independently and also in an interdisciplinary team; 3) develop their own research ideas; and 4) have the willingness and ability to learn, use and develop data analysis tools and/or cyber-infrastructure to address water security issues in the changing environment. Please check out our lab website for more information.
Salary and Benefits: Both positions offer a competitive stipend, a tuition waiver, and subsidized health insurance. The students will work directly with Dr. Yao Hu and interact with interdisciplinary experts.
Application: Interested applicants should send a brief letter of their interests and a copy of their CV and transcripts via email to Dr. Yao Hu (******************) (******************************************* by December 1 prior to formally applying to UD Graduate School. The deadline for priority consideration for admission and funding is January 5, 2020. Women and underrepresented minorities are strongly encouraged to apply.
About: The Department of Geography and Spatial Sciences and Department of Civil and Environmental Engineering at the University of Delaware are nationally and internationally recognized departments that offer a variety of graduate programs. Our students are also given the opportunity to pursue a Master's degree in Data Science, a dual Master's degree in Civil Engineering and Business Administration (MCE/MBA) or a Ph.D. degree in Engineering and Public Policy. The University of Delaware is an equal opportunity affirmative action employer. Recognized by the Chronicle of Higher Education as one of America's best universities to work for in 2012, the University of Delaware is located midway between Philadelphia and Baltimore, and is a Sea Grant, Space Grant, and Land Grant institution.

AI/ML Executive Architect

 Cache   
Summary / DescriptionUnisys is seeking candidates to make a difference by providing meaningful solutions to help our government secure the nation and fulfill the mission of government most effectively and efficiently. We are looking for candidates for Artificial Intelligence/Machine Learning Executive Architect role for our corporate office in Reston, VA.--The role of an AI/ML Solution Executive includes:--- Educates Unisys Federal Delivery Leadership, our existing clients and prospects as to emerging opportunities to apply AI/ML analytics to better leverage government data to make more timely and better mission decisions--- Provides the AI/ML vision for Unisys Federal--- Participates actively in providing technical leadership for AI/ML opportunities in the new business development cycle from deal identification, participating in call plans, driving solution strategy, in responding to a solicitation and in participating in tech challenges/hackathons to showcase our AI/ML skills--- Working with Unisys business development, program teams, capture and account teams to engage customers to best understand their AI/ML needs and to present Unisys capabilities, offerings and solutions in a compelling manner, thereby shaping customer perspectives --- Leads the establishment and sustainment of Unisys portfolio of capabilities for AI/ML, including marketing literature, proposal content, BoE/rate cards, proof points, reference architectures and proofs of concept/demoware--- Provides deep domain expertise regarding AI/ML, data modeling, enterprise data warehousing, data integration, data quality, master data management, statistical analyses of primarily structured datasets--- Provides deep domain expertise of AI/ML algorithms, tooling and solutions to solve mission problems for Unisys Federal clients--- Provides expertise in building government oriented solutions leveraging NoSQL solutions, big data (Hadoop/Apache Spark), Geographic Information Systems (GIS), key-value pair, columnar, graph, search, natural language processing, data science, machine learning and data visualization --- Drives market demand for AI/ML solutions by providing concise messages tailored for Unisys customers and their desired outcomes--- Defines our go to market strategy for AI/ML --- Collaborates closely with our corporate solutions organizations and alliance partners to incubate, design and deliver AI/ML offerings--- Curates proof points and past performance qualifications for Unisys success stories for applying AI/ML capabilities supporting the mission of government--- Identifies market trends in technology for AI/ML solutions--- Collaborates with Unisys Commercial Solutions organizations to prioritize corporate investments in AI/ML solutions--- Works with business units in tailoring capability strategies specific for them and work with appropriate government relationships to shape agency procurement--- Shapes procurements through presentations to clients and other speaking engagements --- Determines which alliances to pursue and events for Unisys to participate The AI/ML Executive is intimately familiar with market trends, helps to define go to market strategy and ensure that Unisys is in a position to be the best choice for meeting our customers--- AI/ML needs through collaboration with customers, partners, and internal stake holders to understand the requirements and connect them with Unisys capabilities and offerings.RequirementsRequired Skills:--- Master's degree and 20 years of relevant experience or equivalent--- 
Strong expertise in designing and delivering AI/ML/Deep Learning solutions--- Expertise and experience implementing technology solutions in four or more of the following areas: database design, data warehousing, data governance, metadata management, big data, noSQL, data science, data analytics, machine learning, natural language processing, streaming data.--- Experience with scientific scripting languages (e.g. Python, R) and object oriented programming languages (e.g. Java, C#) --- Strong expertise with machine learning and deep learning models and algorithms--- Solid grounding in statistics, probability theory, data modeling, machine learning algorithms and software development techniques and languages used to implement analytics solutions--- Deep experience with data modeling and Big Data solution stacks--- Deep knowledge in enterprise IT technologies, including databases, storage, and networks--- Deep experience with one or more Deep Learning frameworks such as Apache MXNet, TensorFlow, Caffe2, Keras, Microsoft Cognitive Toolkit, Torch and Theanu--- Has a successful track record in providing technical leadership in federal new business pursuits --- In-depth understanding of application, cloud, middleware, data management and system architecture concepts; experience leading the design and integration of enterprise-level technical solutions. --- Experience in capturing technical requirements and defining technical solutions in the form of conceptual, logical, and physical designs, including the ability to articulate those concepts verbally, graphically and in writing. --- Ability to synthesize solution design information, architectural principles, available technologies, third-party products, and industry standards to formulate a system architecture that meets client requirements and can be delivered within the desired timeframe. --- Experience developing cost models, technical delivery plans, technical solutions and basis of estimates (BOEs), including BOM development. Also develop concept of operations and discuss these models in Agile, federal SDLC or ITIL based terms. --- Experience identifying potential design, performance, security, and support problems, including ability to identify technical risks/challenges and develop relevant mitigation strategies. --- Extensive knowledge of the broad spectrum of technology areas, including technology trends, forthcoming industry standards, new products, and the latest solution development techniques; ability to leverage this knowledge to formulate technical solution strategy. --- Ability to consistently apply architectural guidelines when creating new solution architectures. --- Ability to develop integrated technology requirements project plan. --- Ability to interface with team members at all levels, including business operations, finance, technology, and management. --- Was primary author for a technical conference or whitepaper submission. (to be provided)Desired Qualifications --- Certifications from leading analytics platform providers (Cloudera, Horton, Databricks, AWS, Microsoft, etc.) 
--- Experience in leading remote teams in building demonstrations and proofs of concept--- Experience in classical DMBOK data management practices including data governance, data quality management, master data management, metadata management practices and tools--- Deep knowledge of Federal domain-specific data formats and structures, data storage, retrieval, transport, optimization, and serialization schemes--- Demonstrated experience developing engineering solutions for both structured and unstructured data, including data search. --- Experience working with very large (petabyte scale) datasets including data integration, analysis and visualization--- Experience with data integration and ETL tools (e.g. Apache NiFi, SSIS, Informatica, Talend, Azure Data Factory)--About UnisysDo you have what it takes to be mission critical? Your skills and experience could be mission critical for our Unisys team supporting the Federal Government in their mission to protect and defend our nation, and transform the way government agencies manage information and improve responsiveness to their customers. --As a member of our diverse team, you---ll gain valuable career-enhancing experience as we support the design, development, testing, implementation, training, and maintenance of our federal government---s critical systems. Apply today to become mission critical and help our nation meet the growing need for IT security, improved infrastructure, big data, and advanced analytics.Unisys is a global information technology company that solves complex IT challenges at the intersection of modern and mission critical. We work with many of the world's largest companies and government organizations to secure and keep their mission-critical operations running at peak performance; streamline and transform their data centers; enhance support to their end users and constituents; and modernize their enterprise applications. We do this while protecting and building on their legacy IT investments. Our offerings include outsourcing and managed services, systems integration and consulting services, high-end server technology, cybersecurity and cloud management software, and maintenance and support services. Unisys has more than 23,000 employees serving clients around the world. Unisys offers a very competitive benefits package including health insurance coverage from first day of employment, a 401k with an immediately vested company match, vacation and educational benefits. To learn more about Unisys visit us at www.Unisys.com.Unisys is an Equal Opportunity Employer (EOE) - Minorities, Females, Disabled Persons, and Veterans.#FED#
          

Assistant Professor - Data Science: Multiple Areas

 Cache   
ASSISTANT PROFESSOR - DATA SCIENCE: MULTIPLE AREAS

Job #JPF02319

* Halıcıoğlu Data Science Institute - HALICIOĞLU DATA SCIENCE INSTITUTE

RECRUITMENT PERIOD

Open date: October 28th, 2019
Next review date: Sunday, Dec 15, 2019 at 11:59pm (Pacific Time)
Apply by this date to ensure full consideration by the committee. Final date: Thursday, Dec 31, 2020 at 11:59pm (Pacific Time)
Applications will continue to be accepted until this date, but those received after the review date will only be considered if the position has not yet been filled.

DESCRIPTION

The University of California, San Diego invites applications from outstanding candidates for a tenure-track faculty position for primary appointment at the Halicioglu Data Science Institute with optional joint appointment in another academic department. The appointment will be at the Assistant level. Successful appointees will have a track record of scientific accomplishments, excellence in teaching, a commitment to university service and a commitment to support diversity, equity and inclusion at the university. The University of California, San Diego is committed to academic excellence and diversity within the faculty, staff and student body.

This search spans all areas of the data science including artificial intelligence, machine learning, data management and their applications and systems. For review purposes, candidates must submit an application to one of the following four broad areas of search that list topics of current interests, but are not limited to those listed.

a) Statistical Foundations of Data Science, Applied Statistics and Biostatistics
Statistical foundations of data science is one of the critical areas for hiring. Statistics (including Biostatistics) is the science of drawing inferences from data, thus forming a pillar of the emerging discipline of data science, together with Machine Learning. While both Statistics and Machine Learning are seeking optimal procedures for inference, e.g. prediction, the latter is more focused on algorithms and their computation/implementation, while the former is crucially entasked with quantifying the accuracy of such inference. Topics of current interest in Statistics include (but are not limited to): High-dimensional data, Large-scale Hypothesis Testing, Regularization and Sparsity, Functional Data, Causal inference, Complex Data, Dependent Data, Inference after model selection, Prediction Intervals, quantification of statistical significance and control of false discovery rate.
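To make the "Regularization and Sparsity" topic above concrete, one standard example (given here purely as an illustration, not as part of the announcement) is the lasso estimator for high-dimensional regression:

    \hat{\beta}^{\text{lasso}} \;=\; \arg\min_{\beta \in \mathbb{R}^{p}}
    \left\{ \frac{1}{2n}\sum_{i=1}^{n}\bigl(y_i - x_i^{\top}\beta\bigr)^{2}
    \;+\; \lambda \sum_{j=1}^{p} \lvert \beta_j \rvert \right\}

The l1 penalty drives many coefficients exactly to zero, which is why questions such as inference after model selection and control of the false discovery rate, also listed above, arise naturally in this setting.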

b) Digital and Data Infrastructure including Security
We invite candidates who build and study software systems and software-hardware integrated systems that serve as platforms to enable and amplify the impact of data science algorithms and applications. Our focus spans the whole lifecycle of digital data infrastructure, including (but not restricted to) systems for sensing and sourcing data, systems for storing, querying, learning, and analytics over data, streaming methods for analyzing very large datasets and systems for securely deploying data science methods in high-impact application verticals. Across this lifecycle, concerns of scalability, security, and usability will receive special attention. Examples of research areas that fit this focus include database and analytics systems, information integration and knowledge base construction, data mining and network/graph analytics, Internet of Things and cyber-physical systems, spatiotemporal information systems, cloud computing for data science, secure machine learning, database security. Examples of high-impact application verticals include smart and connected health, smart cities, e-commerce platforms, and social media platforms.

c) Artificial Intelligence, AI in Science Applications
We seek faculty candidates with a background in artificial intelligence or machine learning. This area includes, but is not exclusive to: natural language processing, computer vision, modeling high dimensional data with low intrinsic dimension, modeling dynamical systems, streaming methods for analyzing very large datasets, active learning, methods for incorporating machine learning into real-world systems that combine humans and machines, neural networks/deep-learning, reinforcement learning, optimization. We are interested in candidates with a strong applications emphasis as well as those that study the theoretical properties of the algorithms. Priority will be given to those who show a strong connection and relevance to data science.

d) Data Science in Public Policy
We seek faculty candidates with a background in statistics, computer science, economics, or public policy who are dedicated to exploring the use, risks, and benefits of data science for a well-defined vertical application area: such as democratic practice, media, trade in digital services, health, education, and societal infrastructure for energy, communication, transportation, and national security. Of particular interest will be candidates who are pursuing advances in theory, methods and tools that help us understand the opportunities and challenges that digital data pose to markets, organizations, society, and government; and (or) who are devising methods and evaluating policies to advance these opportunities and address these challenges, while considering both technical constraints and constraints related to social, political and economic feasibility.

A PhD in Computer Science, Math, Economics, Engineering or related discipline is required at the start of position.

Successful applicants will be expected to teach graduate and undergraduate students in the Data Science major/minor degree programs offered by the Institute. In case of a partial joint appointment with another department, the teaching workload would include appropriate course work in the participating department. All candidates are expected to establish a vigorous program of high-quality federally funded research that focuses on innovations in one of the targeted search areas.

The preferred candidate will have demonstrated strong leadership or a commitment to support diversity, equity and inclusion in an academic setting. The level of appointment and salary is commensurate with qualifications and based on UC pay schedules.

Applications must be submitted electronically through AP-Online Recruit website: https://apol-recruit.ucsd.edu/JPF02319

For applicants with interest in spousal/partner employment, please see the UC San Diego Partner Opportunities Program website.

UC San Diego is an Equal Opportunity/Affirmative Action Employer with a strong institutional commitment to excellence through diversity.

All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, or status as a protected veteran.

JOB LOCATION

La Jolla, California

REQUIREMENTS

Document requirements

*

Curriculum Vitae - Your most recently updated C.V.

The University of California, San Diego is an Equal Opportunity/Affirmative Action Employer. You have the right to an equal employment opportunity.
https://www1.eeoc.gov/employers/upload/eeoc_self_print_poster.pdf
For more information about your rights, see the EEO is the Law Supplement.
https://www.dol.gov/ofccp/regs/compliance/posters/pdf/OFCCP_EEO_Supplement_Final_JRF_QA_508c.pdf
The University of California, San Diego is committed to providing reasonable accommodations to applicants with disabilities.
See our Jeanne Clery Disclosure of Campus Security Policy and Campus Crime Statistics Act Annual Security Reports.
https://www.ucop.edu/ethics-compliance-audit-services/compliance/clery-act/clery-act-details.html
          

Discrimination Systems Analyst

 Cache   
Description:Make a lasting impact! Be part of a team that brings together leading missile defense industry experts to help guide the future of the Missile Defense Agency's (MDA's) ballistic and hypersonic Missile Defense System (MDS). National Team-Engineering (NT-E) is MDA's one-stop shop for systems engineering reach-back into the major companies that build the essential elements of the MDS. Systems engineers on the National Team are helping to drive the future systems that will protect the United States, its friends and allies for generations to come.

The National Team has an immediate need for an experienced senior staff-level systems analyst to perform the following tasks:
--- Perform simulation-based end-to-end functional and performance analysis of the Ballistic Missile Defense System (BMDS) utilizing large-scale complex BMDS Modeling and Simulation (M&S) tools (a brief illustrative sketch follows this list)
--- Develop modeling and simulation tools in support of system engineering discrimination products for the MDA
--- Support end-to-end analysis on discrimination products for the BMDS
--- Perform development, updates, and benchmarking/anchoring of BMDS M&S tools to support system level analysis and discrimination tasks
--- Document analysis results in reports and briefings to be presented to the Missile Defense National Team (MDNT) and Missile Defense Agency (MDA) representatives
--- Perform as a systems investigator as part of a larger team
--- This position is in Huntsville, AL.
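The sketch referenced in the first bullet: a deliberately generic Monte Carlo estimate of an end-to-end success probability for a multi-stage system. All stage names and probabilities are invented placeholders with no relation to BMDS parameters; the point is only to illustrate the simulation-based style of analysis.

    # Illustrative only: generic Monte Carlo estimate of an end-to-end success
    # probability for a multi-stage system. Stage names and probabilities are
    # invented placeholders, not real system parameters.
    import random

    STAGES = {"detect": 0.95, "track": 0.90, "discriminate": 0.85, "intercept": 0.80}

    def one_trial(rng):
        # The trial succeeds only if every stage succeeds.
        return all(rng.random() < p for p in STAGES.values())

    def estimate(n_trials=100_000, seed=1):
        rng = random.Random(seed)
        return sum(one_trial(rng) for _ in range(n_trials)) / n_trials

    if __name__ == "__main__":
        print(f"Estimated end-to-end success probability: {estimate():.3f}")
        # Analytic check for this toy model: the product of the stage probabilities.
        print(f"Analytic value: {0.95 * 0.90 * 0.85 * 0.80:.3f}")

For this toy model the estimate can be checked against the analytic product of the stage probabilities, which mirrors the benchmarking/anchoring step the responsibilities above describe.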
Basic Qualifications:
--- Demonstrated capability to perform systems analysis and performance analysis tasks
--- Demonstrated numerical modeling and simulation capability using a high-level programming language (e.g. MATLAB, Python, Java, FORTRAN) or experience analyzing output
--- Demonstrated experience in any of the following domains: system concept trade studies, operational analysis, requirements analysis, feasibility analysis, system performance measures of effectiveness analysis (incl. technical performance measures), discrimination methods and techniques definition, EO/IR sensors, RF/Radar sensors, missiles/interceptors, ICBM / hypersonic / MRM / SRM threats, discrimination analysis, detection/tracking, or Command and Control (e.g., C3I, C4I, etc.)
--- Must have analysis experience in aspects of the Ballistic missile defense system (e.g., Aegis Weapons Systems, Ground-based Midcourse Defense (GMD), Fire Control (FOM, GFC), C2BMC, BMDS sensors (EO/IR/RF), kill vehicles, interceptors, countermeasures, counter-countermeasures, or threat systems
--- Must be able to effectively communicate orally and written with a diverse group of individuals.
--- Must be comfortable of working both independently and in a team environment
--- Must have an active Top-Secret clearance
Desired Skills:
National Team employees may have the opportunity to work on a variety of specialized projects based on their backgrounds. The following skills are desirable, but not required to be considered a strong candidate for this position. If you have experience in one or more of the topics listed below, please note it on your resume:

--- Demonstrated experience modeling or simulating missile defense applications or similar domains (air combat, air defense, intercept of ballistic targets, intercept of non-ballistic targets, space vehicles, sub-orbital mechanics, etc.)
--- Element-level experience with Analysis, Assessment, or Verification and familiarity with BMDS Performance Tier 1 Metrics (PES, DA, LAD, RSC, etc)
--- Knowledge of ballistic and advanced threat systems (e.g., aero-maneuvering, hypersonic, hyper-glide, countermeasures, etc.)
--- Modeling and analysis experience in aspects of the Ballistic missile defense system (e.g., LRDR, AN/TPY-2, SBX, UEWR, Aegis Weapons Systems, Ground-based Midcourse Defense (GMD), Fire Control, C2BMC, THAAD, etc.)
--- Systems engineering experience with large, complex systems
--- Experience with data analytics and machine learning principles
--- Demonstrated team leadership experience

Lockheed Martin is an Equal Opportunity/Affirmative Action Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, pregnancy, sexual orientation, gender identity, national origin, age, protected veteran status, or disability status.
Join us at Lockheed Martin, where your mission is ours. Our customers tackle the hardest missions. Those that demand extraordinary amounts of courage, resilience and precision. They're dangerous. Critical. Sometimes they even provide an opportunity to change the world and save lives. Those are the missions we care about.

As a leading technology innovation company, Lockheed Martin's vast team works with partners around the world to bring proven performance to our customers' toughest challenges. Lockheed Martin has employees based in many states throughout the U.S., and Internationally, with business locations in many nations and territories.
EXPERIENCE LEVEL:
Experienced Professional
          

When AI helps with diagnosis

 Cache   
Artificial intelligence (AI) is gradually making its way into hospitals as well. Increased computing power, the intensified generation of data, and advances in machine learning promise new possibilities in medical research and care. At the same time, a whole range of ethical and legal questions is coming into focus. How does AI change the roles...
          

Finance Technology Global Operations Lead

 Cache   
recruiting for a Director level, Finance Technology Global Operations Lead to be located in New Brunswick or other local New Jersey locations. Caring for the world, one person at a time has inspired and united the people of Johnson & Johnson for over 125 years. We embrace research and science - - bringing innovative ideas, products and services to advance the health and well-being of people. Employees of the Johnson & Johnson Family of Companies work with partners in health care to touch the lives of over a billion people every day, throughout the world. With $76.5 billion in 2017 sales, Johnson & Johnson is the world's most comprehensive and broadly-based manufacturer of health care products, as well as a provider of related services, for the consumer, pharmaceutical, and medical devices. There are more than 265 Johnson & Johnson operating companies employing approximately 126,500 people and with products touching the lives of over a billion people every day, throughout the world. If you have the talent and desire to touch the world, Johnson & Johnson has the career opportunities to help make it happen. The FTS Global Operations Lead is a primary thought leader that plays a critical role in coordinating the execution of all major tech solution development and deployment within the Finance Technology Solutions organization. You will report directly to the Vice President of FTS, and acts as their primary point of contact for all cross-functional tech solution projects and partners with senior management and technologists to drive target- state tech transformation efforts throughout the business. This role is a critical leader in driving operations of the FTS organization, particularly through the deployment of major projects (e.g. CFIN). This leader also partners closely with Finance Global Process Owners to ensure that technology solution architecture enables business process designs through innovative technology solutions, with the FS&T Project Management Organization to govern and manage ongoing tech project intake. This role oversees the conceptualization, design, and subsequent implementation of innovative, scalable, and fit-for-purpose finance technology products that respond to organizational demands. The Integrated Global Operations Lead should be an established thought leader, driving decisions around next generation technology solutions in conjunction with a shared vision for the future, with a focus on the role of technology in transforming the scale, business adoption, cost, and impact of finance technology solutions. Responsibilities As a key partner for the VP, FTS, drives the operations of both the run and project deployment efforts within the FTS organization Oversees development, implementation and adoption of next-gen finance tech solutions throughout the J&J finance organization alongside key Finance and Technology leaders; coordinates innovation efforts through major project deployments such as Central Finance and Enterprise Performance Management Supports cross-functional implementation efforts of new tech solutions within finance technology to improve reporting, planning, and analysis capabilities and ensures seamless integration into the business (processes, systems, people, etc.) 
Acts as primary point of contact for the Vice President of Finance Technology Solutions for all cross-functional initiatives Partners closely with Global Process Owners to ensure tech solution architecture aligns appropriately with business process designs and with the FS&T Project Management Organization to govern and manage ongoing tech project intake Supports the development of the technology solutions roadmap, working closely with the lead Strategic Solutions Architect Qualifications * A minimum of a Bachelors Degree is required * 10+ years of business operations experience required, preferably with end-to-end technology solutions architecture experience within a large- scale, transformational business environment * Extensive experience developing and implementing operational Finance & Technology solutions throughout a large-scale, complex business landscape is highly preferred * Experience translating business needs into technology solutions and managing cross-functional priorities * Expertise in industry best-practices for next-gen operational finance solutions (e.g. RPA, AI / Machine Learning, Blockchain, etc.) * Knowledge of digital finance technologies, with an emphasis on implementing these technologies at scale and driving business value and adoption * Experience in developing run governance models so that technology solutions can effectively manage change, and keep pace with changing business environments * Experience in complex multi-team delivery models and practical application of technical methods and procedures * Deep knowledge of organizational systems, models, and interdependencies needed to align the organization to the FS&T agenda * Excellent at building strong relationships with peers and with other senior-level stakeholders * Up to 20% travel may be required * Skills to influence others and move toward a common vision * Flexible, adaptable, and able to thrive in ambiguous situations * Experience with large-scale transformation and process change efforts * Team-oriented attitude and ability to work collaboratively with and through others * Do you strive to join an outstanding team that is dynamic and ever-changing? Is career growth and opportunity appealing to you? Apply to this opportunity today. Johnson & Johnson is an Affirmative Action and Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, age, national origin, or protected veteran status and will not be discriminated against on the basis of disability Primary Location United States-New Jersey-New Brunswick- Other Locations United States-New Jersey-Raritan, United States-New Jersey-Somerville, United States-New Jersey-Skillman Organization Johnson & Johnson Services Inc. (6090) Job Function Finance Requisition ID **********
          

Data Scientist

 Cache   
Job Details Overview Who we are & . Ciox Health, a health technology company, is dedicated to improving U.S. health outcomes by transforming clinical data into actionable insights. With an unmatched network offering ubiquitous access, Ciox Health can release, acquire, enhance and deliver medical record and discrete clinical data from anywhere across the United States. What we offer & . At Ciox Health we offer all employees a place to grow and expand their current skills so that they can not only help build Ciox Health into the greatest health technology company but create a career that you can be proud of. We offer you complete training and long-term career goals. Our environment is what most of our employees are the proudest of and our IT Group is comprised of some of the brightest and talented individuals. Give us just a few moments to explain why we need you and hope you will help us change how the health Industry manages its' medical records. Purpose of this role. Open to recent Graduates within the Data Science field. Be a part of transforming the exchange of clinical data using the most advanced technology available. Ciox Health is on a mission to simplify the exchange of medical information. By partnering with the healthcare providers who hold health data and those who are requesting it Ciox is uniquely positioned to access, facilitate and improve the management and exchange of protected health information. Data Scientists are pioneers in leveraging data with Natural Language Processing (NLP) and Machine Learning (ML) algorithms to drive better clinical actions and outcomes. By partnering with business leaders at Ciox Health, this role will validate varying hypothesis to support business strategy into reality and make a very visible impact to consumers of data and improve the bottom line performance of Ciox Health.Responsibilities - Data exploration and discover new uses for existing data sources - Partner with management and business units on innovative ways to successfully utilize data and related AI/ML/NLP tools to advance business objectives and develop efficiencies - Work with product / business team to identify possible avenues to apply AI/ML - Provide guidance to application engineering team so that they can build, deploy and support AI/ML models in production - Develop hypothesis and evaluate the performance of various NLP and AI/ML algorithms to address the business opportunity - Perform analyses using statistical packages / languages such as Python or Spark - Provide oversight to application engineering team so that they can interpret and monitor usage of ML models and continuously measure & tune its accuracy - Develop subject matter expertise on source systems data and metadata - Gain and master a comprehensive understanding of operations, processes, and business objectives and utilize that knowledge for data analysis and business insightQualifications - Master's degree or higher in a quantitative or relevant field (Statistics, Math, Economics, Engineering, Computer Science, Business Analytics, Data Science) - Experience in leading large-scale data science projects and delivering from end to end - Strong proficiency in Python & scripting in general. - Strong experience in data management and analysis with relational and NoSQL database - Excellent problem solving and critical thinking capabilities. 
- Experience with NLP technology - Experience with Python (sklearn et al), Spark, Scala, or Java - Strong foundational quantitative knowledge and skills - Strong experience in SQL and database management Recommended skills Natural Language Processing Data Analysis Data Management Databases Java (Programming Language) Operations
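As a minimal illustration of the model-building and evaluation responsibilities described above (and only that; this is not Ciox's pipeline), the sketch below trains and scores a small scikit-learn text classifier. The toy "documents" and labels are invented placeholders, not clinical data.

    # Illustrative only: a minimal scikit-learn text-classification pipeline with a
    # held-out evaluation. The tiny toy dataset below is entirely invented.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split
    from sklearn.pipeline import make_pipeline

    texts = ["request for cardiology records", "billing inquiry about invoice",
             "release of imaging records", "question about payment plan"] * 25
    labels = ["records", "billing", "records", "billing"] * 25

    X_train, X_test, y_train, y_test = train_test_split(
        texts, labels, test_size=0.25, random_state=0, stratify=labels)

    model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
    model.fit(X_train, y_train)
    print("held-out accuracy:", accuracy_score(y_test, model.predict(X_test)))

Tracking a held-out metric like this over time is the simplest version of the "continuously measure & tune its accuracy" responsibility listed above.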
          

Software Engineering Manager

 Cache   
THE CHALLENGE
Eventbrite's business continues to grow and scale rapidly, powering millions of events. Event creators and event goers need new tools and technologies that empower them to create/have some of the most memorable of life's moments, live experiences. One of our most important elements in achieving our company goals is our people. As an engineering manager you're responsible for the careers, productivity, and quality (among other things) of Eventbrite's builders.
THE TEAM
We're a people-focused Engineering organization: the women and men on our team value working together in small teams to solve big problems, supporting an active culture of mentorship and inclusion, and pushing themselves to learn new things daily. Pair programming, weekly demos, tech talks, and quarterly hackathons are at the core of how we've built our team and product. We believe in engaging with the community, regularly hosting free events with some of the top technical speakers, and actively contributing to open source software (check out Britecharts as an example!). Our technology spans across web, mobile, API, big data, machine learning, search, physical point of sale, and scanning systems. This role is based in Eventbrite's Nashville office. We're one of 5 Eventbrite engineering offices around the world. For a little taste of what the team is like and what Eventbrite's Nashville office has to offer, see http://bit.ly/NashEng
THE ROLE
We're looking for a people-focused manager to help support the career growth of our engineers and collaborate on improvement within our organization.
THE SKILL SET





    • Demonstrated experience in recruiting a well-rounded, diverse technical team
    • You have a strong technical background and can contribute to design and architectural discussions - coach first, player second.
    • You support your team in providing context and connecting it with how the team impacts the organization
    • Experience working with a highly collaborative environment, coaching a team who ships code to production often
    • With the help of other engineering managers, you develop a sustainable, healthy work environment which is both encouraging and challenging
    • In a leadership/management position for 2-5 years with demonstrated growth of high-functioning engineering teams



      ABOUT EVENTBRITE
      Eventbrite is a global ticketing and event technology platform, powering millions of live experiences each year. We empower creators of events of all shapes and sizes - from music festivals, experiential yoga, political rallies to gaming competitions -- by providing them the tools and resources they need to seamlessly plan, promote, and produce live experiences around the world. Last year, the team served 795,000 creators hosting nearly 4 million experiences across 170 countries. Meet some of the Britelings that make it happen.
      IS THIS ROLE NOT AN EXACT FIT?
      Sign up to keep in touch and we'll let you know when we have new positions on our team.

      Eventbrite is a proud equal opportunity/affirmative action employer supporting workforce diversity. We do not discriminate based upon race, ethnicity, ancestry, citizenship status, religion, color, national origin, sex (including pregnancy, childbirth, or related medical conditions), marital status, registered domestic partner status, caregiver status, sexual orientation, gender, gender identity, gender expression, transgender status, sexual stereotypes, age, genetic information, military or veteran status, mental or physical disability, political affiliation, status as a victim of domestic violence, assault or stalking, or other applicable legally protected characteristics. Applicant Privacy Notice

          

Principal Software Engineer - Voice Commerce

 Cache   
Principal Software Engineer - Voice Commerce - Walmart - Sunnyvale, CA 94087
Position Description: On the Walmart Voice Commerce team we are building completely new capabilities to allow our customers to shop by seamlessly interacting with their connected devices using spoken language. This team is part of the Growth organization and will build new voice experiences both in-house and in collaboration with strategic partners. Voice as a medium for shopping is still in its infancy, and as part of this team you will get to work on industry-leading solutions and be at the forefront of this emerging platform. You will get to be part of defining how customers shop in their everyday lives.
Minimum Qualifications: Masters or equivalent degree in a computational science or engineering with 5+ years of experience, or Bachelors with 7+ years of experience. Familiarity with distributed computing frameworks (e.g., Hadoop/Spark) and relational databases (e.g., Oracle, MySQL), and knowledge of NoSQL databases. Strong implementation experience with a programming language (e.g., Java/C++/Scala) and a scripting language (e.g., Python/Perl/Ruby), and familiarity with Linux/Unix/Shell environments. Strong and demonstrable experience of building complex software systems with deep algorithmic solutions.
Additional Preferred Qualifications: Experience building and maintaining large-scale data pipelines in online advertising, recommender systems, search, ecommerce or relevant areas. Experience building and/or maintaining machine learning models and pipelines. Familiarity with a job scheduler (e.g., Jenkins/Azkaban/Airflow). Experience with Elasticsearch/Solr. #LI-SN1
Company Summary: The Walmart eCommerce team is rapidly innovating to evolve and define the future state of shopping. As the world's largest retailer, we are on a mission to help people save money and live better. With the help of some of the brightest minds in technology, merchandising, marketing, supply chain, talent and more, we are reimagining the intersection of digital and physical shopping to help achieve that mission.
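For context on the distributed-computing qualification above, here is a minimal PySpark aggregation sketch. It assumes a local pyspark installation; the column names and sample rows are invented placeholders and this is not Walmart's code.

    # Illustrative only: a tiny PySpark aggregation of the sort used in
    # large-scale data pipelines. Schema and rows are invented placeholders.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.master("local[*]").appName("voice-demo").getOrCreate()

    events = spark.createDataFrame(
        [("add_to_cart", "milk"), ("add_to_cart", "eggs"), ("search", "milk")],
        ["utterance_intent", "item"],
    )

    top_items = (events
                 .where(F.col("utterance_intent") == "add_to_cart")
                 .groupBy("item")
                 .count()
                 .orderBy(F.desc("count")))

    top_items.show()
    spark.stop()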
          

Sr. Research Scientist, Reinforcement Learning

 Cache   
Changing the world through digital experiences is what Adobe's all about. We give everyone from emerging artists to global brands everything they need to design and deliver exceptional digital experiences. We're passionate about empowering people to create beautiful and powerful images, videos, and apps, and transform how companies interact with customers across every screen. We're on a mission to hire the very best and are committed to creating exceptional employee experiences where everyone is respected and has access to equal opportunity. We realize that new ideas can come from everywhere in the organization, and we know the next big idea could be yours. Take a peek into Adobe life in this video.
The challenge: The Cloud Technology organization builds platform and client services that are foundational building blocks for many other Adobe products and services. Areas of focus include: identity, security, cloud storage, e-commerce, workflow management, synchronization, customer facing web apps, scalability, infrastructure management and search, just to name a few. Our mission is to build highly scalable, highly available and highly resilient services that fulfill the business objectives of Adobe. The Data Science Lab (DSL) at Adobe Research, San Jose, is looking to hire an established researcher in the broad area of multi-armed bandits, reinforcement learning, sequential decision-making, and probabilistic planning (a minimal bandit sketch, included for illustration, follows this listing). The successful candidate should have credentials that are preferably at the senior research scientist level. This is an opportunity to work alongside an established world class team of researchers with expertise in reinforcement learning, sequential decision making, multi-armed bandits, optimization, game theory, sketching and streaming algorithms, causation, counterfactuals and imagination-based AI. DSL has an excellent publication record with dozens of papers at top-tier machine learning and AI conferences and journals in recent years. Over the past few years, Adobe has had a world class team in RL, with a highly successful track record of publications in top AI and ML conferences. With more than 170 world-class research scientists and engineers, Adobe Research blends cutting-edge academic discovery with industry impact. Our scientists are provided with the resources, support, and freedom to shape their ideas into innovative technologies. They collaborate with colleagues at over fifty universities, often presenting their work at top-tier international conferences. Many of our researchers' discoveries are incorporated into Adobe's products, building the company's reputation as a pioneer in content and data intelligence. Adobe is one of the largest software companies in the world, and a market leader in three major areas: Creative Cloud (Photoshop, Illustrator, Spark etc.), Document Cloud (PDF and related software), and the Digital Experience Cloud (including Adobe Audience Manager, Adobe Analytics, Target, and Campaign). Adobe Experience Cloud (*******************************************) is one of the largest data collection platforms in the world, managing the content, customer intelligence, and digital marketing for most Fortune 500 companies. Adobe Digital Experience Cloud processes trillions of transactions per year, involving hundreds of petabytes of data, and offers unparalleled opportunities for exploring web scale solutions for AI and machine learning.
The role:
* Help drive the technical agenda in reinforcement learning and related topics.
* Responsible for providing technical leadership across multiple teams, by understanding the technical space deeply enough to help guide corporate strategy or business unit strategy, or by providing innovations that fuel the growth of Adobe.
* Be able to lead cross functional working groups or initiatives or provide consulting/advice to other departments or groups within the company.
* Partner with the already existing research teams at Adobe, including world class experts in computer vision, deep learning, NLP, graphics, and HCI.
* Provide critical analysis of issues for continuous improvement of technology, process and team productivity.
* Act as an industry evangelist, internally and externally. Engage with the research community at large, including university collaborations, intern recruiting and mentorship, and publishing and participation in top-tier conferences.
The Requirements:
* Five or more years of experience and a PhD in Machine Learning, Artificial Intelligence or related field
* Prior experience in the development of current or future products or technologies
* Recognized by peers in industry, academia or the research community
* Recognized technical expertise including but not limited to publications, editorial and advisory boards, conference/symposium presentations, patents, professional peer recognition and strategically important developments, innovations, or technical contributions
* Demonstrated experience in mentoring and coaching interns and junior technical contributors.
Application Submission: The application should include a brief description of the applicant's research interests and experience, plus a CV that contains the degrees, relevant publications, names and the contact information of references, and other relevant documents.
At Adobe, you will be immersed in an exceptional work environment that is recognized throughout the world on Best Companies lists. You will also be surrounded by colleagues who are committed to helping each other grow through our unique Check-In approach where ongoing feedback flows freely. If you're looking to make an impact, Adobe's the place for you. Discover what our employees are saying about their career experiences on the Adobe Life blog and explore the meaningful benefits we offer. Adobe is an equal opportunity employer. We welcome and encourage diversity in the workplace regardless of race, gender, religion, age, sexual orientation, gender identity, disability or veteran status.
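The bandit sketch mentioned above: a toy epsilon-greedy simulation of a multi-armed bandit, offered only to make the research focus concrete. The arm reward probabilities and hyperparameters are invented placeholders.

    # Illustrative only: a toy epsilon-greedy multi-armed bandit loop.
    # Reward probabilities and hyperparameters are invented placeholders.
    import random

    TRUE_REWARD_PROBS = [0.05, 0.12, 0.08]   # unknown to the agent
    EPSILON, STEPS = 0.1, 10_000

    def run(seed=0):
        rng = random.Random(seed)
        counts = [0] * len(TRUE_REWARD_PROBS)
        values = [0.0] * len(TRUE_REWARD_PROBS)      # running mean reward per arm
        for _ in range(STEPS):
            if rng.random() < EPSILON:               # explore a random arm
                arm = rng.randrange(len(values))
            else:                                    # exploit the current best estimate
                arm = max(range(len(values)), key=values.__getitem__)
            reward = 1.0 if rng.random() < TRUE_REWARD_PROBS[arm] else 0.0
            counts[arm] += 1
            values[arm] += (reward - values[arm]) / counts[arm]   # incremental mean
        return values

    print(run())   # estimates approach the true probabilities for frequently pulled arms

Real sequential decision-making work of the kind described in the posting replaces this fixed exploration rate with more principled strategies and richer state, but the explore/exploit trade-off is the same.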
          

Topaz Mask AI 1.0.1 (Win64)

 Cache   

Topaz Mask AI 1.0.1 (Win64) REPACK | 1.2 Gb

Creating complex selections by hand and perfecting them almost always takes way longer than expected. Meet Topaz Mask AI. Mask AI allows you to create tricky masks in record time thanks to our intuitive machine learning technology and trimap technique. Less user input for an extremely high-quality mask has always been a photographer's dream, and now you can have it with Mask AI.
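To illustrate the trimap idea in general terms (this is a generic sketch, not Topaz Labs' implementation), the snippet below derives a trimap, meaning definite foreground, definite background, and an unknown band, from a rough binary mask using morphological operations; a matting or machine-learning model then only has to resolve the unknown band.

    # Illustrative only: building a trimap (foreground / background / unknown band)
    # from a rough binary mask with erosion and dilation. Generic sketch of the
    # trimap concept, not Topaz Labs' actual algorithm.
    import numpy as np
    from scipy.ndimage import binary_erosion, binary_dilation

    def make_trimap(rough_mask, band=5):
        """Return 255 for sure foreground, 0 for sure background, 128 for unknown."""
        sure_fg = binary_erosion(rough_mask, iterations=band)
        sure_bg = ~binary_dilation(rough_mask, iterations=band)
        trimap = np.full(rough_mask.shape, 128, dtype=np.uint8)
        trimap[sure_fg] = 255
        trimap[sure_bg] = 0
        return trimap   # a matting model would then refine only the 128 band

    rough = np.zeros((32, 32), dtype=bool)
    rough[8:24, 8:24] = True                 # placeholder "subject" region
    print(np.unique(make_trimap(rough)))     # -> [  0 128 255]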


          

Automation Engineer Intern

 Cache   
Automation Engineer Intern - Intel - San Jose, CA 95125 - Temporary, Internship
Job Description: Come intern with one of the largest engineering companies in the world! In this role, you will be a part of the Intel Programmable Solutions Group (PSG) team. We are looking for a candidate who conducts or participates in multidisciplinary research in the design, development, testing and utilization of information processing hardware and circuitry in FPGAs. You will be working with product development engineering on the automation and infrastructure for large data analysis. A 6-month internship is preferred.
Qualifications: You must possess the minimum qualifications to be initially considered for this position. Preferred qualifications are in addition to the minimum requirements and are considered a plus factor in identifying top candidates. Experience listed below would be obtained through a combination of your school work/classes/research and/or relevant previous job and/or internship experiences.
Minimum Requirements: The candidate must be pursuing a Master's degree in Computer Engineering, Computer Science, Data Science or related field. Minimum of 6 months experience in: SQL and/or database knowledge. Python programming.
Preferred qualifications: Machine learning knowledge is a plus. Large data analysis.
Inside this Business Group: The Programmable Solutions Group (PSG) was formed from the acquisition of Altera. As part of Intel, PSG will create market-leading programmable logic devices that deliver a wider range of capabilities than customers experience today. Combining Altera's industry-leading FPGA technology and customer support with Intel's world-class semiconductor manufacturing capabilities will enable customers to create the next generation of electronic systems with unmatched performance and power efficiency. PSG takes pride in creating an energetic and dynamic work environment that is driven by ingenuity and innovation. We believe the growth and success of our group is directly linked to the growth and satisfaction of our employees. That is why PSG is committed to a work environment that is flexible and collaborative, and allows our employees to reach their full potential.
Posting Statement: All qualified applicants will receive consideration for employment without regard to race, color, religion, religious creed, sex, national origin, ancestry, age, physical or mental disability, medical condition, genetic information, military and veteran status, marital status, pregnancy, gender, gender expression, gender identity, sexual orientation, or any other characteristic protected by local law, regulation, or ordinance.
          

Security Technology Manager

 Cache   

We are an Equal Opportunity Employer and do not discriminate against any employee or applicant for employment because of race, color, sex, age, national origin, religion, sexual orientation, gender identity, status as a veteran, and basis of disability or any other federal, state or local protected class.




Job Description
Arm is at the heart of the world's most advanced digital products. Our technology enables the creation of new markets and the transformation of industries and society. We design scalable, energy-efficient processors and related technologies to deliver the intelligence in applications ranging from sensors to servers, including smartphones, tablets, enterprise infrastructure and the Internet of Things.

Our innovative technology is licensed by Arm Partners who have shipped more than 50 billion Systems on Chip (SoCs) containing our intellectual property since the company began in 1990! Together with our Connected Community, we are breaking down barriers to innovation for developers, designers and engineers, ensuring a fast, reliable route to market for leading electronics companies.

With offices around the world, Arm is a diverse community of dedicated, innovative and highly talented professionals! By enabling an inclusive, meritocratic and open workplace where all our people can grow and succeed, we encourage our people to share their unique contributions to Arm's success in the global marketplace

About the role

Arm has an exciting opportunity to join and be part of the Technology Strategy team within the Automotive and IoT segment organization. This team is responsible for ensuring key technologies such as security, functional safety, communications and machine learning come together holistically to support Arm's growth in key market verticals. This includes being the funnel for feedback on security technologies to Arm's product management team and we are an essential actor in driving new market requirements into the product planning process.

This challenging and stimulating role requires you to engage across the full breadth of the automotive and IoT/embedded ecosystem, including tier ones, OEMs, silicon partners and other suppliers, regional standards groups, industry consortia, as well as emerging players in the security value chain. The Technology Strategy team plays a key role in educating, influencing and enabling our partners to supply secure, robust systems for automotive and IoT use cases. This senior individual contributor has a deep understanding of existing Arm security architectures and a high comfort level creating and delivering content and collateral, demos and other technical educational materials, as a subject matter expert. This role will have global responsibility for the segments' security ecosystem engagements with a primary focus on automotive and IoT partners. The role will require some international travel for customer meetings, to support standards bodies and to collaborate with fellow teams based in other regions. The critical measurements of the team are based upon successfully driving adoption of Arm security technologies.

This is a high-profile opportunity in a critical role contributing to the success of Arm in automotive and embedded markets; security will shape many of Arm's future business opportunities in the Internet of secure things.

What will I be accountable for?

  • Develop and execute market expansion strategies, from broad based evangelism and marketing communications through to business development projects and design win activities to facilitate the adoption of Arm-based security solutions.
  • Drive adoption programs within the automotive and industrial ecosystems and ARM partnership.
  • Engage with players across the ecosystem to identify new market opportunities, trends and requirements.
  • Generate technical marketing materials, including blogs, eBooks, whitepapers, presentation scripts and other marketing collateral
  • Build relationships throughout the Arm ecosystem, from silicon partners, software partners, electronics distributors through to service providers, tier ones and OEMs, standards bodies and industry associations to achieve results







Job Requirements
What skills, experience, and qualifications do I need?

    Required

    • Advanced technical degree in Computer Science, or a similar, relevant subject. Ideal candidates will have a business or marketing degree in addition to a technical background
• Ten+ years of experience working in relevant positions such as business development, marketing and development
    • Active role driving standards activities in security
    • Ability to influence at all levels including C-floor
    • Proven experience in developing and executing upon market expansion and development programs
    • In depth experience in embedded and security technology
    • Product marketing or product management experience in embedded and security technology
    • Marketing and content creation experience in technical markets a plus
    • Excellent written and verbal communication skills
    • Wide technology experience from software through hardware
    • This role will involve 30-50% travel

      Desirable

      • Understanding of CPU & MCU security technology
      • Understanding of TrustZone based TEE architecture and security processors
      • Understanding of wireless technology and connectivity.
      • Knowledge of Arm supply chain and supplier ecosystem
      • Experience working with standards bodies and industry associations.
      • Public speaking and presentation experience

        At Arm, we are guided by our core beliefs that reflect our unique culture and guide our decisions, defining how we work together to defy ordinary and shape extraordinary:

        We not I

        • Take daily responsibility to make the Global Arm community thrive
        • No individual owns the right answer. Brilliance is collective
        • Information is essential, share it
        • Realise that we win when we collaborate - and that everyone misses out when we don't

          Passion for progress

          • Our differences are our strength. Widen and mix up the pool of people you connect with
          • Difficult things can take unexpected directions. Stick with it
          • Make feedback positive and expansive, not negative and narrow
          • The essence of progress is that it can't stop. Grow with it and own your own progress

            Be your brilliant self

            • Be quirky not egocentric
            • Recognise the power in saying 'I don't know'
            • Make trust our default position
            • Hold strong opinions lightly


              #LI-CH1




Benefits
Your particular benefits package will depend on position and type of employment and may be subject to change. Your package will be confirmed on offer of employment. Arm's benefits program provides permanent employees with the opportunity to stay innovative and healthy, ensure the wellness of their families, and create a positive working environment.

              • Annual Bonus Plan

              • Discretionary Cash Awards

              • 401(k), 100% matching on first 6% eligible earnings

              • Medical, Dental & Vision, 100% coverage for employee only, shared cost for dependents

              • Basic Life and Accidental Death and Dismemberment Insurance (AD&D)

              • Short Term (STD) and Long Term (LTD) Disability Insurance

              • Vacation, 20 days per year with option to buy 5 more.

              • Holidays, 13 days per year

              • Sabbatical, 20 paid days every four-years of service

              • Sick Leave, 7 days per year

              • Volunteering, four hours per month (TeamARM)

• Office location dependent: café on site, fitness facilities, team and social events

              • Additional benefits include: Flexible Spending Accounts for health and dependent care, EAP, Health Advocate, Business Travel Accident Program & Commuter programs.

                ARM, Inc. (USA) participates in E-Verify. For more information, please refer to www.dhs.gov/E-Verify







About Arm
Arm technology is at the heart of a computing and connectivity revolution that is transforming the way people live and businesses operate. From the unmissable to the invisible, our advanced, energy-efficient processor designs are enabling the intelligence in 86 billion silicon chips and securely powering products from the sensor to the smartphone to the supercomputer. With more than 1,000 technology partners including the world's most famous business and consumer brands, we are driving Arm innovation into all areas where compute is happening: inside the chip, the network and the cloud.

                With offices around the world, Arm is a diverse community of dedicated, innovative and highly talented professionals. By enabling an inclusive, meritocratic and open workplace where all our people can grow and succeed, we encourage our people to share their unique contributions to Arm's success in the global marketplace.




About the office
The Arm Austin office employs staff from across all divisions of Arm and is considered the engineering hub for North America. Austin has the nickname of "Silicon Hills" thanks to the high number of tech companies in the area, and is also known as the "Live Music Capital of the World". Events such as South by Southwest, Austin City Limits Music Festival and the F1 Grand Prix are but a few of the many activities that make Austin a top destination for both residents and travelers.


                Austin, TX USA

                Arm Inc.

                Encino Trace

                5707 Southwest Pkwy

                Bldg 1 Suite 100

                Austin, TX. 78735

          

Geomatics Technician

 Cache   
Geomatics Technician
US-NY-Rochester

Company Information
Mixing technology, data, and first-in-class innovation, EagleView is not only leading the property data analytics market, but also changing lives along the way. Come join us and make great things happen! EagleView is a fast-growing technology company driving game-changing innovation in multibillion-dollar markets such as property insurance, energy, construction, and government. Leveraging 17 years of the most advanced aerial imaging technology in the world, along with the most recent advances in machine learning and AI, EagleView is fundamentally transforming how our customers do business. At EagleView, we believe that making our culture engaging and empowering is key to success. Our kitchens are stocked 24/7; social, athletic, and wellness opportunities are plentiful; and the growth, education, and potential of employees is a top priority, making EagleView a Best Place to Work for more than five years running.

Job Description
We're looking for a Geomatics Technician to join our team. A Geomatics Technician is responsible for quality control and assembly of all Pictometry products. One of the most vital components of a Technician's responsibilities is reviewing digital images and making pass or fail decisions based on image quality and accuracy. We are a fast-paced, energetic team driven by continuous process improvement. We're looking for motivated, organized, and independent team members. This position requires good communication skills and the ability to quickly pick up new technologies.

Primary Responsibilities
* Utilize proprietary internal and industry-standard geomatics packages to post-process raw GPS, INS, and digital image data
* Ensure that customer image quality and accuracy requirements are met by following established procedures and policies
* Create various image mosaics using proprietary and third-party software
* Assemble and QC the final customer image library
* Document details related to project progress and overall status updates using established company standards and systems
* Work with Geomatics Management to identify best practices and procedural improvements
* Ongoing software training and development support
* Other duties as assigned

Skills & Requirements
* Bachelor's degree preferred
* Excellent personal computer abilities required; experience with file management, networking, databases, etc. strongly preferred
* Ability to manage detail-intensive data processing within given timelines
* Experience with information technology, digital imaging, photography, physics, mathematics, management, GPS, GIS, scripting, remote sensing, or administrative work would be helpful

EagleView offers competitive pay and robust benefit plans along with the opportunity to grow your career in a fast-paced, fun and casual environment. EagleView and its subsidiaries are committed to leveraging the talent of a diverse workforce to create great opportunities for our business and our people. EOE/AA. Minority/Female/Disability/Veteran.
          

Software Engineer (Mid-Level) with Security Clearance

 Cache   
Software Engineer (Mid-Level)
Chantilly, VA 20151 Security Clearance: TS/ISSA Aperio Global is hiring a mid-level Software Engineer to provide support to a federal government program providing full life cycle development for data development, database operations, and data analytics. Responsibilities include: --- Work with a talented team of developers and data scientists exposing non-standard data through APIs and web applications, particularly REST APIs using AJAX.
--- Support Natural Language Processing (NLP) including OCR, information extraction, and indexing.
--- Stretch your technical capabilities to work across as much of the full stack as you are able for cloud-based operations, ETL, database operations, analytics, front end work, and technology evaluation. Requirements include: --- U.S. citizenship --- Current TS clearance and poly (TS/ISSA)
--- Bachelor's degree in Computer Science, Engineering, Information Security, Data Science, or related field. Additional years of experience in lieu of a degree will be considered.
--- Mid-level experience (5+ years) providing development and data services in a government environment.
--- Experience supporting Agile development is required. Experience in a SecDevOps environment is preferred.
--- Full stack development experience or the ability to expand your IT acumen.
--- Experience with ETL, database operations, and data analytics.
--- Experience with SQL database management.
--- Strong Java development experience using v8 or later.
--- Front-end development experience is a plus.
--- Experience developing in an Amazon Web Services (AWS) environment.
--- Hands-on development supporting data through APIs and web application.
--- Prior experience with common services, drop-in UI components, and deep linking is desired.
--- Experience with information extraction such as regex, entities, sentiment, geotags, topic, events, etc.
--- Experience supporting interfaces and working with tools for big data such as R, Python, Hive, or Pig, particularly for Natural Language Processing (NLP) or Machine Learning (ML).
--- Programming experience with web applications in HTML, CSS and JavaScript using Node, React, or Angular (v2 or later).
--- Experience establishing NiFi data flows is highly desired.
--- Prior work with Systems Administration in an AWS environment is desired.
--- Experience with Elasticsearch, Logstash, and Kibana (ELK) or Solr. Aperio Global delivers professional, innovative strategies and technology to integrate information security and artificial intelligence into the Department of Defense, federal and local government agencies, and commercial sector operations. We bring world-class resources and experience to help clients successfully navigate the complex and ever-changing issues in implementing next-generation concepts while effectively discovering and sustaining critical technology. Visit us at www.aperioglobal.com.
          

Colorado Springs Enterprise Software Startup, Bluestaq, Lands $37M GSA Contract to Extend Unified Data Library

 Cache   

Colorado Springs Enterprise Software Startup, Bluestaq, Lands $37M GSA Contract to Extend Unified Data Library

Posted: Oct. 29, 2019

Unified Data Library will expand to integrate additional space, air, ground and intelligence data sources.

COLORADO SPRINGS, Co., October 29, 2019 – Bluestaq LLC announced it had been awarded a three-year, $37M Phase III Small Business Innovation Research (SBIR) Program contract by the General Services Administration (GSA) to expand the Advanced Command and Control Enterprise Systems and Software (ACCESS) project for the Air Force Research Laboratory (AFRL), the Air Force Space and Missile Systems Center (SMC) Data Program Management Office and the Directorate of Special Programs, Space Situational Awareness Division. Under the Phase III SBIR, the Data Management Platform, the Unified Data Library, will expand to integrate data supporting Space and Air and Multi-domain operations. Data will be integrated from a wide range of sources spanning commercial, foreign, Department of Defense (DoD) and the Intelligence Community (IC).

The Unified Data Library consumes, processes, and distributes millions of unique data products daily originating from dozens of commercial, academic, and government organizations across the world to a diverse user base spanning 25 countries. The Unified Data Library storefront provides a robust interactive online API to assist users or developers with education and discovery of available dashboards, data streams, services, structures, and formats. The Air Force plans to expand the Unified Data Library to allow different security classification user access levels and fuse data from all types of sensors to provide command and control for most Air Force missions.
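As a purely hypothetical illustration of what consuming such a REST storefront can look like (the host, path, dataset name, token, and parameters below are placeholders, not the actual Unified Data Library API):

```python
# Hypothetical sketch of pulling records from a REST data library; the host,
# path, dataset name, and parameters are placeholders, not the real UDL API.
import requests

BASE_URL = "https://example.invalid/datalibrary"   # placeholder host
TOKEN = "YOUR_API_TOKEN"                           # placeholder credential

def fetch_records(dataset, since):
    """Query a dataset for records newer than an ISO-8601 timestamp."""
    resp = requests.get(
        f"{BASE_URL}/{dataset}",
        params={"updatedSince": since},
        headers={"Authorization": f"Bearer {TOKEN}"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    rows = fetch_records("observations", "2019-10-29T00:00:00Z")
    print(f"received {len(rows)} records")
```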

Work will take place in the Colorado Springs, Co., at the Bluestaq headquarters.

“Bluestaq is thrilled to continue supporting SMC’s Special Programs Division and the Data Program Management Office on the Unified Data Library”, said Andy Hofle, Bluestaq Chief Engineer and Co-Founder. “It has been exciting to see the growing community interest in the data management platform over the last 18 months, and our team has had a tremendous amount of fun playing a role in the development of the project.”

To learn more about the Unified Data Library or to get an account, click here

About Bluestaq

Launched in 2018, Colorado Springs-based Bluestaq is a technology company developing transformative enterprise systems, securing disparate data using state-of-the-art practices and the latest technologies, enabling streamlined global operations and modern Artificial Intelligence/Machine Learning based analytics. Learn more at www.bluestaq.com.


          

Data Engineer

 Cache   
Riskonnect is the leading integrated risk management software solution provider that empowers organizations to anticipate, manage and respond in real-time to strategic and operational risks across the extended enterprise. Riskonnect is the only provider ranked in the leadership and visionary quadrants by world-renowned industry analysts - Gartner, Forrester and Advisen RMIS Review. We employ more than 500 risk professionals in the Americas, EMEA and Asia Pacific and serve over 900 customers across 6 continents. The combination of innovative risk technology, a customer success mindset, and an employee-first belief makes Riskonnect a sought-after place to work.

Responsibilities:

  • Develop strategy for new multi-platform data integration and analytics.
  • Develop strategy for new multi-platform-sourced data lake.
  • Contribute to API strategy to facilitate application connectivity and analytics.
  • Contribute to the maintenance and evolution of best practices.
  • Contribute to process documentation.
  • Perform multiple proofs of concept (POCs).
  • Contribute to implementation plan for decided-upon solution(s).

    Required Qualifications:

    • Experience with JavaScript/Java/ Python or Jitterbit and other developer languages.
    • Experience with Data Analytics.
    • Experience with Web Services and APIs.
    • Experience in the development of batch and real-time data integration and data consolidation processes.
    • Experience with machine learning, AI, and data lakes.
    • Proficiency in TSQL/PLSQL query-writing, stored procedure development, and views.
    • Strong analytical skills with ability for problem-solving.
    • Understands the importance of data provenance and the ability to demonstrate it to clients.
    • Detail oriented, organized, self-motivated.

      Preferred Qualifications:

      • Experience with Salesforce.
      • Experience in the Risk Management, Healthcare, Financial, and/or Insurance industries is recommended.
      • Experience with Financial data sets, involving financial validation.

          

Bigger Law Firm Magazine Now Available on Amazon Kindle

 Cache   
San Francisco, CA (Law Firm Newswire) March 29, 2017 - Bigger Law Firm magazine is now available for download on Amazon Kindle as a user-friendly ebook. For just $0.99, readers can download BLF issues directly to their e-reader, iPad, phone or other mobile reading device using the Kindle app, and enjoy reading BLF on the go. The magazine, formatted specifically for Amazon Kindle, offers a user-friendly experience that allows readers to interact with each issue with ease. How Lawyers Can Use Artificial Intelligence and Machine Learning: Bigger Law Firm Magazine Volume 44 is the first BLF issue to be published […]
          

Artificial Intelligence, Virtual Reality, and the Future of Law: This Month in Bigger Law Firm Magazine

 Cache   
San Francisco, CA (Law Firm Newswire) March 20, 2017 – Bigger Law Firm magazine delivers another issue jam-packed with in-depth stories on the intersection of law, technology, and marketing. In this month’s feature story, Roxanne Minott reports that artificial intelligence and machine learning are beginning to take hold in the legal industry. Attorneys are increasingly using advanced AI software to perform repetitive tasks, parse contracts, and review documents with greater accuracy and less labor. Tech startups and researchers are developing more cutting-edge use cases such as recruitment, public legal tools, and even prediction of actual judgments. Minott closes with a […]
          

MTS Intern - PhD

 Cache   
Date Posted October 29, 2019 Category Science-Computer Sciences Employment Type Full-time Application Deadline Open until filled Who are our employees? We're an eclectic group of 4,000+ dreamers, believers and builders, operating in over 40 countries. We're Hungry. Humble. Honest. With Heart. The 4H's: these are our core values and the DNA of our company. They help drive our employees to succeed, to strive to be better, to learn from every experience. Our employees are encouraged to have spirited debates and conversations and to think with a founder's mindset. This means we're all CEO's of the company and, as such, make the best decision every day that aligns with our company goals. It's through our values, our conversations and mindsets that we can continue to disrupt the industry and drive innovation in the market. Who are we in the market? Nutanix is a global leader in cloud software and hyperconverged infrastructure solutions, making infrastructure invisible so that IT can focus on the applications and services that power their business. Companies around the world use Nutanix Enterprise Cloud OS software to bring one-click application management and mobility across public, private and distributed edge clouds so they can run any application at any scale with a dramatically lower total cost of ownership. The result is organizations that can rapidly deliver a high-performance IT environment on demand, giving application owners a true cloud-like experience. Learn more about our products at *************** or follow us on Twitter @Nutanix. Nutanix engineers are crafting a groundbreaking technology, building the Nutanix Enterprise Cloud OS. We're using our love of programming and diverse backgrounds to deliver the simplicity and agility of popular public cloud services, but with the security and control that you need in a private cloud. At Nutanix, you'll find no shortage of challenging problems to work on. We work closely with our product in a collegiate, collaborative environment that encourages the open exploration of idea. The Role: MTS Intern The Engineering Summer Internship is an opportunity to gain exposure to one or more Nutanix engineering roles according to your skillset and interests. Some potential roles include (but not limited to) working on the core data path, storage and filesystems development, distributed systems, infrastructure and platform/hardware deployment, data protection and replication, tools and automation, development of a big data processing platform, development of the API and analytics platform, and Web and front-end UI/UX development. Each intern is paired with a Member of Technical Staff who serves as a guide through our engineering culture, toolsets, and development methodology. Our internship program also includes a series of lunch and learns, training events, and social outings to expose you to other aspects of a rapidly growing Silicon Valley technology company. Responsibilities: - Architect, design, and development software for the Nutanix Enterprise Cloud Platform - Develop a deep understanding of complex distributed systems and design innovative solutions for customer requirements - Work alongside development, test, documentation, and product teams to deliver high-quality products in a fast pace environment - Deliver on an internship project over the course of the program. Present the final product to engineering leadership. 
Requirements:
- Love of programming and skill in one of the following languages: C++, Python, Golang, or HTML/CSS/JavaScript
- Extensive knowledge of or experience with Linux or Windows
- Have taken courses or completed research in the areas of operating systems, file systems, big data, machine learning, compilers, algorithms and data structures, or cloud computing
- Knowledge of or experience with Hadoop, MapReduce, Cassandra, Zookeeper, or other large-scale distributed systems preferred
- Interest or experience working with virtualization technologies from VMware, Microsoft (Hyper-V), or Redhat (KVM) preferred
- Detail oriented with a strong focus on code and product quality
- The passion and ability to learn new things, while never being satisfied with the status quo

Qualifications and Experience:
- Pursuing a PhD degree in Computer Science or a related engineering field required
- Available to work up to 40 hours per week for 12 weeks over the summer months

Nutanix is an equal opportunity employer. The Equal Employment Opportunity Policy is to provide fair and equal employment opportunity for all associates and job applicants regardless of race, color, religion, national origin, gender, sexual orientation, age, marital status, or disability. Nutanix hires and promotes individuals solely on the basis of their qualifications for the job to be filled. Nutanix believes that associates should be provided with a working environment that enables each associate to be productive and to work to the best of his or her ability. We do not condone or tolerate an atmosphere of intimidation or harassment based on race, color, religion, national origin, gender, sexual orientation, age, marital status or disability. We expect and require the cooperation of all associates in maintaining a discrimination and harassment-free atmosphere. Apply *Please mention PhdJobs to employers when
          

Assistant/Associate Professors--Physical Science and Data Science

 Cache   
Job Summary The College of Science at Purdue University invites applications for multiple positions in ---Physical Science and Data Science--- at the Assistant or Associate Professor level beginning August 17, 2020. Assistant Professor candidates with exceptional qualifications may be considered for an early career endowed professorship. This opportunity is coordinated with concurrent searches in ---Computer Science, Mathematics, and Statistics focused on Data Science--- and ---Data Science in the Life Sciences.--- Qualifications These positions come at a time of new leadership and with multiple commitments of significant investment for the College of Science. We particularly encourage candidates who demonstrate the potential for collaboration across multiple disciplines. We expect that most faculty hired through this search will have interdepartmental joint appointments. College of Science Departments hosting research related to Physical Science include: Chemistry, Earth, Atmospheric, and Planetary Sciences, and Physics and Astronomy, as well as Computer Science, Mathematics, and Statistics. Candidates must have a Ph.D. (or its equivalent) in a closely related field. Successful candidates are expected to develop a vigorous, externally funded, internationally recognized theoretical, computational, experimental, and/or observational research program that addresses research questions of fundamental importance. They are also expected to teach undergraduate and/or graduate courses to a diverse student body and supervise graduate students. Successful candidates will combine an outstanding record of research excellence with a commitment to effective and engaged teaching in both physical science and data science. Candidates should have a broad understanding of the numerical and analytic methods in data science, including machine learning, for physical science subject matters, along with the software systems that implement them. The candidate's program is expected to complement existing research within the home department and teaching needs at the undergraduate and graduate levels. The potential to develop one or more of the following areas is desirable. Development and application of data science and machine learning methods to all areas of chemistry, including computational chemistry, measurement science, analytical chemistry, organic chemistry, physical chemistry, and biological chemistry, or Development and application of data intensive computations in the fields of numerical astrophysics and cosmology, or Development of techniques in big data/astrostatistics in a variety of astronomical sub-fields with increasingly large data sets, or Development and application of advanced data science methods to areas of atmospheric sciences, including but not limited to computational geofluid dynamics, clouds and convection, climate systems, severe weather, subseasonal-to-seasonal prediction, atmospheric chemistry, and remote sensing of Earth or other planetary atmospheres, or Development and application of data science methods to large-scale problems in solid-earth geosciences, including but not limited to those of theoretical and applied geophysics, seismology, geodynamics, tectonophysics, geochemistry, and energy science. The University, College and Departments Purdue University is a public land-grant university in West Lafayette, Indiana. 
Purdue Discovery Park provides open, collaborative research environments with over 25 interdisciplinary centers, institutes, and affiliated project centers, most notably the Integrative Data Science Initiative. The Rosen Center for Advanced Computing offers advanced computational resources and services with local HPC clusters, research data storage, and data networks. It is the campus liaison to NSF XSEDE and Open Science Grid. As a part of the Physics and Astronomy department, the Astrophysics group has a strong funding record by the major agencies. NSF is strongly invested in LSST, advanced LIGO, and IceCube; all areas of research focus in the group. Inter-departmental efforts to connect with faculty in Computer Science and Statistics in the broad scope of Data Science are underway to develop a state-of-the-art classification and strategy engine for LSST. The group has leadership in theoretical and data intensive numerical modeling of Astrophysical sources making extensive use of the Purdue as well as NASA and NSF clusters. The Department of Earth, Atmospheric, and Planetary Sciences has a Geodata Science Initiative that merges geosciences and data science strategically in research and education. Select participants conduct transdisciplinary collaborative research in the nexus of weather, climate, environment, resources, energy, and society, supported by HPC clusters with GPU, Hadoop, or Spark systems. The Geodata Science for Professionals MS program is an agent for industrial partnerships. Application Procedure: Applicants should submit a cover letter, a curriculum vitae, a teaching statement, and a description of proposed research electronically at https://career8.successfactors.com/sfcareer/jobreqcareer?jobId=8002&company=purdueuniv&username=. Additionally, applicants should arrange for three letters of reference to be e-mailed to the search committee at physdatasci@purdue.edu, specifically indicating the position for which the applicant is applying. Applications will be held in strict confidence and will be reviewed beginning December 1, 2019. Applications will remain in consideration until positions are filled. Inquiries can be sent to physdatasci@purdue.edu. Purdue University's College of Science is committed to advancing diversity in all areas of faculty effort, including scholarship, instruction, and engagement. Candidates should address at least one of these areas in the cover letter, indicating past experiences, current interests or activities, and/or future goals to promote a climate that values diversity, and inclusion. Salary and benefits are competitive, and Purdue is a dual-career friendly employer. Purdue University is an EOE/AA employer. All individuals, including minorities, women, individuals with disabilities, and veterans are encouraged to apply. YourMembership.Category: Education, Keywords: Associate Professor
          

Principal Technical Product Manager - Telecom OSS/BSS Applications and Services - Amazon.com Services, Inc. - Bellevue, WA

 Cache   
Machine Learning and Deep Learning applicability to Telecom services. Strong understanding of business flows and integrated upstream & downstream applications.
          

State of the Map Asia 2019 - Dhaka, Bangladesh

 Cache   

Attending State of the Map has been one of my dreams ever since I joined the Humanitarian OpenStreetMap Team Indonesia, and this year that dream finally came true at State of the Map Asia in Dhaka, Bangladesh. Why was it a dream? Because State of the Map is an international conference for sharing knowledge and experience in contributing to mapping with OpenStreetMap and other data. As hoped, I got the opportunity to attend State of the Map Asia through the scholarship program.

When I received the announcement that I had been awarded the scholarship I was overjoyed, and I immediately started preparing everything needed for the trip to Bangladesh. Silvia Dwi Wardhani and Tri Selasa Pagianti, colleagues of mine at HOT - ID, received the same opportunity. On my first day in Dhaka, arriving at Hazrat Shahjalal International Airport at 01.30 am Dhaka time, I also met two other scholarship recipients who were waiting for the pickup to the accommodation arranged by the SotM Asia 2019 committee: Monica (from the Philippines) and Suthakaran (from Sri Lanka).

State of the Map Asia 2019 took place over two days, 1-2 November 2019. Across those two days there were a variety of sessions, from talks and lightning talks to workshops and panel discussions. My session was on the first day at 16.30 in the Main Auditorium: a talk titled "Quality Assurance for Indonesia Road Mapping", in which I shared my experience as Quality Assurance on the Indonesia Road Mapping project. The project is a collaboration between HOT-ID and Facebook to map roads in Indonesia using machine learning. Attending State of the Map Asia 2019 was truly an extraordinary experience for me this year.

The extraordinary part of the experience was daring to speak English in front of a large international audience, sharing my experience in mapping with OpenStreetMap, and growing my international network of friends. I gained a lot of new knowledge from this opportunity.


          

Engineering: Senior Machine Learning Engineer - Los Angeles, California

 Cache   
We've partnered on an exclusive basis with a leading FinTech brand based out of Santa Monica. They're on the market for a Senior Machine Learning to sit in their Data team. This role will require an ML Engineer who can bring ML models into production together with a team of product analysts, data engineers, and product managers. Skills/Qualifications: 3 years of experience with Machine learning techniques like classification, regression, anomaly detection, and clustering. Experience with data analysis languages such as Python or Scala. Experience with bringing at least 2 models to production. In addition to extremely competitive cash compensation - which includes a quarterly performance bonus - they offer; a generous stock package, 100% coverage of health benefits (incl. coverage for dependents), 401K match, monthly fitness reimbursement, referral car buying program, unlimited PTO, parental leave, plus 10 paid federal holidays. If interested in learning more about this opportunity, then reach out ()
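As a rough sketch of what "bringing a model to production" can involve at its simplest (assumed scikit-learn and joblib tooling, not the client's actual stack): train a classifier, evaluate it, and serialize an artifact that a serving process loads.

```python
# Minimal sketch (assumed stack: scikit-learn + joblib) of taking a classifier
# from training to a deployable artifact; not any particular company's pipeline.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score
import joblib

X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

model = GradientBoostingClassifier().fit(X_tr, y_tr)
print("AUC:", roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))

joblib.dump(model, "model.joblib")    # artifact handed to the serving layer
loaded = joblib.load("model.joblib")  # e.g. loaded inside an API worker at startup
print(loaded.predict(X_te[:5]))
```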
          

IT / Software / Systems: Senior Data Engineer - SQL / Redshift / AWS - Premier Ecommerce Publishing Brand - Los Angeles, California

 Cache   
Are you a Senior Data Engineer with a strong SQL, ETL Redshift and AWS background seeking an opportunity to work with massive amounts of data in a very hip marketplace? Are you a Senior Data Engineer interested in unifying data across various consumer outlets for a very well-funded lifestyle brand in the heart of Santa Monica? Are you an accomplished Senior Data Engineer looking for an opportunity to work in a cutting-edge tech environment consisting of; SQL, Redshift, Hadoop, Spark, Kafka and AWS? If yes, please continue reading.... Based in Santa Monica, this thriving lifestyle brand has doubled size in the last year and keeps on growing With over $75 million in funding, they work hard to provide their extensive audience with advice and recommendations in all things lifestyle: where to shop, eat, travel, etc. Branching into a number of different services and products over the next 12 months, they are building out their Engineering team. They are looking for a Senior Data Engineer to unify and bring to life mass amounts of data from all areas of the business; ecommerce, retail, content, web, mobile, advertising, marketing, experiential and more. WHAT YOU WILL BE DOING: Architect new and innovative data systems that will allow individuals to use data in impactful and exciting ways Design, implement, and optimize Data Lake and Data Warehouses to handle the needs of a growing business Build solutions that will leverage real-time data and machine learning models Build and maintain ETL's from 3rd party sources and ensure data quality Create data models at all levels including conceptual, logical, and physical for both relational and dimensional solutions Work closely with teams to optimize data delivery and scalability Design and build complex solutions with an emphasis on performance, scalability, and high-reliability Design and implement new product features and research the next wave of technology WHAT YOU NEED: Extensive experience and knowledge of SQL, ETL and Redshift Experience wrangling large amounts of data Skilled in Python for scripting Experience with AWS Experience with Big Data tools is a nice plus; Hadoop, Spark, Kafka, Ability to enhance and maintain a data warehouse including use of ETL tools Successful track record in building real-time ETL pipelines from scratch Previous Ecommerce or startup experience is a plus Understanding of data science and machine learning technologies Strong problem solving capabilities Strong collaborator and is a passionate advocate for data Bachelor's Degree in Computer Science, Engineer, Math or similar WHAT YOU GET: Join a team of humble, creative and open-minded Engineers shipping exceptional products consumers love to use Opportunity to work at an awesome lifestyle brand in growth mode Brand new office space, open and team oriented environment Full Medical, Dental and Vision Benefits 401k Plan Unlimited Vacation Summer vacations / Time off Offices closed during winter holidays and new years Discounts on products Other perks So, if you are a Senior Data Engineer seeking an opportunity to grow with a global lifestyle brand at the cusp of something huge, apply now ()
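A generic sketch of the extract-transform-load loop described above (all URLs, table names, and connection details are made up; this is not the company's pipeline):

```python
# Generic ETL sketch: extract a third-party CSV, apply a data-quality check,
# and load into a warehouse table. Names and connection details are placeholders.
import csv
import io
import requests
import psycopg2  # Redshift speaks the PostgreSQL wire protocol

SOURCE_URL = "https://example.invalid/export/orders.csv"   # placeholder feed

def extract():
    text = requests.get(SOURCE_URL, timeout=60).text
    return list(csv.DictReader(io.StringIO(text)))

def transform(rows):
    clean = []
    for r in rows:
        if not r.get("order_id"):          # basic data-quality gate
            continue
        clean.append((r["order_id"], r["sku"], float(r["amount"])))
    return clean

def load(rows):
    with psycopg2.connect("dbname=analytics host=warehouse.example.invalid") as conn:
        with conn.cursor() as cur:
            cur.executemany(
                "INSERT INTO staging.orders (order_id, sku, amount) VALUES (%s, %s, %s)",
                rows,
            )

if __name__ == "__main__":
    load(transform(extract()))
```

At Redshift scale, production loads usually stage files in S3 and use COPY rather than row-by-row inserts; the sketch only shows the shape of the pipeline.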
          

Machine Learning Algorithms Help Predict Traffic Headaches

 Cache   
Urban traffic roughly follows a periodic pattern associated with the typical "9 to 5" work schedule. However, when an accident happens, traffic patterns are disrupted. Designing accurate traffic flow models, for use during accidents, is a major challenge for traffic engineers, who must adapt to unforeseen traffic scenarios in real time. A team of Lawrence Berkeley National Lab computer scientists is working with the California Department of Transportation (Caltrans) to use high performance
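A toy illustration of the underlying idea, assuming nothing about the Berkeley Lab/Caltrans models themselves: learn the periodic baseline for each time-of-day slot from history, then flag observations that fall far below it.

```python
# Toy illustration of flagging disrupted traffic: compare each observation to
# the historical mean for that time-of-day slot and flag large deviations.
# (Not the Berkeley Lab / Caltrans models; purely a didactic baseline.)
import numpy as np

rng = np.random.default_rng(0)
slots_per_day, days = 96, 30                      # 15-minute slots
baseline = 40 + 25 * np.sin(np.linspace(0, 2 * np.pi, slots_per_day))  # daily cycle
history = baseline + rng.normal(0, 3, size=(days, slots_per_day))      # past speeds

mean = history.mean(axis=0)
std = history.std(axis=0)

today = baseline + rng.normal(0, 3, size=slots_per_day)
today[40:48] -= 20                                # simulated accident slowdown

z = (today - mean) / std
anomalous_slots = np.where(z < -3)[0]             # unusually slow traffic
print("disrupted 15-min slots:", anomalous_slots)
```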
          


Other: Cloud Application SME - Reston, Virginia

 Cache   
Summary / Description
We are currently seeking a motivated, career- and customer-oriented Cloud Application SME with a background as a leader in using Agile development methods to join our team in Northern Virginia and begin an exciting and challenging career with Unisys Federal Systems. This individual shall be familiar with cloud native services and be able to handle all the back-end and front-end technologies, including software development, databases, systems engineering, security and user experience, necessary to deliver mission capabilities to clients. The ideal candidate will have an entrepreneurial approach to addressing opportunities and a proven record of successfully leading teams in delivering capabilities.

In this role, you must be able to translate business requirements into technical solutions. You will be leading teams as well as collaborating. You will apply your expertise on multiple complex work assignments, which are broad in nature, requiring originality and innovation in determining how to accomplish tasks. You will apply your comprehensive knowledge across key tasks and high impact assignments, evaluate performance results and recommend changes as necessary to achieve project success. You will lead development and migration/modernization of systems related to a broad range of business areas. Your overall responsibilities will include designing, developing, enhancing, debugging, and implementing software solutions in the cloud to meet customer requirements and goals.

In this role you will:
• Create application architectures in the cloud
• Serve as the creative solution expert, applying your application design expertise and firm grasp of the latest application development technologies
• Utilize your vast experience in, and continual growth of, all development technologies in use in the federal marketplace and be seen as the application development expert among your peers
• Utilize hands-on experience with CSP-native application development tools to host secure, responsive and user-experience-driven applications
• Employ Agile/DevSecOps (Scrum, Kanban, etc.), supplying coaching, expertise and thought leadership
• Serve as project lead or lead technical staff in the course of application development projects, including leading Agile teams
• Build and manage delivery teams (including remote resources) in support of Unisys Federal Systems opportunities
• Lead and support hands-on full stack engineering development on projects, coaching and helping delivery teams to adopt this as part of their development life cycle
• Understand the DevSecOps tooling landscape and have experience integrating various DevSecOps tools into toolchains to provide end-to-end application lifecycle management
• Support proposal development through the acquisition lifecycle, including creating responses, written and oral materials, plans, and artifacts

Requirements
• Master's degree and 15 years of relevant experience or equivalent
• Must be familiar with Agile methodology and be able to lead and work collaboratively in a team environment. Excellent written and oral communication skills are essential, as well as strong customer focus and presence
• Hands-on experience at a mastery level in at least 4 current-generation languages (Java, .Net, any JavaScript variant, Go, etc.)
• Hands-on experience at a senior level in at least 3 scripting languages (Bash, Python, PowerShell, CloudFormation, Terraform, etc.)
• Mastery-level understanding of common development data formats including XML, JSON, YAML and SQL, and the ability to rapidly parse this data for meaningful information
• Must be a good problem solver, enjoy complex challenges and perform well under pressure
• Must be a self-starter and require minimal oversight in grasping requirements or potential tools or techniques to solve complex problems/tasks
• Demonstrable record of successfully leading teams
• Expertise with DevSecOps, testing automation, and Continuous Integration & Deployment (CI-CD) environments using tools such as Gradle and Jenkins
• Familiarity with microservices patterns and best practices
• Proven skills in cloud native development: REST API creation; Angular; NoSQL databases such as Mongo or Dynamo; Java, including frameworks such as Spring and Spring Boot; Amazon Web Services and MS Azure; Docker and other operating-system-level virtualization (containerization) programs

Familiarity and experience with the following is desirable:
• Kafka distributed streaming platform
• Mobile application development experience
• Event-driven architecture
• Code samples or demo GitHub repos preferred
• OpenShift
• Graph databases
• Machine Learning
• Advanced visualization tools
• Rules engines
• UI/UX experience

About Unisys
Do you have what it takes to be mission critical? Your skills and experience could be mission critical for our Unisys team supporting the Federal Government in their mission to protect and defend our nation, and transform the way government agencies manage information and improve responsiveness to their customers. As a member of our diverse team, you'll gain valuable career-enhancing experience as we support the design, development, testing, implementation, training, and maintenance of our federal government's critical systems. Apply today to become mission critical and help our nation meet the growing need for IT security, improved infrastructure, big data, and advanced analytics.

Unisys is a global information technology company that solves complex IT challenges at the intersection of modern and mission critical. We work with many of the world's largest companies and government organizations to secure and keep their mission-critical operations running at peak performance; streamline and transform their data centers; enhance support to their end users and constituents; and modernize their enterprise applications. We do this while protecting and building on their legacy IT investments. Our offerings include outsourcing and managed services, systems integration and consulting services, high-end server technology, cybersecurity and cloud management software, and maintenance and support services. Unisys has more than 23,000 employees serving clients around the world. Unisys offers a very competitive benefits package including health insurance coverage from first day of employment, a 401k with an immediately vested company match, vacation and educational benefits. To learn more about Unisys visit us at www.Unisys.com.

Unisys is an Equal Opportunity Employer (EOE) - Minorities, Females, Disabled Persons, and Veterans. #FED# ()
          


Lead Platform Engineer - Machine Learning | Parks, Experiences and Products

 Cache   
Lake Buena Vista, Florida, Responsibilities: Lead a team of engineers to design and develop production grade frameworks for feature engineering, model architecture selection, model training, model interpretability, A/B test
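A small, assumed illustration (generic scikit-learn, not the team's actual framework) of how feature engineering, model selection, and training can be wired into a single reproducible object:

```python
# Small illustration (assumed scikit-learn stack, not the team's framework) of
# wiring feature engineering, model selection, and training into one pipeline.
from sklearn.compose import ColumnTransformer
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV
import pandas as pd

df = pd.DataFrame({
    "channel": ["web", "app", "web", "app"] * 25,
    "visits": range(100),
    "converted": [0, 1] * 50,
})

features = ColumnTransformer([
    ("cat", OneHotEncoder(handle_unknown="ignore"), ["channel"]),
    ("num", StandardScaler(), ["visits"]),
])
pipe = Pipeline([("features", features), ("model", LogisticRegression(max_iter=1000))])

# Model selection over a tiny hyperparameter grid; the whole object is refit on
# the best setting and can be persisted or interpreted as one unit.
search = GridSearchCV(pipe, {"model__C": [0.1, 1.0, 10.0]}, cv=5)
search.fit(df[["channel", "visits"]], df["converted"])
print(search.best_params_, search.best_score_)
```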
          

15th Advanced Technology Days on artificial intelligence, hybrid cloud, and new practices in mobile and web solutions

 Cache   
15th Advanced Technology Days on artificial intelligence, hybrid cloud, and new practices in mobile and web solutions

Zagreb, 24 October 2019

The list of talks and speakers taking part in the 15th edition of the conference has been published; attendees can expect top IT topics and demos showcasing successful examples of applying the latest technologies.

The 15th edition of the Advanced Technology Days conference will be held in Zagreb, at the Plaza Event Centre, on 4 and 5 December 2019. The conference, which brings the most current local IT topics and an overview of global trends, will gather more than 500 attendees, who will have the opportunity to learn more through talks organized across 5 programme tracks. ATD will celebrate its 15th birthday with an even richer and higher-quality programme bringing together more than 50 top local and international experts; more about them and the topics they will cover can be found on the website, where the talk schedule will also be published shortly.


The two-day programme will offer an overview of local and global developments, with top topics including the development of mobile and web solutions, containers and DevOps, modern data technologies, artificial intelligence, hybrid cloud, serverless, and business IT solutions. In line with the conference concept, which is focused on gaining knowledge and experience, attendees will be able to see a series of demos of successful examples and IT practices. The 15th edition will also feature international guest speakers the local IT audience has not yet had the chance to hear, as well as a series of side events with which ATD will celebrate its birthday.

ATD 14 attendees

Marco Hochstrasser, founder and CTO of the marketing analytics startup nexoya, is coming to Advanced Technology Days for the first time. He will give two talks: "5 reasons why GraphQL will replace REST very soon" and "Anomaly detection for time-series data, a machine-learning usecase". Marco and the nexoya team built their entire architecture on a microservices approach, and one of the important architectural decisions was precisely the use of GraphQL. In his talk he will introduce GraphQL and explain why he thinks it will very soon replace REST as the standard for APIs. His second talk will cover anomaly detection, from the basics to how to apply it to data with the help of modern machine learning algorithms, along with the obligatory demo.
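For context on the first talk, a generic GraphQL-over-HTTP sketch (the endpoint and schema below are invented placeholders, not nexoya's API): a single POST asks for exactly the fields the client needs, where a REST design might require several endpoint calls.

```python
# Generic GraphQL-over-HTTP sketch (endpoint and schema are made up, not
# nexoya's API): one POST requests exactly the fields the client needs.
import requests

query = """
query CampaignMetrics($id: ID!) {
  campaign(id: $id) {
    name
    metrics { impressions clicks spend }
  }
}
"""

resp = requests.post(
    "https://example.invalid/graphql",                 # placeholder endpoint
    json={"query": query, "variables": {"id": "42"}},
    timeout=30,
)
resp.raise_for_status()
print(resp.json()["data"]["campaign"]["metrics"])
```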

Alan Debijađi, co-founder of Unitfly, will give a talk on B2B Enterprise Integrations in Azure. B2B communication is no longer reserved for specialized on-premise tools since Azure Logic Apps introduced the Enterprise Integration Pack and a dedicated Integration Account, through which it supports standards such as Electronic Data Interchange (EDI) and Enterprise Application Integration (EAI). During the talk he will demonstrate the basics of B2B communication and the Azure Logic Apps Enterprise services. Alen Delić, Senior Information Security Consultant at Divert, will give a talk with the playful title: How to eat for free at a hotel. During the talk he will enlist the audience's help to describe the basic concepts of attacking systems through social engineering methods, including attacks by phone and in person. In addition to these, more than 50 local and international speakers will give their talks and demos at the conference, and the schedule will soon be published at www.advtechdays.com

The ATD organisers are representatives of the Croatian IT community, with operational support provided by the MPG agency. The programme committee that prepares the conference programme changes every year so that the selection of topics is as varied as possible and attendees can learn more about different areas, from Microsoft technologies to open-source solutions. This year's programme committee consists of: Marin Franković, Filip Glavota, Ivan Marković, Domagoj Pavlešić, Ivan Pranjić, Ilija Ranogajec, Tomislav Tipurić and Vedran Vučetić.

Anyone wishing to attend the conference can register online. The registration fee is HRK 950.00 + VAT for both conference days.


          

Senior Systems Engineer - Denver, CO

 Cache   
Required Security Clearance: TS/SCI with an ability to get CI poly Required Certifications: N/A Required Education: Bachelor’s degree in technology or the sciences is preferred, with 12 years of experience. Required Experience: 12 years’ experience with a BA/BS. We can modify education based on the length of experience and actual education level. Functional Responsibility: Support high-performance computing (HPC) and accelerated compute environments from the ground up. Help in the creation and maintenance of a DevOps process for program efforts, from basic data collection and pre-processing to building and training AI and Machine Learning models within an R&D environment. Apply experience in DevOps, high-performance computing, GPU processing, and cluster management. This is not a system administration role; it is an engineering role focused on optimization. You will be working as a direct engineer. Qualifications: Experience working on Linux systems. Experience with building and deploying containerized, GPU-enabled applications in Docker, Singularity, or Kubernetes. Experience in orchestration and cluster management tools such as Slurm, Mesos, or Moab. Experience with deploying systems in both on-premise and cloud environments (AWS, Azure, Google). Preferences: Strong preference for those with experience with AI and Machine Learning development tool sets (Jupyter, Keras, TensorFlow, MPI, OpenMP, OpenCL, CUDA). Working Conditions: Work is typically based in a busy office environment and subject to frequent interruptions. Business work hours are normally set from Monday through Friday, 8:00am to 5:00pm; however, some extended or weekend hours may be required. Additional details on the precise hours will be provided to the candidate by the Program Manager/Hiring Manager. Physical Requirements: May be required to lift and carry items weighing up to 25 lbs. Requires intermittent standing, walking, sitting, squatting, stretching and bending throughout the workday. Background Screening/Check/Investigation: Successful completion of a background screening/check/investigation will/may be required as a condition of hire. Employment Type: Full-time / Exempt Benefits: Metronome offers competitive compensation, a flexible benefits package, and career development opportunities that reflect its commitment to creating a diverse and supportive workplace. Benefits include (not all-inclusive): Medical, Vision & Dental Insurance, Paid Time-Off & Company Paid Holidays, Personal Development & Learning Opportunities. Other: An Equal Opportunity Employer: All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, pregnancy, sexual orientation, gender identity, national origin, age, protected veteran status or disability status. Metronome LLC is committed to providing reasonable accommodations to employees and applicants for employment, to assure that individuals with disabilities enjoy full access to equal employment opportunity (EEO). Metronome LLC shall provide reasonable accommodations for known physical or mental limitations of qualified employees and applicants with disabilities, unless Metronome can demonstrate that a particular accommodation would impose an undue hardship on business operations. Applicants requesting a reasonable accommodation may make a request by contacting us.
          

OPIR Scientist - Journeyman - Springfield, VA

 Cache   
Required Security Clearance: TS/SCI with an ability to get CI poly Required Certifications: N/A Required Education: 4 years of experience and a BA/BS. In place of the degree, may substitute 11 years of relevant work experience. Desired: MA/MS degree. In place of the degree, may substitute 15 years of relevant work experience. Required Experience: see Quals Functional Responsibility: Work as a scientist in a fast-paced research and development team on EO/IR imagery systems, addressing image and video collection, characterization and exploitation problems. Apply remote sensing principles and methods to analyze data and solve problems, including: automating data analysis using machine learning, characterizing existing and new sensor systems, developing new and improving existing data analysis techniques, and proposing collection strategies to address new signature development. Support informational briefings for educating scientists and managers on research outcomes and the impact on the mission. Design collection experiments and compare measured results with ground truth data. Examples of potential projects include: extracting weak signals and identifying new signatures from noisy data, image reconstruction, object detection and tracking, feature extraction and classification, machine learning, signal processing, and radiometric analysis. Qualifications: Working experience with algorithm development, Matlab and/or Python scripting, HPC, image science, signal processing, and EO/IR systems. Ability to learn new concepts and apply them to meet research objectives is required. Able to perform the listed functional responsibilities. Experience is critical. Preferences: Master’s degree, but not necessary. Working Conditions: Work is typically based in a busy office environment and subject to frequent interruptions. Business work hours are normally set from Monday through Friday, 8:00am to 5:00pm; however, some extended or weekend hours may be required. Additional details on the precise hours will be provided to the candidate by the Program Manager/Hiring Manager. Physical Requirements: May be required to lift and carry items weighing up to 25 lbs. Requires intermittent standing, walking, sitting, squatting, stretching and bending throughout the workday. Background Screening/Check/Investigation: Successful completion of a background screening/check/investigation will/may be required as a condition of hire. Employment Type: Full-time / Exempt Benefits: Metronome offers competitive compensation, a flexible benefits package, and career development opportunities that reflect its commitment to creating a diverse and supportive workplace. Benefits include (not all-inclusive): Medical, Vision & Dental Insurance, Paid Time-Off & Company Paid Holidays, Personal Development & Learning Opportunities. Other: An Equal Opportunity Employer: All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, pregnancy, sexual orientation, gender identity, national origin, age, protected veteran status or disability status. Metronome LLC is committed to providing reasonable accommodations to employees and applicants for employment, to assure that individuals with disabilities enjoy full access to equal employment opportunity (EEO). Metronome LLC shall provide reasonable accommodations for known physical or mental limitations of qualified employees and applicants with disabilities, unless Metronome can demonstrate that a particular accommodation would impose an undue hardship on business operations.
Applicants requesting a reasonable accommodation may make a request by contacting us.
          

Tara O'Shea, Rebecca Moore and Carlos Souza on NextGenMap and forest monitoring

 Cache   
On a panel moderated by Tara O'Shea (Planet), Rebecca Moore (Google) and Carlos Souza (Imazon) converse about NextGenMap's ability to revolutionize our abilities to monitor forests. Historically, monitoring forests and land-use change has involved a compromise between the resolution and frequency of available satellite images and limited computing capacity for analysis. Recent developments in satellite imagery, machine learning and land use classification offer new opportunities to disrupt these limitations.
          

JumpStart funding programme creates the right framework for young companies

 Cache   

The Federal Ministry for Digital and Economic Affairs (BMDW), together with Austria Wirtschaftsservice GmbH (aws), has successfully completed the third call of the JumpStart programme. From 24 submitted projects, an independent expert jury selected the best concepts, which will now be supported as incubators with up to 150,000 euros each. These are: Female Founders, Lemmings and The Ventury from Vienna, Climate KIC from Lower Austria and I.E.C.T. from Tyrol. The programme focuses on supporting and further developing Austrian incubators and accelerators that provide innovative start-ups not only with office, laboratory or production space but, above all, with tailor-made advisory services.

"Our innovative start-ups need the best framework conditions. With the JumpStart programme we are making an important contribution to turning ideas into successful business models. It is particularly positive that in this round, with the support of Female Founders, we can also specifically support women in start-ups. We need more female founders, and that requires not only courage and initiative but also the right framework conditions," says Minister of Economic Affairs Margarete Schramböck.

Projects from all over Austria

In the third call, 24 applications were submitted from all over Austria. In addition to well-known players firmly established in the scene, this round also attracted many young initiatives. The range of applicants and selected projects extended from stand-alone and corporate incubators and technology centres to academic accelerators, and spans all start-up-relevant sectors such as life sciences, IT, web/mobile, services and hardware.

"To implement their ideas, start-ups need, in addition to financial resources, a working environment in which they can concentrate fully on their projects and at the same time benefit from networking and a lively exchange of experience with other start-ups. With aws JumpStart we support the best incubators and thereby create the necessary framework conditions," say aws managing directors Edeltraud Stiftinger and Bernhard Sagmeister.

In a first step, suitable incubators and accelerators were selected and supported under this funding track. This creates a productive and unbureaucratic framework in which ventures can develop. Beyond that, particularly innovative start-ups also need financing themselves. In a second module of the funding programme, promising start-ups will therefore be supported directly as well. Up to five of the companies based in a JumpStart incubator/accelerator will be selected for this, with a grant of 22,500 euros foreseen per start-up.

The projects at a glance:

Climate KIC

Climate KIC is Europe's largest public-private network for climate innovation, active in Austria as well as in 31 other European countries. Thanks to its extensive partner network of more than 330 research institutions, educational institutions and SMEs, start-ups working on green financial instruments, sustainable production systems, climate-friendly land use and sustainable urban development are offered a wide range of coaching sessions and workshops.

Female Founders

The Female Founders association was founded in 2016 by Lisa-Marie Fassl, Tanja Sternbauer and Nina Wöss to create a platform for stronger networking and promotion of women in the start-up sector. The Female Founders community has since made a name for itself internationally, with members from more than 10 nations. This community in turn serves to recruit companies for the planned accelerator programme, which is intended to lead selected projects to investment readiness and a successful market entry.

I.E.C.T.

The private institution I.E.C.T. – Institute for Entrepreneurship Cambridge – Tirol has on board, as co-founder, Dr. Hermann Hauser, co-founder of the Cambridge Phenomenon and a veteran who has made an essential contribution to building an emerging entrepreneurship culture. Through strategy support and innovation scouting, I.E.C.T. offers existing and established companies as well as industry the best conditions for addressing their needs.

Lemmings

Lemmings is a Viennese early-stage incubator and accelerator with a focus on emerging technologies such as artificial intelligence, blockchain and virtual & augmented reality. The founding team of Thomas Schranz and Allan Berger already has considerable experience from founding their start-up Blossom, which provides a project management service for software teams. Lemmings has supported more than 200 participants over the last two years, and to nurture and attract talent even better, it is now establishing a programme called "Project Magic".

The Ventury

The Ventury was founded in Vienna in 2016 by, among others, Christoph Aschberger, Christoph Bitzner and Jakob Reiter. The three came together through their joint work on the start-up Simplewish, which continues to operate. The founders gained their experience in the Austrian start-up ecosystem as mentors, jury members and speakers for organisations and educational institutions. The focus of the incubator and accelerator programme is on operational support for start-ups in the areas of conversational interfaces, AI and machine learning.


          

LinkedIn Data Engineering with Kapil Surlaker

 Cache   

A large social network needs to develop systems for ingesting, storing, and processing large volumes of data. Data engineering at scale requires multiple engineering teams that are responsible for different areas of the infrastructure. Data needs to be structured coherently in order to minimize the data cleaning process. Machine learning models need to be developed,

The post LinkedIn Data Engineering with Kapil Surlaker appeared first on Software Engineering Daily.


          

Probabilistic Super-Resolution of Solar Magnetograms: Generating Many Explanations and Measuring Uncertainties. (arXiv:1911.01486v1 [cs.LG])

 Cache   
Probabilistic Super-Resolution of Solar Magnetograms: Generating Many Explanations and Measuring Uncertainties. (arXiv:1911.01486v1 [cs.LG]) Xavier Gitiaux, Shane A. Maloney, Anna Jungbluth, Carl Shneider, Paul J. Wright, Atılım Güneş Baydin, Michel Deudon, Yarin Gal, Alfredo Kalaitzis, Andrés Muñoz-Jaramillo. Machine learning techniques have been successfully applied to super-resolution […]
          

FoodBase corpus: a new resource of annotated food entities

 Cache   
Abstract
The existence of annotated text corpora is essential for the development of public health services and tools based on natural language processing (NLP) and text mining. Recently organized biomedical NLP shared tasks have provided annotated corpora related to different biomedical entities such as genes, phenotypes, drugs, diseases and chemical entities. These are needed to develop named-entity recognition (NER) models that are used for extracting entities from text and finding their relations. However, to the best of our knowledge, there are limited annotated corpora that provide information about food entities despite food and dietary management being an essential public health issue. Hence, we developed a new annotated corpus of food entities, named FoodBase. It was constructed using recipes extracted from Allrecipes, which is currently the largest food-focused social network. The recipes were selected from five categories: ‘Appetizers and Snacks’, ‘Breakfast and Lunch’, ‘Dessert’, ‘Dinner’ and ‘Drinks’. Semantic tags used for annotating food entities were selected from the Hansard corpus. To extract and annotate food entities, we applied a rule-based food NER method called FoodIE. Since FoodIE provides a weakly annotated corpus, by manually evaluating the obtained results on 1000 recipes, we created a gold standard of FoodBase. It consists of 12 844 food entity annotations describing 2105 unique food entities. Additionally, we provided a weakly annotated corpus on an additional 21 790 recipes. It consists of 274 053 food entity annotations, 13 079 of which are unique. The FoodBase corpus is necessary for developing corpus-based NER models for food science, as a new benchmark dataset for machine learning tasks such as multi-class classification, multi-label classification and hierarchical multi-label classification. FoodBase can be used for detecting semantic differences/similarities between food concepts, and after all we believe that it will open a new path for learning food embedding space that can be used in predictive studies.
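
The abstract above positions FoodBase as a benchmark for tasks such as multi-label classification over Hansard semantic tags. As a rough, hypothetical sketch of that setup (the file name, column names and tag separator below are invented, not the actual FoodBase distribution format), one might train a simple one-vs-rest baseline like this:

    import pandas as pd
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.multiclass import OneVsRestClassifier
    from sklearn.preprocessing import MultiLabelBinarizer

    # Hypothetical export: one row per recipe, semantic tags joined by ';'.
    df = pd.read_csv("foodbase_recipes.csv")  # assumed columns: text, tags
    labels = df["tags"].str.split(";")

    binarizer = MultiLabelBinarizer()
    y = binarizer.fit_transform(labels)
    X = TfidfVectorizer(ngram_range=(1, 2), min_df=2).fit_transform(df["text"])

    # One-vs-rest logistic regression as a simple multi-label baseline.
    model = OneVsRestClassifier(LogisticRegression(max_iter=1000)).fit(X, y)
    print(binarizer.inverse_transform(model.predict(X[:1])))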

          

Practical AI 63: Open source data labeling tools

 Cache   

What’s the most practical of practical AI things? Data labeling of course! It’s also one of the most time consuming and error prone processes that we deal with in AI development. Michael Malyuk of Heartex and Label Studio joins us to discuss various data labeling challenges and open source tooling to help us overcome those challenges.

          

Engineering: Senior Machine Learning Engineer - Los Angeles, California

 Cache   
We've partnered on an exclusive basis with a leading FinTech brand based out of Santa Monica. They're on the market for a Senior Machine Learning Engineer to sit in their Data team. This role will require an ML Engineer who can bring ML models into production together with a team of product analysts, data engineers, and product managers. Skills/Qualifications: 3 years of experience with machine learning techniques like classification, regression, anomaly detection, and clustering. Experience with data analysis languages such as Python or Scala. Experience with bringing at least 2 models to production. In addition to extremely competitive cash compensation - which includes a quarterly performance bonus - they offer: a generous stock package, 100% coverage of health benefits (incl. coverage for dependents), 401K match, monthly fitness reimbursement, referral car buying program, unlimited PTO, parental leave, plus 10 paid federal holidays. If interested in learning more about this opportunity, then reach out ()
          

IT / Software / Systems: Senior Data Engineer - SQL / Redshift / AWS - Premier Ecommerce Publishing Brand - Los Angeles, California

 Cache   
Are you a Senior Data Engineer with a strong SQL, ETL, Redshift and AWS background seeking an opportunity to work with massive amounts of data in a very hip marketplace? Are you a Senior Data Engineer interested in unifying data across various consumer outlets for a very well-funded lifestyle brand in the heart of Santa Monica? Are you an accomplished Senior Data Engineer looking for an opportunity to work in a cutting-edge tech environment consisting of SQL, Redshift, Hadoop, Spark, Kafka and AWS? If yes, please continue reading.... Based in Santa Monica, this thriving lifestyle brand has doubled in size in the last year and keeps on growing. With over $75 million in funding, they work hard to provide their extensive audience with advice and recommendations in all things lifestyle: where to shop, eat, travel, etc. Branching into a number of different services and products over the next 12 months, they are building out their Engineering team. They are looking for a Senior Data Engineer to unify and bring to life massive amounts of data from all areas of the business: ecommerce, retail, content, web, mobile, advertising, marketing, experiential and more. WHAT YOU WILL BE DOING: Architect new and innovative data systems that will allow individuals to use data in impactful and exciting ways. Design, implement, and optimize Data Lakes and Data Warehouses to handle the needs of a growing business. Build solutions that will leverage real-time data and machine learning models. Build and maintain ETLs from 3rd-party sources and ensure data quality. Create data models at all levels, including conceptual, logical, and physical, for both relational and dimensional solutions. Work closely with teams to optimize data delivery and scalability. Design and build complex solutions with an emphasis on performance, scalability, and high reliability. Design and implement new product features and research the next wave of technology. WHAT YOU NEED: Extensive experience and knowledge of SQL, ETL and Redshift. Experience wrangling large amounts of data. Skilled in Python for scripting. Experience with AWS. Experience with Big Data tools such as Hadoop, Spark and Kafka is a nice plus. Ability to enhance and maintain a data warehouse, including use of ETL tools. Successful track record of building real-time ETL pipelines from scratch. Previous ecommerce or startup experience is a plus. Understanding of data science and machine learning technologies. Strong problem-solving capabilities. Strong collaborator and passionate advocate for data. Bachelor's Degree in Computer Science, Engineering, Math or similar. WHAT YOU GET: Join a team of humble, creative and open-minded Engineers shipping exceptional products consumers love to use. Opportunity to work at an awesome lifestyle brand in growth mode. Brand-new office space with an open, team-oriented environment. Full Medical, Dental and Vision Benefits. 401k Plan. Unlimited Vacation. Summer vacations / time off. Offices closed during winter holidays and New Year. Discounts on products. Other perks. So, if you are a Senior Data Engineer seeking an opportunity to grow with a global lifestyle brand at the cusp of something huge, apply now ()
          

DB Schenker deploys autonomous mobile robots as “next-generation supply chain”

 Cache   
DB Schenker has put Gideon Brothers’ autonomous logistics robots into service at its Leipzig, Germany, facility following a successful trial. 

The Gideon Brothers robot is able to move 800 kg and is designed to navigate safely around people and equipment, as well as around other moving machines. It is also equipped with a visual perception-based robot autonomy system, which integrates machine learning with stereoscopic cameras to create next-generation robot visual capabilities.

The robots also feature a hot-swappable battery system, allowing minimum downtime for recharging.

Xavier Garijo, member of the board, contract logistics at Schenker, said: “In our drive to offer strategic advantages for our clients in the increasingly complex digital environment, DB Schenker continuously explores opportunities to integrate innovations from visionary start-up companies.

“Delivering automation possibilities for logistics and warehouse operations is a foundation for building the next-generation supply chain.”

During the pilot, robots automated tasks associated with regular order fulfillment, speeding it up and allowing employees to focus on more complex tasks. A few weeks into the project, DB Schenker expanded the pilot by adding a significant number of new pick-up and drop-off points.

In the first month of the pilot, a typical distance covered by a robot surpassed 26 km per week.

Matija Kopić, chief executive and co-founder of Gideon Brothers, said: “Our machines perceive the world just like we do – by processing visual inputs and understanding what surrounds them and how it relates to their tasks.

“This is a technological leap. Self-driving machines, powered by vision and AI, will succeed where earlier technology failed – it will become ubiquitous in industrial environments. We are incredibly proud to have built a team that has the potential – the vision and expertise – to disrupt material handling in indoor manufacturing and logistics environments.”

Gideon Brothers finalised a second seed financing round earlier this month, raising €2.65 million (£2.3 million), with investors including NJF Capital, whose founder Nicole Junkermann said: “Logistics and intra-logistics have so far seen very little automation as the available technology didn’t offer the level of flexibility most operations need. This is changing.

“Gideon Brothers is one of the few players globally developing – and validating – next-generation autonomy, which places them in a good position to be the company to disrupt material handling.”

source: Logistics Manager


          

DASA awards £2m contracts to counter hostile drone threats

 Cache   

The Defence and Security Accelerator (DASA) has announced it has awarded nearly £2m to develop new capabilities to detect, disrupt, and defeat the hostile and malicious use of drones.

Eighteen bids have been funded as part of the Countering Drones competition launched earlier this year by the then-Defence Secretary.

Among the proposal being developed are methods for detecting 4G & 5G controlled drones, cutting edge applications of machine learning and artificial intelligence for sensors to automatically identify UAVs, and low risk methods of stopping drones through novel electronic defeat or interceptor solutions.

The competition, run by DASA – the MOD’s innovation hub – on behalf of Defence Science and Technology Laboratory (Dstl), is the latest stage in Dstl’s ongoing research programme into countering unmanned air systems (UAS) which has been running for ten years.

The competition has also been supported by the Department for Transport and NATO to counter the rapidly evolving threats from UAS.

David Lugton, competition technical lead, said: "The introduction of Unmanned Air Systems (UAS), often referred to as drones, has been one of the most significant technological advances of recent years and represents a shift in capability of potential adversaries.

"The threat from UAS has evolved rapidly and we are seeing the use of hostile improvised UAS threats in overseas theatres of operation. There is a similar problem in the UK with the malicious or accidental use of drones becoming a security challenge at events, affecting critical infrastructure and public establishments; including prisons and major UK airports."

There was a very high level of interest from industry with over 90 bids from a wide range of organisations from micro businesses, small and medium-sized enterprises, large defence firms and academia.

This led to a doubling of initial funding from around £1m to around £2m being awarded to organisations in Phase 1.

The first phase of this competition is intended to demonstrate proof of concepts that can be further developed and integrated during later phases.

Phase 2 is planned to launch next year with a focus on developing and maturing successful research into integrated solutions

The 18 projects funded around £100,000 each are:
  • Airspeed Electronics Ltd – to develop an artificial intelligence detection system which uses acoustic sensors.
  • Animal Dynamics – to develop a UAS swarm system to detect and neutralise Unmanned Air Vehicles (UAVs) by employing peregrine falcon attack strategies.
  • Autonomous Devices Limited – to develop interception technology.
  • BAE Systems Applied Intelligence Ltd – to develop electromagnetic defeat of UAS.
  • BAE Systems Applied Intelligence Ltd – to develop passive radar for detection of UAVs.
  • Cubica Technology Ltd – to develop an automatic recognition and targeting system of UAVs from large distances.
  • MBDA UK Ltd – to demonstrate an integrated system to detect, track and intercept hostile drones.
  • Northrop Grumman – to develop UAS defeat using cyber and sensor vulnerabilities.
  • Northumbria University – to develop anti-swarm drone technology.
  • PA Consulting – to develop a detection system against cellular controlled UAS.
  • Plextek Services Limited – to develop detection and signal jamming capability for UAS.
  • Plextek Services Limited – to develop miniature Counter-UAS radar.
  • QinetiQ – to develop a drone tracking system in complex environments.
  • QinetiQ – to develop a ‘hard kill’ for disrupting the UAV’s on board electronics.
  • RiskAware Ltd – to develop an automated drone identification and target tracking system.
  • Thales UK – to develop a machine learning for Counter-UAS radar.
  • University College London – to develop signal processing and machine learning algorithms to identify drones in areas highly populated by birds.
  • An additional proposal, subject to contract.

Phase 1 of the competition is due to run until summer 2020.

DASA and Dstl will be hosting a collaboration day for the Countering Drones competition on Thursday 28 November 2019 in London.

Representatives from industry and academia interested in making collaborative bids for Phase 2 of the competition can register their interest in attending the event here.

Note that numbers at the event are limited and those who express an interest will be selected to attend depending on their skills and experience.


          

Embedded Linux - platform development for machine learning

 Cache   
We need skilled embedded Linux developers for a really exciting project: the development of a platform for machine learning. The work is carried out in an agile way in a dynamic environment, which requires you to be self-driven and good at communicating. We are now looking for several people with solid embedded Linux experience and broad competence in embedded systems. The person we are looking for is someone who ...
          

AI Data Scientist / Machine Learning Engineer

 Cache   
This position is a good opportunity for an experienced data scientist who is passionate about AI and machine learning and interested in developing into a consultant role. At Explipro you will have the opportunity to work with different industries and customers and to learn new technologies. You will work on the development of machine learning models and AI proofs of concept for increasing the efficiency of ...
          

Samsung reads customer interactions with AI

 Cache   
This week Samsung is launching Collective Memomory, an innovative service solution that the electronics company has developed together with Teleperformance and Building Blocks. By deploying machine learning and artificial intelligence, the solution can 'predict' answers based on previous experiences.
          

Retail commercial representative at Wefarm

 Cache   

Wefarm, the world’s largest farmer-to-farmer digital network, enables farmers to connect with each other and key partners over SMS to solve problems, share ideas, obtain vital products and services, and spread innovation, through utilising the latest machine learning technology.

Small-scale agriculture is the biggest industry on earth, with more than a billion farmers globally supplying 70% of the world's food and commodities, yet remaining digitally unconnected. Until Wefarm, no one had built a digital platform for these farmers to share their vital insights without having to go online!

Since the launch in 2015 we have grown to serve 1.9 million farmers across the world, who share more than 30,000 Qs & As per day. Wefarm has recently secured $13 million in Series A funding from some of the world’s leading VCs, including True Ventures in Silicon Valley, and we are looking to add to a world-class team based across London, Nairobi, Kampala and Dar-es-Salaam.

Join Wefarm and be a part of the mission to build an ecosystem for global agriculture, with the farmer at the centre!

The role

The role of the Retail Commercial Representative is to ensure that farmers can easily access affordable and quality products and services from retailers.

Responsibilities will include:

  • Pitching and recruiting retailers in the designated region
  • Onboarding and account managing of retailers in the designated region
  • Manage Retailers to deliver on agreed monthly and weekly targets within designated regions
  • Resolve business challenges with retailers within the designated region
  • Prepare weekly and daily activity reports for the designated regions as per defined reporting templates
  • Improve engagement and performance of retailers in the designated region
  • Manage data collection & mapping processes as required
  • Day to day contact with retailers and report on daily retailer activities
  • Develop business relationships with existing retailers
  • Provide market intelligence and feedback to Wefarm regularly

Requirements

  • Minimum of a High School Diploma. A degree in Business Administration will be an added advantage
  • Knowledge and experience in sales
  • Marketing skills
  • Experienced team player 
  • Experience working with retail stores/networks
  • Experience working with a CRM tool (Salesforce) would be an added advantage

Application process

Click here to access the original job post at Wefarm then apply through the portal.


          

Learning Algorithms and Signal Processing for Brain-Inspired Computing [From the Guest Editors]

 Cache   
The articles in this special section focuses on machine learning (ML) and signal processing algorithms for bio-inspired computing. The articles bring together key researchers in this area to provide readers of IEEE Signal Processing Magazine with up-to-date and survey-style articles on algorithmic, hardware, and neuroscience perspectives on the state-of- the-art aspects of this emerging field.
          

Low-Power Neuromorphic Hardware for Signal Processing Applications: A review of architectural and system-level design approaches

 Cache   
Machine learning has emerged as the dominant tool for implementing complex cognitive tasks that require supervised, unsupervised, and reinforcement learning. While the resulting machines have demonstrated in some cases even superhuman performance, their energy consumption has often proved to be prohibitive in the absence of costly supercomputers. Most state-of-the-art machine-learning solutions are based on memoryless models of neurons. This is unlike the neurons in the human brain that encode and process information using temporal information in spike events. The different computing principles underlying biological neurons and how they combine together to efficiently process information is believed to be a key factor behind their superior efficiency compared to current machine-learning systems.
          

Speech Processing for Digital Home Assistants: Combining signal processing with deep-learning techniques

 Cache   
Once a popular theme of futuristic science fiction or far-fetched technology forecasts, digital home assistants with a spoken language interface have become a ubiquitous commodity today. This success has been made possible by major advancements in signal processing and machine learning for so-called far-field speech recognition, where the commands are spoken at a distance from the sound-capturing device. The challenges encountered are quite unique and different from many other use cases of automatic speech recognition (ASR). The purpose of this article is to describe, in a way that is amenable to the nonspecialist, the key speech processing algorithms that enable reliable, fully hands-free speech interaction with digital home assistants. These technologies include multichannel acoustic echo cancellation (MAEC), microphone array processing and dereverberation techniques for signal enhancement, reliable wake-up word and end-of-interaction detection, and high-quality speech synthesis as well as sophisticated statistical models for speech and language, learned from large amounts of heterogeneous training data. In all of these fields, deep learning (DL) has played a critical role.
          

Software Engineer II - Walt Disney Direct-to-Consumer and International - Seattle, WA

 Cache   
The Software Engineer will work on the DTCI Operational Intelligence team, a multi-disciplinary team tasked with building machine learning models for demand,…
From Disney - Tue, 05 Nov 2019 18:21:12 GMT - View all Seattle, WA jobs
          

A company in northern Israel is looking for a programmer with experience in image processing

 Cache   
Scope of position: flexible. Requirements: 3-4 years of experience in Python or similar programming languages. Experience in machine learning/computer vision. Advantages for developers with knowledge and experience in: data analysis (SQL and pandas), image processing, OpenCV, PyTorch, TensorFlow.
          

App - (Cradle) An application that recognises eye diseases from a simple photo (lukalove)

 Cache   
lukalove writes in the App category: An application that recognises eye diseases from a simple photo. Returning to the subject of apps, today we want to point you to CRADLE, a machine-learning-based application that can recognise eye diseases from a simple photo; in this way the system is able to suggest possible retinoblastoma, cataracts and other eye conditions. Cradle is available
          

Applications of Deep-Learning in Exploiting Large-Scale and Heterogeneous Compound Data in Industrial Pharmaceutical Research

 Cache   

In recent years, the development of high-throughput screening (HTS) technologies and their establishment in an industrialized environment have given scientists the possibility to test millions of molecules and profile them against a multitude of biological targets in a short period of time, generating data in a much faster pace and with a higher quality than before. Besides the structure activity data from traditional bioassays, more complex assays such as transcriptomics profiling or imaging have also been established as routine profiling experiments thanks to the advancement of Next Generation Sequencing or automated microscopy technologies. In industrial pharmaceutical research, these technologies are typically established in conjunction with automated platforms in order to enable efficient handling of screening collections of thousands to millions of compounds. To exploit the ever-growing amount of data that are generated by these approaches, computational techniques are constantly evolving. In this regard, artificial intelligence technologies such as deep learning and machine learning methods play a key role in cheminformatics and bio-image analytics fields to address activity prediction, scaffold hopping, de novo molecule design, reaction/retrosynthesis predictions, or high content screening analysis. Herein we summarize the current state of analyzing large-scale compound data in industrial pharmaceutical research and describe the impact it has had on the drug discovery process over the last two decades, with a specific focus on deep-learning technologies.


          

Help me get footage of mosquitos

 Cache   
I need to get video of mosquitos in flight. I'm particularly interested in ones around 250 cm (8 ft) from the camera so I was going to set the focus there, but even if they're blurred I need to record all of them within the field of vision. I know between little and nothing about cameras or video in general.

I'm building a machine learning rig to predict mosquito flight paths. I need video to train it with.

I've got a Logitech Brio for recording. I realize that's not videographer equipment but I'd like to do what I can as a proof of concept if nothing else.

Mosquitos I have. I average one every fifteen minutes against a white wall (makes for great contrast) indoors but I'd like to move operations out to the garden where there are constantly dozens -- at least around dusk and dawn. What would be the best way to set this up?
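
Not part of the original post, but for the capture side described above, one minimal approach is a continuous OpenCV recording loop that writes fixed-length clips from the webcam; the device index, resolution, frame rate and clip length below are assumptions that would need adjusting for the Brio and the garden setup:

    import time
    import cv2

    # Assumed settings: device 0 is the Brio, 1080p at 30 fps, 10-minute clips.
    CAMERA_INDEX, WIDTH, HEIGHT, FPS, CLIP_SECONDS = 0, 1920, 1080, 30, 600

    cap = cv2.VideoCapture(CAMERA_INDEX)
    cap.set(cv2.CAP_PROP_FRAME_WIDTH, WIDTH)
    cap.set(cv2.CAP_PROP_FRAME_HEIGHT, HEIGHT)
    cap.set(cv2.CAP_PROP_FPS, FPS)

    fourcc = cv2.VideoWriter_fourcc(*"mp4v")
    while True:
        # Start a new file for each clip so single recordings stay manageable.
        name = time.strftime("mosquito_%Y%m%d_%H%M%S.mp4")
        writer = cv2.VideoWriter(name, fourcc, FPS, (WIDTH, HEIGHT))
        start = time.time()
        while time.time() - start < CLIP_SECONDS:
            ok, frame = cap.read()
            if not ok:
                break
            writer.write(frame)
        writer.release()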

          

Lead Platform Engineer - Machine Learning | Parks, Experiences and Products

 Cache   
Lake Buena Vista, Florida, Responsibilities: Lead a team of engineers to design and develop production grade frameworks for feature engineering, model architecture selection, model training, model interpretability, A/B test
          

Tech Mahindra and Atidot Collaborate to Offer AI Solution for Life Insurance Companies

 Cache   

Provide a one-of-a-kind platform to increase profitability by predicting customer behavior and lapse patterns. Tel Aviv, New Delhi – November 7th, 2019: Tech Mahindra Ltd., a leading provider of digital transformation, consulting and business re-engineering services and solutions, today announced its collaboration with Atidot, an Israel-based InsurTech that offers predictive analytics, Artificial Intelligence (AI) and Machine Learning (ML) tools for […]

The post Tech Mahindra and Atidot Collaborate to Offer AI Solution for Life Insurance Companies appeared first on businessfortnight.com.


          

Google teams up with ESET, Lookout and Zimperium to fight malware on Android

 Cache   
Google uses a series of defensive measures to protect Android users and to verify that only benign apps are published on the Play Store. Since the analysis of each APK file uploaded by developers happens in an automated way, something can slip through, as shown by the incidents that from time to time affect Google's store as well.
Google Play Protect, the security tool enabled by default on users' devices (see "Google Play Protect evolve: abilitato di default su tutti i dispositivi Android"), helps defend devices more effectively, even though it often may not be enough, as we saw in the article "Antivirus Android: no, non è affatto inutile".


To offer new security guarantees to Android users, Google has announced an agreement with ESET, Lookout and Zimperium, launching the App Defense Alliance initiative.

The anti-malware scanning engines from ESET, Lookout and Zimperium will be integrated into Google Play Protect so that an even more thorough check can be carried out both when Android apps are published and on users' devices.

Like Google Play Protect, the anti-malware solutions of the partners taking part in the 'alliance' use a combination of analysis systems based on machine learning, as well as static and dynamic analysis, in order to detect suspicious or potentially dangerous behaviour.

Google stresses that collaboration between the various companies specialising in IT security is the keystone for raising the level of protection provided to all end users.

For more information, see the App Defense Alliance website, reachable at this address.

ESET and Google have already been collaborating for some time on the development of the Chrome Cleanup Tool, a tool integrated into the Mountain View company's browser that is not just a mechanism for detecting the installation of malicious extensions but positions itself as a fully fledged antivirus capable of scanning the entire system: "Antivirus integrato in Chrome: cos'è e come si utilizza".
          

[ASAP] Improved Representations of Heterogeneous Carbon Reforming Catalysis Using Machine Learning

 Cache   

TOC Graphic

Journal of Chemical Theory and Computation
DOI: 10.1021/acs.jctc.9b00420

          

It Takes a Village to Automate a Plant

 Cache   
A palletizing robot from Honeywell Intelligrated on the show floor of PackExpo 2019. (Image source: Design News / Honeywell Intelligrated) 

Rather than selling equipment to its plant and warehouse customers, Honeywell Intelligrated is creating solutions that include a range of technologies. As the name Intelligrated implies, Honeywell is acting like an integrator, by providing a range of equipment and software to solve warehouse, plant, and packaging solutions from concept to operation.

“We’re expanding our smart robotic offerings to provide end-to-end solutions to make work cells more efficient,” Joseph Lui, VP and general manager of robotics, computer vision and AI at Honeywell Intelligrated, told Design News at PackExpo 2019. “We can be a single source for autonomation for our customers. That’s automation with a human touch.”

Lui noted that the use of technology – including voice-guided solutions for workers to increase picking efficiencies and automated mobile robots for transporting items quickly – is just the start of the digital transformation of warehouse and manufacturing operations. “The next 10 years will see a revolution in how these centers work and operate,” said Lui.

Partnering to Build a Collection of Technologies

To accomplish this, Honeywell has brought together the expertise from a range of companies and equipment providers, including software vendors, universities, startups, and incubators. “In the digital technology space, we’re connecting warehouse operations to increase efficiencies by employing advanced solutions that include machine vision, smart robotics, augmented reality, and voice technologies,” said Lui.

As part of the buildout for creating solutions, Honeywell has partnered with Fetch Robotics to provide autonomous mobile robots for effectively fulfilling orders. The robots operate safely alongside human workers to transport items through distribution centers without human guidance or fixed paths. Honeywell is also utilizing a number of other robot companies. “In additional to Fetch, Honeywell has created strategic partnerships and investments in Soft Robotics and Attabotics,” said Lui.

In order to blend these technologies into solutions, Honeywell has created space where all the technologies can be integrated. “We’ve taken these investments, and established a robotics center of excellence,” said Lui.

Curating a Collection of Technologies

The investments to build out Honeywell’s logistics and packaging solutions reach beyond robotics and into advances that are still in world of academics and start-ups. “We’re investing in partnerships with software vendors, universities, startups, and incubators to create new solutions for both simple and complex needs,” said Lui.

In order to reach some of the bleeding edge technology, Honeywell has engaged Carnegie Mellon University. “Our collaboration with AI researchers at Carnegie Mellon University’s National Robotics Engineering Center is helping to develop breakthrough technologies for distribution centers,” said Lui. “The focus is on building architecture relying on artificial intelligence and advanced robotic systems for advanced supply chain demands.”

To support the packaged solutions, Honeywell has created platform that enables the technology elements. “Part of the collaboration comes from the Honeywell Universal Robotics Controller (HURC). This is a high-performance platform for vision, planning, and motion,” said Lui. “The HURC leverages the machine learning and robotic control software to provide the processing power to handle volumes of real-time data for faster perception and more effective action. The HURC uses a virtual environment for simulation, testing, and troubleshooting to drive rapid solution deployment.”

Rob Spiegel has covered automation and control for 19 years, 17 of them for Design News. Other topics he has covered include supply chain technology, alternative energy, and cyber security. For 10 years, he was owner and publisher of the food magazine Chile Pepper.


          

The Industry 4.0 Blueprint Is Being Rewritten by Startups

 Cache   
(Image source:  xresch from Pixabay)

The manufacturing industry is no stranger to misconceptions and buzzwords. Collaborative automation, Industry 4.0, artificial intelligence, blockchain – the real reason we allow ourselves to spin in circles on these topics is because we’re inherently hopeful and practical people: We want to build better and believe there is a path to doing so if we could only find the way.

As part of my series to uncover what leaders in the manufacturing space are actually doing to build better, I sat down with Juan L. Aparicio Ojea, head of the research group for advanced manufacturing automation at Siemens. Aparicio Ojea’s role grants him unique insight into the latest research across universities, startups, and government agencies. Access to so many different types of technologies that are all working to provide value in one way or another has enabled him to create a simple blueprint for the key requirements of an Industry 4.0 system.

Aparicio Ojea acknowledges that we’re far from seeing completed Industry 4.0 systems in practice. However, while it seems like startups are leading the charge, there are steps every manufacturing leader can and should be taking today.

1.) Interoperability: One Solution from Many Parts

Much of the challenge of bringing new technologies to the factory floor is in the interfaces between them. Aparicio Ojea asserted, “Being able to interoperate machines from different vendors is key.” These connections will allow for the flow of previously underutilized data, enabling faster integration and time to value.

There are two schools of thought on this issue. Some, like Aparicio Ojea, believe that industry standardizations laid out by industrial consortiums, which includes frameworks for OPC UA and DDS, will be key. Others, like Andrew Scheuermann, CEO of Arch Systems, a data sensing startup, believe that the industry cannot wait for the long cycle of old equipment to be replaced.

New technologies like collaborative robots already have to work together with legacy systems. So Arch Systems, which counts top tier electronics manufacturer’s among its customers, has built out an extensive library of software and hardware retrofit integrations where manufacturers can expedite a path towards interoperability with what is on their floors today, while leveraging modern standards for their new equipment.

Universal Robots
Universal Robots creates robots designed for manufacturing products with short life cycles (Image source: Universal Robots).

2.) Modularity: One Piece That Can Fit in Many Places

The second requirement is modularity, or as Aparicio Ojea clarifies, “not having a monolithic approach to manufacturing.” An easy-to-see example of modularity on an electronics factory floor is the surface mount assembly (SMA) line. Instead of one huge machine that can make only one kind of PCB, there are modular machines for each step in the process: solder paste deposition, pick and place machines, reflow ovens, and inspection.

But the SMA process has been around for decades, so what does it mean in the modern context? It means the time for custom-built, single-purpose machines is coming to an end, to be replaced by generalized technologies that can be applied to a much wider variety of products and problems.

Universal Robots, which was acquired by Teradyne in 2015, is tackling this by creating easy-to-program robot arms that can be reprogrammed to different functionalities when the program is over, enabling the technology to be viable for products with short life cycles (like consumer electronics).

3.) Digital Twin: Model What Matters

Aparicio Ojea believes the third element of the blueprint is the creation and use of a digital twin, or simulation, of factory processes. Here's the point: if you want better outputs from your process (such as higher yields or higher throughput), then, as engineers, you would measure the inputs (such as individual machine parameters) and try to use statistics to figure out which variables matter. If you can find a mathematical correlation between the inputs and the outputs, you may be able to “turn the knobs” on the input parameters to get the outputs you need. A digital twin is the concept of doing that at a much larger scale, where the goal is to replicate every single process for a holistic model of the factory.

Digital twin technology allows design engineers to simulate the full design and manufacturing process. (Image source: Siemens PLM).

While it’s possible there are successful implementations of true digital twins out in the wilds of the manufacturing world, in general, this is viewed as an aspirational concept. As Aparicio Ojea said, “It is not a greenfield, it is a brownfield” – meaning that most factories already exist and are filled with both legacy equipment and manual processes that are difficult to digitalize.

While digital twins might be obtainable for highly automated bottling plants, it feels like fantasy for electronics assembly, which still has hundreds of human hands on the line. In those cases, leaders should focus on wrapping their arms around the data they can get at the highest possible resolution, if not from the process, then from the products themselves. Engineers can use this data to create these correlations the old fashioned way – with experiments, spreadsheets, and statistics.
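
As a toy illustration of the "measure the inputs, correlate them with the outputs, then turn the knobs" idea described above, the sketch below fits a linear model from machine parameters to yield. The data file and column names are invented for the example; a real process would call for far more careful modelling:

    import pandas as pd
    from sklearn.linear_model import LinearRegression

    # Hypothetical process log: one row per lot, machine settings plus yield.
    df = pd.read_csv("process_log.csv")  # assumed columns named below
    inputs = df[["oven_temp_c", "conveyor_speed", "paste_volume_ul"]]
    output = df["first_pass_yield"]

    model = LinearRegression().fit(inputs, output)
    for name, coef in zip(inputs.columns, model.coef_):
        # A rough indication of which "knob" moves yield the most.
        print(f"{name}: {coef:+.4f} yield change per unit")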

4.) Flexibility: Pieces That Can Adapt

Arguably the most exciting element of the blueprint is flexibility. This element is all about reducing waste – not scrap waste, but equipment waste. Single-purpose machines are not easily repurposed, and yet are how short life cycle production lines have been able to automate to date.

How do we create more flexibility in the production process and the machines we use? AI, computer vision, and robotics can be combined to enable machines that are both more adaptable to variation, and more adaptable from product to product. A quick example is in the quality control process, where camera systems can program themselves to find anomalies more broadly during an inspection – allowing greater inspection coverage than humans or traditional quality control.

Where to Start?

How does one adopt Industry 4.0 technologies and embody these smart manufacturing principles? Aparicio Ojea recommends investing in two key areas: digitalization and strategic partnerships. Without digitalization there will be no data foundation, a requirement for a wide array of initiatives. Simply decreasing paper processes represents a first step that many can take. When it comes time to adopt innovative technologies, Aparicio Ojea recommends, “viewing vendors as strategic partners and having a co-creation mentality. Partnering with a startup, automation vendor, or university and working together to solve a problem that has a real KPI and a clear goal merits investment now.”

Aparicio Ojea specializes in these types of strategic partnerships for Siemens Corporate Technology. He views partnerships as opportunities for cutting edge technology to solve larger problems that have concrete ROI for big businesses. For instance, Siemens Corporate Technology has partnered with Sewbo in an ARM-funded project. Sewbo is a startup tackling automated garment production by incorporating a solution that stiffens fabrics. By reframing the problem of movement and variation, Sewbo (in conjunction with Siemens, UC Berkeley, and Bluewater Defense) has the opportunity to overhaul the status quo manual practices of the entire garment industry.

With so much expensive groundwork, is Industry 4.0 worth all of the buzz? Aparicio Ojea views it as an “evolution, rather than a revolution.” New technologies absolutely merit investment, but he advises leaders to stick to technologies that make their processes better today and lay the foundation for the future.

Those new technologies may very well come from startups, which are reframing Industry 4.0 roadblocks and applying novel solutions. In order to stay competitive in this ever-changing international landscape, it is the technology investments made today that will differentiate the winners from the losers tomorrow.

Anna-Katrina Shedletsky is a former product design lead at Apple and the Founder and CEO of Instrumental, a company that leverages AI and machine learning for quality assurance and manufacturing applications.


          

Siri - Machine Learning Engineer, Advanced Development

 Cache   
Apple, Inc. - SummarySummaryPosted: Oct 14, 2019Weekly Hours: 40 Role Number: 200101402The Siri Advanced Development Group (ADG) is looking for an exceptional ma...
          

Machine Learning QA Engineer

 Cache   
Apple, Inc. - The group comprises teams of Software Developers, Data Engineers, Data Analysts and Data Scientists that focus on crafting and implementing fraud ...
          

A mechanistic model of meditation

 Cache   
Published on November 6, 2019 9:37 PM UTC

Meditation has been claimed to have all kinds of transformative effects on the psyche, such as improving concentration ability, healing trauma, cleaning up delusions, allowing one to track their subconscious strategies, and making one’s nervous system more efficient. However, an explanation for why and how exactly this would happen has typically been lacking. This makes people reasonably skeptical of such claims.

In this post, I want to offer an explanation for one kind of a mechanism: meditation increasing the degree of a person’s introspective awareness, and thus leading to increasing psychological unity as internal conflicts are detected and resolved.

Note that this post does not discuss “enlightenment”. That is a related but separate topic. It is possible to pursue meditation mainly for its ordinary psychological benefits while being uninterested in enlightenment, and vice versa.

What is introspective awareness?

In an earlier post on introspective awareness, I distinguished between being aware of something, and being aware of having been aware of something. My example involved a robot whose consciousness contains one mental object at a time, and which is aware of different things at different times:

Robot’s thought at time 1: It’s raining outside
Robot’s thought at time 2: Battery low
Robot’s thought at time 3: Technological unemployment protestors are outside
Robot’s thought at time 4: Battery low
Robot’s thought at time 5: I’m now recharging my battery

At times 2-5, the robot has no awareness of the fact that it was thinking about rain at time 1. As soon as something else captures its attention, it has no idea of this earlier conscious content - unless a particular subsystem happens to record the fact, and can later re-present the content in an appropriately tagged form:

Time 6: At time 1, there was the thought that [It’s raining outside]

I said that at time 6, the robot had a moment of introspective awareness: a mental object containing a summary of its previous thoughts, which can then be separately examined and acted upon.
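
For concreteness, here is a purely illustrative sketch of that robot: a single conscious slot, plus a recorder subsystem that can later re-present earlier contents in tagged form. The class and field names are invented for illustration and carry no theoretical weight.

# Toy model of the robot example: one mental object at a time, plus a recorder
# that can re-present earlier contents as tagged introspective awareness.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Thought:
    time: int
    content: str
    about_time: Optional[int] = None  # set when this is a moment of introspective awareness

class Robot:
    def __init__(self) -> None:
        self.conscious_slot: Optional[Thought] = None  # consciousness holds one object at a time
        self._log: List[Thought] = []                  # the recorder subsystem's private memory

    def think(self, time: int, content: str) -> None:
        self.conscious_slot = Thought(time, content)
        self._log.append(self.conscious_slot)

    def introspect(self, time: int, earlier_time: int) -> None:
        # Re-present an earlier content, tagged as being about that earlier moment
        earlier = next(t for t in self._log if t.time == earlier_time)
        self.conscious_slot = Thought(
            time, "there was the thought that [" + earlier.content + "]", about_time=earlier_time
        )

robot = Robot()
robot.think(1, "It's raining outside")
robot.think(2, "Battery low")
robot.introspect(6, earlier_time=1)
print(robot.conscious_slot)
# Thought(time=6, content="there was the thought that [It's raining outside]", about_time=1)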

Humans are not robots. But I previously summarized the neuroscience book Consciousness and the Brain, and its global neuronal workspace (GNW) model of consciousness. According to this model, the contents of consciousness correspond to what is being represented in a particular network of neurons - the global workspace - that connects different parts of the brain. Different systems are constantly competing to get their contents into the global workspace, which can only hold one piece of content at a time. Thus, like robots, we too are only aware of one thing at a time, and tend to lose awareness of our earlier thoughts - unless something reminds us of them.

In what follows, I will suggest that like robots, humans also have a type of conscious content that we might call introspective awareness, which allows us to be more aware of our previous mental activity. (I am borrowing the term from the meditation book The Mind Illuminated, which distinguishes between introspective attention, introspective awareness, and metacognitive introspective awareness. I am eliding these differences for the sake of simplicity.)

I will also explore the idea that introspective awareness is a sensory channel in a similar sense as vision and sound are. The experience of sight or sound is produced by subsystems which send information to consciousness; likewise, introspective awareness is produced by a subsystem which captures information in the brain and then sends it (back) to consciousness.

We can train our other senses to become more accurate and detailed. Gilbert, Sigman & Crist (2001), reviewing the neuroscience of sensory training, list a number of ways in which discrimination can be increased in a variety of sensory modalities: among other things, "visual acuity, somatosensory spatial resolution, discrimination of hue, estimation of weight, and discrimination of acoustical pitch all show improvement with practice"; even the spatial resolution of the visual system can be deliberately increased by training.

If introspective awareness is a sensory channel, can it also be practiced to improve the number of details it will pick up on? One may feel that I am stretching the metaphor here. But in fact, Consciousness and the Brain suggests that all sensory training is in a sense training in introspection. The additional information that we get by training our senses has always been collected by our brain, but that information has remained isolated at lower levels of processing. To make it conscious, one needs to grow new neural circuits which extract the lower-level information and re-encode it in a format which can be sent to consciousness.

Thus, the brain already has the ability to take normally unavailable subconscious information and make it consciously available by practice. What is needed is a way to point that learning process at the kind of information that we would normally consider "introspective", rather than on an external information source.

From Consciousness and the Brain:

... a fourth way in which neural information can remain unconscious, according to workspace theory, is to be diluted into a complex pattern of firing. To take a concrete example, consider a visual grating that is so finely spaced, or that flickers so fast (50 hertz and above), that you cannot see it. Although you perceive only a uniform gray, experiments show that the grating is actually encoded inside your brain: distinct groups of visual neurons fire for different orientations of the grating. Why can’t this pattern of neuronal activity be brought to consciousness? Probably because it makes use of an extremely tangled spatiotemporal pattern of firing in the primary visual area, a neural cipher too complex to be explicitly recognized by global workspace neurons higher up in the cortex. Although we do not yet fully understand the neural code, we believe that, in order to become conscious, a piece of information first has to be re-encoded in an explicit form by a compact assembly of neurons. The anterior regions of the visual cortex must dedicate specific neurons to meaningful visual inputs, before their own activity can be amplified and cause a global workspace ignition that brings the information into awareness. If the information remains diluted in the firing of myriad unrelated neurons, then it cannot be made conscious.
Any face that we see, any word that we hear, begins in this unconscious manner, as an absurdly contorted spatiotemporal train of spikes in millions of neurons, each sensing only a minuscule part of the overall scene. Each of these input patterns contains virtually infinite amounts of information about the speaker, message, emotion, room size . . . if only we could decode it—but we can’t. We become aware of this latent information only once our higher-level brain areas categorize it into meaningful bins. Making the message explicit is an essential role of the hierarchical pyramid of sensory neurons that successively extract increasingly abstract features of our sensations. Sensory training makes us aware of faint sights or sounds because, at all levels, neurons reorient their properties to amplify these sensory messages. Prior to learning, a neuronal message was already present in our sensory areas, but only implicitly, in the form of a diluted firing pattern inaccessible to our awareness.

Richard’s therapy session

We saw an example of introspective awareness in my post on the book Unlocking the Emotional Brain. In the transcript, a man named Richard has been suffering from severe self-doubt, and is asked to imagine how it would feel like if he made confident comments in a work meeting. The following conversation follows:

Richard: Now I’m feeling really uncomfortable, but-it’s in a different way.
Therapist: OK, let yourself feel it - this different discomfort. [Pause.] See if any words come along with this uncomfortable feeling.
Richard: [Pause.] Now they hate me.

The therapist is asking Richard to focus his attention on the feeling of discomfort, generating moments of introspective awareness about the discomfort. Notice that Richard becomes more thoughtful and less reactive to the anxiety as he does so. My guess of what is happening is something like this:

When Richard is feeling anxious, this means that a mental object encoding something like “the feeling of anxiety” is being represented in the workspace. This activates neural rules which trigger the kinds of responses that anxiety has evolved to produce. For example, a system may be triggered which attempts to plan how to escape the situation causing the anxiety. This system’s intentions are then injected into the workspace, producing a state of mind where the feeling of anxiety alternates with thoughts of how to get away.

Introspective awareness is its own type of mental object, produced by a different subsystem which takes inputs from the global workspace, re-encodes them in a format which highlights particular aspects of that data, and outputs that back into the workspace. When a representation of an anxious state of mind is created, that representation does not by itself trigger the same rules as the original anxiety did.

As a result, as representations of the anxiety begin to alternate together with the anxiety, there are proportionately fewer moments of anxiety. This in turn triggers fewer of the subsystems attempting to escape the situation, making it easier to reflect on the anxiety without being bothered by it.

When Richard’s therapist asks him to feel the anxiety and to see if any words come along with it, the subsystem for introspective awareness is primed to look for any content that could be re-presented in verbal form. As Richard’s anxiety had been produced by an emotional schema including a prediction that being confident makes you hated, some of that information had passed through the workspace and been available for the awareness subsystem to capture. This brought up the verbalization of what the schema predicted would happen if Richard was confident - “now they hate me”.

Therapist: “Now they hate me.” Good. Keep going: See if this really uncomfortable feeling can also tell you why they hate you now.

According to the GNW model, when a particular piece of content is maintained as the center of attention, it strengthens the activation of any structures associated with it. As Richard’s therapist guides him to focus on the verbal content, more information related to it is broadcast into the workspace. The further prompt guides the awareness subsystem to look for patterns that feel like the reason for the hate.

Richard: [Pause.] Hnh. Wow. It’s because… now I’m… an arrogant asshole… like my father… a totally self-centered, totally insensitive know-it-all.

The therapist then takes a pattern which Richard has brought up and helps crystallize it further, and throws it back to Richard for verification.

Therapist: Do you mean that having a feeling of confidence as you speak turns you into an arrogant asshole, like Dad?
Richard: Yeah, exactly. Wow.

In this example, we saw that having more moments of introspective awareness was beneficial for Richard. As aspects of his moment-to-moment consciousness were made available for other subsystems to examine, the emotional schema causing the anxiety was identified and its contents extracted into a format which could be fed into other subsystems. Later on, when Richard’s co-worker displayed confidence which others approved of, a contradiction-detection mechanism noticed a discrepancy between reality and the prediction that confidence makes you hated, allowing the prediction to be revised.

Under this model, the system which produces moments of introspective awareness is a subsystem like any other in the brain. This means that it will be activated when the right cues trigger it, and its outputs compete with the outputs of other systems submitting content to consciousness. The circumstances under which the system triggers, and its probability of successfully making its contents conscious, are modified by reinforcement learning. Just as practicing a skill such as arithmetic eventually causes various subsystems to manipulate the content of consciousness in the right order, practicing a skill which benefits from introspective awareness will cause the subsystem generating introspective awareness to activate more often.
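
As a toy illustration of that reinforcement dynamic (not a claim about the actual neural implementation), imagine each subsystem having a weight that determines how often its output wins the workspace competition, with rewarded wins nudging that weight upward:

# Toy sketch: subsystems compete for the workspace in proportion to their weights;
# rewarding the awareness subsystem's wins makes such moments more frequent.
import random

random.seed(0)
weights = {"breath_tracking": 1.0, "dinner_planning": 3.0, "introspective_awareness": 0.5}

def broadcast():
    # Pick which subsystem gets its content into the workspace, in proportion to weight
    names = list(weights)
    return random.choices(names, weights=[weights[n] for n in names], k=1)[0]

def reinforce(subsystem, reward, lr=0.1):
    weights[subsystem] += lr * reward

# Practice: whenever introspective awareness wins (e.g. noticing mind-wandering) and
# you take satisfaction in that, the win is rewarded, so it happens more often over time.
for _ in range(2000):
    if broadcast() == "introspective_awareness":
        reinforce("introspective_awareness", reward=1.0)

print({name: round(w, 2) for name, w in weights.items()})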

Meditation as a technique for generating moments of introspective awareness

Just as there are different forms and styles of therapy, there are also different forms and styles of meditation. All of them involve introspective awareness to at least some degree, but they differ in what that awareness is then used for.

In the example with Richard, his therapist asked him to imagine being confident and to then bring his awareness to why that felt uncomfortable. In contrast, a more behaviorally oriented therapist might not have examined the reason behind the discomfort. Rather, they might have taught Richard to notice his reaction to the discomfort, and then use that as a cue for implementing an opposite reaction. Both kinds of therapists would ask their clients to generate some introspective awareness, but they would aim that awareness at different kinds of features and use it to trigger different kinds of strategies. The results would correspondingly be very different.

Likewise, systems of meditation differ in how much introspective awareness they produce, what kinds of features the awareness-producing subsystem is trained to extract, and what that awareness is then used for. For this article, I have chosen to use the example of the system in The Mind Illuminated (TMI), as it is clearly explained and explicitly phrased in these terms. (Again, TMI has a more precise distinction between introspective attention and introspective awareness, which I am eliding for the sake of simplicity.)

In TMI’s system, as in many others, you start with trying to keep your attention on your breath. In terms of our model, this means that you want to keep sensory outputs corresponding to your breath as the main thing in your consciousness.

The problem with this goal is that there is no subsystem which can just unilaterally decide what to maintain as the center of attention. At any given moment, many different subsystems are competing to make their content conscious. So one system might have the intention to follow the breath, and you do it for a while, but then a planning system kicks in with its intention to think about dinner. Such planning has tended to feel rewarding, so it wins out and the intent to meditate is forgotten until five minutes later, when you decide what you want for dinner and then suddenly remember the thing about following your breath.

TMI calls this mind-wandering from forgetting, and the first step of practice is just to notice it whenever it happens, congratulate yourself for having noticed it, and then return to the breath. Being able to notice forgetting requires having a moment of introspective awareness which points out the fact that you had not been following your breath. When you take satisfaction in having noticed this, your awareness-producing subsystem gets assigned a reward and becomes slightly more likely to activate in the future. “Have I remembered to follow my breath or not?” acts as a feedback mechanism that you can explicitly train on.

As the awareness-producing system starts to activate more often and ping you if you have forgotten to meditate, periods of mind-wandering grow shorter.

Now, even if you stop getting entirely lost in thought, you still have distraction: content from other subsystems that is in consciousness together with the sensations of the breath and the intention to focus on the breath. For example, you might be having stray thoughts, hearing sounds from your environment, and experiencing sensations from your body.

To focus more exclusively on the breath, you are instructed to maintain the intent both to attend to it and to be aware of any distractions. The subsystems which output mental content can, and normally do, operate independently of each other. This means that the following may happen:

Subsystem 1: I’m meditating well!
Subsystem 2: Hmm, what’s that smell.
Subsystem 1: I’m meditating well! No distractions.
Subsystem 2: Smells kinda like cookies.
Subsystem 2: Mmm, cookies.
Subsystem 1: Continuing to meditate well!
Subsystem 2: Say, what’s for dinner?

That is, a system which tracks the breath can continue to repeatedly find the breath, and report that your meditation is proceeding well and with no distractions… all the while the content of your consciousness continues to alternate with distracted thoughts, which the breath-tracking subsystem is failing to notice (because it is tracking the breath, not the presence of other thoughts). Worse, since you may find it rewarding to just think that you are meditating well, that thought may start to become rewarded, and you may find yourself just thinking that you are meditating well… even as that thought has become self-sustaining and no longer connected to whether you are following the breath or not!

There are all kinds of subtle traps like this, and reducing the amount of distraction requires you to first have better awareness of the distraction. This means more moments of introspective awareness which are tracking what’s actually happening in your mind:

Subsystem 1: I’m meditating well!
Subsystem 2: Hmm, what’s that smell.
Subsystem 1: I’m meditating well! No distractions.
Subsystem 2: Smells kinda like cookies.
Subsystem 2: Mmm, cookies.
Awareness subsystem: Wait, one train of thought keeps saying that it’s meditating well, but another is totally getting into the thought of food.
Subsystem 1: Oh. Better refocus that attention on the breath, and spend less time thinking about the concept of following the breath.

This kind of a process also teaches you to pay attention to patterns of cause and effect in your mind. In this example, the smell of cookies caused you to think of cookies, which in turn made you think of dinner, which could have ultimately led to forgetting and mind-wandering.

Catching the train of thought after “mmm, cookies” meant that three “processing steps” had passed before you noticed it. If you practice tracing back trains of thought in your mind, you seem to teach your awareness-system to collect and store data from a longer period, even when it is not actively outputting it. This means that at the “mmm, cookies” stage, you can query your awareness to get a trace of the immediately preceding thought chain.

You notice that you started to get distracted starting from the smell of the cookie and can then use this as further input to your awareness system. You are essentially taking the re-presented smell of the cookie which the system output, and feeding it back in, asking it to pay more attention to detecting “things like this”. The next time that you notice a smell, your introspective awareness may flag it right away, letting you catch the distraction at the very first stage and before it turns into an extended train of thought.

Note that there is nothing particularly mysterious or unusual about any of this. You are employing essentially the same process used in learning any skill. In learning to ride a bike, for example, attempting to keep the bike balanced involves adjusting your movements in response to feedback. When you do so, your brain becomes better at detecting things like “tilting towards the right” in the sense data, increasing your ability to apply the right correction. After you have learned to identify tilting-a-lot-but-not-quite-falling, your brain learns to backtrace to the preceding state of tilting-a-little-less, and apply the right correction there. Once its precision has been honed to identify that state, you can further detect an even subtler tilt, until you automatically apply the right corrections to keep you balanced.

Essentially the same kind of learning algorithm is being applied here. Increased sensory precision leads to improvements in skill which allow for increased sensory precision. (See also this article, which goes into more detail about TMI as a form of deliberate practice.)

Uses for moments of introspective awareness

I should again emphasize that the preceding explanation is only looking at one particular meditation system. There are other systems which work very differently, but they all use or develop introspective awareness to some extent. For example:

  • In Shinzen Young's formulation of “do nothing” practice, you have just two basic instructions: let whatever happens, happen and when you notice an intention to control your attention, drop that intention. This trains introspective awareness to notice when one is trying to control their attention… but it is also a very different system, since maintaining an intention to notice when that happens would also be an attempt to control attention! Thus, one is instructed to drop intentions if one spontaneously notices them, but not to actively look for them.
  • In noting practice, you are trying to consciously name or notice everything that happens in your consciousness. Introspective awareness is trained to very rapidly distinguish between everything that happens, but is not trained to maintain attention on any particular thing.
  • In visualization practice, you might create a visual image in your mind, then use introspective awareness to examine the mental object that you’ve created and compare it to what a real image would look like. This gives the subsystem creating the visualization feedback, and helps slowly develop a more realistic image.

Going back to TMI-style introspective awareness, once you get it trained up, you can use it for various purposes. In particular, once you learn to maintain it during your daily life - and not just on the meditation cushion - it will bring up more assumptions in your various schemas and mental models. Think of Richard paying attention to the assumptions behind his unwanted reactions and making them explicit, but as something that happens on a regular basis as the reactions come up.

Romeo Stevens described what he called “the core loop of Buddhism”:

So, what is the core loop?
It's basically cognitive behavioral therapy, supercharged with a mental state more intense than most pharmaceuticals.
There are two categories of practice, one for cultivating the useful mental state, the other uses that mental state to investigate the causal linkages between various parts of your perception (physical sensations, emotional tones, and mental reactions) which leads to clearing out of old linkages that weren't constructed well.
You have physical sensations in the course of life. Your nervous system reacts to these sensations with high or low valence (positive, negative, neutral) and arousal (sympathetic and parasympathetic nervous system activation), your mind reacts to these now-emotion-laden sensations with activity (mental image, mental talk) out of which you then build stories to make sense of your situation.
The key insight that drives everything is the knowledge (and later, direct experience) that this system isn't wired up efficiently. Importantly: I don't mean this in a normative way. Like you should wire it the way I say just because, but in the 'this type of circuit only needs 20 nand gates, why are there 60 and why is it shunting excess voltage into the anger circuits over there that have nothing to do with this computation?' way. Regardless of possible arguments over an ultimately 'correct' way to wire everything, there are very low hanging fruit in terms of improvements that will help you effectively pursue *any* other goal you set your mind to.

Again, we saw an example of this with Richard. He had experienced his father as acting confident and causing suffering to Richard and others; sensations which his mind had classified as negative. In order to avoid them, a model (story) was constructed saying that confidence is horrible, and behaviors (e.g. negative self-talk) were created to avoid appearing horrible.

Now, this caused problems down the line, making him motivated to try to appear more confident… meaning that there was now a mechanism in his brain trying to prevent him from appearing confident, and another which considered this a problem and tried to make him more confident, in opposition to the first system. See what Romeo means when talking about circuits that only need 20 gates but are implemented using 60?

The article “tune your motor cortex” makes the following claims about muscle movement:

Your motor cortex automatically learns to execute complex movements by putting together simpler ones, all the way down to control of individual muscles.
Because the process of learning happens organically, the resulting architecture of neural connections (you can think of them as "hidden layers" in machine learning terms) is not always perfectly suited to the task.
Some local optima of those neural configurations are hard to get out of, and constantly reinforced by using them.
There is some pressure for muscle control to be efficient, and the motor cortex is doing a "good enough" job at it, but tends to stop a fair bit from perfection.
By repeating certain movements and positions over and over again (e.g. during sitting work), we involuntarily strengthen connections between movements and muscles that don't make much sense lumped together.
E.g. control of shoulders might become spuriously wired together with control of thighs (both are often tense during sitting).

There are various mental motions which are learned in basically the same way as physical motions are:

  • You learn to calculate 12*13 by a technique such as first multiplying 10*13, keeping the result in your memory, calculating 2*13, and then adding the intermediate results together.
  • You learn that a particular memory makes you feel slightly unpleasant, and that flinching away from anything that would remind you of it takes the pain away.
  • You learn that this also works on uncomfortable chores, teaching you to keep pushing the thought of them away.
  • You learn that your father’s behavior is painful to you, and that any confidence reminds you of that, so you learn negative self-talk which blocks you from acting confident.
  • You learn that saying “no” to people reminds you of being punished for saying “no” to your parents, but that saying “yes” too often means that you are constantly fulfilling promises to other people - so you learn to avoid situations where you would be asked anything.
  • You learn that there’s something you can do in your mind to stop feeling upset, so you start ignoring your emotions and any information they might have.
  • You learn that if you feel bad about not getting the respect you want, thinking “if only I was good enough at persuasion, I would get what I want” gives you a sense of control - even though this pattern also makes you feel personally at fault when you don’t get what you want.
  • You learn that it’s rewarding to punish people who have wronged you, so you always want to punish someone when something goes wrong - even if there is nobody but reality to punish.
  • You learn that it feels good to mentally punish someone who is munching too loud, but actually complaining about it would feel petty, and you’ve learned that pettiness is frowned upon. So you also learn to block the impulse to say anything out loud, but continue to get increasingly angry about the sound, causing an escalating circle of both the annoyance and the blocking ramping up in intensity.

As with physical movements, these can form local optima that are hard to get out of. Many of them are learned in childhood, when your understanding of the world is limited. But new behaviors continue to build on top of them, so you will eventually end up with a system which could use a lot of optimization.

If you have more introspective awareness of the exact processes that are happening in your mind, you can make more implicit assumptions conscious, causing your brain’s built-in contradiction detector to notice when they contradict your later learning. Also, getting more feedback about what exactly is happening in your mind allows you to notice more wasted motion in general.

One particular effect is that, as Unlocking the Emotional Brain notes, the mind often makes trade-offs where it causes itself some minor suffering in order to avoid a perceived greater suffering. For example, someone may feel guilt in order to motivate themselves, or experience self-doubt to avoid appearing too confident. By employing greater introspective awareness, one may find ways to achieve their goals without needing to experience any suffering in order to do so.

Of course, Buddhist meditation is not the only way to achieve this. Various therapies and techniques such as Focusing, Internal Family Systems, Internal Double Crux, and so on, are also methods which use introspective awareness to reveal and refactor various assumptions. Increased introspective awareness from meditation tends to also boost the effectiveness of related techniques, as well as reveal more situations where they can be employed.

If introspective awareness is so great, why don’t we have it naturally?

As with anything, there are tradeoffs involved. Having more introspective awareness can help fix a lot of issues… but it also comes with risks, which I assume is the reason why we have not evolved to have a lot of it all the time.

First, it’s worth noting that even for experienced meditators, intense emotional reactions tend to shut down introspective awareness. If one of the functions of e.g. fear and anxiety is to cause a rapid response, then excessive amounts of introspective awareness would slow down that response by reducing cognitive fusion. Many emotions seem to inhibit many competing processes from accessing consciousness, so that you can deal with the situation at hand.

Another consideration involves traumatic memories. In the beginning of the article, I suggested that anxiety is a special kind of mental object which activates particular behaviors. In general, different emotional states have specific kinds of behaviors and activities associated with them - meaning that if you have some memories which are really painful, they can become overwhelming, making it necessary to block them in order to carry on with your normal life. Meditation can be helpful for working through your trauma, but it can also bring it up before you are ready for it, to the point of requiring professional psychotherapy to get through. If you are better at noticing all kinds of subtle details in your mind, it also becomes easier to notice anything that would remind you of things you don’t want to remember. A decrease in introspective awareness seems to be a common trauma symptom, as this helps block the unpleasant memories from being too easily triggered.

I have also heard advanced meditators mention that increased introspective awareness makes it difficult to push away pangs of conscience that they would otherwise have ignored, causing practical problems. For example, people have said that they are no longer able to eat animal products or tell white lies.

On the other hand, extended concentration practice can also make it easier to block things which you would be better off not blocking.

So far, this article has mostly focused on using introspective awareness to notice the content of your thoughts. But you can also use it to notice the structure of the higher-level processes generating your thoughts. Part of how you develop concentration ability is by maintaining introspective awareness of the fact that being able to concentrate on just one thing feels more pleasant than having your attention jump between many different things. This can give you an improved ability to choose what you are concentrating on… but also to selectively exclude anything unpleasant from your mind.

For example, there was an occasion when I needed to do some work, but also had intense anxiety about not wanting to; intense enough that it would normally have made it impossible for me to focus on it. So then I tried to work, and let my introspective awareness observe the feeling of head-splitting agony from my attention alternating between the work and the desire not to… and to also notice that whenever my attention was on the work, I felt temporarily better.

After a while of this, the anxiety started to get excluded from my consciousness, until it suddenly dropped away completely - as if some deeper process had judged it useless and revoked its access to consciousness. And while this allowed me to do the work that I needed to, it also felt internally violent, and like it would be too easy to repress any unpleasant thoughts using it. I still use this kind of technique on occasion when I need to concentrate on something, but I try to be cautious about it.

The negative side of being able to get better feedback about your mental processes is that you can also get better feedback on exactly how pleasant wireheading feels. If you like to imagine pleasant things, you can get better and better at imagining pleasant things, and excluding any worries about it from your consciousness. Meditation teacher Daniel Ingram warns:

Strong insight and concentration practice, even when that practice wasn’t dedicated to the powers, can make people go temporarily or permanently (or for the rest of that lifetime) psychotic. The more the practice involves creating experiences that diverge significantly from what I will crudely term “consensus reality”, and the longer one engages in these practices, the more likely prolonged difficulties are. It is of note that a significant number of the primary propagators of the Western magickal traditions became moderately nuts towards the ends of their lives.
As one Burmese man said to Kenneth, “My brother does concentration practice. You know, sometimes they go a little mad!” He was talking about what can sometimes happen when people get into the powers. [...]
I remember a letter from a friend who was on a long retreat in Burma and was supposed to be doing insight practices but had slipped into playing with these sorts of experiences. He was now fascinated by his ability to see spirit animals and other supernormal beings and was having regular conversations with some sort of low-level god that kept telling him that he was making excellent progress in his insight practice—that is, exactly what he wanted to hear. However, the fact that he was having stable visionary experiences and was buying into their content made it abundantly clear that he wasn’t doing insight practices at all, but was lost in and being fooled by these.

Now, it should be pointed out that “being able to exclude anything unpleasant from your consciousness” is only going to be a worry for advanced practitioners who spend a lot of time on the kind of practice that inclines you towards these kinds of risks. Before you get to the point of something like this being a risk, you will get to resolve a lot of internal conflicts and old issues first.

Here is Culadasa, the author of The Mind Illuminated, being interviewed about this kind of a “first you resolve a lot of issues, but then you can get the ability to push down the rest” dynamic:

Michael Taft: … and you’re using the meditation practice to help work with your stuff. But what about the other case that we both know of where people have reached very high levels of meditative capacity, they’ve got a lot of insight, maybe they’re at some level of awakening, and they seem to have, in a way, missed a whole pocket of material, or several pockets of material. It’s like they think they’re doing fine, but maybe everyone around them is aware that they’ve got these behavior patterns that do not seem awake at all. And yet the meditation has somehow missed that.
Culadasa: Yes, yes. [...] ... there seems to be a certain level of the stuff that we’re talking about that it’s necessary to deal with to achieve awakening, but it’s sort of a minimal level. [...] What I think that is indicative of is that if that hasn’t been sufficiently dealt with earlier, it has to get dealt with in one way or another at that point. That doesn’t necessarily mean that it’s going to get resolved; it may just get reburied a little more deeply.
Michael Taft: Pushed out of the way.
Culadasa: Yeah, pushed out of the way, or bypassed in some way. That allows a person to go ahead and [progress] and it’s unrealistic to think that everything has been resolved. [...] a lot of the things that change [...] actually help to push these things aside, to bypass them in one way or another, whereas before somebody has [made as much progress] these would have been sufficiently problematic in their life that, in one way or another, they would be aware of them, whether or not they did anything about them or were at a place of just taking for granted that I have these, quote, “personality characteristics” that are a bit difficult.

I used to be very enthusiastic about TMI’s meditation system. I still consider it important and useful to make progress on, but am slightly more guarded after some of my own experiences, hearing about the experience of a friend who reached a high level in it, reading some critiques of its tendency to emphasize awareness of positive experiences [1 2], and considering both the interview quoted above and Culadasa’s subsequent actions. (That said, the focus on positive experiences can be a useful counterbalance for people who start off with an overall negative stance towards life.)

I continue to practice it, and would generally find it safe until you get to around the sixth or so of its ten stages, at which point I would suggest starting to exercise some caution. Off the couch, I mostly don’t do much concentration practice (except in a context where I would need to concentrate anyway). Rather I try to focus my introspective awareness towards just observing my mind without actively interfering with it, Internal Family Systems -style practice, and other activities that do not seem to risk excluding too much unpleasant material.

Finally, developing too much awareness into your mind may cause you to start noticing contradictions between how you thought it worked, and how it actually works. I suspect that a part of how our brains have evolved to operate, relies on those differences going unnoticed. This gets us to the topic of enlightenment, which I have not yet discussed, but will do in my next post.

Thanks to Maija Haavisto, Lumi Pakkanen and Romeo Stevens for comments on an earlier draft.



          

An Amazon-focused tech firm has hired a vet of the e-commerce giant to build advertising tools for sellers

 Cache   

Teikametrics' Alasdair McLean-Foreman and Srini Guddanti

  • Amazon tech firm Teikametrics has hired ex-Amazon ad exec Srini Guddanti as its first chief product officer.
  • Teikametrics is one of a handful of tech firms that pitches Amazon sellers so-called deep data, like the cost of goods sold.
  • Teikametrics CEO Alasdair McLean-Foreman said Guddanti's insider expertise would help sellers grow their revenue on Amazon.
  • Click here for more BI Prime stories.

As Amazon's ad business continues to grow, a cottage industry of tech firms has popped up promising sellers proprietary data and insights into the platform that Amazon doesn't provide.

Boston-based Teikametrics is one such firm and has hired former Amazon Advertising exec Srini Guddanti as its first chief product officer. Teikametrics sells software that helps sellers and agencies manage their ad spend and listings. The company also helps sellers with branding and figuring out which platforms to sell on.

Guddanti worked at Amazon for 14 years, mostly in finance roles across Amazon's retail, Prime and advertising business. Since 2016, he worked in a director role on Amazon Advertising and oversaw Sponsored Ads, one of Amazon's bigger ad formats.

Guddanti said he left Amazon because he was interested in using his background to build something new. He will work on Teikametrics' data science and machine learning tools, create new products for Amazon's newer ad formats, and build out a reporting tool for Amazon's app.

Teikametrics CEO Alasdair McLean-Foreman said he met Guddanti three years ago and that he brings insider expertise that will help sellers grow their revenue on Amazon.

He said that Amazon sellers face challenges like handling logistics, pricing and advertising. For example, sellers need help figuring out how to distribute ad spend across formats, he said. Teikametrics says that its data includes seller-specific stats that Amazon does not have, like the cost of goods sold and data about business objectives. Its clients include Clarks, Razer, and Mark Cuban Companies.

"If you're a seller, a brand or even a bigger company, you are flying blind," he said. "In a closed loop, you need to understand inventory and performance to understand advertising, and that is something that Amazon cannot do themselves."

The e-commerce firm is growing as Amazon adds more sellers

For its part, Amazon has increasingly recognized advertisers' need for third parties to dig into data. In August, it launched an online directory tool listing about 60 vetted agencies and tools for advertisers to work with. Facebook, Google, Twitter and Pinterest also have programs with advertising and marketing companies that allow third parties to sell ad space and develop strategies for advertisers.

For example, Sellics is another Amazon-specific firm that recently raised $10 million to grow its partnerships with agencies.

Seven-year-old Teikametrics has raised $10 million in Series A funding. McLean-Foreman said an estimated two million companies sell goods on Amazon. As that number grows, Teikametrics is opening a 10-person office in Seattle to be closer to Amazon's headquarters. The firm currently has 95 employees.

Amazon made $3.59 billion in "other revenue" during the third quarter, most of which is advertising. Research firm eMarketer projects Amazon to make $11.33 billion from advertising this year.



          

CCE 2019 - 3M, Shell, Halliburton and Unibap weigh in on their AI results to date

 Cache   
By Jon Reed - Wed, 11/06/2019 - 10:24
Summary:
It's hardly unique to hear about AI and IoT from the enterprise stage. But it's rare to hear customers speak to results and live lessons. At Constellation Connection Enterprise '19, a real world AI/IoT panel was a highlight - here's my review.

CCE 2019 customer AI and IoT panel

Despite my incessant buzzword bashing, I'll concede this much: it's important to grapple with next-gen tech via experts who actually know what they are talking about.

We got an earful on day one of the Constellation Research Connected Enterprise 2019 event. Example: most CXOs are not falling over themselves to launch quantum computing projects in 2019, but they do need to be aware of possible threats to RSA encryption.

Still, next-gen tech needs to be held to the fire of project results. Blockchain is a case in point. My upcoming podcast with blockchain panel moderator (and critic) Steve Wilson of Constellation will get into that in a big way. That's precisely why a day one CCE '19 highlight was "The Road to Real World AI and IoT Results." Moderated by Constellation's "Data to Decisions" lead Doug Henschen, the panelists shared AI lessons, as they bring tech to bear on logistics problems.

3M on AI - how does a 100+ year manufacturing company stay relevant?

Panelist Jennifer Austin, Manufacturing & Supply Chain Analytics Solutions Implementation Leader at 3M, told attendees why 3M is pursuing several AI-related initiatives. Start with the disruptions in the manufacturing sector:

We're looking at how, as a hundred-year-old plus manufacturing company: how do we stay relevant? How do we keep our products [in line] with consumer changes?

Joking about an earlier panel debate, Austin quipped:

I was also glad to hear that manufacturing is not dead.

As for those AI initiatives, one is a global ERP data standardization project:

As some of the speakers spoke of this morning, we struggle with our data, and we have a lot of self-sufficient organizations across the world. And so we don't have a standard way to represent our data. So we've been on a long journey to do that through our global ERP.

The next AI project? Smart products, such as 3M's smart air filter. The third? Manufacturing and supply chain pursuits, including an Industry 4.0 push:

The third [AI area] that I'm most focused on right now is in our manufacturing and supply organization. One aspect is Industry 4.0, which we're referring to as "digital factory."

We have over two hundred sites around the world, so we're trying to make sure that we have those all fully sensored, and that we're using the data that comes off of those sensors in a meaningful way -  to help us with things like capacity optimization, planning and cost reduction, and quality improvement for our customers.

Another aspect of the intelligent supply chain pursuit? Inventory optimization and other "customer value" projects.

The second portion of that manufacturing effort is connected to supply chain, so it's more transactional. That's where we're doing more of the machine learning activity right now. It's focused on things such as optimizing your inventory, by automatically determining what your safety stock should be. It's about minimizing and leaning out your value stream so you can deliver faster to our customers.

This is not a tiptoe into the AI kiddie pool:

We're starting to introduce some exciting new algorithms that are homegrown from our data scientists using, of course, open source models to scale that across the entire operation. So it's something we started out about 18 months ago. [At first], we didn't really think it was real, but it is very much real  - and driving results for our business.
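
For context on the inventory example Austin mentions, here is the classical textbook safety-stock calculation that such machine learning systems tune or replace; the demand figures and service level below are hypothetical.

# Classical safety stock: enough buffer to hit a target service level given
# demand variability over the replenishment lead time. Numbers are illustrative.
import math
from statistics import NormalDist

daily_demand_std = 40.0   # units/day, estimated from historical demand
lead_time_days = 7.0
service_level = 0.95      # target probability of not stocking out

z = NormalDist().inv_cdf(service_level)
safety_stock = z * daily_demand_std * math.sqrt(lead_time_days)
print(round(safety_stock))  # roughly 174 units
# A learned system can go further, e.g. forecasting demand per SKU and setting
# service levels dynamically instead of using one fixed z-value.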

Unibap AB - pursuing Industry 5.0 on earth, and in space

Next up? Frederick Bruhn from Unibap AB. Unibap is what you'd call a forward-thinking outfit. In a nutshell, they commercialize so-called "intelligent automation" - both on earth and in space. They've adopted the phrase Industry 5.0 to emphasize the shift from connected manufacturing (Industry 4.0) to intelligent automation.

AI-in-space sounds like a science fiction popcorn movie special. But as Bruhn told us, it's a reality today, and not as different from "on earth" as we might assume:

For us, automation is both in the factories of tomorrow, and in space. Because if you have a mining operation on the ground, or if you have a mining operation on the moon for instance, for us it's the same. So we actually build the server hardware for space, and on the ground, and we do have software to go with that.

One of the cases for Unibap: replacing humans in real-time production lines for painting and coating, assembly, welding, and drilling. No, there aren't any mining operations on the moon yet, but Bruhn says that will happen in about fifteen years. In the meantime, Unibap is supplying computers to customers like NASA, "for intelligent data processing in space."

Royal Dutch Shell on AI - serving customers better is the goal

Deval Pandya of Royal Dutch Shell told us that Shell already has predictive maintenance models in operation, "giving us insights which you can act on to make business decisions and operational decisions, that is generating immense value for us."

The renewable energy space is another AI playing field for Royal Dutch Shell, including solar batteries, and a project to optimize when to charge or discharge batteries. Many of these "AI" and/or IoT projects, despite their focus on automation and "smart" machines, ultimately come back to serving customers better. Pandya:

We've been driving this culture of customer-centricity, and Shell is one of the largest in energy retail. There is a lot of information, and we're just starting to extract value out of it.

Getting AI projects right - talent and culture over tech

On diginomica, we've criticized digital transformation efforts that lack buy-in and total organizational commitment. Yet there is a need for small wins. In that context, how do you get AI projects right? Austin told Henschen: no matter how sexy the tech is made to sound, it's just a tool.

I think that we have less of an AI strategy, than a commitment to delivering for our customers and our shareholders. So it's all about growth and innovation. AI/machine learning has become a tool that we're now more comfortable with. It's becoming a primary driver for helping us deliver on what our agenda is.

Pandya hit a similar note. Royal Dutch Shell has combined their digital technologies into a digital center of excellence:

A big portion is AI or machine learning, but a lot of it goes hand-in-hand. So in IoT, we are using a lot of this IoT data, and then applying AI to it.

I don't care how good your tech is, or how good your implementation partner is - you're still going to face adversity: your digital moment of truth. Henschen asked the panel: what is your biggest sticking point - talent, culture or technology platforms?

Halliburton's Dr. Satyam Priyadarshy says it's the talent. But for Halliburton, it's more of a training problem than a talent problem:

I call it talent transformation. Because we can't go and hire data scientists, right? A lot of us face the same challenge... We compete with Silicon Valley talent as well. The burning talent question for Halliburton is: can they transform the talent they have? The oil and gas industry has one of the most talented workforces scientifically, from geophysicists to geologists, right? So the question is: can they be turned and trained into data scientists? That has been very highly successful; we have been globally training people.

Two companies on the panel, Halliburton and Shell, use hackathons as a means to spark new hires, or upskill. As Priyadarshy shared, their hackathons are a crash course for developers on industry issues:

Our hackathons, or what we call boot camp workshops, are very contextualized and customized. Everybody can go and take a class on Coursera on AI, right? But how do you apply to oil and gas industry problems - that remains a challenge.

So Halliburton designs these boot camps to get geologists and drillers immersed in AI and IoT:

We have a big workforce of drillers; they are actually on the field. We are sitting in the office. So we have to understand their mindset. 

For Pandya, culture comes first, then talent, then tech: "culture sets the stage for everything else." But Pandya makes a critical point: if your workers don't feel free to fail, then your culture isn't ready for digital change.

This new technology is changing fundamentally the way we do business, the way we make decisions. And so it is a different mindset... The culture of failing fast and learning from failures is something which we have championed across Shell. It's okay to fail. And that's a huge, huge change in mindset, because when you are putting billions of dollars of investments [at stake], failure is usually not an option.

My take

Most of the panelists are investing in some type of AI/IoT COE (Center of Excellence). Give me a COE over a POC (Proof of Concept) anyway. A COE reflects a grittier commitment - and a recognition of the skills transitions needed. True, not all companies are able, or willing, to build data science teams, but it's instructive to see how approaches like COEs are holding up across projects.

A couple of panelists emphasized choosing the right implementation partner/advisory - that wisdom remains a constant. This panel was a welcome reminder that enterprise tech is at different maturity levels. It's our collective job to push beyond the marketing bombast and determine where we stand. Blockchain and quantum computing remain futuristic in an enterprise context, albeit with very different issues to conquer, whereas IoT, and now AI, have some live use cases to consider. Granted, none of the panelists offered up hard ROI numbers, but that's also a question that wasn't explored, and probably should have been.

Any discussion that comes back to data-powered business models must also return to issues of security, privacy, and governance. That wasn't a focus of this panel, but it was addressed in other Connected Enterprise sessions. My upcoming podcast with Steve Wilson on the persistent problem of identity will dig further.

Image credit - Photo of AI and IoT real world use case panel at Connected Enterprise 2019 by Jon Reed.

Disclosure - Constellation Research provided me with a press pass and hotel accommodations to attend Connected Enterprise 2019.


          

Indian start-ups can create 12 lakh direct jobs by 2025

 Cache   
New Delhi: India's start-up ecosystem has the potential to create up to 12.5 lakh direct jobs by 2025, from 3.9-4.3 lakh direct jobs in 2019, according to a new report from industry body National Association of Software and Services Companies (Nasscom).

The number of indirect jobs created by the start-up ecosystem in India can jump to 39-44 lakh by 2025 from 14-16 lakh jobs this year, said the report titled "India's Tech Start-up Ecosystem".

"India's talent base is expanding beyond large cities as fresh graduates are choosing to stay back in non-metropolitan cities. These individuals have an almost similar exposure to technologies via the Internet. This enables the founders to recruit quality talent at a relatively lesser cost - allowing better runway and also a base for growth," said the report.

The analysis suggests that the Indian start-up ecosystem has the potential to grow about four times by 2025, it added.

The research found that 18 per cent of all start-ups are now leveraging deep-tech, while fintech, enterprise tech, and retail tech remain the most mature sectors, with strong metrics across dimensions.

"There is increased activity in edtech, retail & retail tech, HR, and healthtech technology start-ups with significant improvement in sectors like agritech, aerospace, defence and space," said the report that Nasscom brought out in collaboration with global management and strategy consulting firm Zinnov.

Since 2014, the deep-tech start-up pool in India has grown at a 40 per cent compound annual growth rate. The study found that while Artificial Intelligence and Machine Learning are being deployed heavily in enterprise, fintech and healthtech, deep-tech adoption is actually pervasive across sectors.

Total investment in start-up ecosystem has increased by 16 per cent year-on-year in 2019 - during the January to August period, said the report.
          

DataToBiz - ETL Engineer (2-6 yrs) Chandigarh (Backend Developer)

 Cache   
We are a team of young and dynamic professionals looking for an exceptional Data Engineer to join our team in Chandigarh. We are trying to solve some very exciting business challenges by applying cutting-edge Big Data, Machine Learning and Deep Learning Technologies. Being a consulting and services startup we are looking for quick learners who can work in a cross-functional team of Consultants,...
          

Prof G. Mugesh from IISc and five others win Infosys Prize 2019

 Cache   
Read time: 5 mins
The Infosys Prize winners for 2019. Top (L–R): Manu V. Devadevan, G Mugesh, and Manjula Reddy. Bottom (L–R): Siddhartha Mishra, Anand Pandian, and Sunita Sarawagi.

The Trustees of the Infosys Science Foundation (ISF) announced the winners of the Infosys Prize 2019 at an event held today at the Infosys campus in Electronic City, Bengaluru. The awards were presented in six categories — Engineering and Computer Sciences, Humanities, Life Sciences, Mathematical Sciences, Physical Sciences and Social Sciences. The winners include Dr. Manu V. Devadevan, Dr. G Mugesh, Dr. Manjula Reddy, Dr. Siddhartha Mishra, Dr. Anand Pandian and Dr. Sunita Sarawagi.

The Infosys Prize 2019 for Engineering and Computer Science was awarded to Dr Sunita Sarawagi, Institute Chair Professor, Computer Science and Engineering, Indian Institute of Technology, Bombay. She was awarded for her research in databases, data mining, machine learning and natural language processing, and for important applications of these research techniques. The prize recognizes her pioneering work in developing information extraction techniques for unstructured data. Prof. Sarawagi’s work has practical applications in helping clean up unstructured data like addresses on the web and in repositories, easing the handling of queries.

In the Humanities category, the Prize was awarded to Dr Manu V. Devadevan, Assistant Professor, School of Humanities and Social Sciences, Indian Institute of Technology, Mandi. His work critically reinterprets much of the conventional wisdom about the cultural, religious and social history of the Deccan and South India. Dr. Devadevan's primary research interests include political and economic processes in pre-modern South India, literary practices in South India and the study of ancient inscriptions from the region.

Dr Manjula Reddy, Chief Scientist, Centre for Cellular and Molecular Biology (CCMB), Hyderabad, received the award in the Life Sciences category for her groundbreaking discoveries concerning the structure of cell walls in bacteria. Dr. Reddy and her colleagues have revealed critical steps of cell wall growth that are fundamental for understanding bacterial biology. This work could potentially help in creating a new class of antibiotics to combat antibiotic resistant microbes. 

The 2019 prize for Mathematical Sciences was awarded to Dr Siddhartha Mishra, Professor, Department of Mathematics, ETH Zürich, for his outstanding contributions to Applied Mathematics, particularly for designing numerical tools for solving problems in the real world. Prof. Mishra's work has been used in climate models, astrophysics, aerodynamics, and plasma physics. He has produced computer programs for complicated realistic problems such as tsunamis generated by rock slides, and waves in the solar atmosphere.

In the Physical Sciences category, Dr G. Mugesh, Professor, Department of Inorganic and Physical Chemistry, Indian Institute of Science (IISc), Bengaluru, received the award for his pioneering work in the chemical synthesis of small molecules and nanomaterials for biomedical applications. His work has contributed to the understanding of the role of trace elements, selenium and iodine, in thyroid hormone activation and metabolism, and this research has led to major medical advances.

The Infosys Prize 2019 in the Social Sciences category went to Dr Anand Pandian, Professor, Department of Anthropology, Krieger School of Arts & Sciences, Johns Hopkins University, USA, for his work on ethics, selfhood and the creative process. Prof. Pandian's research encompasses several themes such as cinema, public culture, ecology, nature and the theory and methods of anthropology.  His writing pushes the boundaries of how anthropologists render into words the worlds they encounter. 

(L–R): Salil Parekh – Chief Executive Officer and Managing Director, Infosys Limited; S. Gopalakrishnan (Kris) – Co-founder, Infosys Limited, Co-founder, Axilor Ventures, Trustee - Infosys Science Foundation; Nandan Nilekani - Co-founder and Non-Executive Chairman of the Board, Infosys Limited, Trustee – Infosys Science Foundation; S.D. Shibulal - Co-founder, Infosys Limited, Co-founder, Axilor Ventures Private Limited, President – Board of Trustees, Infosys Science Foundation; Narayana Murthy – Founder, Infosys Limited, Trustee – Infosys Science Foundation; K. Dinesh – Co-founder, Infosys Limited, Trustee – Infosys Science Foundation; Srinath Batni – Former Director, Infosys Limited, Co-founder Axilor Ventures, Trustee – Infosys Science Foundation. Photo: Infosys Science Foundation.

Addressing the gathering, Mr. S. D. Shibulal, Co-Founder, Infosys Limited and the President of the Infosys Science Foundation, said, “The Infosys Prize continues to recognize exemplary work in scientific research and enquiry. Many Infosys Prize laureates have gone on to contribute significantly in key areas like healthcare, genetics, climate science, astronomy and poverty alleviation, amongst other things. Their work has immediate implications for the human race and the planet. We hope it catalyzes social development.” 

Mr. N. R. Narayana Murthy, founder of Infosys and trustee of the Infosys Science Foundation, called on the need for helping youngsters pursue fundamental research enthusiastically. “They should be encouraged and equipped to become contributors to solving huge problems that confront us every day. I want India to be a place where discovery and invention happen every month,” he said.

There was also a brief interaction with the media and students, where some of the trustees fielded questions related to the prize. Ms. Nandita Jayaraj, an independent freelance science journalist, while congratulating the winners, noted that there was better representation this time. She further mentioned that she would like to see the same representation among the jury chairs as well, to which Mr. Kris Gopalakrishnan responded that 30% of the jury committee members were women.

The winners will be awarded on 7th January 2020 at a separate function at Infosys, Bengaluru. The award includes a pure gold medal, a citation and a prize purse of USD 100,000 (or its equivalent in Rupees). 

(With inputs based on a press release from the Infosys Science Foundation).


          

Machine Learning, Archives and Special Collections: A High-Level View, Part 1

 Cache   

This article by Clifford A. Lynch (pictured; see also https://www.cni.org/about-cni/staff/clifford-a-lynch ), executive director of the Coalition for Networked Information (CNI, https://www.cni.org/ ) and a pioneer of research on artificial intelligence and digital information in cultural institutions (primarily libraries), was published on 2 October 2019 on the blog of the International Council on Archives ( https://blog-ica.org/ ).
From the blog's editors: This article was first published in issue 38 (September 2019) of the International Council on Archives (ICA) Flash bulletin. To read more of the material in that issue, which is devoted to artificial intelligence in relation to archives, ICA members can use the link https://www.ica.org/en/member/login?destination=node/18294 (or the direct link https://ica.us13.list-manage.com/... ). If you are not an ICA member, you can use the link https://www.ica.org/en/join-international-council-archives to join us.
Extravagant predictions are currently being made about what the popular media call "artificial intelligence" (AI). It is claimed that AI will eliminate millions of jobs; lead to the widespread adoption of driverless cars; and take over medical diagnosis and the prescription of treatment, as well as decision-making in business and government. There is a sense that AI will somehow - exactly how, no one can yet clearly say - transform the work of creating and managing knowledge, as well as the work of organizations that preserve cultural memory. This short article is an attempt to give a reasonably sober and concrete picture of the changes that could realistically occur over the next decade (without going into technical detail), and of what those changes might mean for the practical work of archives and special collections and, more broadly, for cultural memory institutions.

Dr Clifford A. Lynch, executive director of the Coalition for Networked Information (CNI), at the 2018 Jisc/CNI conference (Photo: Jisc)

In recent years remarkable progress has been made, above all in a rather specific and limited area: machine learning. Put simply, machine learning uses sets of examples to train software to recognize typical situations and act accordingly. This approach has, for example, allowed a computer program to become world champion at Go, a game most consider far more complex than chess, and has enabled computers to learn to play various video games brilliantly. Software is now on a par with humans at analyzing various kinds of medical images to detect certain diseases. Many of the predicted and most anticipated breakthroughs combine machine learning with various forms of robotics and "computer vision" (which in practice means a broad range of image and other environmental sensors), especially in applications such as autonomous cars, trucks, ships, drones, and military equipment.
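
To make the idea of "learning from examples" concrete, here is a minimal, purely illustrative sketch (an editorial addition, not from Lynch's text), assuming Python with scikit-learn: a small classifier is shown labelled images of handwritten digits and is then checked on examples it has never seen.

    # Minimal sketch: train software to recognise "typical situations" from labelled examples.
    # Assumes Python with scikit-learn installed; the digits dataset stands in for any
    # collection of labelled examples (images, documents, sound clips, ...).
    from sklearn.datasets import load_digits
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier

    digits = load_digits()                      # ~1800 labelled 8x8 images of handwritten digits
    X_train, X_test, y_train, y_test = train_test_split(
        digits.data, digits.target, test_size=0.25, random_state=0)

    model = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
    model.fit(X_train, y_train)                 # "training": show the software the labelled examples
    print(model.score(X_test, y_test))          # how well it generalises to unseen examples

The same train-on-examples pattern, scaled up enormously, underlies the translation, OCR, speech-to-text and image classification applications discussed below.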

There are three motivations for adopting machine learning: cutting costs by taking people out of the loop (autonomous vehicles), overcoming the limits of human capability (games), and doing things that today cannot be done at the desired scale and acceptable cost (for example, pervasive surveillance). The last of these also opens up opportunities for organizations engaged in preserving cultural memory.

My comment: For my part, I would call these "opportunities for a return" and would rank them, in order of decreasing return, as follows:
  • Taking up entirely new and very attractive, or absolutely essential, activities that are impossible without such technologies - in this case the costs are simply the price of entry into the field, and familiar criteria such as TCO (total cost of ownership) make little sense;

  • The opportunity for commercial enterprises to noticeably increase the profits from existing activities;

  • Achieving much greater efficiency, economies of scale and/or risk reduction in existing activities;

  • Petty, often questionable savings and rationalizations (the analogue is saving toner when implementing an electronic document management system).
From my point of view, removing the human is not an end in itself. Rather the opposite: the most interesting prospects, it seems to me, belong to "hybrid" solutions that draw on the strengths of both humans and machines and mutually compensate for their weaknesses and shortcomings.

Among the applications directly relevant to the work of memory institutions in which machine learning has produced breakthroughs are translation from one language to another; conversion of printed or handwritten text into fully recognized machine-readable form (sometimes still called, by tradition, "optical character recognition"); speech-to-text conversion; classification of images by their content (for example, picking out all images that contain dogs, or listing all the objects the software can recognize in an image); and, as a particularly important special case of pattern recognition, recognition of human faces.

Progress in all these areas is being driven and steered by the government and/or commercial sectors, which are infinitely better funded than the cultural memory sector. For example, many nation states and large corporations have a very strong interest in the development of facial recognition technology. The key strategy for the cultural memory sector will be to piggyback on these advances, adapting and tailoring the technologies to its own needs.

(To be concluded; see http://rusrim.blogspot.com/2019/10/2_31.html )

Clifford A. Lynch

Source: ICA blog
https://blog-ica.org/2019/10/02/machine-learning-archives-and-special-collections-a-high-level-view/

          

Lead Platform Engineer - Machine Learning | Parks, Experiences and Products

 Cache   
Lake Buena Vista, Florida, Responsibilities: Lead a team of engineers to design and develop production grade frameworks for feature engineering, model architecture selection, model training, model interpretability, A/B test
          

Mobile Product Designer at BetterMe, Kyiv

 Cache   

Required skills

We are looking for a Mobile Product Designer (UI/UX).

ABOUT YOU:
-You would be focusing on both UI and UX parts — still we are primarily interested in your UI mastery;
-Knowledge of app usage patterns depending on the platform;
-Deep knowledge of HIG / Material design guidelines;
-Experience in preparing design assets, animations and specification;
-Strong experience in prototyping and designing interfaces that suit specific purpose;
-Keen eye for details and high-quality designs.
-Ability to set up processes supporting effective communication with development team;
-Basic knowledge of app development processes;
-Passion for pixel-perfect interface design.

What we offer

We provide you with everything you need to stay focused on what matters:

-Competitive salary that will help you focus on your projects and professional growth.
-Professional growth. The development team is Senior / Middle level only (90% / 10%). Internal courses and seminars, a corporate library, English lessons and the ability to attend key events worldwide. You will also have a Study Day, which you can devote to learning new technologies.
-No legacy code. We work only with new technologies and new frameworks.
-Comfortable work environment. A spacious office (summer terrace, free lunches, Sony PlayStation 4 and other sorts of goodies) located a 5-minute walk from Taras Shevchenko metro station. We will provide you with the equipment you need to work comfortably and perform your tasks.
-Flexible work hours. You decide your work hours.
-Sport and Fitness. Medical insurance, corporate doctor visits and sports activities of your choice.
-Rest. Parties, team buildings, travel abroad and more.

Responsibilities

YOUR IMPACT:
-Design and re-design a variety of Health & Fitness apps;
-Create hypotheses and design highly usable interfaces;
-Create wireframes, visual mockups, and prototypes to support design decisions;
-Be part of the Product team, supervise implementation of your ideas;
-Create breath-catching designs and animations for BetterMe mobile apps and websites.

And finally:
With your ideas, our business will reach new heights and will make our users even healthier, sportier, happier and better :-)

About the project

BetterMe is one of the TOP-5 favorite apps in the Health & Fitness category in the USA. Over the last 1.5 years, the BetterMe family of apps has been downloaded more than 15 million times (iOS + Android). We have a 5 mln women community on FB - more than any other competitor has. The apps have a 4.5* rating on the iOS App Store. This all became possible thanks to a team of 75 world-class, talented professionals in Kyiv.

We're among the largest partners of Facebook / Google / Snapchat / Pinterest / Twitter in CEE. We have huge experience in user acquisition / analytics / ROI calculations, and we work with very advanced technologies (AR, VR, machine learning, etc.).

BetterMe turns a smartphone into a personal trainer that understands our customers' needs and helps them reach their desired results. By analyzing user behavior, we create personalized user data profiles, ensuring reliable and long-lasting relationships between us.

There are several successful products in the family BetterMe:

BetterMe: Weight Loss Workouts
BetterMen: Workouts
BetterMe: Meditation
BetterMe: Yoga
BetterMe: Walking

Our mission is to help people create happiness within. Having your mind and body in harmony is vital. There are 500 million people in the world who value a healthy lifestyle. We believe that every one of those people should be a BetterMe user. We plan to capture the growth of the global health market, and our ideal candidate will focus on building the largest health company in the world.


          

MLPerf Releases Over 500 Inference Benchmarks

 Cache   

Today the MLPerf consortium released over 500 inference benchmark results from 14 organizations. "Having independent benchmarks help customers understand and evaluate hardware products in a comparable light. MLPerf is helping drive transparency and oversight into machine learning performance that will enable vendors to mature and build out the AI ecosystem. Intel is excited to be part of the MLPerf effort to realize the vision of AI Everywhere,” stated Dr Naveen Rao, Corp VP Intel, GM AI Products.

The post MLPerf Releases Over 500 Inference Benchmarks appeared first on insideHPC.


          

False News: Are Smart Bots the Answer?

 Cache   
To us, this comes as no surprise—Axios reports, “Machine Learning Can’t Flag False News, New Studies Show.” Writer Joe Uchill concisely summarizes some recent studies out of MIT that should quell any hope that machine learning will save us from fake news, at least any time soon. Though we have seen that AI can be […]
          

Microsoft SQL Server 2019 offers data virtualization

 Cache   
During the Ignite conference in Orlando, Microsoft presented SQL Server 2019. Microsoft positions SQL Server 2019 as a unified data platform on which enterprise data can be stored in a data lake and queried with SQL and Spark.

This version extends the capabilities of previous releases, such as the ability to run on Linux and in containers and the PolyBase technology for connecting to big data storage systems. SQL Server 2019 uses PolyBase v2 for full data virtualization and combines the Linux/container compatibility with Kubernetes to support the new Big Data Clusters technology.

Big Data Clusters implements a Kubernetes-based multi-cluster deployment of SQL Server and combines it with Apache Spark, YARN and the Hadoop Distributed File System to deliver a single platform that facilitates OLTP, data lakes and machine learning. It can be deployed on any Kubernetes cluster, on-premises and in the cloud, including Microsoft's own Azure Kubernetes Service.

With SQL Server 2019, Microsoft also wants to simplify the ETL process by delivering data virtualization. Using the T-SQL language, applications and developers can access both structured and unstructured data from sources such as Oracle, MongoDB, Azure SQL, Teradata and HDFS.

Azure Data Studio

Microsoft also offers the GUI tool Azure Data Studio, a cross-platform database tool for data professionals. Azure Data Studio was previously known in preview as SQL Operations Studio, and offers a modern editor experience with IntelliSense, code snippets, source control integration and an integrated terminal. With Azure Data Studio, Big Data Clusters can be accessed through interactive dashboards, and it also offers SQL and Jupyter Notebook access.

Read all the details in the extensive blog post by Asad Khan, Partner Director of Program Management, SQL Server and Azure SQL.

More information: Microsoft
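
To illustrate what data virtualization looks like from the client side, here is a hedged sketch in Python using pyodbc; the server, credentials and the external table name are hypothetical placeholders, not details from the article. The point is that a PolyBase external table over, say, Oracle or HDFS is queried with ordinary T-SQL, just like a local table.

    # Hypothetical example: querying a PolyBase external table from Python via pyodbc.
    # Server, database, credentials and table names are placeholders.
    import pyodbc

    conn = pyodbc.connect(
        "DRIVER={ODBC Driver 17 for SQL Server};"
        "SERVER=sql2019.example.com;DATABASE=SalesDW;"
        "UID=analyst;PWD=secret"
    )
    cursor = conn.cursor()

    # dbo.ext_customers is assumed to have been created with CREATE EXTERNAL TABLE
    # over an external source; to the client it behaves like any other table.
    cursor.execute(
        "SELECT TOP 10 customer_id, region FROM dbo.ext_customers WHERE region = ?",
        "EMEA",
    )
    for row in cursor.fetchall():
        print(row.customer_id, row.region)

    conn.close()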
          

Lead Platform Engineer - Machine Learning | Parks, Experiences and Products

 Cache   
Lake Buena Vista, Florida, Responsibilities: Lead a team of engineers to design and develop production grade frameworks for feature engineering, model architecture selection, model training, model interpretability, A/B test
          

AI Improves Quality and Usability of Optoacoustic Imaging

 Cache   

In optoacoustics, image quality depends on the number and distribution of sensors used by the device; the more sensors and the more broadly they are arranged, the better the quality.

To improve image quality in low-cost optoacoustic devices with only a small number of ultrasonic sensors, researchers at ETH Zurich and the University of Zurich turned to machine learning. They developed a framework for the efficient recovery of image quality from sparse optoacoustic data using a deep convolutional neural network and demonstrated their approach with whole body mouse imaging in vivo.

To generate accurate, high-resolution reference images for training, the team began by developing a high-end optoacoustic scanner with 512 sensors. An...
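
To give a feel for the kind of training loop such an approach implies, here is a minimal, purely illustrative Python/PyTorch sketch of an image-to-image network that learns to map sparse reconstructions to high-quality references; the tiny architecture, image size and random tensors are placeholder assumptions, not the authors' actual network or data.

    # Illustrative sketch only: map low-quality (sparse-sensor) reconstructions to
    # high-quality (512-sensor) reference images with a small convolutional network.
    import torch
    import torch.nn as nn

    net = nn.Sequential(                          # tiny image-to-image CNN
        nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
        nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
        nn.Conv2d(32, 1, 3, padding=1),
    )
    optimizer = torch.optim.Adam(net.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()

    sparse = torch.randn(8, 1, 64, 64)            # stand-in for sparse reconstructions
    reference = torch.randn(8, 1, 64, 64)         # stand-in for matching reference images

    for step in range(100):
        optimizer.zero_grad()
        loss = loss_fn(net(sparse), reference)    # penalise deviation from the reference
        loss.backward()
        optimizer.step()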
          

Applying NLP in Java: All From the Command-Line

 Cache   
Learn more about NLP in Java!

We are all aware of machine learning tools and cloud services that work via the browser and give us an interface we can use to perform our day-to-day data analysis, model training, evaluation, and other tasks with varying degrees of efficiency.

But what would you do if you wanted to run these tasks on or from your local machine, or on infrastructure available in your organization? And what if the available resources do not meet the prerequisites for doing decent end-to-end data science or machine learning work? That's when access to a cloud-provider-agnostic, deep learning management environment like Valohai can help. And to add to this, we will be using the free tier that is accessible to anyone.


          

How the cloud can help you see data with new eyes

 Cache   

Contributed by: Tony Baer, Principal at dbInsight LLC

For many organizations, the cloud is synonymous with transformation. It's also associated with speed, agility, flexibility, and value, and it's reasonable to ask, what could the cloud mean for your organization? Managed properly, the cloud could be all these things. And it can give your organization a new way to see your data, discover new insights, and unlock endless possibilities.

Register now
 

Organizations today are looking at the cloud as either the future default option for application or database modernization or as a major pillar as part of a hybrid strategy for leveraging the best of both worlds.

So, how can the cloud help your organization see data with new eyes?

The cloud can be your gateway to new services such as streaming analytics or machine learning that could help your organization respond intelligently and quickly to events that change the actions of customers, trading partners, or cyber attackers.

The cloud can help your organization incorporate new sources of data such as social network posts, logs, mobile device data, and IoT data to see new insights and create new business opportunities for your customers over competitors. The ability to onramp new applications utilizing this data without the budgetary and backlog hurdles of on-premise deployment accelerates time to value. This enables new predictive maintenance applications driven by IoT data, supply chain optimization, or next-best offers to shoppers via location-based data, sentiment analytics through social media activity, and so on. The cloud also makes possible innovative approaches to managing data such as autonomous database services that deliver on the promise of IT simplicity.

So how do you get there from here?

There is no single recipe that applies across the board to every organization or use case. Should your organization adopt an approach that moves business as usual to the cloud, minimizing disruption, or should it seek transformational change?

Among the factors or constraints may be internal policies or external regulatory mandates that limit or preclude placing data in the public cloud. In other cases, it may be a matter of balancing the need to prevent disruption of existing, always-on back-office systems against the need to reduce cost or operational complexity. In still other cases, it may be the need to access new cloud-only services that are not available for on-premise deployment. For most organizations, the answer will likely be a hybrid strategy.

So, what will the cloud mean to your organization? How could it allow you to see and use data differently? How should your organization balance the drivers and constraints to determine the most optimal adoption strategy? How can you ensure customer privacy and protection on a global scale to allow for easier compliance and protect your organization from cybersecurity threats?

Join me at Oracle Cloud Leadership Summit, where I'll share my 4 key takeaways about the enterprise cloud market that will change the way you see your data forever. Please join me in a city near you.  

Register Now

Photo by eberhard grossgasteiger on Unsplash

 


          

Artificial Intelligence Bias: On Trust, Complexity, and Diversity

 Cache   
Just as in computers “Garbage In = Garbage Out,” so also in Machine Learning: “Bias In = Bias Out.” The Microsoft Twitter chatbot Tay had to be hastily withdrawn after it reacted to trolls with racist and misogynist tweets. Presumably, the bot developed this artificial intelligence bias from its machine learning basics.
          

Supervised learning for distribution of centralised multiagent patrolling strategies. , lundi 18 novembre à 14h.

 Cache   
For nearly two decades, patrolling has received significant attention from the multiagent community. Multiagent patrolling (MAP) consists in modelling a patrol task to optimise as a multiagent system. The problem of optimising a patrol task is to distribute agents over the area to be patrolled, in space and time, as efficiently as possible, which constitutes a decision-making problem. A range of algorithms based on reactive, cognitive, reinforcement learning, centralised and decentralised strategies, among others, have been developed to make such a task ever more efficient. However, the existing patrolling-specific approaches based on supervised learning are still at a preliminary stage, although a few works have addressed this issue. Central to supervised learning, which is a set of methods and tools that allow inferring new knowledge, is the idea of learning a function mapping any input to an output from a sample of data composed of input-output pairs; learning, in this case, enables the system to generalise to new data never observed before. Until now, the best online MAP strategy, namely one without precalculation, has turned out to be a centralised strategy with a coordinator. However, as for any centralised decision process in general, such a strategy is hardly scalable. The purpose of this work is therefore to develop and implement a new methodology aimed at turning any high-performance centralised strategy into a distributed strategy. Indeed, distributed strategies are by design resilient, more adaptive to changes in the environment, and scalable. In doing so, the centralised decision process, generally represented in MAP by a coordinator, is distributed into the patrolling agents by means of supervised learning methods, so that the agents of the resulting distributed strategy each tend to capture a part of the algorithm executed by the centralised decision process. The outcome is a new distributed decision-making algorithm based on machine learning. In this thesis, therefore, such a procedure for distributing a centralised strategy is established, then concretely implemented using artificial neural network architectures.
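
As a rough illustration of the distribution idea (not the architecture actually used in the thesis), the Python/PyTorch sketch below trains an agent policy network on logged (observation, action) pairs produced by a centralised coordinator, so that each agent can later act without the coordinator; the observation size, action count and random data are placeholder assumptions.

    # Illustrative sketch: imitate a centralised coordinator from input-output pairs.
    import torch
    import torch.nn as nn

    obs_dim, n_actions = 16, 5                    # assumed sizes, for illustration only
    policy = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(),
                           nn.Linear(64, n_actions))
    optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()

    observations = torch.randn(256, obs_dim)                    # what an agent observed
    coordinator_actions = torch.randint(0, n_actions, (256,))   # what the coordinator decided

    for epoch in range(50):
        optimizer.zero_grad()
        loss = loss_fn(policy(observations), coordinator_actions)
        loss.backward()
        optimizer.step()
    # At patrol time, each agent runs its own copy of `policy` with no central coordinator.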
          

DIGI-TECH PHARMA & AI 2020

 Cache   
The 4th Annual Digi-Tech Pharma & AI conference brings with it even more interactive sessions, expert speakers, and senior professionals and decision makers from the leading pharma, bio-tech and healthcare industries. Meet the decision makers, benchmark, and learn from real-life use cases to drive organizational change and to understand new cutting-edge technologies and practical solutions. In this 4th edition, as we explore the novel technologies and developments reforming the pharmaceutical industry, we also dive deep into the implementation of and advances in machine learning, deep learning, artificial intelligence, informatics and data science, which have redefined how new drugs are developed, diseases are tackled, healthcare is improved and much more. The enhancements in data management and data integration are improving both the speed and quality of drug discovery and many clinical trial processes. To stay at the forefront, partnership and collaboration with healthcare providers is a must for pharmaceutical companies, and these partnerships will also lead to massive advances in R&D, using artificial intelligence in genomics and precision medicine to develop a deep understanding of the root causes of diseases. The combination of AI, big data and IoT technologies is creating new innovations, and other prominent technologies like cloud computing, augmented reality, virtual reality and blockchain are being used extensively in the pharmaceutical industry's digital transformation. It gives us great pleasure to welcome you to this international pharmaceutical technology conference, the 4th Annual Digi-Tech Pharma & AI 2020.

KEY HIGHLIGHTS:
• Digital Technology trends in Pharma and Bio-Tech industry
• Adopting AI and Machine Learning to unlock the full potential of Pharma
• How pharma can integrate into the digital health environment
• Collaborative Innovation: Finding the right partners to leverage new technologies in Pharma
• Patient Centred Drug Discovery
• Applying AI to the design of lead compounds for new drugs
• Algorithms and Models for drug discovery
• AI and ML for Target Identification & Validation in Drug Discovery
• Advancing Drug Discovery through quantum computing
• Genomics & Drug Discovery
• Virtual and Hybrid Clinical Trials
• R&D Use Cases
• Implementation and relevance of FAIR data principles in Pharma R&D
• Harnessing Data Science for Drug Combination Discovery
• AI and Big Data: A powerful combination for future growth
• The use of AI to make sense of clinical data
• Use of big data for precision medicine
• Multi-omics & clinical data to unlock the power of complex datasets
• Integration and Visualization of translational Medicine Data
• Data & Healthcare Analytics
• The Growing Importance of Real-World Data
• RWD for clinical research and drug development
• RWE and RWD to support regulatory decision making
• Real-World Data Science to advance Patient Care
• Managing real world data governance
• Healthcare & Medical Technology
• Adoption of IoT in Pharma
• Potential of Cloud Computing in Pharma
• Impact of Digital Health in Pharma
• Digital Health strategy and Patient centric Clinical Trials
• The convergence of Digital Therapeutics and Pharma in Digital Health
• How pharma-health collaboration works on innovating drug discovery & patient experience
• Blockchain and AI-based Platform

WHO SHOULD ATTEND THE CONFERENCE: This event is designed for senior level attendees from various companies including pharmaceutical, biotechnological, biopharmaceutical, CRO’s, Diagnostics, solution providers and government institutions.
Attendees include Chief Data Officers, VPs, GMs, Directors, Heads and Managers of: • Drug Delivery Innovation • R&D IT • Big Data Solutions • AI/ Machine Learning • Cognitive Computing • Digital innovative strategic planning • Genomics & Drug Discovery • Virtual and Hybrid Clinical Trials • Real-World Data • Real-World Evidence • Data Management & Analytics • Data Sciences • Clinical trials and data management • Translational informatics • Data storage and analysis • Enterprise Architecture • Information Systems • Contract outsourcing service providers • Digital Health • Healthcare IT • Computational Biology • Multi-channel Management • Blockchain and AI-based Platform
          

Looking for End-to-End SharePoint Development Services

 Cache   
Microsoft SharePoint is a widely used enterprise-level platform, adopted by organizations for efficient collaboration, data storage, and document/project management. SharePoint simplifies business processes, provides a secure internal information sharing platform, and enhances productivity. Ranosys, a pioneer Microsoft SharePoint development company, provides custom SharePoint solutions in UK (London) for all your business needs. Ranosys is proactively meeting the diversified challenges in a cross section of industries by offering comprehensive solutions in the areas of: Enterprise E-Commerce (Magento Commerce Solution Partner) Sharepoint Development Services Mobile App Development (OutSystems Low-Code Platform Development, iOS & Android Mobile App, Cross Platform Mobile App, IoT Mobile App) Salesforce Consulting Services Web & CMS Development (Drupal, PHP, WordPress, Mean Stack) AI/Machine Learning (Chatbot Development)
          

We offer the training solutions for the technologies

 Cache   
Learnmyit.com, No.1 Online Training Center by Professional & Certified Consultants with real dedication. We offer the training solutions for the technologies like, Oracle, Data Warehousing, SAP, Salesforce, Testing, Data Science & Machine Learning and etc.,
          

GRADUATE STUDENT - SIGNAL TECHNOLOGIES 16-01020

 Cache   
TX-San Antonio, Job Summary: Join our Defense and Intelligence Solutions Division as a Graduate Student Intern! Support your team with several diverse tasks related to large scale testing and analysis of machine learning algorithms in complex environments. Assist your team and collaborate on signal processing and communication system projects. Pursue projects related to digital communications, machine learning, a
          

3rd Annual Pharma Supply-Chain & Security World 2020 “Supply-Chai

 Cache   
Corvus Global Events invites you to Pharma Supply Chain & Security World 2020 - the Supply-chain, Drug Serialization & Anti-Counterfeiting conference, which will have Pharma industry experts sharing the various challenges faced, new strategies, case studies and uses of innovative ideas; the conference will also offer opportunities to encourage partnerships and collaborations. In this conference you'll not only discover innovative technologies, transformation strategies and collaboration methods, but how best to implement them to optimise your supply chain processes and strategies for drug anti-counterfeiting.

Key Highlights:
• Streamlining your supply chain
• End to End supply chain
• Designing an optimal supply chain network
• Developing a sustainable Serialization strategy
• Integration of track & trace solutions in production and supply chain
• Smart Packaging, Labelling, Artwork, Warehouse & Logistics Serialization
• Data and Analytics driven approach to increase supply chain agility
• Adoption of Blockchain in pharma supply chain
• Brand Protection & Securing supply chain integrity
• Global enterprise level solutions for anti-counterfeiting
• Tackling pharmaceutical crime - initiatives at multinational, EU and national level
• IP and regulatory enforcement
• Synchrony of the Pharma Industry and professional bodies against counterfeiting
• Understanding and meeting the needs of DSCSA, EU FMD and other global regulations
• Strategies for public awareness and patient protection
• Best practices to protect your brand
• The role of the Internet in aiding the counterfeiters – How to overcome the situation?
• Effective Authentication Technologies
• Best selection of tamper-evident features
• Developing a RMP for your supply chain to protect your Brand, Product and Patient Safety
• The need to understand and adopt new technologies like IoT, Analytics, Blockchain, Machine Learning and Artificial Intelligence
• Case study: How companies are structuring their counterfeiting efforts and departments?

Who should attend the conference? Attendees include GMs, VPs, Directors, Heads and Managers of: • Pharmaceutical manufacturers and distributors • Supply Chain management companies • Healthcare professionals • Pharmacists • Serialization, Track and Trace solution providers • Brand protection, enforcement, security, integrity and management companies • Healthcare research organizations • Pharmaceutical industry professional associations • Anti-counterfeiting organizations • Packaging & labelling companies • Authentication technology suppliers • Contract manufacturing organization (CMO) • IT service providers • Intellectual Property, investigators and Trademark council • Drug regulatory agencies and customs
          

Microsoft and Nokia collaborate to accelerate digital transformation and Industry 4.0 for communications service providers and enterprises

 Cache   

 Microsoft and Nokia announced a strategic collaboration to accelerate transformation and innovation across industries with cloud, Artificial Intelligence (AI) and Internet of Things (IoT). By bringing together Microsoft cloud solutions and Nokia's expertise in mission-critical networking, the companies are uniquely positioned to help enterprises and communications service providers (CSPs) transform their businesses. As Microsoft's Azure, Azure IoT, Azure AI and Machine Learning solutions...

Read the full story at https://www.webwire.com/ViewPressRel.asp?aId=249600


          

Best Android App Development Company in London,UK

 Cache   
Being one of the top Android app development companies in London, UK, DxMinds offers creative and customized mobile app development services on all platforms including iOS, Android and Windows. As the best mobile application development company in London, UK, we are ready to help businesses of all sizes get more value through trending technologies including AI, Chatbot, BlockChain, Machine Learning, IOT, AR/VR/MR, React Native and many more. Visit https://dxminds.com/top-7-mobile-app-development-companies-in-manchester-cambridge-uk/
          

Lecture - Ariel Shamir - The Face Of Art: Landmark Detection & Geometric Style in Portraits

 Cache   
Short abstract of the lecture: Neural networks' abilities have been utilised in many fields of science and engineering. Recently, they have also been used in art and design to stylise photographs guided by artworks of various artists. However, such stylisation mostly concentrates on the texture and the colour of the artworks. In this work we try to capture the geometric style of artists using neural networks. We concentrate on portraits and propose a new method for landmark detection in paintings that allows us to analyse and capture geometric deformations in face paintings, and define a geometric style for artists. We demonstrate our technique by creating average portraits for artists as well as defining a geometry-aware portrait stylisation algorithm. This work was published in SIGGRAPH 2019 and is joint work with Jordan Yaniv and Yael Newman. See http://www.faculty.idc.ac.il/arik/site/foa/face-of-art.asp

Bio: Ariel Shamir is the Dean of the Efi Arazi School of Computer Science at the Interdisciplinary Center in Israel. He received his Ph.D. in computer science in 2000 from the Hebrew University in Jerusalem. He spent two years as a PostDoc at the University of Texas in Austin. Shamir has numerous publications and a number of patents. He is currently an associate editor for ACM Transactions on Graphics, Graphical Models and Computational Visual Media, and was an associate editor for the Computers and Graphics journal (2010-2014) and IEEE Transactions on Visualization and Computer Graphics (2015-2017). He also served on the program committee of many leading international conferences, including SIGGRAPH, SIGGRAPH Asia, and Eurographics. Shamir was named one of the most highly cited researchers on the Thomson Reuters list in 2015. He has broad commercial experience consulting for various companies including Disney Research, Mitsubishi Electric, PrimeSense (now Apple), Verisk and more. Shamir specializes in geometric modeling, computer graphics, image processing and machine learning. He is a member of the ACM SIGGRAPH, IEEE Computer, AsiaGraphics and EuroGraphics associations.

19.11.2019


          

Episode 80 – Computers, Consciousness & Cockroaches (with Byron)

 Cache   
This week the Po & the Pro get down and nerdy with our guest, Byron, a (real) professor of computer science. Byron breaks down machine learning as well as the advents and limitations of artificial intelligence. We also discuss whether or not Instagram is secretly recording our conversations and if machines will take over the world a la Terminator 2.
          

Electric Vehicle driving Data

 Cache   
Learn how our specialist telematics devices have collected over 500,000km of real world electric vehicle driving data, and how we use machine learning to create bespoke solutions for customers looking to switch to electric vehicles.
          

Top Software Testing Instititue, Selenium Course - Quality Software Technologies (Thane-Kalyan) (Thane (W))

 Cache   
Quality Software Technologies - (Thane) An ISO 9001:2015 Certified Institute Expertise in Training of Software Testing, Selenium , ISTQB, JAVA Programming, Python Programming & Machine Learning. Training Methods are based on Real Time Industry E...
          

Carbon Quantification Director - Indigo - Boston, MA

 Cache   
At Indigo, our mission is: Experience with data analytics, modeling, statistics, and/or machine learning preferred. C-level executives and board members).
From Indigo - Fri, 09 Aug 2019 20:54:53 GMT - View all Boston, MA jobs
          

Singa becomes a top-level project of the Apache Software Foundation

 Cache   
After more than three and a half years in the Apache incubator, Singa fulfills all the conditions of an Apache project, as the Apache Software Foundation announces. Apache Singa is a distributed, scalable machine learning library. Singa was developed by the National University of Singapore in 2014 and handed over to the Apache Software Foundation
          

Canonical Partners with Nvidia to Certify Ubuntu 18.04 LTS on NVIDIA DGX-2 AI

 Cache   

Canonical and Nvidia have formed a new alliance to show that adopting and implementing Artificial Intelligence and Machine Learning need not be a major challenge for enterprises, even though AI-based workloads require greater compute power, security, and flexibility. As such, they've certified Ubuntu 18.04 LTS for NVIDIA DGX-2 AI systems to help organizations take advantage of AI's vast potential.

The Ubuntu 18.04 LTS update with NVIDIA DGX-2 AI system certification will allow for containerized and cloud-native development of GPU-accelerated workloads, as NVIDIA DGX-2 AI systems deliver 2 petaFLOPS of AI performance. The combination of Ubuntu 18.04 LTS and NVIDIA DGX-2 allows data scientists and engineers to work faster and at a greater scale while using their preferred operating system.

Read more


          

Use AI To Predict Customer Churn - Manas AI

 Cache   
Don't look any further if you are looking for the best strategies and solutions to eliminate customer churn. We at ManasAI use Big Data and Machine Learning technologies to design solutions for businesses looking to reduce churn rate.
          

Machine Learning From Scratch Through Python

 Cache   

This course is for those who want to step into the Artificial Intelligence domain, especially Machine Learning, though I will be covering Deep Learning in depth as well. This is a basic course for beginners; basic knowledge of Python would be great and will help you grasp things […]

View and Vote


          

UK: Bank Of England And Financial Conduct Authority Publish Report On Machine Learning In The UK Financial Services Sector - Ropes & Gray LLP

 Cache   
On 16 October 2019, the Bank of England ("BoE") and Financial Conduct Authority ("FCA") published a joint report on the use of machine learning
          

MLPerf Releases First Results From AI Inferencing Benchmark

 Cache   
AI is everywhere these days. SoC vendors are falling over themselves to bake these capabilities into their products. From Intel and Nvidia at the top of the market to Qualcomm, Google, and Tesla, everyone is talking about building new chips to handle various workloads related to artificial intelligence and machine learning. While these companies have […]
          

Get to Know About RPA | Contact Us

 Cache   
Our highly efficient team of professionals is capable of combining RPA with cognitive methods that emulate human decision-making to enhance the speed and efficiency of the process. In addition to that, we are also experienced in integrating Predictive Analytics, Machine Learning, and Smart Devices into Robotic Process Automation. For More Details:- https://www.sphinx-solution.com/services/robotic-process-automation-rpa-company/ Contact Us:- Mail:- info@sphinx-solution.com Phone:- +964 0771 7777 916 Address:- Kemp House, 160 City Road, London, United Kingdom, EC1V 2NX
          

Know How Blockchain Security Keeps Transaction Data Safe?

 Cache   
At Sphinx, we have unleashed the main aspects of Blockchain Development and its utilization. Our skills in blockchain, machine learning and smart contract development can shape your new projects. Want to know how we can help you build a productive blockchain process and take advantage of these services? Just contact us. For More Details:- https://www.sphinx-solution.com/services/blockchain-application-development/ Contact Us:- Mail:- info@sphinx-solution.com Phone:- +964 0771 7777 916 Address:- Kemp House, 160 City Road, London, United Kingdom, EC1V 2NX
          

JumpStart funding programme creates the right framework for young companies

 Cache   

The Federal Ministry for Digital and Economic Affairs (BMDW), together with Austria Wirtschaftsservice GmbH (aws), has successfully completed the third call of the JumpStart programme. From 24 submitted projects, an independent expert jury selected the best concepts, which will now be supported as incubators with up to 150,000 euros each. These are: Female Founders, Lemmings and The Ventury from Vienna, Climate KIC from Lower Austria and I.E.C.T. from Tyrol. The programme focuses on supporting and further developing domestic incubators and accelerators that provide innovative start-ups not only with office, laboratory or production space, but above all with tailor-made advisory services.

"Our innovative start-ups need the best framework conditions. With the JumpStart programme we are making an important contribution to turning ideas into successful business models. It is particularly positive that in this round, with the support of Female Founders, we can also specifically support women in start-ups. We need more female founders, and in addition to courage and personal initiative that also requires the right framework conditions," says Minister for Economic Affairs Margarete Schramböck.

Projects from all over Austria

In the third call, 24 applications were submitted from all over Austria. In addition to well-known players firmly established in the scene, this round also reached many young initiatives. The range of applicants and selected projects extended from stand-alone and corporate incubators and technology centres to academic accelerators, spanning all start-up-relevant sectors such as life sciences, IT, web/mobile, services and hardware.

"To implement their ideas, start-ups need, in addition to financial resources, a working environment in which they can concentrate fully on their projects while benefiting from networking and a lively exchange of experience with other start-ups. With aws JumpStart we support the best incubators and thereby create the necessary framework conditions," say aws managing directors Edeltraud Stiftinger and Bernhard Sagmeister.

In a first step, suitable incubators and accelerators have now been selected and supported under this funding track. This creates a productive and unbureaucratic framework in which ventures can develop. In addition, particularly innovative start-ups also need funding themselves. In a second module of the programme, promising start-ups are therefore also supported directly. Up to five of the companies based in a JumpStart incubator/accelerator will be selected for this, with funding of 22,500 euros per start-up.

The projects at a glance:

Climate KIC

Climate KIC is Europe's largest public-private network for climate innovation, active in Austria as well as in 31 other European countries. Through its enormous partner network of more than 330 research institutions, educational institutions and SMEs, start-ups working on green financial instruments, sustainable production systems, climate-friendly land use and sustainable urban development are offered a broad range of coaching and workshops.

Female Founders

The Female Founders association was founded in 2016 by Lisa-Marie Fassl, Tanja Sternbauer and Nina Wöss to create a platform for stronger networking and support of women in the start-up sector. The Female Founders community has since made a name for itself internationally, with members from more than 10 nations. This community also serves to attract companies for the planned accelerator programme, which is intended to lead selected projects to investment readiness and a successful market entry.

I.E.C.T.

With its co-founder Dr. Hermann Hauser, a co-founder of the Cambridge Phenomenon, the private institution I.E.C.T. – Institute for Entrepreneurship Cambridge – Tirol has on board a veteran who has made an essential contribution to building an emerging entrepreneurship culture. Through strategy support and innovation scouting, I.E.C.T. offers existing and established companies as well as industry the optimal conditions for addressing their needs.

Lemmings

Lemmings is a Viennese early-stage incubator and accelerator with a focus on emerging technologies such as artificial intelligence, blockchain and virtual & augmented reality. Founders Thomas Schranz and Allan Berger already bring extensive experience from founding their start-up Blossom, which provides a project management service for software teams. Lemmings has supported over 200 participants over the last two years, and to attract and develop talent even better, a programme called "Project Magic" is now being established.

The Ventury

The Ventury was founded in Vienna in 2016 by, among others, Christoph Aschberger, Christoph Bitzner and Jakob Reiter. The three came together through their joint work on the start-up Simplewish, which continues to operate. The founders gained their experience in the Austrian start-up ecosystem as mentors, jury members and speakers for organisations and educational institutions. The incubator and accelerator programme focuses on operational support for start-ups in the areas of conversational interfaces, AI and machine learning.


          

An Introduction to Satellite Imagery and Machine Learning

 Cache   

Today, the availability of satellite imagery still far outpaces our capacity to analyze it, but machine learning and tools like Raster Vision are helping.

The post An Introduction to Satellite Imagery and Machine Learning appeared first on Azavea.


          

Probing the nature of the universe with machine learning at the ATLAS experiment

 Cache   
Monday 02/12, 11:00 - Steven Schramm - DPhP seminars - CEA Paris-Saclay - Building 141, André Berthelot room - CEA Paris-Saclay
          

Senior Data Scientist - Marsh & McLennan Companies, Inc. - New York, NY

 Cache   
Recognizing that the ‘good’ is not the enemy of the ‘perfect’ - Demonstrate solid and battle-tested understanding of the standard canon of machine learning…
From Marsh & McLennan Companies - Sat, 05 Jan 2019 15:04:15 GMT - View all New York, NY jobs
          

Nov 11, 2019: Seminar @ Cornell Tech: Omid Rafieian at Cornell Tech, Bloomberg Center

 Cache   

Revenue-Optimal Dynamic Auctions for Adaptive Ad Sequencing

Digital publishers often use real-time auctions to allocate their advertising inventory. These auctions are designed with the assumption that advertising exposures within a user’s browsing or app-usage session are independent. Rafieian (2019) empirically documents the interdependence in the sequence of ads in mobile in-app advertising, and shows that dynamic sequencing of ads can improve the match between users and ads. In this paper, we examine the revenue gains from adopting a revenue-optimal dynamic auction to sequence ads. We propose a unified framework with two components – (1) a theoretical framework to derive the revenue-optimal dynamic auction that captures both advertisers’ strategic bidding and users’ ad response and app usage, and (2) an empirical framework that involves the structural estimation of advertisers’ click valuations as well as personalized estimation of users’ behavior using machine learning techniques. We apply our framework to large-scale data from the leading in-app ad-network of an Asian country. We document significant revenue gains from using the revenue-optimal dynamic auction compared to the revenue-optimal static auction. These gains stem from the improvement in the match between users and ads in the dynamic auction. The revenue-optimal dynamic auction also improves all key market outcomes, such as the total surplus, average advertisers’ surplus, and market concentration.

Bio:

I am a Ph.D. candidate in quantitative marketing at the Foster School of Business, University of Washington. My research interests broadly encompass topics related to digital marketing, mobile advertising, personalization, and privacy. I examine these topics through two complementary lenses – (1) how can we utilize the recent advancements in machine learning to create value in digital marketplaces, and (2) how can we use theory-driven structural frameworks to study the marketing and economic implications of such developments.

View on site | Email this event


          

Big Data Engineer

 Cache   
IL-Chicago, Our client is currently seeking a Big Data Engineer for a full-time opening in Downtown Chicago. This job will have the following responsibilities: Design mission-critical, highly visible big data and machine learning applications in direct support of business objectives; build big data pipelines and derive insights out of the data using advanced analytic techniques, streaming and machine learning
          

Nationwide Partners With Betterview To Use Its Proprietary Computer Vision And Machine Learning Technologies In Commercial Property Underwriting

 Cache   


          

Facebook’s Automatic Placements: Take the Guesswork Out of Finding Your Audience

 Cache   

Earlier this year, Facebook rolled out Automatic Placements, a setting that ensures that your ads will serve on Facebook, Audience Network or Instagram, to achieve your most desired result based on the bidding objective you choose. Similar to Google, Facebook wants advertisers to entrust its machine learning to place ads in the most effective way […]

The post Facebook’s Automatic Placements: Take the Guesswork Out of Finding Your Audience appeared first on Marketing Insights - Official Blog of Marin Software.


          

The Complete Machine Learning A to Z Bundle (96% discount)

 Cache   
Chatbots are voice-aware bots, i.e. computer programs designed to simulate human conversations with users. Chatbots have become ubiquitous across sites and apps and a multitude of AI platforms exist which help you get up and running with a chatbot quickly. This course introduces DialogFlow, a conversational interface for bots, devices and applications. It’s Google’s bot…
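The course above centers on DialogFlow. As a minimal sketch of what "getting up and running with a chatbot" looks like in code, the following assumes the google-cloud-dialogflow v2 Python client and an already-created Dialogflow agent; the project ID and session ID are placeholders, not part of the course description.

```python
# Minimal Dialogflow intent-detection sketch (assumes google-cloud-dialogflow v2
# and an existing agent; project/session IDs below are placeholders).
from google.cloud import dialogflow


def detect_intent(project_id: str, session_id: str, text: str, language_code: str = "en") -> str:
    """Send one user utterance to a Dialogflow agent and return its reply."""
    client = dialogflow.SessionsClient()
    session = client.session_path(project_id, session_id)
    query_input = dialogflow.QueryInput(
        text=dialogflow.TextInput(text=text, language_code=language_code)
    )
    response = client.detect_intent(request={"session": session, "query_input": query_input})
    return response.query_result.fulfillment_text


if __name__ == "__main__":
    print(detect_intent("my-gcp-project", "demo-session", "Hello"))
```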
          

Accounting Software Industry Veteran Todd Robinson Joins Vic.ai

 Cache   

Robinson to help accounting firms better leverage artificial intelligence and machine learning technology to improve client services and profitability

(PRWeb November 07, 2019)

Read the full story at https://www.prweb.com/releases/accounting_software_industry_veteran_todd_robinson_joins_vic_ai/prweb16702304.htm


          

Market Guide for Augmented Analytics Tools

 Cache   

Augmented analytics capabilities are disrupting analytics, BI, data science and machine learning markets. Tools leverage ML/AI to transform how analytics content is developed, consumed and shared. Data and analytics leaders should plan to adopt augmented analytics as capabilities mature.

Key findings:

  • Augmented analytics uses machine learning (ML) to automate data preparation, insight discovery, model development and insight sharing for a broad range of business users, operational workers and citizen data scientists.
  • Augmented analytics tools can identify the most relevant insights, based on statistical significance and, in more advanced tools, users’ preferences and business context/relevancy (location, role, time, etc.).
  • Analytics and business intelligence (ABI) and data science and machine learning (DSML) solutions and platforms often complement augmented capabilities with natural language processing (NLP) and conversational interfaces, allowing all users to interact with data and insights without requiring advanced skills.



          

Highlights from the O’Reilly Software Architecture Conference in Berlin 2019

 Cache   
Experts from across the software architecture world are coming together in Berlin for the O’Reilly Software Architecture Conference. Below you’ll find links to highlights from the event. Modern machine learning architectures: Data and hardware and platform, oh my Brian Sletten takes a deep dive into the intersection of data, models, hardware, language, architecture, and machine […]
          

Modern machine learning architectures: Data and hardware and platform, oh my

 Cache   
This is a keynote highlight from the O’Reilly Software Architecture Conference in Berlin 2019. Watch the full version of this keynote on the O’Reilly online learning platform. You can also see other highlights from the event.
          

Lead Platform Engineer - Machine Learning | Parks, Experiences and Products

 Cache   
Lake Buena Vista, Florida, Responsibilities: Lead a team of engineers to design and develop production grade frameworks for feature engineering, model architecture selection, model training, model interpretability, A/B test
          

Machine Learning Specialist | Aurora

 Cache   
Pittsburgh, Pennsylvania, Essential BS, MS or PhD in Robotics, Machine Learning, Computer Science, or a related field Strong grasp of fundamentals: linear algebra, discrete and continuous optimization, supervised and uns
          


Azure Cognitive Services for building enterprise ready scalable AI solutions

 Cache   

This post is co-authored by Tina Coll, Senior Product Marketing Manager, Azure Cognitive Services and Anny Dow, Product Marketing Manager, Azure Cognitive Services. Azure Cognitive Services brings artificial intelligence (AI) within reach of every developer without requiring machine learning expertise. All it takes is an API call to embed the ability to see, hear, speak, […]

The post Azure Cognitive Services for building enterprise ready scalable AI solutions appeared first on Cloudmovement.
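To illustrate the "single API call" idea from the snippet above, here is a minimal sketch assuming an Azure Computer Vision resource and its v3.2 REST Analyze endpoint; the endpoint, subscription key and image URL are placeholders, not values from the original post.

```python
# Minimal sketch: describe an image with the Azure Computer Vision v3.2 Analyze API.
# The endpoint, key and image URL are placeholders (assumptions, not from the post).
import requests

endpoint = "https://<your-resource>.cognitiveservices.azure.com"
key = "<your-subscription-key>"

response = requests.post(
    f"{endpoint}/vision/v3.2/analyze",
    params={"visualFeatures": "Description,Tags"},
    headers={"Ocp-Apim-Subscription-Key": key, "Content-Type": "application/json"},
    json={"url": "https://example.com/street-scene.jpg"},
)
response.raise_for_status()

# Print the generated captions with their confidence scores.
for caption in response.json()["description"]["captions"]:
    print(caption["text"], caption["confidence"])
```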


          

Hundred-Page Machine Learning

 Cache   
Hundred-Page Machine Learning
          

Artificial Intelligence / Machine Learning (AI/ML) Software Engineer

 Cache   
NY-New york, Artificial Intelligence / Machine Learning (AI/ML) Software Engineer New York City $90K - $150K Base + 20% Bonus + Equity The Artificial Intelligence / Machine Learning (AI/ML) Software Engineer provides research, planning, design, and creates prototypes to validate new concepts and ideas. This opportunity offers the chance to work with the leading tech and equity options for every employee. The i
          

Introduction au Machine Learning

 Cache   
Author: Azencott, Chloé-Agathe
Publisher: Dunod

Machine learning is at the heart of data science and applies to a multitude of domains such as computer face recognition, automatic translation from one language to another, self-driving cars, targeted advertising, social network analysis, financial trading, ... This book offers an introduction to the concepts and algorithms that underpin machine learning. Its goal is to give the reader the tools to: identify problems that can be solved with machine learning; formalize these problems in machine learning terms; identify the appropriate algorithms and implement them; and know how to evaluate and compare the performance of several algorithms. Each chapter is complemented with corrected exercises.
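As a minimal sketch of the book's last objective (evaluating and comparing the performance of several algorithms), the following uses scikit-learn with cross-validation; the dataset and the two models are illustrative choices, not taken from the book.

```python
# Compare two classifiers with 5-fold cross-validation (illustrative choices).
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = load_breast_cancer(return_X_y=True)
models = {
    "logistic regression": LogisticRegression(max_iter=5000),
    "random forest": RandomForestClassifier(n_estimators=200, random_state=0),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5, scoring="accuracy")
    print(f"{name}: {scores.mean():.3f} +/- {scores.std():.3f}")
```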


          

Big Data et Machine Learning : Les concepts et les outils de la data science Ed. 3

 Cache   
Author: Lemberger, Pirmin
Publisher: Dunod

This book is aimed at anyone seeking to tap the enormous potential of Big Data technologies, whether they are data scientists, CIOs, project managers or domain specialists. Big Data has established itself as a major innovation for all companies seeking to build a competitive advantage from exploiting their customer, supplier, product and process data. It has also enabled the emergence of automatic learning techniques (Machine Learning, Deep Learning, ...) that have revived the field of artificial intelligence. But which technical solution should you choose? Which business skills should be developed within the IT department? This book is a guide to understanding the stakes of a Big Data project, grasping the underlying concepts and acquiring the skills needed to put an appropriate enterprise architecture in place. It combines the presentation of: theoretical notions (statistical data processing, distributed computing, ...); the most widespread tools; application examples, notably in NLP (Natural Language Processing); and a typical organization of a data science project.


          

Algorithmia CEO Diego Oppenheimer to Present on Continuous Deployment for Machine Learning at AWS re:Invent

 Cache   

          


Software Engineer II - Walt Disney Direct-to-Consumer and International - Seattle, WA

 Cache   
The Software Engineer will work on the DTCI Operational Intelligence team, a multi-disciplinary team tasked with building machine learning models for demand,…
From Disney - Tue, 05 Nov 2019 18:21:12 GMT - View all Seattle, WA jobs
          

27553 Big Data/Machine Learning Engineer - Sr (Kissoon John Ramotar)

 Cache   
TX-Plano, Manager is relatively flexible with the bill rate for thoroughly qualified candidates* This position is working for the Capital One Garage (innovation center for C1) building a monitoring solution for an ML Platform. We will be building a web application and numerous APIs in Flask, and providing machine learning algorithms as a service. The ideal candidate will be an experienced Python developer who
          

Business must prepare for the biggest revolution in years: "This is a wave that cannot be stopped"

 Cache   
Artificial intelligence and machine learning, all wrapped up in the cloud. Cloud infrastructure will change the face of business around the world, including in Poland. With it, companies will cut costs, raise productivity and reach levels that were never before attainable in their history. "This is a wave that cannot be stopped," explains Andrew Sutherland, vice president of Oracle, one of the world's largest companies working on cloud architecture, in a conversation with Business Insider Polska.
          

Utilizing Machine Learning for Pre- and Postoperative Assessment of Patients Undergoing Resection for BCLC-0, A and B Hepatocellular Carcinoma: Implications for Resection Beyond the BCLC Guidelines.

 Cache   
Related Articles

Utilizing Machine Learning for Pre- and Postoperative Assessment of Patients Undergoing Resection for BCLC-0, A and B Hepatocellular Carcinoma: Implications for Resection Beyond the BCLC Guidelines.

Ann Surg Oncol. 2019 Nov 06;:

Authors: Tsilimigras DI, Mehta R, Moris D, Sahara K, Bagante F, Paredes AZ, Farooq A, Ratti F, Marques HP, Silva S, Soubrane O, Lam V, Poultsides GA, Popescu I, Grigorie R, Alexandrescu S, Martel G, Workneh A, Guglielmi A, Hugh T, Aldrighetti L, Endo I, Pawlik TM

Abstract
BACKGROUND: There is an ongoing debate about expanding the resection criteria for hepatocellular carcinoma (HCC) beyond the Barcelona Clinic Liver Cancer (BCLC) guidelines. We sought to determine the factors that held the most prognostic weight in the pre- and postoperative setting for each BCLC stage by applying a machine learning method.
METHODS: Patients who underwent resection for BCLC-0, A and B HCC between 2000 and 2017 were identified from an international multi-institutional database. A Classification and Regression Tree (CART) model was used to generate homogeneous groups of patients relative to overall survival (OS) based on pre- and postoperative factors.
RESULTS: Among 976 patients, 63 (6.5%) had BCLC-0, 745 (76.3%) had BCLC-A, and 168 (17.2%) had BCLC-B HCC. Five-year OS among BCLC-0/A and BCLC-B patients was 64.2% versus 50.2%, respectively (p = 0.011). The preoperative CART model selected α-fetoprotein (AFP) and Charlson comorbidity score (CCS) as the first and second most important preoperative factors of OS among BCLC-0/A patients, whereas radiologic tumor burden score (TBS) was the best predictor of OS among BCLC-B patients. The postoperative CART model revealed lymphovascular invasion as the best postoperative predictor of OS among BCLC-0/A patients, whereas TBS remained the best predictor of long-term outcomes among BCLC-B patients in the postoperative setting. On multivariable analysis, pathologic TBS independently predicted worse OS among BCLC-0/A (hazard ratio [HR] 1.04, 95% confidence interval [CI] 1.02-1.07) and BCLC-B patients (HR 1.13, 95% CI 1.06-1.19) undergoing resection.
CONCLUSION: Prognostic stratification of patients undergoing resection for HCC within and beyond the BCLC resection criteria should include assessment of AFP and comorbidities for BCLC-0/A patients, as well as tumor burden for BCLC-B patients.

PMID: 31696396 [PubMed - as supplied by publisher]
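The abstract above hinges on a CART model that splits patients into homogeneous groups with respect to overall survival. As a purely illustrative sketch (synthetic data and invented feature scales, not the study's cohort), a depth-limited regression tree in scikit-learn reproduces the idea of CART-based stratification on three preoperative factors.

```python
# Illustrative CART-style stratification on synthetic data (not the study's cohort).
import numpy as np
from sklearn.tree import DecisionTreeRegressor, export_text

rng = np.random.default_rng(0)
n = 500
X = np.column_stack([
    rng.lognormal(3, 1, n),   # "AFP" (arbitrary units)
    rng.integers(0, 6, n),    # "Charlson comorbidity score"
    rng.uniform(1, 15, n),    # "tumor burden score"
])
# Synthetic "months of survival" loosely driven by the three factors.
y = 80 - 0.02 * X[:, 0] - 3 * X[:, 1] - 2 * X[:, 2] + rng.normal(0, 5, n)

cart = DecisionTreeRegressor(max_depth=2, min_samples_leaf=50).fit(X, y)
print(export_text(cart, feature_names=["AFP", "CCS", "TBS"]))
```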


          


Automated document processing with machine learning

 Cache   
I need a solution for reading invoices and other documents. To start with, in this first project we focus on invoices and purchase orders only. The problem that we want to solve is to understand and categorize the data in any invoice – independent of format and layout... (Budget: $1500 - $3000 USD, Jobs: .NET, C# Programming, OpenCV, PDF)
          

Other: MTS Intern - PhD - San Jose, California

 Cache   
Date Posted October 29, 2019 Category Science-Computer Sciences Employment Type Full-time Application Deadline Open until filled Who are our employees? We're an eclectic group of 4,000+ dreamers, believers and builders, operating in over 40 countries. We're Hungry. Humble. Honest. With Heart. The 4H's: these are our core values and the DNA of our company. They help drive our employees to succeed, to strive to be better, to learn from every experience. Our employees are encouraged to have spirited debates and conversations and to think with a founder's mindset. This means we're all CEO's of the company and, as such, make the best decision every day that aligns with our company goals. It's through our values, our conversations and mindsets that we can continue to disrupt the industry and drive innovation in the market. Who are we in the market? Nutanix is a global leader in cloud software and hyperconverged infrastructure solutions, making infrastructure invisible so that IT can focus on the applications and services that power their business. Companies around the world use Nutanix Enterprise Cloud OS software to bring one-click application management and mobility across public, private and distributed edge clouds so they can run any application at any scale with a dramatically lower total cost of ownership. The result is organizations that can rapidly deliver a high-performance IT environment on demand, giving application owners a true cloud-like experience. Learn more about our products at *************** or follow us on Twitter @Nutanix. Nutanix engineers are crafting a groundbreaking technology, building the Nutanix Enterprise Cloud OS. We're using our love of programming and diverse backgrounds to deliver the simplicity and agility of popular public cloud services, but with the security and control that you need in a private cloud. At Nutanix, you'll find no shortage of challenging problems to work on. We work closely with our product in a collegiate, collaborative environment that encourages the open exploration of idea. The Role: MTS Intern The Engineering Summer Internship is an opportunity to gain exposure to one or more Nutanix engineering roles according to your skillset and interests. Some potential roles include (but not limited to) working on the core data path, storage and filesystems development, distributed systems, infrastructure and platform/hardware deployment, data protection and replication, tools and automation, development of a big data processing platform, development of the API and analytics platform, and Web and front-end UI/UX development. Each intern is paired with a Member of Technical Staff who serves as a guide through our engineering culture, toolsets, and development methodology. Our internship program also includes a series of lunch and learns, training events, and social outings to expose you to other aspects of a rapidly growing Silicon Valley technology company. Responsibilities: - Architect, design, and development software for the Nutanix Enterprise Cloud Platform - Develop a deep understanding of complex distributed systems and design innovative solutions for customer requirements - Work alongside development, test, documentation, and product teams to deliver high-quality products in a fast pace environment - Deliver on an internship project over the course of the program. Present the final product to engineering leadership. 
Requirements: - Love of programming and skilled in one of the following languages: C++, Python, Golang, or HTML/CSS/Javascript - Extensive knowledge or experience with Linux or Windows - Have taken courses or completed research in the areas of operating systems, files systems, big data, machine learning, compilers, algorithms and data structures, or cloud computing - Knowledge of or experience with Hadoop, MapReduce, Cassandra, Zookeeper, or other large scale distributed systems preferred - Interest or experience working with virtualization technologies from VMware, Microsoft (Hyper-V), or Redhat (KVM) preferred - Detailed oriented with strong focus on code and product quality - The passion & ability to learn new things, while never being satisfied with the status quo Qualifications and Experience: - Pursuing a PhD degree in Computer Science or a related engineering field required. - Available to work up to 40 hours per week for 12 weeks over the summer months Nutanix is an equal opportunity employer. The Equal Employment Opportunity Policy is to provide fair and equal employment opportunity for all associates and job applicants regardless of race, color, religion, national origin, gender, sexual orientation, age, marital status, or disability. Nutanix hires and promotes individuals solely on the basis of their qualifications for the job to be filled. Nutanix believes that associates should be provided with a working environment that enables each associate to be productive and to work to the best of his or her ability. We do not condone or tolerate an atmosphere of intimidation or harassment based on race, color, religion, national origin, gender, sexual orientation, age, marital status or disability. We expect and require the cooperation of all associates in maintaining a discrimination and harassment-free atmosphere. Apply *Please mention PhdJobs to employers when ()
          

Engineering: Machine Learning Engineer - Alamo, California

 Cache   
Join Hired and find your dream job as a Machine Learning Engineer at one of 10,000+ companies looking for candidates just like you. Companies on Hired apply to you, not the other way around. You'll receive salary and compensation details upfront - before the interview - and be able to choose from a variety of industries you're interested in, to find a job you'll love in less than 2 weeks. We're looking for a talented AI expert to join our team. Responsibilities Engaging in data modeling and evaluation Developing new software and systems Designing trials and tests to measure the success of software and systems Working with teams and alone to design and implement AI models Skills An aptitude for statistics and calculating probability Familiarity in Machine Learning frameworks, such as Scikit-learn, Pytorch and Keras TensorFlow An eagerness to learn Determination - even when experiments fail, the ability to try again is key A desire to design AI technology that better serves humanity These Would Also Be Nice Good communication - even with those who do not understand AI Creative and critical thinking skills A willingness to continuously take on new projects Understanding the needs of the company Being results-driven Requirements: Hired ()
          

Rage Against the Machine Learning

 Cache   
Yeah, you’ve seen the creepy robot dogs opening doors and getting knocked down only to get back up again (which I know I’ve seen in a movie scene starring a certain Austrian-born muscle dude who may have possibly run the state of California at one time). But the machines are really coming to disrupt the […]
          

Google's AI education tool makes it easy to train models for your projects

 Cache   
Google's Teachable Machine is no longer just a handy lesson in AI -- you can now put it to work. The tech giant has launched Teachable Machine 2.0 with the ability to use your machine learning model in apps, websites and other projects. You can upl...
          

Google updates Teachable Machine so you can train an AI without code

 Cache   

Machine learning and artificial intelligence are complex subjects, and while you might see them mentioned every day, you might not necessarily understand how they work. Two years ago, Google launched a site called Teachable Machine, which let you train a simple model using your camera without any code. Now, it’s launching an updated version so you can train more advanced models. The earlier version allowed you to train three classes through your camera. The new version not only lets you define more than three classes, it also allows you to use images, audio clips, pose data, or your own…

This story continues at The Next Web
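As a rough illustration of "putting the model to work", the sketch below assumes an image project exported from Teachable Machine in Keras format; the file names (keras_model.h5, labels.txt), the 224x224 input size and the [-1, 1] scaling are assumptions about that export, not details confirmed by the article.

```python
# Run inference with a (hypothetical) Teachable Machine Keras export.
# File names and preprocessing are assumptions about the export format.
import numpy as np
from PIL import Image
from tensorflow.keras.models import load_model

model = load_model("keras_model.h5")
labels = [line.strip() for line in open("labels.txt")]

image = Image.open("test.jpg").resize((224, 224))
batch = np.expand_dims(np.asarray(image, dtype=np.float32) / 127.5 - 1.0, axis=0)

probabilities = model.predict(batch)[0]
print(labels[int(np.argmax(probabilities))])
```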

          

2019-78828 - Internship - Development of an engine speed estimation algorithm (M/F)

 Cache   
Main domain/Job field : Research, design and development/Electrical and mechatronics
Employment type : Internship / Student
Position description :
Within the ground & flight test division, the "Measurements & Instrumentation" department's mission is to provide the Technical Directorate with the measurement resources needed to carry out engine and component tests. Within the department, the "Tip Timing and Clearance Measurement Development" service is primarily responsible for guaranteeing the development, acquisition and quality of tip-timing and clearance measurements. The service also develops innovative measurements to meet new needs as part of research and technology activities. In this context, you will be in charge of the development and design of measurements, whether for engine tests or for research and technology. Mission description: During an engine test, many measurement systems are deployed to best capture the physical phenomena present in the machine. "High Frequency" measurements make it possible to observe, over a wide frequency range, the phenomena to which a measured part or medium responds. Since the vast majority of the phenomena involved are synchronous with the rotation speeds, it is essential to estimate those speeds precisely. The rotation speed is generally estimated from a square or sinusoidal signal produced by the passage of a toothed wheel, mounted on the shaft, in front of a capacitive sensor. Many parasitic disturbances can corrupt this measurement, which degrades the final estimate. These disturbances can be related to the measurement chain but also to the physics of the engine (bird ingestion or blade loss). The objective of this internship is to develop an algorithm that can accurately estimate an engine speed even when the signal is noisy. The internship will proceed in three parts: • a literature review of the different speed-estimation methods (Bayesian, spectral, machine learning, ...) • a benchmark of the selected methods • implementation of the finally chosen method in the High Frequency measurement analysis tool
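As a minimal sketch of the "spectral" family of methods mentioned above, the following simulates a noisy toothed-wheel signal and recovers the rotation speed from an FFT peak; all signal parameters (sampling rate, tooth count, noise level) are invented for illustration.

```python
# Estimate a rotation rate from a noisy toothed-wheel signal via an FFT peak.
# All parameters are illustrative assumptions, not values from the posting.
import numpy as np

fs = 50_000          # sampling rate (Hz)
n_teeth = 60         # teeth on the phonic wheel
true_rpm = 4_200     # simulated engine speed

t = np.arange(0, 1.0, 1 / fs)
tooth_freq = true_rpm / 60 * n_teeth                    # tooth-passing frequency (Hz)
signal = np.sign(np.sin(2 * np.pi * tooth_freq * t))    # idealised square signal
signal = signal + 0.8 * np.random.default_rng(0).normal(size=t.size)  # noise

spectrum = np.abs(np.fft.rfft(signal * np.hanning(t.size)))
freqs = np.fft.rfftfreq(t.size, 1 / fs)
estimated_rpm = freqs[np.argmax(spectrum[1:]) + 1] / n_teeth * 60  # skip DC bin

print(f"estimated speed: {estimated_rpm:.0f} rpm (true: {true_rpm} rpm)")
```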

Intern specialty: applied mathematics and signal processing. Other skills/knowledge: Matlab environment, rigour and a capacity for synthesis. Level of education: 3rd year of engineering school (end-of-studies internship)
City (-ies) : MOISSY-CRAMAYEL
Minimum education level achieved : Master Degree

          

The Morning Brew #2868

 Cache   
Software Announcing TypeScript 3.7 – Daniel Rosenwasser Join the Visual Studio for Mac ASP.NET Core Challenge – Jordan Matthiesen Azure Machine Learning – ML for all skill levels – Venky Veeraraghavan Now available: Azure DevOps Server 2019 Update 1.1 RC – Erin Dormier Released: Microsoft.Data.SqlClient 1.1 Preview 2 – David-Engel The November 2019 release of […]
          

How new technologies are revolutionizing restaurant management

 Cache   
Technological upheaval in the restaurant industry. Cloud, data, artificial intelligence, machine learning, algorithms... These are terms we are increasingly used to hearing. Yet do we really grasp the scale of the technological transformation they imply? Society as a whole will probably be affected, but in what ways? Faced with so many possibilities, the future looks particularly hazy. To [...]
          

Microsoft Ignite 2019: How To Curb Ethics Concerns Around AI And Machine Learning

 Cache   
none
          

Comment on The Globotics Upheaval: Globalization, Robotics, and the Future of Work by sf4444

 Cache   
I thought it was interesting to learn how machine learning is changing fashion, and how the industry is responding to it. I also found it interesting that Baldwin made the point that many jobs will go but occupations will stay, using the analogy of the farmer and the tractor. This shows that although machine learning can help us in many different areas, there is still a human touch to many occupations that cannot be replaced by machines. In the future, I will be interested to see how many more processes will continue to be automated, and how this will affect the need for human labor.
          

Trustworthy Human-Centric AI

 Cache   
Time: 2019-11-21, 13:00 to 15:00
Type: Seminar
Place: Eden hörsal (lecture hall), Eden, floor 1, Allhelgona kyrkogata 14, Lund

Welcome to an urgent and exciting public lecture by Dr. Fredrik Heintz, Associate Prof. of Computer Science at Linköping University, and guest researcher at the Department of Communication and Media, Lund University.
This talk will present the European approach to trustworthy human-centric AI. To be trustworthy an AI-system should be lawful, ethical and robust, as defined by the European Commission. To operationalize this will require new research. Heintz will give an overview of the state-of-the-art and potential future solutions to these vast challenges.

Registration
The seminar is free of charge and open to students and staff at Lund University as well as to attendees from industry, the public sector and the general public. However, please register at http://ai.lu.se/events/registration-2019-11-21

Bio
Dr. Fredrik Heintz is an Associate Professor of Computer Science at Linköping University, Sweden and a guest researcher at the Department of Communication and Media, Lund University. He leads the Stream Reasoning group within the Division of Artificial Intelligence and Integrated Systems (AIICS) in the Department of Computer Science. His research focus is artificial intelligence, especially autonomous systems and the intersection between knowledge representation and machine learning. He is the Director of the Graduate School for the Wallenberg AI, Autonomous Systems and Software Program (WASP), the President of the Swedish AI Society and a member of the European Commission High-Level Expert Group on AI. He is also very active in education, both at the university level and in promoting AI, computer science and computational thinking in primary, secondary and professional education. Fellow of the Royal Swedish Academy of Engineering Sciences (IVA).

Contact
The presentation is co-hosted by the Department of Communication and Media (KOM) and AI Lund (formerly AIML).
Mia-Marie Hammarlin
Stefan Larsson
Jonas [dot] Wisbrant [at] cs [dot] lth [dot] se (Jonas Wisbrant)
          

Data Scientist Lead - Schneider National - Green Bay, WI

 Cache   
Experience with machine learning software (e.g., R, Python, SPSS, SAS), data access/manipulation (e.g., SQL, pandas, dplyr) and NoSQL databases (e.g., MongoDB,…
From Schneider National - Thu, 13 Jun 2019 16:52:28 GMT - View all Green Bay, WI jobs
          

Data Scientist Lead - Schneider - Green Bay, WI

 Cache   
Experience with machine learning software (e.g., R, Python, SPSS, SAS), data access/manipulation (e.g., SQL, pandas, dplyr) and NoSQL databases (e.g., MongoDB,…
From Schneider - Thu, 13 Jun 2019 17:36:38 GMT - View all Green Bay, WI jobs
          


Machine Learning and Artificial Intelligence: Here's how they differ

 Cache   

Artificial intelligence and Machine Learning are increasingly used in modern factories. But do you know how to tell the two apart?


          

Predictive Model Ensembles: Pros and Cons

 Cache   
Many recent machine learning challenge winners are predictive model ensembles. We have seen this in the news. Data science challenges are hosted on many platforms, and the techniques used include decision trees, regression, and neural networks; winning ensembles use these in concert. But let's understand the pros and cons of an ensemble approach. Pros of Model Ensembles […]
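As a minimal sketch of the kind of ensemble described above, the following combines a decision tree, a logistic regression and a small neural network with soft voting in scikit-learn; the dataset and hyperparameters are illustrative choices, not taken from any particular challenge.

```python
# Soft-voting ensemble of three different model families (illustrative choices).
from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
ensemble = VotingClassifier(
    estimators=[
        ("tree", DecisionTreeClassifier(max_depth=5, random_state=0)),
        ("logreg", LogisticRegression(max_iter=2000)),
        ("mlp", MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)),
    ],
    voting="soft",
)
print(cross_val_score(ensemble, X, y, cv=5).mean())
```

The usual trade-off applies: the ensemble tends to be more accurate than any single member, at the cost of training time and interpretability.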
          

The FLOSS ecosystem of PLC and robotics

 Cache   

Open source robotics has now been around for more than 10 years and has achieved some success, at least in research and education. There are numerous projects for open source robotics.

Although less known, it has been possible for more than a decade to deploy a complete solution for industrial automation based on free software and open hardware. The first success case was demonstrated by SSAB in Sweden, in a fairly large factory which produces steel.

This page tries to collect all success cases and initiatives related to open source robotics and industrial automation. It is a work-in-progress. Feel free to contribute by writing to sven (dot) franck (at) nexedi (dot) com or by suggesting new entries on Nexedi's contact page.

Success Cases

Lists

Software

Hardware

Integrators

Presentations

Tutorials

Articles

Standard

  • Modbus is one of the most open standards for PLC integration over TCP/IP, with I/O from Wago, Advantech (ADAM) or ICPDAS (a raw request sketch follows this list)
  • DIN Rail is a standard format for industrial enclosures
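To show how open the protocol is, here is a minimal sketch that builds a raw Modbus TCP "read holding registers" request using only the Python standard library; the PLC address, unit ID and register range are placeholders, and production code would normally use a library such as pymodbus instead of hand-built frames.

```python
# Raw Modbus TCP read of 4 holding registers (function code 0x03).
# Host, port, unit id and addresses are placeholders; no exception handling
# for Modbus error responses is shown.
import socket
import struct

PLC_HOST, PLC_PORT = "192.168.0.10", 502
UNIT_ID, START_ADDR, QUANTITY = 1, 0, 4

# MBAP header: transaction id, protocol id (0), remaining length (6), unit id.
request = struct.pack(">HHHB", 1, 0, 6, UNIT_ID)
# PDU: function code 0x03, starting address, number of registers.
request += struct.pack(">BHH", 0x03, START_ADDR, QUANTITY)

with socket.create_connection((PLC_HOST, PLC_PORT), timeout=2) as sock:
    sock.sendall(request)
    reply = sock.recv(256)

byte_count = reply[8]                      # MBAP (7 bytes) + function code (1 byte)
registers = struct.unpack(f">{byte_count // 2}H", reply[9:9 + byte_count])
print(registers)
```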

          


How Microsoft is trying to become more innovative

 Cache   
Microsoft Research is a globally distributed playground for people interested in solving fundamental science problems. These projects often focus on machine learning and artificial intelligence, and since Microsoft is on a mission to infuse all of its products with more AI smarts, it’s no surprise that it’s also seeking ways to integrate Microsoft Research’s innovations […]
          

Data Scientist

 Cache   
London-London, Data Scientist. As one of Vodafone's key strategic partners, we currently have an exciting opportunity for two Data Scientists to join the team in Paddington on an initial 6-month contract. Joining us as a Data Scientist and AI Expert, you can be part of Vodafone's empowering Big Data Team. With Vodafone you will: develop Machine Learning and AI models that are scaled and replicated across markets, lev
          

Azure Machine Learning Pipelines | AI Show

 Cache   

This video talks about Azure Machine Learning Pipelines, the end-to-end job orchestrator optimized for machine learning workloads. With Azure ML Pipelines, all the steps involved in the data scientist's lifecycle can be stitched together in a single pipeline improving inner-loop agility, collaboration, and reuse of data and code, while maintaining high reliability.
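As a minimal sketch of stitching steps together, the following assumes the Azure Machine Learning Python SDK v1, an existing workspace configuration and a compute target named "cpu-cluster"; the script names and experiment name are placeholders, not taken from the video.

```python
# Sketch of a two-step Azure ML pipeline (assumes the Azure ML Python SDK v1,
# a workspace config file and a compute target called "cpu-cluster").
from azureml.core import Experiment, Workspace
from azureml.pipeline.core import Pipeline
from azureml.pipeline.steps import PythonScriptStep

ws = Workspace.from_config()

prep_step = PythonScriptStep(
    name="prepare data",
    script_name="prep.py",
    source_directory="steps",
    compute_target="cpu-cluster",
)
train_step = PythonScriptStep(
    name="train model",
    script_name="train.py",
    source_directory="steps",
    compute_target="cpu-cluster",
)
train_step.run_after(prep_step)  # make training depend on data preparation

pipeline = Pipeline(workspace=ws, steps=[prep_step, train_step])
run = Experiment(ws, "aml-pipeline-demo").submit(pipeline)
run.wait_for_completion(show_output=True)
```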



          

Machine Learning Course in Chandigarh

 Cache   
Looking for industrial training for students to enhance their skills, bridge the talent gaps and help them speed up their careers? Infowiz is the leading industrial training institute in Chandigarh.
          

Broker Expo: Open GI sets focus on machine learning

 Cache   
Following purchase of machine learning specialist MLP, Open GI CEO Simon Badley outlined the future strategy for both companies.
          

Tara O'Shea, Rebecca Moore and Carlos Souza on NextGenMap and forest monitoring

 Cache   
On a panel moderated by Tara O'Shea (Planet), Rebecca Moore (Google) and Carlos Souza (Imazon) converse about NextGenMap's ability to revolutionize our abilities to monitor forests. Historically, monitoring forests and land-use change has involved a compromise between the resolution and frequency of available satellite images and limited computing capacity for analysis. Recent developments in satellite imagery, machine learning and land use classification offer new opportunities to disrupt these limitations.

          

h1 magazine

 Cache   
Contributed by Brando Corradini


Source: https://www.behance.net Alessandro Nobile. License: All Rights Reserved.





<h1> is a digital magazine made as a student project by Alessandro Nobile, Nicolò Accoto, and Gianmarco Caforio at the Istituto Europeo di Design (IED) in Milan, Italy.

The theme of h1 is Uomo e Macchina (“Man and Machine”). It analyzes the relationship between man and machine by taking into account the most common technological devices (for example: cell phones, tablets and computers). These topics are treated from a realistic point of view, without considering science fiction ideas, with the aim of making the reader feel a little anguish. The covered topics include Elon Musk, machine learning, Google Home, and artificial intelligence.

The fonts in use are Space Mono (Colophon Foundry), Mhitrogla (Brando Corradini), Avenir Next (Adrian Frutiger), and Domaine Text (Klim).




Further spreads: Behance (Alessandro Nobile) and issuu.com. All Rights Reserved.


          

Adobe presents AI-based tools that detect whether a face has been 'photoshopped' or remove the 'uhms' from an audio recording

 Cache   

Adobe presents AI-based tools that detect whether a face has been 'photoshopped' or remove the 'uhms' from an audio recording

Adobe Max is one of the biggest conferences in the creative sector, bringing together developers, graphic designers and business leaders to learn about the industry's latest technological advances. These advances include the 'sneaks', experimental projects built by Adobe engineers that, over time, often end up being integrated into the company's official products.

But this year's edition of the conference hosted the presentation of an especially interesting 'sneak': a feature called 'About Face', capable of detecting (using machine learning) whether the face in a photograph has been manipulated in some way through image editing. In other words, the creators of Adobe Photoshop now let us know when a face has been 'photoshopped'.

Just apply 'About Face' to the image and the tool will estimate the probability that such a manipulation has taken place, a process that is very easy to carry out nowadays thanks to many kinds of software. 'About Face' focuses not so much on the face as a whole (as a facial identification algorithm might) as on individual pixels, thus providing a heat map of the areas of the image that were probably altered.

With luck, the tool can even try to reverse the manipulation; it is especially useful when the edit was made using Adobe Photoshop's 'Liquify' filter.

The steady rise of fakes and deepfakes, above all as a vehicle for spreading hoaxes through social media, and the influence this is having on the reputation of those affected as well as on public debate, is encouraging large companies to invest in new technologies capable of automating the detection of this kind of manipulation.

Deepfakes are a harder case, because the technology evolves quickly, but for traditional fakes, which are often the product of Photoshop use, tools like 'About Face' can prove remarkably useful.

Other AI applications presented at Adobe Max

But Adobe Max 2019 also unveiled other 'sneaks' based on the use of artificial intelligence...

All In: Ever had the problem of wanting to take a group photo without being forced to stay out of it? This tool allows exactly that: take a photo you are not in and add yourself from another photo taken in the same spot, without the 'photoshopping' being noticeable.

Sound Seek / Awesome Audio: Got audio clips where you do nothing but say "aahhh", "uhm" and similar repetitive sounds? The 'Sound Seek' tool also uses machine learning to erase them with a single click. If the problem is ambient noise or echo, 'Awesome Audio' will clean up the clip to deliver professional-sounding audio.


The story "Adobe presents AI-based tools that detect whether a face has been 'photoshopped' or remove the 'uhms' from an audio recording" was originally published on Xataka by Marcos Merino.


          

Google Nest Mini, review: changes imperceptible to the eye, but not to the ear

 Cache   

Google Nest Mini, review: changes imperceptible to the eye, but not to the ear

Although the second generation of the Google Home Mini has lost the Google surname along the way, this Nest Mini otherwise looks a lot like the previous version... at least at first glance. The most important changes are on the inside, with a third microphone, a dedicated chip to improve the sound and bass that Google says is twice as powerful. We have tested the Nest Mini to see how this evolution looks and sounds.

Technical specifications

Nest Mini
Dimensions: 98 mm (height) x 42 mm (diameter); weight: 181 grams
Speakers: 40 mm transducer
Microphones: Three (far-field)
Connectivity: Wi-Fi 802.11 (2.4 GHz and 5 GHz), Bluetooth 5.0, Chromecast built-in
Power: 15 W adapter, DC power connector
Other: Hardware switch to mute the microphone
Price: 59 euros

Changes imperceptible to the eye, but not to the ear

If something works, better not to touch it. The second generation of the Google Home Mini arrives with a design practically identical to its predecessor: pill-shaped, with dimensions that fit in the palm of your hand. So much so that if you put them face up, it is impossible to tell them apart by their fabric mesh. That said, while the first generation offered up to four colors to choose from, with the Nest Mini Google has reduced the options to two grayscale tones. Turning them over, we see that Google's conservatism has made one concession to practicality by adding a hole to hang it up like a picture. And honestly, it comes in very handy for keeping it out of the way. It still needs a power cable, though, along with some very useful cable guides so it doesn't dangle.

So the Nest Mini arrives with the same size and materials as the Google Home Mini, and its intended use remains the same: fundamentally, a compact device for controlling the compatible smart home. Although we can also listen to music, the sound quality will be lower than that of the larger members of the family. The reason? Driver size has an impact on sound quality.

In this respect, the Nest Mini keeps the Home Mini's driver size (40 millimeters), but Google has chosen to improve sound quality by adding one more microphone, which should improve its listening ability, and a dedicated chip. Among the functions of this machine learning chipset are improving the sound (as Google explains) and processing operations on the device itself rather than in the cloud, as the previous model did. Thanks to the combination of chip and microphone array, the Nest Mini picks up ambient noise and adjusts the volume automatically.

What difference do these changes make in practice? We placed the Nest Mini in the same spot where we had the Google Home Mini, testing how it picks up commands from anywhere in the house and how music sounds through this compact device. Both struggle when we speak in a normal tone from the other end of the house or when we are on the terrace with outside noise, but the Nest Mini fails somewhat less often, hearing slightly better. Still, the improvement is very slight.

Map: we placed the speaker where the HOME icon is and tried giving it commands and listening to music from there.

Despite having a transducer of the same size, we did notice improvements in the sound output, something that is more noticeable when pushing the volume to the maximum while listening to music, where the Nest Mini delivers a deeper sound, especially in the bass, creating the feeling that it sounds louder and punchier.

This Nest Mini still needs Wi-Fi for setup and use, although it has now inherited a connectivity option from its big brother the Google Home: Bluetooth, which in this case arrives in version 5.0. This addition enables a new use: although we still cannot connect another speaker via a 3.5 mm jack because it lacks that port, we can connect speakers over Bluetooth. Another change is how it is powered: from the Google Home Mini's 9 W microUSB, it now requires 15 W through a DC connector.

Some important details about the user experience

When I bought the Google Home Mini, I did so because I wanted a small speaker to play music from my tablet or phone, replacing a Bluetooth model that had stopped working. With the Google Home Mini I lost the freedom of no cables (although there are accessories such as batteries to use it without a plug), but I gained a device that let me try out home automation, and did so for practically the same price. In this respect, the Nest Mini repeats both functions but goes a step further acoustically speaking: for how small it is, it sounds very good. And thanks to the hole, you hang it on the wall and hardly notice it.

Making the jump from my usual Google Home Mini to the Nest Mini posed no learning challenge: setting it up is just as simple and quick, requiring the Google Home app on my phone and the password of my home Wi-Fi so that both are connected to the same network. Note that if you want to take it somewhere else for a few days, you have to go to the app and delete that Wi-Fi network from the device, otherwise you won't be able to. In this respect, it would be nice if it had a factory reset button.

Speaking of buttons, like its predecessor the Nest Mini has a mechanical switch to turn off the microphones for those moments when we want privacy. In practice, it takes the same effort to walk over and unplug it from the outlet. It would be nice if listening could be disabled not only manually but also with a voice command.

From here on, the magic words are "Ok Google", followed by the command in question. In this respect, talking to Google Assistant is still not for everyone: the success or failure of the conversation depends on how you ask for things or even how you pronounce them.

App: interface of the Google Home app, with installed devices.

My home is fairly simple in smart home terms: in compatible devices I have a Chromecast connected to the TV, a smart bulb and a connected robot vacuum. As for services, I have linked my Spotify (free) and Netflix accounts. That said, Google's ecosystem is, along with Amazon's Echo, among the most complete in terms of compatible devices on offer.

Among the commands I can give Google Assistant are things like "Ok Google, play 'La casa de las flores' on Netflix", "Ok Google, start the Roomba", "OK Google, turn on the light", "Ok Google, is the light on?". But the truth is that my main use is saying good morning so it reads me the day's headlines, and playing music on Spotify. If you have the free account, as I do, the functions are limited, so you cannot play specific artists or songs, but you can ask for your own playlists or radio stations.

Nest Mini, Xataka's verdict

With the Nest Mini, Google repeats the recipe of its Mini model: a compact, affordable smart speaker for people who either specifically want a small device or are curious about home automation and voice assistants without being sure how much use they will get out of them.

In this respect, the Nest Mini more than delivers and offers an important bonus: the vast ecosystem of devices compatible with Google Assistant and Mountain View's technical muscle. The small package should not fool us: its voice assistant's functionality is the same as in more expensive models that are more ambitious in other respects, with all the good and bad that comes with talking to a machine.

Whether we end up getting the most out of the assistant or not, we can always fall back on using it for what it is: a speaker. This is where Google has made the biggest effort with its Nest Mini, taking a step forward in punch. It is by no means a speaker to use as a TV's audio output, nor one for the most demanding users to adopt as a home sound system, but if what we want is to listen to music around the house at acceptable quality and volume, it is surprising that something so small produces sound like this.

The device was loaned for testing by Google. You can consult our policy on relationships with companies.


The story "Google Nest Mini, review: changes imperceptible to the eye, but not to the ear" was originally published on Xataka by Eva Rodríguez de Luis.


          

[Interview] This Vancouver-based Startup Plans To Boost Drug Design With AI

 Cache   

Variational AI is a newly formed artificial intelligence (AI)-driven molecule discovery & drug design startup out of Vancouver, British Columbia, Canada. The company has developed Enki, an AI-powered small molecule discovery service. 

The founders of Variational AI are planning to build on top of their state-of-the-art expertise in machine learning, reflected in more than 40 research publications, including those presented at NIPS/NeurIPS, ICML, ICLR, CVPR, ICCV, and other top events in the area of artificial intelligence research.


          

Au Machine Learning Chloe Agathe

 Cache   
Au Machine Learning Chloe Agathe
          

Teaching assistant in Machine learning and natural language processing

 Cache   
PhD & research, part-time job at Copenhagen Business School - CBS, Greater Copenhagen, Øresund Region (application deadline: 22.11.2019)
          

PhD Project in Geometric Machine Learning

 Cache   
PhD & research at the Technical University of Denmark (DTU), Greater Copenhagen, East Zealand, Øresund Region (application deadline: 03.01.2020)
          

Neural Magic gets $15M seed to run machine learning models on commodity CPUs

 Cache   
Neural Magic, a startup founded by a couple of MIT professors, who figured out a way to run machine learning models on commodity CPUs, announced a $15 million seed investment today. Comcast Ventures led the round, with participation from NEA, Andreessen Horowitz, Pillar VC and Amdocs. The company had previously received a $5 million pre-seed, […]
          

Remixing just got easier

 Cache   
The engineering team behind streaming music service Deezer just open-sourced Spleeter, their audio separation library built on Python and TensorFlow that uses machine learning to quickly and freely separate music into stems [their component tracks]. ... But how are the results? I tried a handful of tracks across multiple genres, and all performed incredibly well.
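As a minimal sketch of that separation, the following assumes the spleeter package is installed and follows its documented two-stem usage; the input file name is a placeholder.

```python
# Split a track into vocals + accompaniment with Spleeter's pre-trained 2-stem model.
# "song.mp3" is a placeholder input file.
from spleeter.separator import Separator

separator = Separator("spleeter:2stems")
separator.separate_to_file("song.mp3", "output/")
```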

          

Customer Success Manager

 Cache   
CUSTOMER SUCCESS MANAGER Location: Cambridge, UK Contact: careers@speechmatics.com “Working for a company focussing on machine learning and speech recognition makes you realise you are working in the thick of the next big leap in tech
          


[AN #72] Errata

 Cache   
Published on November 7, 2019 7:10 PM UTC

Due to a publishing error, one of the opinions in yesterday's newsletter was overwritten by the opinion for the Ought update. Here's the article, with the summary sent yesterday and the actual opinion.

Learning human intent

Norms, Rewards, and the Intentional Stance: Comparing Machine Learning Approaches to Ethical Training (Daniel Kasenberg et al) (summarized by Asya) (H/T Xuan Tan): This paper argues that norm inference is a plausible alternative to inverse reinforcement learning (IRL) for teaching a system what people want. Existing IRL algorithms rely on the Markov assumption: that the next state of the world depends only on the previous state of the world and the action that the agent takes from that state, rather than on the agent’s entire history. In cases where information about the past matters, IRL will either fail to infer the right reward function, or will be forced to make challenging guesses about what past information to encode in each state. By contrast, norm inference tries to infer what (potentially temporal) propositions encode the reward of the system, keeping around only past information that is relevant to evaluating potential propositions. The paper argues that norm inference results in more interpretable systems that generalize better than IRL -- systems that use norm inference can successfully model reward-driven agents, but systems that use IRL do poorly at learning temporal norms.

Asya's opinion: This paper presents an interesting novel alternative to inverse reinforcement learning and does a good job of acknowledging potential objections. Deciding whether and how to store information about the past seems like an important problem that inverse reinforcement learning has to reckon with. My main concern with norm inference, which the paper mentions, is that optimizing over all possible propositions is in practice extremely slow. I don't anticipate that norm inference will be a performance-tractable strategy unless a lot of computation power is available.

Rohin's opinion: The idea of "norms" used here is very different from what I usually imagine, as in e.g. Following human norms (AN #42). Usually, I think of norms as imposing a constraint upon policies rather than defining an optimal policy, (often) specifying what not to do rather than what to do, and being a property of groups of agents, rather than of a single agent. (See also this comment.) The "norms" in this paper don't satisfy any of these properties: I would describe their norm inference as performing IRL with history-dependent reward functions, with a strong inductive bias towards "logical" reward functions (which comes from their use of Linear Temporal Logic). Note that some inductive bias is necessary, as without inductive bias history-dependent reward functions are far too expressive, and nothing could be reasonably learned. I think despite how it's written, the paper should be taken not as a denouncement of IRL-the-paradigm, but a proposal for better IRL algorithms that are quite different from the ones we currently have.



Discuss
          

Hyundai Motor Group develops Smart Cruise Control technology with Machine Learning

 Cache   
Hyundai Motor Group develops Smart Cruise Control technology with Machine Learning

Hyundai Motor Group has announced the development of the first Smart […]

The article Hyundai Motor Group develops Smart Cruise Control technology with Machine Learning first appeared on Villaggio Tecnologico.


          

Mitchell Machine Learning Solutions

 Cache   
Mitchell Machine Learning Solutions
          

How to operationalize data science and machine learning without IT hurdles

 Cache   

Learn about RapidMiner Managed Server, our services offering to install, configure, and maintain a RapidMiner environment for you.

The post How to operationalize data science and machine learning without IT hurdles appeared first on RapidMiner.


          

Announcing ML.NET 1.4 general availability (Machine Learning for .NET)

 Cache   

Coinciding with the Microsoft Ignite 2019 conference, we are thrilled to announce the GA release of ML.NET 1.4 and updates to Model Builder in Visual Studio, with exciting new machine learning features that will allow you to innovate your .NET applications.

The post Announcing ML.NET 1.4 general availability (Machine Learning for .NET) appeared first on .NET Blog.



