
          

STG2020-28-2733 - Final-year internship - Materials specialty (M/F)

Contract type: Internship
The position:
The internship is part of efforts to control the ageing of materials in process equipment upstream and downstream of the nuclear fuel cycle. The internship proceeds in two chronological stages. Stage 1: a bibliographic stage on the corrosion of stainless steels, centred on verifying an existing database built from a large body of documents (test reports & publications). This verification will involve: * reading and understanding these documents, then meticulously checking the database document by document; * exercising the database with simple displays, in order to answer requests from the various experts and collect their remarks. Stage 2: more advanced analyses based on the database, centred on concrete engineering problems related to the ageing of industrial equipment. Depending on the intern's progress, advanced statistical tools such as regression & classification by machine learning may be used. The intern will gain visibility through direct exchanges with project officers and the various engineering bodies (disciplines, experts, projects), at progress meetings and while gathering information for the internship. The supervisor can help the intern write the end-of-internship report through proofreading and comments. The Materials intern will acquire: applied knowledge of materials ageing in an industrial environment, in particular the corrosion of stainless steels; knowledge of the nuclear fuel cycle; an understanding of the interfaces between the industrial operator, R&D project officers, research laboratories and engineering (within which the internship takes place); and a grounding in Python (or deeper Python knowledge if the intern already has an affinity for programming), with applications to industrial cases.
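
As a taste of what Stage 1's checks could look like in practice, here is a minimal Python sketch, assuming a hypothetical CSV export of the corrosion database; the file name and the columns `doc_id`, `alloy` and `corrosion_rate_um_per_yr` are illustrative, not from the posting:

```python
import pandas as pd

# Hypothetical export of the corrosion database (file and column names illustrative).
db = pd.read_csv("corrosion_db.csv")

# Document-by-document consistency checks.
print(db.isna().sum())                         # missing values per column
print(db.duplicated(subset=["doc_id"]).sum())  # source documents entered twice

# A simple display to discuss with the experts: corrosion rate summarized by alloy.
print(db.groupby("alloy")["corrosion_rate_um_per_yr"].describe())
```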

City: Equeurdreville

          

Le machine learning pour les nuls

Le machine learning pour les nuls (Machine Learning For Dummies), by J. Mueller and L. Massaron
          

Upscaling urban data science for global climate solutions


Creutzig, F., Lohrey, S., Bai, X., Baklanov, A., Dawson, R., Dhakal, S., Lamb, W. F., McPhearson, T., Minx, J., Munoz, E. & Walsh, B., 1 Jan 2019, In: Global Sustainability. 2, e2.

Research output: Contribution to journal › Journal article › Research › peer-review

Non-technical summary: Manhattan, Berlin and New Delhi all need to take action to adapt to climate change and to reduce greenhouse gas emissions. While case studies on these cities provide valuable insights, comparability and scalability remain sidelined. It is therefore timely to review the state-of-the-art in data infrastructures, including earth observations, social media data, and how they could be better integrated to advance climate change science in cities and urban areas. We present three routes for expanding knowledge on global urban areas: mainstreaming data collections, amplifying the use of big data and taking further advantage of computational methods to analyse qualitative data to gain new insights. These data-based approaches have the potential to upscale urban climate solutions and effect change at the global scale.

Technical summary: Cities have an increasingly integral role in addressing climate change. To gain a common understanding of solutions, we require adequate and representative data of urban areas, including data on related greenhouse gas emissions, climate threats and socio-economic contexts. Here, we review the current state of urban data science in the context of climate change, investigating the contribution of urban metabolism studies, remote sensing, big data approaches, urban economics, urban climate and weather studies. We outline three routes for upscaling urban data science for global climate solutions: 1) mainstreaming and harmonizing data collection in cities worldwide; 2) exploiting big data and machine learning to scale solutions while maintaining privacy; 3) applying computational techniques and data science methods to analyse published qualitative information for the systematization and understanding of first-order climate effects and solutions. Collaborative efforts towards a joint data platform and integrated urban services would provide the quantitative foundations of the emerging global urban sustainability science.

Original language: English
Article number: e2
Journal: Global Sustainability
Volume: 2
DOIs
Publication status: Published - 1 Jan 2019

          

How to use AI and machine learning for service management.

          

Adobe appoints Nanda Kambhatla as India research team head

Based out of Bengaluru, Kambhatla will lead Adobe's research initiatives in the region, including artificial intelligence, natural language processing, machine learning, big-data analytics and insight, and content intelligence technologies. He will report to the Vice President of Adobe Research. Before joining Adobe, Kambhatla was the VP of Enterprise AI at Symphony AI.
          

CFPB shares 3 focus areas as AI and machine learning impact underwriting

          

Preventing undesirable behavior of intelligent machines


Intelligent machines using machine learning algorithms are ubiquitous, ranging from simple data analysis and pattern recognition tools to complex systems that achieve superhuman performance on various tasks. Ensuring that they do not exhibit undesirable behavior—that they do not, for example, cause harm to humans—is therefore a pressing problem. We propose a general and flexible framework for designing machine learning algorithms. This framework simplifies the problem of specifying and regulating undesirable behavior. To show the viability of this framework, we used it to create machine learning algorithms that precluded the dangerous behavior caused by standard machine learning algorithms in our experiments. Our framework for designing machine learning algorithms simplifies the safe and responsible application of machine learning.
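
The framework described here hinges on a safety test that returns a trained model only when a high-confidence bound on the undesirable behavior passes. Below is a minimal sketch of such a gate using a Hoeffding bound; the function names, the threshold, and the assumption that the behavior measure lies in [0, 1] are all illustrative, not the paper's code:

```python
import numpy as np

NO_SOLUTION_FOUND = None  # the framework may refuse to return any model

def safety_test(candidate, safety_data, g, threshold=0.1, delta=0.05):
    """Return the candidate model only if, with confidence 1 - delta, the
    expected undesirable-behavior measure g stays at or below threshold."""
    vals = np.array([g(candidate, x) for x in safety_data])
    n = len(vals)
    # Hoeffding upper confidence bound on E[g], assuming g(...) lies in [0, 1].
    upper = vals.mean() + np.sqrt(np.log(1.0 / delta) / (2.0 * n))
    return candidate if upper <= threshold else NO_SOLUTION_FOUND
```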


          

The best Black Friday 2019 deals on courses

Black Friday 2019 is here, and with the famous day comes an avalanche of deals, not only on the Friday itself but also in the days before and after. At Genbeta we have been rounding up some of the most interesting offers on software and services, and now we do the same with online courses in Spanish on computing, programming, hacking and cybersecurity.

While there is a lot you can learn online for free, the more complete educational offerings, with certification and other benefits, are usually paid. But online education platforms do not miss the opportunity, during these last days of November and the beginning of December, to join in with their own promotions.

Courses for 9.99 euros on Udemy


Courses for 9.90 euros on Domestika


Courses at 70% off on Tutellus


Use the coupon BLACKFRIDAY before enrolling to save up to 70%.

25% off Securizame's online courses


Apple Coding Academy

Until December 5 you can get a 30% discount on the Swift 5.1 and SwiftUI app development courses. If you combine the offers and take both courses, you can get a 35% discount. Apple Coding's courses on Udemy are 52% off.

More deals

  • 3 months of Amazon Kindle Unlimited, normally 29.97 euros, for free.
  • 4 months of Amazon Music Unlimited for 0.99 euros.
  • 30 days of Amazon Prime free.

You can stay up to date and informed at all times about the main deals and news from Xataka Selección on our Telegram channel or on our Twitter and Facebook profiles and the Flipboard magazine. You can also take a look at the bargain hunting of Xataka Móvil, Xataka Android, Xataka Foto, Vida Extra, Espinof and Applesfera, as well as our colleagues at Compradicción. You can see all the bargains they publish on Twitter and Facebook, and even subscribe to their alerts via Telegram.

You can also find the best Black Friday 2019 deals here.


Note: some of the links published here are affiliate links. Even so, none of the courses mentioned were proposed by the sites; their inclusion is solely a decision of the editorial team.

The article "Las mejores ofertas en cursos del Black Friday 2019" was originally published on Genbeta by Gabriela González.


          

IT / Software / Systems: Lead Software Engineer - Full Stack (Salt Lake City, UT) - Salt Lake City, Utah

Earnest empowers people with the financial capital to live better lives. We're an accomplished team of design, math, finance and technology geeks who believe consumer lending can be radically improved and are doing something about it. We created a company that combines data science, streamlined design, and technology to build products that simplify the lending process, provide them to more people, and engage with our customers through more human experiences.

As a Lead Engineer at Earnest you'll provide technical direction and build the software that is revolutionizing consumer lending, automating the loan approval process and orchestrating the transfer of billions of dollars. In addition to the $3+ billion in loans serviced, we build tools to maximize Earnest's growth while providing the best possible client experience. Our focus is on building a modern platform that allows us to move faster over time. This means a willingness to rethink domains from first principles and an ability to collaborate well across technical and non-technical teams.

In this role, you will:
  • Create architecture plans and present them to the engineering team
  • Mentor and provide guidance on best practices to team members
  • Launch new features such as an integration with strategic partners, acceleration of loan funding, and integration of data science and machine learning models to automate loan decisions
  • Work with Product Managers on priorities with the team

At Earnest, we use Node.js and TypeScript on the server side. On the front end we use React/Redux for building new things and Angular for everything else. We deploy services in Docker and Kubernetes on AWS. We integrate with other internal microservices (written in Node.js and Scala) and store the bulk of our data in Postgres and Amazon S3.

Ideal background and expertise:
  • 6+ years of professional development experience
  • Experience with server-side concepts, e.g. microservices, databases, caching, performance, monitoring and scalability
  • Extensive experience with modern Node.js preferred
  • Professional experience in React/Redux desirable
  • Relevant data modeling experience and integration with databases such as PostgreSQL
  • Experience working in Fintech, Banking, or related Consumer Financial Services companies is a plus

Earnest Perks & Benefits:
  • Health, Dental, & Vision benefits plus savings plans
  • Employee Stock Purchase Plan
  • 401(k) plan to help you save for retirement, plus a company match
  • Tuition reimbursement program
  • $1000 flight on each Earnie-versary to anywhere in the world and 25 days of annual PTO

Earnest provides equal employment opportunities (EEO) to all employees and applicants for employment without regard to race, color, religion, sex, national origin, age, sexual orientation, disability, genetics, gender identity or expression, or veteran status. Qualified applicants with criminal histories will be considered for the position in a manner consistent with the Fair Chance Ordinance.
          

ARUBA: Learning-to-Learn with Less Regret

 Cache   
In the classical machine learning setup, we aim to learn a single model for a single task given many training samples from the same distribution. However, in many practical applications, we are in fact exposed to several distinct yet related tasks that have only a few examples each. Because the data now come from different training distributions, simply learning a single global model, e.g., via stochastic gradient descent (SGD), may result in poor performance on each task. As a result, designing algorithms for learning-to-learn from multiple tasks has become a major area of study in machine learning, with the promise of improving performance on variety of tasks ranging from personalized next-character prediction on your smartphone to fast robotic adaptation in diverse environments.
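
To make the contrast concrete, here is a toy numpy sketch of one learning-to-learn strategy, a Reptile-style learned initialization (an illustration of the setup, not the ARUBA algorithm itself): each task has too few samples to learn well alone, so the outer loop learns an initialization that a few inner SGD steps can adapt quickly.

```python
import numpy as np

rng = np.random.default_rng(0)

def task_grad(w, X, y):
    """Gradient of mean squared error for a linear model."""
    return 2 * X.T @ (X @ w - y) / len(y)

meta_w = np.zeros(5)                # initialization shared across tasks
for _ in range(200):                # stream of distinct but related tasks
    true_w = rng.normal(size=5)     # each task has its own solution
    X = rng.normal(size=(10, 5))    # only 10 samples per task
    y = X @ true_w
    w = meta_w.copy()
    for _ in range(5):              # a few inner SGD steps adapt to the task
        w -= 0.1 * task_grad(w, X, y)
    meta_w += 0.1 * (w - meta_w)    # Reptile-style outer update of the init
```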
          

Federated Learning: Challenges, Methods, and Future Directions

What is federated learning? How does it differ from traditional large-scale machine learning, distributed optimization, and privacy-preserving data analysis? What do we understand currently about federated learning, and what problems are left to explore? In this post, we briefly answer these questions, and describe ongoing work in federated learning at CMU.
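
For a concrete anchor before those questions are answered, here is a minimal sketch of one round of Federated Averaging (FedAvg, McMahan et al. 2017), the baseline algorithm most federated learning work starts from; the linear-model clients below are toy stand-ins:

```python
import numpy as np

def fedavg_round(global_w, clients, lr=0.1, local_steps=5):
    """One round of FedAvg over a list of (X, y) client datasets."""
    updates, sizes = [], []
    for X, y in clients:
        w = global_w.copy()
        for _ in range(local_steps):                  # local SGD on private data
            w -= lr * 2 * X.T @ (X @ w - y) / len(y)  # squared-error gradient
        updates.append(w)
        sizes.append(len(y))
    # Size-weighted average of client models; raw data never leaves the clients.
    return np.average(updates, axis=0, weights=sizes)
```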
          

Amazon introduced a new version of its self-driving model car DeepRacer

Amazon has unveiled a new version of its self-driving model car DeepRacer, which is positioned as a platform for engineers and programmers to study machine learning algorithms. The new model received a stereo camera and lidar. In addition, the company will add obstacles to the races, as well as allow races of two […]

The post Amazon introduced a new version of unmanned machine DeepRacer appeared first on Revyuh.


          

Remaining human in an AI recruitment environment

Technology has always played a role in the recruitment process, but more recent advances in artificial intelligence (AI) and machine learning have catapulted it to the top of the agenda of every chief human resources officer. From talent engagement to employer branding and candidate assessment, a recent JazzHR study revealed that six in ten […]

The post Remaining human in an AI recruitment environment appeared first on Recruitment, Hiring and Job Board Blog.


          

DLME: Distributed Log Mining Using Ensemble Learning for Fault Prediction

Fault prediction problems in network systems often prove onerous for network management. One effective measure is to constantly monitor and analyze the unceasing stream of network logs that capture the activities of a network. Learning algorithms are quite useful for this purpose. However, due to the dynamic nature of network systems, a frequent drift in the logged data may occur, which in turn affects the efficiency of the learning algorithms. In this paper, we present a general-purpose algorithmic framework for developing an easily parallelizable distributed log mining approach, which uses machine learning and distributed processing to achieve a better quality of network services. Our proposed approach continuously handles the dynamic nature of network logs by tracking changes in the distribution of logs and taking adequate actions accordingly. The entire problem is framed as a distributed learning environment, where the complete set of logs is partitioned into assorted data chunks and a distributed weighted ensemble is generated from these chunks. Furthermore, our method is tested on a real dataset, and experimental analysis shows that a fair amount of scalability and accuracy can be obtained.
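
A minimal sketch of the weighted-ensemble idea the abstract describes: fit one learner per log chunk and weight each by its accuracy on the most recent logs, so learners trained before a drift lose influence. The learner choice, the weighting rule and the assumption that every chunk contains all classes are illustrative, not the paper's exact method:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def fit_chunk_ensemble(chunks, X_recent, y_recent):
    """chunks: list of (X, y) log partitions; weights reflect recent accuracy."""
    models, weights = [], []
    for X, y in chunks:
        m = DecisionTreeClassifier(max_depth=5).fit(X, y)
        models.append(m)
        weights.append(m.score(X_recent, y_recent))  # drift-aware weight
    w = np.array(weights) / np.sum(weights)

    def predict(X):
        # Weighted vote over per-chunk learners (assumes a shared class set).
        probs = np.stack([m.predict_proba(X) for m in models])
        return (w[:, None, None] * probs).sum(axis=0).argmax(axis=1)

    return predict
```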
          

ML-Assisted DVFS-Aware HEVC Motion Estimation Design Scheme for Mobile APSoC

High-efficiency video coding (HEVC) is the latest video standard, and a variety of HEVC chips are being incorporated into the application processor system-on-a-chip (APSoC) within mobile devices. However, the coding bandwidth accessed for the motion estimation (ME) operation within HEVC has changed over time because of the adoption of the intelligent power management mechanism within the APSoC. To achieve a low-power high-performance HEVC design, the on-demand coding bandwidth must be considered in the design. In this paper, we develop an intelligent dynamic voltage-frequency-scaling-aware (DVFS-aware) coding-bandwidth-efficient HEVC ME controller algorithm model. We also develop a low-power high-performance VLSI hardware architecture by joining a machine learning (ML) scheme and a convex optimization (CO) method that is adaptive to the time-altering coding bandwidth. The proposed HEVC ME controller design can be integrated into ME to realize a coding bandwidth, coding bit rate, and coding-quality-optimized HEVC ME design for mobile APSoC, therein utilizing an intelligent power management mechanism. The experimental results show the effectiveness of the power and performance of the proposed design.
          

Servitization: The Syncron Uptime solution accelerates the transition from traditional after-sales service to the Product-as-a-Service (PaaS) concept

This new cloud solution for equipment manufacturers relies on machine learning and artificial intelligence technologies to predict failures and maximize asset availability.
Syncron, a publisher of cloud solutions for spare parts management...


          

Senior Business Analyst - Data

You are a passionate Senior Analyst who lives and breathes the product details and customers' requirements. You can have insightful conversations with business owners (product managers), Product Owners, Data Scientists, Machine Learning Engineers and technical architects to deliver value for the customers you represent. You have an analytical mind but also the interpersonal skills to elicit needs from what a customer wants. You have worked in agile teams, working closely with the plethora of roles needed to define a successful product in the market. You are an enthusiastic self-starter, partnering with the Product Owner to refine and execute a product backlog, and working closely with the Product team, Architecture and Developers across London, Luton and TUI's Source Markets around Europe to deliver data-science-based products.

You will maintain, groom and prioritise a backlog of product features with the technical PO. You will work with Data Scientists and ML Engineers in an agile manner to provide the insights for the analytics foundations of TUI. You will interact with the customer domain to identify, capture and analyse business requirements. You will perform data analysis, including data mapping and report analysis, and help define data interfaces. You will work in our London Bridge office or in Luton, with some home working. You will be a pivotal member of our team and have the opportunity to lay the foundations of TUI's future digital presence in the marketplace.

This role will focus on delivering excellence in data analytics by identifying the different types of data that need tracking to improve business performance. You will spot trends in data specific to the business area being supported. You will develop and maintain data models in collaboration with Data & Analytics Business Partners, conduct market and other research relevant to the business area, and prepare and present regular and ad hoc analytics findings and recommendations for action (if applicable) to the business area leadership team as appropriate.

At TUI, we never stop looking ahead, seeking new ways to delight our customers and grow our business. We recognise the power of digital and the massive contribution this brings to creating a truly unique and differentiated customer experience. TUI Group is the world's number one integrated tourism business. The Group umbrella consists of strong tour operators, 1,800 travel agencies and leading online portals, six airlines with more than 130 aircraft, over 300 hotels with 210,000 beds, twelve cruise liners and countless incoming agencies in all major holiday destinations around the globe. All this enables us to provide our 30 million customers with an unmatched holiday experience in 180 regions.

What you will be doing:
  • Be the data expert for the delivery functions
  • Collaborate with Data Scientists to explore opportunities to deploy advanced analytics such as machine learning and predictive analytics
  • Actively participate in the TUI Analytics Community
  • Help more junior analysts in their roles
  • Understand the business requirements and map out the data required to serve those requirements
  • Provide in-depth marketing, sales and service analysis across all channels as and when required, including analysis of customer interaction patterns and frequencies
  • Proactively make clear recommendations to help maximise performance

What we are looking for:
  • 5 years of experience as a business analyst
  • Requirements management (stories and scenarios)
  • Detailed analytical abilities
  • Written and verbal communication, preferably with technical writing skills
  • Good understanding of Agile best practice
  • Understanding of data science practices
  • Facilitation and elicitation skills
  • Previous work on a "Big Data platform" enabling non-technical users to gain insight into key business metrics
  • Some SQL and data profiling skills (desirable)
  • Experience modelling and evolving complex data structures
  • Solid understanding of data warehousing principles, concepts and best practices (e.g. ODS, Data Mart, Data Lakes)
  • A natural curiosity and passion for diving into the data and driving game-changing decision-making
  • Comfort working in a dynamic environment with a certain degree of uncertainty
  • A real can-do attitude with a willingness to just get on with the job and make great things happen
  • Strong experience in user testing and project management (desirable)
  • Experience with machine learning and models in terms of what they can do, rather than how (desirable)

Working within TUI Group: TUI's vision is to make travel experiences special. To fulfil this vision, we never stop looking ahead, seeking new ways to delight our customers and grow our business. TUI Group is one of the world's leading leisure travel groups, with over 220 trusted brands in 180 countries and more than 30 million customers. Help make our customers smile and in return you'll be exceptionally rewarded with a competitive salary and pension scheme; additional benefits include holiday, flight and foreign exchange discounts, a childcare voucher scheme and free annual travel insurance*. Please visit our website for more information.

How to apply: To apply for this role, please log on to https://tuijobsuk.co.uk where you will be required to log in, provide personal details and complete an online application form.
          

Postdoctoral researcher/Researcher in machine learning

          

Exploiting GPUs for Efficient Gradient Boosting Decision Tree Training

In this paper, we present a novel parallel implementation for training Gradient Boosting Decision Trees (GBDTs) on Graphics Processing Units (GPUs). Thanks to their excellent results on classification/regression and open-sourced libraries such as XGBoost, GBDTs have become very popular in recent years and have won many awards in machine learning and data mining competitions. Although GPUs have demonstrated their success in accelerating many machine learning applications, it is challenging to develop an efficient GPU-based GBDT algorithm. The key challenges include irregular memory accesses, many sorting operations with small inputs, and varying data-parallel granularities in tree construction. To tackle these challenges on GPUs, we propose various novel techniques including (i) run-length encoding compression and dynamic thread/block workload allocation, (ii) data partitioning based on stable sort, plus fast and memory-efficient attribute ID lookup in node splitting, (iii) finding approximate split points using two-stage histogram building, (iv) building histograms with awareness of sparsity and exploiting histogram subtraction to reduce histogram-building workload, (v) reusing intermediate training results for efficient gradient computation, and (vi) exploiting multiple GPUs to handle larger data sets efficiently. Our experimental results show that our algorithm, named ThunderGBM, can be 10x faster than the state-of-the-art libraries (i.e., XGBoost, LightGBM and CatBoost) running on a relatively high-end workstation with 20 CPU cores. In comparison with the libraries on GPUs, ThunderGBM can handle higher-dimensional problems on which those libraries become extremely slow or simply fail. For the data sets the existing GPU libraries can handle, ThunderGBM achieves up to 10 times speedup on the same hardware, which demonstrates the significance of our GPU optimizations. Moreover, the models trained by ThunderGBM are identical to those trained by XGBoost, and have similar quality to those trained by LightGBM and CatBoost.
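
Technique (iv), histogram subtraction, is worth spelling out since it is a staple of histogram-based GBDT training: once a node's gradient histogram and one child's histogram are built, the sibling's histogram is obtained by subtraction rather than by a second pass over the data. A minimal sketch (array layout illustrative, not ThunderGBM's code):

```python
import numpy as np

def child_histograms(bin_ids, grads, left_mask, n_bins):
    """Gradient histograms for a split: build parent and left child, then
    derive the right child by subtraction instead of another data pass."""
    parent = np.bincount(bin_ids, weights=grads, minlength=n_bins)
    left = np.bincount(bin_ids[left_mask], weights=grads[left_mask],
                       minlength=n_bins)
    right = parent - left  # histogram subtraction: no extra pass over the data
    return parent, left, right
```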
          

Chief Architect in AI and Machine Learning - Leidos - Arlington, VA

Advising the C-suite and Leidos business area leaders on a broad range of technology, strategy, and policy issues associated with AI. Travel: Yes, 10% of the time.
From Leidos - Tue, 12 Nov 2019 06:01:04 GMT - View all Arlington, VA jobs
          

Sr Software Engineer - Computer Vision, Embedded and Distributed Systems - West Pharmaceutical Services - Exton, PA

Work with internal stakeholders to understand business needs and translate to technical designs. Familiarity with Machine Learning is a plus.
From West Pharmaceutical Services - Sat, 17 Aug 2019 08:19:33 GMT - View all Exton, PA jobs
          

Customer Data Scientist (Chicago) - h2o.ai - Chicago, IL

Engineers, business people, and executives. Training advanced machine learning models at scale in distributed environments, influencing next generation data…
From h2o.ai - Fri, 23 Aug 2019 05:24:30 GMT - View all Chicago, IL jobs
          

Customer Data Scientist (Mountain View) - h2o.ai - Mountain View, CA

Engineers, business people, and executives. Training advanced machine learning models at scale in distributed environments, influencing next generation data…
From h2o.ai - Fri, 23 Aug 2019 05:24:30 GMT - View all Mountain View, CA jobs
          

Sustaining Engineer (Multiple States) - h2o.ai - Mountain View, CA

Understanding of Data Science and Machine Learning concepts, algorithms, etc. Ability to work with Data Science, Machine Learning, and Big Data Hadoop technologies.
From h2o.ai - Fri, 01 Mar 2019 06:27:40 GMT - View all Mountain View, CA jobs
          

Customer Data Scientist (New York) - h2o.ai - New York, NY

Engineers, business people, and executives. Training advanced machine learning models at scale in distributed environments, influencing next generation data…
From h2o.ai - Fri, 23 Aug 2019 05:24:30 GMT - View all New York, NY jobs
          

Deployment Engineer (Machine Learning) - Clarifai - Washington, DC

Experience with machine learning or data science experiments. You will develop reusable modules, components, and build tools for both internal and external use…
From Clarifai - Thu, 12 Sep 2019 16:52:34 GMT - View all Washington, DC jobs
          

Workers With a Bachelor's Degree Tend To Be Up To 5 Times More Exposed To Job Loss From AI


A new report by the Brookings Institution has debunked the notion that blue-collar jobs like fast food preparation and machine operation will be the most affected by AI. The report concludes that highly educated, high-paying jobs will face the worst brunt of artificial intelligence, contrary to the previously established notion.

The study was conducted by Stanford University doctoral candidate Michael Webb, who analyzed more than 16,000 AI patents and over 800 job descriptions.

The study finds that workers with a bachelor's degree will be impacted up to five times more by the advent of AI than workers who hold a high school degree, because AI tends to take over jobs that involve planning, reasoning, problem-solving, and predicting outcomes.

The Brookings study also lists the high-paying jobs that will see the highest exposure to AI. Here's the list:

  • Chemical engineers
  • Political scientists
  • Nuclear technicians
  • Physicists
  • Occupational therapists
  • Gas plant operators
  • Administrative law judges, adjudicators, and hearing officers

The job profiles listed above have a median yearly salary of around $100,000. For example, according to data from the Bureau of Labor Statistics, the median salary of a chemical engineer is $104,910 per year, while a nuclear technician earns a median salary of $79,140 per year.

This is a clear indication that AI will severely impact high-paying jobs and workers with higher educational degrees.

Speaking to CNBC, Webb says, “AI is good at tasks that involve judgment and optimization, which tend to be done by higher-skilled workers. So if you’re optimizing ads as an online marketer or a radiologist interpreting medical scans, all of these things take a long time for humans to be good at them. But when it comes to algorithms, once you have the right training data, they tend to be better than humans.”

What can be done to save your job from AI?

To mitigate AI's impact on their jobs, Anima Anandkumar, the director of machine learning research at Nvidia, says that workers should ask themselves the following three questions:

  • Is my job fairly repetitive?
  • Does my job involve a trove of data that could be used for training an AI system?
  • Are there any objectives that could be used for evaluating my job?

Anima says that workers must develop skills for jobs that require human interaction and creativity.


          

Bajaj invests $8 million in Yulu to boost EV adoption in India


Bengaluru: Leading bike maker Bajaj Auto on Tuesday announced an investment of $8 million in Bengaluru-based bike-sharing platform Yulu to boost electric vehicle adoption in India.

The fresh round of investment will be utilised for further strengthening of the mobility platform and deepening of the technology solutions for rapid expansion, Yulu said.

Yulu deploys machine learning and artificial intelligence (AI) to accurately predict the demand and supply of its assets and resources which ensures vehicle availability and operational efficiency.

"Yulu is the leading electric micro-mobility service provider that requires reliable, durable and comfortable electric vehicles to serve its customers, hence a committed manufacturing partner is crucial to our success. In Bajaj, Yulu finds this strategic partnership and it is a win-win relationship," Amit Gupta, Co-Founder and CEO, Yulu, said in a statement.

"Yulu's electric two-wheelers will help Indian commuters with the first and the last mile connectivity option. This partnership aims to solve the mobility challenges of urban India in an eco-friendly manner," Gupta said.

Yulu said it plans to increase its fleet size to 100,000 electric two-wheelers by December 2020 with an extensive network of its battery-swapping stations across the cities where it operates.

As part of their strategic relationship, Yulu will source from Bajaj electric two-wheelers which have been co-designed and manufactured exclusively for shared micro-mobility.

Bajaj will also consider facilitating the vehicle finance needs of Yulu for a large scale deployment of its micro-mobility electric vehicles.

"At BAL (Bajaj Auto Limited), we believe that the two factors of congestion reduction and pollution control will drive the segment of shared micro-mobility in the future," said Rajiv Bajaj, Managing Director of Bajaj.

"That coupled with the expansion of Mass Rapid Transport System like Metro in large cities will further boost the demand for flexible last-mile connectivity," Bajaj said.

Consumer adoption of EVs has been limited in India due to practical factors like range anxiety and availability.

Yulu said it is building an ecosystem of EV led micro-mobility and aims to expand its services to eight mega cities and select smart cities.



          

Google releases open-source, browser-based BodyPix 2.0

Over the last couple of years I’ve pointed out a number of cool projects (e.g. driving image search via your body movements) powered by my teammates’ efforts to 1) deliver great machine learning models, and 2) enable Web browsers to run them efficiently. Now they’ve released BodyPix 2.0 (see live demo): We are excited to announce the […]
          

deep learning artificial intelligence

We are working on a research project in which the following emotions should be detected for 40 students in a classroom. We are already identifying sad, happy, neutral, angry, etc., but we need the following behaviors extracted... (Budget: ₹1500 - ₹12500 INR, Jobs: Deep Learning, Machine Learning (ML), Python)
          

Fraud, Deceptions, and Downright Lies About Machine Learning Mathematics Exposed

Want to Know More About Machine Learning Mathematics? The relational database maintains the output created by the info extraction. Typically, the choice of activation function at the output layer is determined by the sort of cost function. The example above is extremely […]
          

Detecting Thermal Discomfort of Drivers Using Physiological Sensors and Thermal Imaging

Recent technological developments have been used extensively in manufacturing vehicles in order to improve the driving experience and add multiple safety features. This article introduces a novel machine learning approach using physiological sensors and thermal imaging of the subjects to detect human thermal discomfort in order to develop a fully automated climate control system in the vehicles that does not need any explicit input from individuals. To achieve this goal, a dataset of thermal videos and physiological signals from 50 subjects is collected, an extensive analysis of different feature sets is conducted, a multimodal approach is experimented, and a cascaded classification system is proposed. Our results evidently show the capability of specific feature sets of detecting human thermal discomfort as well as the superior performance of integrating multimodal features.
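
A minimal sketch of a cascaded setup in the spirit of the abstract: a first-stage classifier uses only physiological features, and samples it is unsure about fall through to a second stage that also sees thermal-imaging features. The models, the confidence threshold and the integer-coded labels are illustrative assumptions, not the paper's design:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def train_cascade(X_phys, X_thermal, y):
    # Stage 1 sees the cheap modality only; stage 2 sees both modalities.
    stage1 = RandomForestClassifier().fit(X_phys, y)
    stage2 = RandomForestClassifier().fit(np.hstack([X_phys, X_thermal]), y)
    return stage1, stage2

def predict_cascade(stage1, stage2, X_phys, X_thermal, conf=0.8):
    proba = stage1.predict_proba(X_phys)
    confident = proba.max(axis=1) >= conf      # stage 1 decides the easy cases
    pred = proba.argmax(axis=1)                # assumes labels coded 0..k-1
    if (~confident).any():                     # hard cases use both modalities
        X_both = np.hstack([X_phys, X_thermal])
        pred[~confident] = stage2.predict(X_both[~confident])
    return pred
```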
          

A-commerce is here, are you ready?

The exponential growth of electronic commerce, or e-commerce, is undeniable. And its rapid evolution, fueled by new technologies such as artificial intelligence, augmented reality and machine learning, is unstoppable. One of the trends within e-commerce that is booming the most and gaining the most volume...
          

Scaling HPC Education

With the explosion in artificial intelligence and machine learning, modeling, simulation, and data analytics, High Performance Computing (HPC) has grown to become an essential tool across academic disciplines. However, HPC expertise remains in short supply, with a shortage of people who know how to make HPC systems work and how to use them. At September's …
          

Machine Learning—state of the art: The critical role that machine learning can play in advancing cardiology was outlined at a packed session at ESC 2019

Speakers examined what machine learning can offer cardiology in the future, and also—in the abstract-based element of the session—focused on specific examples and studies where machine learning has been embraced to deliver results that may not otherwise have been attainable. In providing a perspective on machine learning, Professor Nicholas Duchateau from the University of Lyon noted that it was not a new concept and that there were previous machine learning booms in the 1960s and 1980s. ‘The difference with the boom we are undergoing is the spread of it and our access to more data—Big Data—and much more computational power’.
          

JavaScript Image Annotator Modification With ML

Category: AJAX, Javascript, Machine Learning (ML), PHP, Software Architecture
Budget: $250 - $750 USD

I am planning to use the VGG image annotator available here: http://www.robots.ox.ac.uk/~vgg/software/via/downloads/via3/via-src-3.0.6.zip to build a web-based image annotator. Now I want to load the data from the server as a JSON string, rather than through the local "open JSON file" dialog...
          

Parallel Data Lab Receives Computing Cluster from Los Alamos National Lab


Carnegie Mellon University has received a supercomputer from Los Alamos National Lab (LANL) that will be reconstructed into a computing cluster operated by the Parallel Data Lab (PDL) and housed in the Data Center Observatory. This new cluster will augment the existing Narwhal, also from LANL and made up of parts of the decommissioned Roadrunner supercomputer technology, the fastest supercomputer in the world from June 2008 to June 2009.

This new supercomputer, tentatively named Wolf, will be an important part of educating CMU's next generation of computer science professionals, researchers and educators. The system recently was retired from LANL's open institutional computing environment. While no longer efficient for simulation science, it still has high value as a training tool and for computer science research. Wolf is made up of 616 computing nodes, each containing two eight-core Intel Xeon Sandy Bridge processors, totaling 9,856 processing cores across the entire cluster. The cluster interconnect is QDR InfiniBand, providing a network that is 30 times faster than Narwhal. Altogether, it will have the capability of about 200 teraflops, where one teraflop represents one trillion computations per second.
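
As a quick back-of-the-envelope check on those figures (our arithmetic, not LANL's specification):

```python
nodes, sockets_per_node, cores_per_socket = 616, 2, 8
total_cores = nodes * sockets_per_node * cores_per_socket  # 616 * 16 = 9,856
peak = 200e12                                              # ~200 teraflops quoted above
print(total_cores, peak / total_cores)                     # ~20 GFLOP/s per core
```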

"Wolf's processing cores are each significantly faster than the previous system, and it consists of about 50 percent more computing nodes," said George Amvrosiadis, assistant research professor of electrical and computer engineering and the Parallel Data Lab (PDL). "We will be retiring the Narwhal nodes. Our experienced PDL team, with Jason Boles leading the installation effort, is doing this gradually to make sure everything works as expected."

In the five years since they received Narwhal from LANL, the researchers of the Parallel Data Lab have developed several projects with the computing cluster in service of educating the world's next thought leaders in several areas of computer science including: scalable storage, cloud computing, machine learning and operating systems.

 


          

Gilman, Roeder Named 2020 AAAS Fellows


Fred Gilman and Kathryn Roeder

Carnegie Mellon University's Fred Gilman and Kathryn Roeder have been selected fellows of the American Association for the Advancement of Science (AAAS).

AAAS is the world's largest general scientific society and publisher of several highly regarded journals, including "Science." Fellows are elected by their peers to honor their scientifically or socially distinguished efforts to advance science or its applications. This year, AAAS has elevated 443 of its members with this honor.

Gilman, the Buhl Professor of Theoretical Physics, was recognized for his work elucidating the fundamental nature of CP violation and his sustained and successful leadership in the particle physics and cosmology communities.

Gilman has a record of national and international professional service and leadership. Most recently, he served for six years as chair of the committee overseeing the construction of the Large Synoptic Survey Telescope. From 1999 to 2005, he chaired the High Energy Physics Advisory Panel, which advises the National Science Foundation and the Department of Energy (DOE) on setting the nation's priorities for particle physics. For over a decade he was one of three senior advisors designated by the DOE under the U.S.-China Agreement on Cooperation in High Energy Physics. This led to new Chinese facilities and increased collaboration in particle physics experiments in both China and the United States.

Gilman joined the Carnegie Mellon faculty in 1995. He was the head of the Department of Physics from 1999-2008, where he led its growth in emerging fields such as cosmology and biological physics. He was dean of the Mellon College of Science (MCS) from 2007-2016, where he ushered in many new innovations. He was deeply engaged in and actively oversaw the development and implementation of a new MCS Core Education for undergraduate students; introduced a framework for decreasing bias and increasing diversity in recruiting that has been adopted by other colleges at the university; joined with the dean of the College of Engineering to provide seed funding for new interdisciplinary collaborations; and led the construction of a major neurobiology facility while renovating many labs and creating interactive common spaces in the Mellon Institute.

Prior to arriving at Carnegie Mellon, Gilman was the associate director of the Superconducting Supercollider (SSC) and led the physics research portion of the SSC project. He previously spent 23 years as a faculty member at Stanford University.

Gilman's research has focused on the physics of heavy quarks and leptons and on the difference in the behavior of matter and antimatter (CP violation), which in turn is a key component of an explanation of the dominance of matter over antimatter in the universe.

Roeder, UPMC Professor of Statistics and Life Sciences, is being recognized for her distinguished contributions to statistical genetics and genomics methodology, outstanding research in genetics of autism spectrum disorder and contribution to statistical theory for mixture models.

Roeder started her research career in biology but was soon drawn to statistics. According to Roeder, "every question that interested me could only be answered by solving an even more intriguing statistical puzzle." Her first major data project was in DNA forensics, helping to solidify the credibility of this form of evidence in the judicial system.

As her scientific career advanced, Roeder transitioned to developing statistical and machine learning tools for finding associations or patterns in data. She focuses on high dimensional inference problems with applications such as analyzing variation in the whole human genome and how it relates to disease. Her work has contributed to a better understanding of schizophrenia, autism and other genetic disorders.

"My results for mixture models have been used to better understand a broad range of scientific phenomena," Roeder said. "This, for me, is the most satisfying aspect of statistics, when methods you develop are applied to answer important scientific questions."

Roeder holds a joint appointment at CMU, joining the Department of Statistics & Data Science in the Dietrich College of Humanities and Social Sciences in 1994 and the Computational Biology Department in the School of Computer Science in 2004. She also served as the university's vice provost for faculty from July 2015 through June 2019.

Roeder has published more than 150 scholarly articles and was elected to the National Academy of Sciences in 2019. She is an elected fellow of the American Statistical Association and the Institute of Mathematical Statistics. She received the Snedecor Award for outstanding work in statistical applications, the Janet L. Norwood Award for outstanding achievement by a woman in statistical sciences and the Committee of Presidents of Statistical Societies Presidents' Award for the outstanding statistician under age 40.

Roeder previously served as the statistics section chair for AAAS and has played an integral role in organizing conference sections aimed at helping her community improve how statistics are communicated across scientific disciplines, as well as to the public.

"I am deeply honored to receive this recognition by my peers," Roeder said. "I will continue to push the boundaries of statistics to uncover the genetic underpinning of diseases and disorders, like autism, and I will continue to work with my colleagues in other fields to translate these findings into biology and therapeutics."

Gilman and Roeder are among 33 AAAS Fellows who have called CMU home. They will be inducted on Saturday, Feb. 15, 2020 during the AAAS annual meeting in Seattle.


          

"60 Minutes" Highlights CMU Brain Science Research Advances


Ten years ago, Lesley Stahl, a correspondent with the CBS program "60 Minutes," interviewed Carnegie Mellon University's Marcel Just and Tom Mitchell about the use of brain imaging and machine learning to identify thoughts — based on brain activation patterns or neural signatures. During the program, the researchers showed how functional MRI could be used to identify the thought of a physical object, like a hammer, from a person’s brain scans. Read more from "60 Minutes."

With the decade drawing to a close, Stahl returned to CMU's Pittsburgh campus for an update. Just, the D.O. Hebb University Professor of Psychology at the Dietrich College of Humanities and Social Sciences, and his colleagues can now apply their research method to see brain activation patterns for scientific concepts. "It's like being an astronomer when the first telescope is discovered," Just said. His work is shining light on how abstract concepts, including emotions like "jealousy" and "faith," form in the brain. His latest work focuses on detecting whether or not a person has been thinking about suicide.

Watch Sunday's program on the "60 Minutes" website. 

At CMU, the birthplace of artificial intelligence and cognitive psychology, brain scientists have had real-world impact for over 50 years. Learn more about CMU's Neuroscience Institute.


          

IT / Software / Systems: Sr. Mgr, Software Engineering - Plano, Texas

Locations: TX - Plano, United States of America, Plano, Texas

At Capital One, we're building a leading information-based technology company. Still founder-led by Chairman and Chief Executive Officer Richard Fairbank, Capital One is on a mission to help our customers succeed by bringing ingenuity, simplicity, and humanity to banking. We measure our efforts by the success our customers enjoy and the advocacy they exhibit. We are succeeding because they are succeeding. Guided by our shared values, we thrive in an environment where collaboration and openness are valued. We believe that innovation is powered by perspective and that teamwork and respect for each other lead to superior results. We elevate each other and obsess about doing the right thing. Our associates serve with humility and a deep respect for their responsibility in helping our customers achieve their goals and realize their dreams. Together, we are on a quest to change banking for good.

Sr. Mgr, Software Engineering

At Capital One, we think big and do big things. We are a Top-10 bank by deposits, a high-tech company, scientific laboratory, and a nationally recognized brand all in one that reaches tens of millions of consumers. We're all-in on the cloud and a leader in the adoption of open source, RESTful APIs, microservices, and containers. We build our own products and release them with a speed and agility that allows us to get new customer experiences to market quickly. We're going boldly where no bank has gone before. And, as a founder-led company, we're inspired and empowered to make, break, do, and do good. So, let's do something great together.

Capital One is seeking experienced Technology Leads and Sr. Managers to lead and own workstreams for critical business initiatives in our financial services businesses, including our Rewards business. You'll work on everything from customer-facing web and mobile applications using cutting-edge open source frameworks, to highly available RESTful services, to back-end Java-based systems. We're looking for team members who are well-versed in emerging and traditional technologies, which may include Java, REST, Spark, Cassandra NoSQL databases, and AWS/Cloud Infrastructure.

Your #LifeatCapitalOne: Looking to work somewhere with the flexibility of a start-up but the financial muscle of a Top-10 bank? You're in the right place! You'll have a flexible work schedule, because we want to understand where and when you're at your best so you have a healthy work-life balance. Diversity and inclusion are cultural norms here: you'll have access to active local chapters of Women in Tech, Blacks in Tech, Hispanics in Tech and more. Plus, you'll be given time to support the next generation of technologists by volunteering with youth programs like Capital One Coders, our engineer-led experience that teaches middle school students in under-served communities how to code. Want to learn more? See what our associates are up to at #LifeatCapitalOne!

What you'll do:
  • Work with product owners to understand desired application capabilities and testing scenarios
  • Continuously improve software engineering practices
  • Work within and across Agile teams to design, develop, test, implement, and support technical solutions across a full stack of development tools and technologies
  • Lead the craftsmanship, availability, resilience, and scalability of your solutions
  • Bring a passion to stay on top of tech trends, experiment with and learn new technologies, participate in internal and external technology communities, and mentor other members of the engineering community
  • Encourage innovation, implementation of cutting-edge technologies, inclusion, outside-of-the-box thinking, teamwork, self-organization, and diversity

Here's what you'll need to be successful:
  • You have full-stack hands-on expertise: Java, RESTful APIs, microservice and reactive architecture
  • You are experienced in, and a contributor to, open source software: Apache Cassandra, Spark, Kafka, in-memory processing, stream processing
  • You have a passion for shipping software in quick iterations: test-driven development and CI/CD, and you are an Agilist
  • You are experienced operating in an elastic cloud infrastructure such as AWS
  • You have a passion for building a high-performing engineering team
  • You thrive in an environment of innovation, implementation of cutting-edge technologies, teamwork, self-organization, and diversity

Basic Qualifications:
  • Bachelor's Degree
  • At least 10 years' experience in full-stack development utilizing Java and APIs
  • At least 5 years' experience developing REST APIs
  • At least 3 years of people leadership or Tech Lead experience
  • At least 2 years' experience in developing applications using AWS

Preferred Qualifications:
  • Master's Degree
  • 4+ years' experience hands-on coaching and development of software engineers
  • 4+ years' experience with distributed technologies including one of the following: Apache Spark, Apache Kafka, or Apache Cassandra
  • Experience with microservices architecture
  • Experience with Machine Learning or AI
  • Experience in DevOps and automation using CI/CD tools and processes (Jenkins, Maven, Ansible, Artemis)
  • Experience in serverless architecture in AWS or other cloud platforms

Capital One will consider sponsoring a new qualified applicant for employment authorization for this position.
          

IT / Software / Systems: Audit Manager - Technology - Plano, Texas

Audit Manager - Technology. Charlotte, North Carolina; Jacksonville, Florida; Plano, Texas; New York, New York; Pennington, New Jersey; Wilmington, Delaware

Job Description: Oversees and executes assigned areas of audit work, providing day-to-day coaching and guidance to teammates. Executes audit strategy for the sound application of risk-based auditing by defining audit scope, audit program, and test procedures. Typically acts as Auditor-in-Charge (AIC). Demonstrates strategic thinking and supports change. Oversees audit testing to ensure timely execution within quality standards and conformance to audit policies and procedures. Assesses issues for impact to business processes, controls and strategies; recommends severity ratings and escalation of broad themes or trends. Drafts quality and timely audit reports and shares results with business leaders. Manages business partner relationships when conducting specific audits; primary engagement is with line management. Exercises critical thinking and judgment to effectively influence management to improve the control environment. When leading an audit engagement, is responsible for day-to-day coaching, mentoring, and performance feedback. Fosters an inclusive work environment.

Primary Responsibilities Include:
  • Managing risk-based technology audits across multiple lines of business, including Global Information Security, Chief Technology Organization, Consumer, Global Banking & Markets, Global Wealth & Investment Management and Control Functions (Risk, Compliance, etc.)
  • Demonstrating subject matter knowledge in critical areas of technology and information security, identifying and assessing key risks and controls, and developing effective test plans for engagements as assigned, with limited guidance
  • Overseeing audit testing to ensure timely execution within quality standards and conformance to audit policies and procedures
  • Assessing issues for impact to business processes, controls and strategies; recommending severity ratings and escalation of broad themes or trends
  • Managing business partner relationships when conducting specific audits; primary engagement is with line management
  • Exercising critical thinking and judgment to effectively influence management to improve the control environment
  • When leading an audit engagement, being responsible for day-to-day coaching, mentoring, and performance feedback
  • Conducting continuous monitoring of business initiatives and processes to identify areas of increased and emerging risk within the designated focus area, escalating risk issues as appropriate, following up on solutions and proposing specific coverage

The ideal candidate will be intellectually curious and have experience with internal audit methodology. Experience with internal audit and banking products is strongly preferred. The candidate will have the ability to operate in a fast-paced environment with multiple concurrent priorities.

Minimum Qualifications:
  • College degree required, or relevant experience
  • 8+ years of relevant IT audit experience

Desired Qualifications:
  • Audit experience at large financial services institutions
  • Professional audit certification desired (e.g. CISA, CISSP)
  • Solid knowledge and understanding of audit methodologies and tools that support audit processes
  • Ability to execute in a fast-paced, high-demand environment while balancing multiple priorities
  • Experience with automation, machine learning, or artificial intelligence
  • Working knowledge of the regulatory landscape
  • Working knowledge of core banking concepts and products
  • Strong written and verbal communications at all levels of management

Shift: 1st shift (United States of America). Hours per week: 40. Full time. JR-19057952. Manages people: No. Travel: Yes, 15% of the time.

Bank of America and its affiliates consider for employment and hire qualified candidates without regard to race, religious creed, religion, color, sex, sexual orientation, genetic information, gender, gender identity, gender expression, age, national origin, ancestry, citizenship, protected veteran or disability status or any factor prohibited by law, and as such affirms in policy and practice to support and promote the concept of equal employment opportunity and affirmative action, in accordance with all applicable federal, state, provincial and municipal laws. The company also prohibits discrimination on other bases such as medical condition, marital status or any other factor that is irrelevant to the performance of our teammates. Candidates must possess authorization to work in the United States, as it is not the practice of Bank of America to sponsor individuals for work visas.

To view the "EEO is the Law" poster: https://www.dol.gov/ofccp/regs/compliance/posters/pdf/eeopost.pdf. To view the "EEO is the Law" Supplement: https://www.dol.gov/ofccp/regs/compliance/posters/pdf/OFCCPEEOSupplementFinalJRFQA508c.pdf.

Bank of America aims to create a workplace free from the dangers and resulting consequences of illegal and illicit drug use and alcohol abuse. Our Drug-Free Workplace and Alcohol Policy establishes requirements to prevent the presence or use of illegal or illicit drugs or unauthorized alcohol on Bank of America premises and to provide a safe work environment.
          

Other: Quantitative Analytics Mgr 1 / Lead - Model Monitoring and Quality Review - AI MD CoE - Irving, Texas

 Cache   
Important Note: During the application process, ensure your contact information (email and phone number) is up to date and upload your current resume prior to submitting your application for consideration. To participate in some selection activities you will need to respond to an invitation. The invitation can be sent by both email and text message. In order to receive text message invitations, your profile must include a mobile phone number designated as Personal Cell or Cellular in the contact information of your application.

At Wells Fargo, we want to satisfy our customers' financial needs and help them succeed financially. We're looking for talented people who will put our customers at the center of everything we do. Join our diverse and inclusive team where you'll feel valued and inspired to contribute your unique skills and experience. Help us build a better Wells Fargo. It all begins with outstanding talent. It all begins with you.

Data Management and Insights (DMI) is transforming the way that Wells Fargo uses and manages data. Our work enables Wells Fargo to empower and inform our team members, deliver exceptional experiences for our customers, and meet the elevated expectations of our regulators. The team is responsible for designing the future data environment, defining data governance and oversight, and partnering with technology to operate the data infrastructure for the company. This team also provides next-generation analytic insights to drive business strategies and help meet our commitment to satisfy our customers' financial needs.

The Artificial Intelligence Model Development Center of Excellence (AI MD CoE) team is a data science team responsible for developing and deploying machine learning and AI solutions for a number of domain areas such as fraud prevention, credit risk, experience personalization, customer listening, anomaly detection, and operational cost improvement. The CoE partners closely with the AI Enterprise Solutions and AI Technology teams at the bank, and brings a cross-functional approach to identifying, developing, and deploying AI solutions. The CoE requires high-skill, high-motivation individuals who enjoy working collaboratively in a team setting, are used to making decisions autonomously, and are comfortable with a dynamic work environment.

The Lead Model Monitoring and Review role would be responsible for executing the model monitoring and model rapid-refresh functions of the CoE. The role would create a standardized and efficient process for model monitoring so that risks around model degradation and failure are identified in a consistent and timely way, and so that model retraining on a fresh set of data is performed speedily but with the appropriate controls to ensure model robustness and stability. Additional responsibilities would include the quality review of model-related technical documentation going to Model Risk Management as well as other audit and regulatory bodies. A key requirement of this role would be to make sure that industry best practices for model monitoring and refresh are appropriately brought into the processes followed by the AI Model Development CoE. This would include identification and selection of potential external solutions that meet this overall requirement, as well as bringing in useful open source frameworks. The role of the leader would be to hire and manage a team of data scientists and documentation review experts that can deliver on the objectives of the role.

Key Responsibilities Include:
• Defining the operating framework for model monitoring: operations monitoring, drift monitoring, performance, etc. (a minimal drift-statistic sketch follows this posting)
• Defining the framework for rapid refresh of models and the operational approach
• Undertaking model monitoring and simple model refresh tasks, per the model monitoring and update framework (see above) and per the needs of specific models as defined by the model developer; also identifying solutions to improve process efficiency through automation, and implementing them working closely with AI Enterprise Solutions and WF Technology
• Working closely with Model Risk and other governance partners to ensure that the model refresh framework is implemented in a controlled manner, within the current policy framework
• Defining and implementing a documentation and artifact review process for model documentation and artifacts created as part of the model development work performed by model development teams
• People management responsibilities such as hiring, performance management, routine travel, and equipment/software-related approvals

As a Team Member Manager, you are expected to achieve success by leading yourself, your team, and the business. Specifically, you will:
• Lead your team with integrity and create an environment where your team members feel included, valued, and supported to do work that energizes them
• Accomplish management responsibilities, which include sourcing and hiring talented team members, providing ongoing coaching and feedback, recognizing and developing team members, identifying and managing risks, and completing daily management tasks

Required Qualifications:
• 4+ years of experience in an advanced scientific or mathematical field
• 2+ years of leadership experience
• A master's degree or higher in a quantitative field such as mathematics, statistics, engineering, physics, economics, or computer science
• 2+ years of experience in artificial intelligence, natural language processing, machine learning, distributed computing, chatbots, and virtual assistants
• 2+ years of Python experience

Other Desired Qualifications:
• Experience with building advanced statistical and machine learning models in a banking or financial services context
• Experience with building NLP (natural language processing) solutions in the financial industry
• Experience with monitoring or retraining advanced statistical and machine learning models
• Close familiarity with machine learning and statistical modeling techniques using open source languages like Python or R
• Solid understanding of Model Risk Management requirements for banks and financial services companies
• Familiarity with big data technology for data management and data science

Disclaimer: All offers for employment with Wells Fargo are contingent upon the candidate having successfully completed a criminal background check. Wells Fargo will consider qualified candidates with criminal histories in a manner consistent with the requirements of applicable local, state and federal law, including Section 19 of the Federal Deposit Insurance Act. Relevant military experience is considered for veterans and transitioning service men and women. Wells Fargo is an Affirmative Action and Equal Opportunity Employer, Minority/Female/Disabled/Veteran/Gender Identity/Sexual Orientation.

Reference Number: *******-6
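The drift-monitoring duty described above is commonly implemented with simple distribution-shift statistics. As a rough illustration (not Wells Fargo's actual tooling), here is a minimal Python sketch of one widely used statistic, the Population Stability Index (PSI); the synthetic data and the thresholds quoted in the docstring are illustrative assumptions:

    import numpy as np

    def psi(expected, actual, bins=10):
        """Population Stability Index between a baseline sample and a
        production sample; larger values suggest stronger drift.
        A common rule of thumb: <0.1 stable, 0.1-0.25 moderate, >0.25 severe."""
        edges = np.histogram_bin_edges(expected, bins=bins)  # bins fixed at baseline
        e = np.histogram(expected, bins=edges)[0] / len(expected)
        a = np.histogram(actual, bins=edges)[0] / len(actual)
        e, a = np.clip(e, 1e-6, None), np.clip(a, 1e-6, None)  # avoid log(0)
        return float(np.sum((a - e) * np.log(a / e)))

    rng = np.random.default_rng(0)
    baseline = rng.normal(0.0, 1.0, 10_000)  # model scores at training time
    current = rng.normal(0.3, 1.1, 10_000)   # model scores in production
    print(f"PSI = {psi(baseline, current):.3f}")

A monitoring framework of the kind described would compute statistics like this per feature and per model score on a schedule, and route breaches into the refresh process.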
          

Engineering: Senior Data Engineer - Lewisville, Texas

 Cache   
Senior Data Engineer - Cognizant Technology Solutions - Lewisville, TX 75022

Our strength is built on our ability to work together. Our diverse backgrounds offer different perspectives and new ways of thinking. It encourages lively discussions, inspires thought leadership, and helps us build better solutions for our clients. We want someone who thrives in this setting and is inspired to craft meaningful solutions through true collaboration. If you're comfortable with ambiguity, excited by change, and excel through autonomy, we'd love to hear from you.

Why Choose Cognizant? It takes a lot to succeed in today's fast-paced market, and Cognizant Digital Business has become a proven leader in the industry. Cognizant loves big ideas and even bigger ambitions. We stand out because we put human experiences at the core. We help clients engage customers by envisioning and building innovative products and services. But we don't stop there. We develop go-to-market strategies and invent entirely new business models, ensuring that every company we work with walks away with both inspiration and a plan. Everything we do at Cognizant we do with passion: for our clients, our communities, and our organization. It's the defining attribute that we look for in our people.

Skills: Senior Data Engineer
• At least 8 years of experience in software application development
• At least 3 years of experience with Big Data / Hadoop architecture and related technologies
• Hands-on experience with Spark: RDDs, Datasets, DataFrames, Spark SQL
• Hands-on experience with streaming technologies such as Spark Streaming and Kafka
• Hands-on experience using SQL, Spark SQL, and HiveQL, and performance tuning for big data operations
• Hands-on experience with Java 8 and use of IDEs for the same
• Hands-on experience using technologies such as Hive, Pig, and Sqoop
• Experience building microservices-based applications
• Experience dealing with SQL and NoSQL databases such as Oracle, DB2, Teradata, and Cassandra
• Experience using CI/CD processes for application software integration and deployment using Maven, Git, Jenkins, and Jules
• Experience building scalable and resilient applications in private or public cloud environments and cloud technologies
• Experience using SDLC and Agile software development practices
• Experience building enterprise applications enabled for logging, monitoring, alerting, and operational control
• Experience enabling scheduling for big data jobs
• Hands-on experience working in a Unix environment
• Good written, verbal, presentation, and interpersonal communication skills; willing to work in a challenging, cross-platform environment
• Strong analytical and problem-solving skills; ability to quickly master new concepts and applications
• Preferable: experience in the financial industry
• Preferable: experience in data science, machine learning, deep learning, business intelligence, and visualization

Technical Skills:
No. | Primary Skill | Proficiency Level | Required/Desired
1 | Big Data Management | PL1 | Required
2 | Apache Spark | PL4 | Required
3 | Apache Hadoop | PL3 | Required
4 | SQL Scripting | PL4 | Required
5 | Core Java | PL1 | Required
6 | Unix Shell Scripting | PL1 | Desired

Domain Skills:
No. | Primary Skill | Proficiency Level | Required/Desired
1 | Acquirer & Acquirer Processor | NA | Required

Proficiency Legend:
• PL1: The associate has basic awareness and comprehension of the skill and is in the process of acquiring it through various channels.
• PL2: The associate possesses working knowledge of the skill and can actively and independently apply it in engagements and projects.
• PL3: The associate has comprehensive, in-depth and specialized knowledge of the skill, and has extensively demonstrated successful application of it in engagements or projects.
• PL4: The associate can function as a subject matter expert for this skill, and is capable of analyzing, evaluating and synthesizing solutions using it.

Employee Status: Full Time Employee
Shift: Day Job
Travel: Yes, 5% of the Time
Job Posting: Nov 23, 2019

About Cognizant: Cognizant (Nasdaq-100: CTSH) is one of the world's leading professional services companies, transforming clients' business, operating and technology models for the digital era. Our unique industry-based, consultative approach helps clients envision, build and run more innovative and efficient businesses. Headquartered in the U.S., Cognizant is ranked 193 on the Fortune 500 and is consistently listed among the most admired companies in the world. Learn how Cognizant helps clients lead with digital at ***************** or follow us @Cognizant. Cognizant is recognized as a Military Friendly Employer and is a coalition member of the Veteran Jobs Mission. Our Cognizant Veterans Network assists Veterans in building and growing a career at Cognizant that allows them to leverage the leadership, loyalty, integrity, and commitment to excellence instilled in them through participation in military service.
          

Can my PC speak?

 Cache   
One of the areas that has been attracting the most interest, especially in the day-to-day of my IT recruitment work, is machine learning, a very popular branch of artificial intelligence.
          

The technologies of the future: robotics

 Cache   

On Sunday, 19 January 2020 at 4:00 p.m., the Museum is offering a workshop dedicated to the technologies of the future: robotics. Robotics studies and develops methods that allow machines to carry out specific tasks by simulating human work. Robots are increasingly present not only in the scientific and industrial world but also in everyday life. The most advanced applications of robotics enable the use of ever more complex robotic arms and manipulators. Indeed, the robots of the present, and those of the future, do not merely execute commands: they are able to interact with the real world and collaborate with humans. What is more, thanks to artificial intelligence and machine learning, machines are increasingly capable of learning and of helping people in many fields, from surgery to astrophysics. During the activity it will be possible to use the modular robotic arm EDO and understand how it works through games and small experiments. Finally, a visit to the space dedicated to the "Factory of the Future" will introduce younger visitors to the technologies that characterize the most advanced sectors of contemporary industry, with which our highly manufacturing-oriented region must engage. The workshop costs 5 euros, while admission to the Museum is free for children and one accompanying adult. For everyone else, admission to the Museum is 5 euros (3 euros reduced). For information and reservations (required by Friday, 17 January at 1:00 p.m.): telephone 051.6356611.
          

The Impact of Automation on Jobs

 Cache   
The Impact of Automation on Jobs


Ever since the conclusion of the first Industrial Revolution around the end of the 18th century, there has been a lot of discussion about how machines are taking over tasks that were previously done by humans. To understand that, we must know why they were so successful in the first place. A definition of a machine that is taught to many young students is that it is something "that makes life easier"; not far off from the truth, but not quite the complete picture either. Some of the first jobs that machines took were those that required a lot of labor or effort, for instance moving bricks or harvesting a field, both of which could be done easily with a tractor. Not only does it cut the costs a company has to pay in salaries for the required labor, but the tractor is also efficient, as it saves time and is more reliable. We can take a more recent example of this situation: back in 1979, General Motors had about 800,000 employees and made about USD 11 billion; fast forward to 2012 and you have Google making USD 14 billion with only 58,000 people.

It is also true that with the rise of industrial development and technology, more opportunities and fields were created. People's jobs also became more specialized, and they could now focus on more advanced areas that required research. No longer did someone have to keep putting lids on jars or caps on bottles in a factory that mass-produced them; there was simply a robotic arm that could do all of that. However, in our day and age, machines have long evolved past that. With the rise of emerging technologies like artificial intelligence and machine learning, computers are able to do tasks that humans did not think a computer could ever do; such things were always restricted to science fiction. An example of this is a start-up in New York that aims to largely replace a company's middle management with software. This start-up is called WorkFusion, and the software does what a project manager would: divide tasks, look for freelancers or instruct the company's employees, and track their progress. With the population of the world rising and unemployment rates already a pressing issue, does the advancement of such fields help? Can people really innovate and create new jobs faster than machines can take them? And will companies find their products profitable if there are so few people who can afford them?


          

E1005: Scale AI CEO & Co-founder Alexandr Wang creates training data for all AI applications to improve machine learning, shares insights on the future of autonomous vehicles, China’s AI advantages over US, importance of humans focusing on higher-value work & next major trends in AI

 Cache   

The post E1005: Scale AI CEO & Co-founder Alexandr Wang creates training data for all AI applications to improve machine learning, shares insights on the future of autonomous vehicles, China’s AI advantages over US, importance of humans focusing on higher-value work & next major trends in AI appeared first on This Week In Startups.


          

ProSiebenSat.1 using AWS as primary cloud provider

 Cache   
German media group ProSiebenSat.1 has opted for Amazon's AWS as its primary cloud provider. The contract will extend across its broadcast and digital media businesses, production companies, and e-commerce platforms. ProSiebenSat.1 also wants to reduce the time to market of new applications and to introduce advanced analytics and machine learning (ML) technologies in its home markets […]
          

Alibaba Cloud open sources a machine learning algorithm

 Cache   

          

Alibaba Cloud open sources Alink, a library bundling machine learning algorithms into a single package

 Cache   

Alibaba Cloud, Alibaba's enterprise business, has released Alink, a large machine learning library for building machine learning services such as product recommendation and forecasting.

The library comprises groups of algorithms such as classification, regression, clustering, anomaly detection, and statistical computation. Overall it is quite close to the scikit-learn project, but Alink is designed primarily for use with Apache Flink, although a PyAlink module is available for working with Python.
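
As a rough illustration of the estimator-style workflow that both scikit-learn and Alink expose, here is a short clustering example; note that it uses scikit-learn itself, not Alink, and the data is synthetic:

    from sklearn.cluster import KMeans
    from sklearn.datasets import make_blobs

    # Toy feature vectors standing in for, say, customer behaviour data.
    X, _ = make_blobs(n_samples=500, centers=3, random_state=0)

    # Fit a clustering model and assign each point to a cluster.
    model = KMeans(n_clusters=3, n_init=10, random_state=0)
    labels = model.fit_predict(X)
    print(labels[:10], model.cluster_centers_.shape)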

The Alink project is licensed under Apache 2.0, which permits fairly unrestricted use.

Source: Alizia



          

What is Chaos Computing and Why Does it Matter?

 Cache   
Chaos computing has the potential to be the most important computational development since machine learning and artificial intelligence. This computer science approach uses chaotic systems to exponentially increase the power of modern computers. While this technology is still in its…
          

Bayesian Machine Learning in Python: A/B Testing for $24

 Cache   
Expires November 23, 2022 23:59 PST
Buy now and get 80% off

KEY FEATURES

A/B testing is used everywhere: marketing, retail, news feeds, online advertising, and much more. If you're a data scientist and you want to tell the rest of the company "logo A is better than logo B", you're going to need numbers and stats to prove it. That's where A/B testing comes in. In this course, you'll start with traditional A/B testing in order to appreciate its complexity, then work your way up to the Bayesian machine learning way of doing things.

  • Access 40 lectures & 3.5 hours of content 24/7
  • Improve on traditional A/B testing w/ adaptive methods
  • Learn about epsilon-greedy algorithm & improve upon it w/ a similar algorithm called UCB1
  • Understand how to use a fully Bayesian approach to A/B testing (a minimal sketch of one such method follows this list)
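
One standard "fully Bayesian" method in this family is Thompson sampling: keep a Beta posterior over each variant's conversion rate, sample from each posterior, and show the variant whose sample is highest. A minimal, self-contained Python sketch (the click-through rates below are invented for illustration; this is not the course's own code):

    import numpy as np

    rng = np.random.default_rng(42)
    true_rates = [0.04, 0.05]   # hypothetical CTRs for variants A and B
    successes = np.zeros(2)
    failures = np.zeros(2)

    for _ in range(10_000):
        # Draw one plausible rate per arm from its Beta posterior
        # (uniform Beta(1, 1) prior, updated by observed outcomes).
        samples = rng.beta(successes + 1, failures + 1)
        arm = int(np.argmax(samples))     # show the variant that looks best now
        reward = rng.random() < true_rates[arm]
        successes[arm] += reward
        failures[arm] += 1 - reward

    print("plays per arm:", (successes + failures).astype(int))
    print("posterior mean rate per arm:",
          ((successes + 1) / (successes + failures + 2)).round(4))

The adaptive behavior shows in the output: the arm with the higher true rate ends up played far more often, which is the improvement over a fixed 50/50 split that the epsilon-greedy and UCB1 lectures build towards.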

PRODUCT SPECS

Details & Requirements

  • Length of time users can access this course: lifetime
  • Access options: web streaming, mobile streaming
  • Certification of completion not included
  • Redemption deadline: redeem your code within 30 days of purchase
  • Experience level required: all levels, but knowledge of calculus, probability, Python, Numpy, Scipy, and Matplotlib is expected
  • All code for this course is available for download here, in the directory ab_testing

Compatibility

  • Internet required

THE EXPERT

The Lazy Programmer is a data scientist, big data engineer, and full stack software engineer. For his master's thesis he worked on brain-computer interfaces using machine learning. These assist non-verbal and non-mobile persons to communicate with their family and caregivers.

He has worked in online advertising and digital media as both a data scientist and big data engineer, and has built various high-throughput web services around said data. He has created new big data pipelines using Hadoop/Pig/MapReduce, built machine learning models to predict click-through rate and news feed recommender systems using linear regression, Bayesian Bandits, and collaborative filtering, and validated the results using A/B testing.

He has taught undergraduate and graduate students in data science, statistics, machine learning, algorithms, calculus, computer graphics, and physics for students attending universities such as Columbia University, NYU, Humber College, and The New School.

Multiple businesses have benefitted from his web programming expertise. He does all the backend (server), frontend (HTML/JS/CSS), and operations/deployment work. Some of the technologies he has used are: Python, Ruby/Rails, PHP, Bootstrap, jQuery (Javascript), Backbone, and Angular. For storage/databases he has used MySQL, Postgres, Redis, MongoDB, and more.

          

Deep Learning: GANs and Variational Autoencoders for $25

 Cache   
Expires November 23, 2022 23:59 PST
Buy now and get 86% off

KEY FEATURES

Variational autoencoders and GANs have been two of the most interesting recent developments in deep learning and machine learning. GAN stands for generative adversarial network: two neural networks compete with each other. Unsupervised learning means you're not trying to map input data to targets; you're just trying to learn the structure of that input data. In this course, you'll learn that structure in order to produce new samples that resemble the original data.

  • Access 41 lectures & 5.5 hours of content 24/7
  • Incorporate ideas from Bayesian Machine Learning, Reinforcement Learning, & Game Theory
  • Discuss variational autoencoder architecture
  • Discover GAN basics (the standard objective is sketched after this list)
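
For reference, the "two competing networks" idea is usually formalized (following the original GAN formulation by Goodfellow et al., 2014) as a minimax game between a discriminator D and a generator G; in LaTeX:

    \min_G \max_D \; V(D, G) =
        \mathbb{E}_{x \sim p_{\mathrm{data}}}\big[\log D(x)\big]
      + \mathbb{E}_{z \sim p_z}\big[\log\big(1 - D(G(z))\big)\big]

D is trained to push V up (telling real samples from generated ones), while G is trained to push it down (fooling D); training alternates between the two.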

PRODUCT SPECS

Details & Requirements

  • Length of time users can access this course: lifetime
  • Access options: web streaming, mobile streaming
  • Certification of completion not included
  • Redemption deadline: redeem your code within 30 days of purchase
  • Experience level required: all levels, but knowledge of calculus, probability, object-oriented programming, Python, Numpy, linear regression, gradient descent, and how to build a feedforward and convolutional neural network in Theano and TensorFlow is expected
  • All code for this course is available for download here, in the directory unsupervised_class3

Compatibility

  • Internet required

THE EXPERT

The Lazy Programmer is a data scientist, big data engineer, and full stack software engineer. For his master's thesis he worked on brain-computer interfaces using machine learning. These assist non-verbal and non-mobile persons to communicate with their family and caregivers.

He has worked in online advertising and digital media as both a data scientist and big data engineer, and has built various high-throughput web services around said data. He has created new big data pipelines using Hadoop/Pig/MapReduce, built machine learning models to predict click-through rate and news feed recommender systems using linear regression, Bayesian Bandits, and collaborative filtering, and validated the results using A/B testing.

He has taught undergraduate and graduate students in data science, statistics, machine learning, algorithms, calculus, computer graphics, and physics for students attending universities such as Columbia University, NYU, Humber College, and The New School.

Multiple businesses have benefitted from his web programming expertise. He does all the backend (server), frontend (HTML/JS/CSS), and operations/deployment work. Some of the technologies he has used are: Python, Ruby/Rails, PHP, Bootstrap, jQuery (Javascript), Backbone, and Angular. For storage/databases he has used MySQL, Postgres, Redis, MongoDB, and more.

          

Java Machine Learning

 Cache   
Java Machine Learning
          

TensorFlow 2.1.0 will include breaking changes: First release candidate available

 Cache   

The machine learning platform TensorFlow, currently in version 2.0, is making its way toward the minor release 2.1.0: TensorFlow 2.1.0-rc0 is the first release candidate and includes some breaking changes. The upcoming version will be the last to support Python 2.7.

The post TensorFlow 2.1.0 will include breaking changes: First release candidate available appeared first on JAXenter.


          

Data Science at the Intersection of Emerging Technologies

 Cache   

Kirk Borne, principal data scientist at Booz Allen Hamilton, gave a keynote presentation at this year's Oracle Code One Conference on how the connection between emerging technologies, data, and machine learning is transforming data into value. Emerging technological innovations like AI, robotics, and computer vision are enabled by data and create value from data.

By Carol McDonald
          

Intelligence artificielle vulgarisée - Le Machine Learning et le Deep Learning par la pratique, a book by Aurélien VANNIEUWENHUYZE, reviewed by rawsrc

 Cache   
Hello dear club members,

I invite you to read rawsrc's review of the book:


Intelligence artificielle vulgarisée - Le Machine Learning et le Deep Learning par la pratique


This excellent book covers "artificial intelligence" (note the quotation marks, they matter), a field that is not so recent in theoretical terms but one that has enjoyed a rather spectacular renaissance since...
          

95: Data Science Pipeline Testing with Great Expectations - Abe Gong

 Cache   
Data science and machine learning are affecting more of our lives every day. Decisions based on data science and machine learning are heavily dependent on the quality of the data, and the quality of the data pipeline. Some of the software in the pipeline can be tested to some extent with traditional testing tools, like pytest. But what about the data? The data entering the pipeline, and at various stages along the pipeline, should be validated. That's where pipeline tests come in. Pipeline tests are applied to data. Pipeline tests help you guard against upstream data changes and monitor data quality. Abe Gong and Superconductive are building an open source project called Great Expectations. It's a tool to help you build pipeline tests. This is quite an interesting idea, and I hope it gains traction and takes off. Special Guest: Abe Gong.
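For readers new to the idea, here is a minimal sketch of what a pipeline test looks like with Great Expectations' classic pandas-flavored API; the file and column names are hypothetical, and the API has changed across versions, so treat this as illustrative rather than canonical:

    import great_expectations as ge

    # Load a CSV as a Great Expectations dataset: the usual pandas methods
    # are available, plus declarative expect_* checks.
    df = ge.read_csv("orders.csv")  # hypothetical input file

    # Pipeline tests are assertions about the data itself, not the code.
    r1 = df.expect_column_values_to_not_be_null("order_id")
    r2 = df.expect_column_values_to_be_between("amount",
                                               min_value=0, max_value=10_000)

    # Each call returns a validation result; older releases return plain
    # dicts, in which case use r1["success"] instead.
    print(r1.success, r2.success)

Unlike a pytest suite that runs once against the code, these expectations are meant to run against every new batch of data flowing through the pipeline.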
          

Activities for the week of 2 to 7 December #CyberMonday @0xWord @MyPublicInbox1 @luca_d3 @elevenpaths

 Cache   
This year, 2019, is already over for me as far as events, conferences, talks and public appearances are concerned. Not for any particular reason, but because I had set all my teams' deadlines for the end of November and didn't schedule any events for December. It's true that my colleagues will remain active, but I will only take part in one thing I organized for my colleagues' children: CDCO Kids, where we open the office doors to the kids and give them a few surprises.

Figure 1: Activities for the week of 2 to 7 December

But this week there are activities involving my colleagues, so here is the list. On Monday we also have Cyber Monday, with a campaign at 0xWord and an extended campaign at MyPublicInbox. For your first purchase of Tempos at MyPublicInbox there is a bonus of 300 Tempos, and using the code CYBERMONDAY2019 at 0xWord you get a 10% discount all day tomorrow.

Figure 2: Cyber Monday at 0xWord and MyPublicInbox

This week our colleagues from the ElevenPaths Lab and the Ideas Locas team are presenting two tools at Black Hat Europe. First, Sergio de los Santos, Pablo San Emeterio and David García will present "The THE (Threat Hunting Experience)"; second, Pablo González and Fran Ramírez will present HomePWN. From 2 to 5 December in London, at Black Hat Europe.

Figure 3: Black Hat Europe 2019 in London, 2 to 5 December
References:
- HomePWN: Swiss Army Knife for pentesting IoT Devices Parte 1
- HomePWN: Swiss Army Knife for pentesting IoT Devices Parte 2
- HomePWN: Ataques de Replay BLE. Demo en BlackHat EU Parte 1
- HomePWN: Ataques de Replay BLE. Demo en BlackHat EU Parte 2
The second date is Elastic on Tour Barcelona, where our ElevenPaths expert Julio Gómez Ortega will take part on 3 December in this interesting event, with a talk on our Aldara tool and its use for extracting social networks intelligence. More information about the event and the ElasticOnTour initiative here.

Figure 4: Elastic on Tour Barcelona, 3 December

On the morning of 5 December, at the Espacio Fundación Telefónica in Madrid, the Data Science Awards presented by our colleagues at LUCA will be handed out as part of the event "La Inteligencia Artificial bajo el Microscopio" ("Artificial Intelligence under the Microscope"). At this event you can see the best work in data science for improving society and business, the use of AI, the LEIA project to improve AI's relationship with the Spanish language, and more surprises. All the information is on the awards website, and the event agenda is on the event website. The event can also be followed online. I will probably stop by.

Figure 5: Artificial intelligence under the microscope

Also on 5 December, there is an online talk about new technologies such as machine learning applied to cybersecurity, in our 11Paths Talks session "Machine Learning aplicado a la Ciberseguridad".

Figure 6: ElevenPaths Talks "Machine Learning aplicado a ciberseguridad"

Machine learning techniques are increasingly present in more and more technologies; they are used above all in automation processes, interaction with people, and so on. But are they being used in cybersecurity? What are the most relevant success stories? Will we be able to predict the behavior of cybercriminals?

Figure 7: The book Machine Learning aplicado a Ciberseguridad
*Watch for the Cyber Monday coupons to get it at a discount*

That session will give you a good introduction, and you can deepen your study with the book Machine Learning aplicado a ciberseguridad, which we published at 0xWord and which features colleagues such as Fran Ramírez and Carmen Torrano.

Figure 8: Despistaos in Toledo. Free acoustic concert.

And to finish, at the end of the week there is a free concert in Toledo by our friends Despistaos, so if you're nearby you can start the long weekend with good music, listening to Dani and Krespo play "Grita fuerte mi nombre".

Saludos Malignos!

Author: Chema Alonso (Contact Chema Alonso)

          

Oracle to give 19 talks at TDC Porto Alegre

 Cache   

Before arriving in the capital of Rio Grande do Sul, the company was present at the developer event's editions in São Paulo, Recife, Belo Horizonte and Florianópolis.

By the Oracle editorial team

After passing through the Florianópolis, Belo Horizonte, São Paulo and Recife editions between April and October, Oracle will also be present in Porto Alegre from 27 to 30 November for the last 2019 stop of The Developer's Conference (TDC), the largest software development event in Brazil.

In total, Oracle will give 19 talks at the conference in the state capital, which will be held at UniRitter's Zona Sul campus. The presentations by the company's specialists will take place in different tracks of the event and will cover topics such as Java, virtual reality (VR), blockchain, artificial intelligence (AI), cloud, accessibility and diversity.

Throughout each day of The Developer's Conference there are more than 10 parallel tracks, each of which works as an independent full-day event organized by subject matter experts who are responsible for selecting seven or more speakers via the Call4Papers platform.

The list of Oracle specialists speaking at the next edition of TDC includes Alberto Cardoso, Elder Moraes, Rafael Benevides, Fernando Galdino, Renato Caetano, Lucas Chung Man Leung and Lourenço Barrera Taborda; see the full list of speakers, topics and tracks in the table below.

Oracle will also have a booth at The Developer's Conference Porto Alegre, where attendees can learn more about the company and its solutions through hands-on activities and trials run by the team present at the event, made up of Andre Ambrozio, Diogo Shibata, Pedro Florence, Rodrigo Zilio and Wellington Rosa.

In addition, Oracle's delegation at the TDC edition in the state capital also includes Kate Almeida, who will represent the company at the diversity booth to talk about its initiatives in the area, which also involve Beto Marques and Daniele Botaro.

For more information about this edition of The Developer's Conference, including its full program, click this link.

Speaker | Talk
Alberto Cardoso | Verificação de requisitos de acessibilidade em artefatos de software
Elder Moraes | The quest to the language Graal: one JVM to rule them all
Rafael Benevides | Service Mesh e Sidecars com Istio e Envoy
Alberto Cardoso | Veja o impacto das integrações de SaaS e como ela acelera o desenvolvimento do seu negócio
Fernando Galdino | Blockchain Tables no Banco de Dados Oracle
Alberto Cardoso | CDX - Conversational Design Experience. Fale a Linguagem do seu Cliente
Renato Caetano | Desenvolvendo apps AR/VR com React Native
Elder Moraes | Como chaos engineering garante a resiliência dos seus serviços
Lucas Chung Man Leung | Gen O - Case de Recrutamento às cegas - será que funciona?
Alberto Cardoso | Computação Natural. A vida inspirando a máquina!
Lourenço Barrera Taborda | Inovação com Analytics, Machine Learning e Cloud
Lourenço Barrera Taborda | Como planejar a adoção da computação em nuvem na sua empresa
Lourenço Barrera Taborda | Cloud + Design Thinking = sucesso na sua empresa!
Elder Moraes | Como manter a disponibilidade dos seus serviços através do monitoramento de métricas
Elder Moraes | Construa testes efetivos através do princípio F.I.R.S.T
Rafael Benevides | Service Mesh e Sidecars com Istio e Envoy
Alberto Cardoso | Fala que eu te escuto. Melhores técnicas de como rever o seu diálogo.
Alberto Cardoso | Datalab! Descubra o maior valor com seu projeto de Analytics
Rafael Benevides | Service Mesh e Sidecars com Istio e Envoy


          

Stocks Under 10 Based on Machine Learning: Returns up to 43.3% in 3 Days

 Cache   


Package Name: Stocks Under $10
Recommended Positions: Long
Forecast Length: 3 Days (11/24/2019 - 11/28/2019)
I Know First Average: 9.33%

Read The Full Forecast


[Figure: Stocks Under 10 forecast chart]

The post Stocks Under 10 Based on Machine Learning: Returns up to 43.3% in 3 Days appeared first on Stock Forecast Based On a Predictive Algorithm | I Know First |.


          

Emergence Of Machine Learning Training In India

 Cache   
Thursday, November 28, 2019 7:11 AM
          

Brazil

 Cache   

Organization: 

Instituto de Pesquisa em Direito e Tecnologia do Recife (IP.rec)

We don’t need no observation: The use and regulation of facial recognition in Brazilian public schools

Introduction

The use of facial recognition technology in schools around the world in countries such as China and the United States has encouraged the similar use of this technology by other countries, including Brazil. However, it has also raised questions and concerns about the privacy of students. Because of this, analyses of the nature and consequences of the use of facial recognition technology in diverse scenarios are necessary.

This report presents a brief reflection on the use of facial recognition technologies in Brazilian public schools, including in the state of Pernambuco, where IP.rec is based, and considers their implications for citizens' rights to privacy, as well as the possibility of the technology being regulated by existing laws.

Background

Artificial intelligence (AI), algorithms, the “internet of things”, smart cities, facial recognition, biometrics, profiling, big data. When one tries to imagine the future of big cities, it is impossible not to think about these terms. But is the desire to make cities “smarter” jeopardising the privacy of Brazilian citizens? Does this desire turn people into mere guinea pigs for experimentation with new technologies in a laboratory of continental proportions?

The use of facial recognition technologies in Brazilian public schools has already been implemented in several cities such as Jaboatão dos Guararapes (in the state of Pernambuco), Cabo de Santo Agostinho (Pernambuco), Arapiraca (Alagoas), Cravinhos (São Paulo), Tarumã (São Paulo), Potia (São Paulo), Paranavaí (Paraná), Guaíra (Paraná), Viana (Espírito Santo), Anápolis (Goiás), Senador Canedo (Goiás) and Vila Bela da Santíssima Trindade (Mato Grosso).[1] Among the features provided by the so-called Ponto iD[2] system is the monitoring of the attendance of students at school without the need to take roll call. The system also aims to help optimise class time, as the time spent on the roll call is saved; help manage school meals, as cooks are notified of the exact number of students in class as soon as the gates close; and decrease the school drop-out rate, as guardians receive, through an app, notifications that their child is in the school. The last is noted as a primary social consequence of using the technology. To implement the system, the city of Jaboatão dos Guararapes, for example, has spent BRL 3,000 (USD 780) per month per school.

The technology provider's webpage states that the solution is designed in an integrated way, linking government departments. Because of this, a diverse range of public institutions can share information with each other. For example, according to the government of the city of Jaboatão dos Guararapes, if a student is absent for more than five days, the Guardianship Council is notified as the system also shares students’ information with that body.[3]

In 2015, the service provider also stated that the system would be connected to the Bolsa Família programme,[4] a direct income transfer programme aimed at families living in poverty and extreme poverty throughout the country, intended to help them out of their vulnerable situation. In Brazil, more than 13.9 million families are served by Bolsa Família.[5] Receipt of the benefit is conditional on, among other duties of a student whose family is a beneficiary of the programme, a minimum school attendance of 85% for children and adolescents from six to 15 years old and 75% for adolescents 16 and 17 years old.[6]

Privacy? Absent. Risks? Present

As previously observed by several scholars, digital technologies not only make behaviour easier to monitor, but also make behaviour more traceable.[7] Given this potential, a number of critical issues in relation to the application of facial recognition systems in educational contexts were identified.

According to the service provider, the system runs on a platform that uses cloud computing capabilities, but no information could be found in the company's privacy policy, available on its official website, regarding the level of security applied to the storage of collected data. Despite this, the claimed benefits of the technology include not only the monitoring of students' attendance and school performance, but also the possibility of monitoring students' personal health data.

The mayor of the city of Jaboatão dos Guararapes[8] states that in addition to facial recognition, the software also offers, through the collection of other information, the possibility of better planning the number of daily school meals. As soon as the school’s gates are closed, the cooks receive via SMS[9] the exact number of students who are in the classrooms. Meanwhile, according to the secretary of education of Jaboatão dos Guararapes, Ivaneide Dantas, at some point even the health problems of students will be identified and parents informed using the system.

However, the lack of information that is included in the company’s privacy policy,[10] or on city halls’ websites, constitutes a critical point in the relationship between students and the education system. The problem becomes all the more obvious since the solution involves sensitive data – biometric data – of minors.

The text of the Brazilian General Data Protection Law (LGPD),[11] recently approved unanimously in congress after almost 10 years of discussions and two years of proceedings, has undergone major changes due to the vetoes of former president Michel Temer at the time of its sanction and more recently by President Jair Bolsonaro. The changes to the text of the LGPD through Provisional Measure 869/2018[12] have resulted in the impairment of the effectiveness of the law as well as a series of setbacks. These setbacks have ignored issues considered already decided on in discussions that had popular participation, as well as the input of members of the executive and legislative branches. As argued in a report released by the joint committee set up to investigate the possible impacts caused by the Provisional Measure,[13] the revised act put at risk the effectiveness of the guarantees reached by the previous text.

The public sector, especially the police authorities, are using new technologies to monitor the population without the social consequences being considered or even measured. The adoption of facial recognition systems for public security purposes is already a reality in several Brazilian cities. Recently, the increase in the use of the tool in public and private spheres has led to the establishment of Civil Public Inquiry No. 08190.052289/18-94[14] by the Personal Data Protection Commission of the Public Prosecutor's Office of the Federal District and Territories (MPDFT), as well as a public hearing held on 16 April 2019.[15] The hearing sought not only to promote debates about the use of facial recognition tools by businesses and the government, but also to function as an open space for the participation of NGOs and civil society.

It is important to remember that as systems are being implemented in public schools around the country, much of the peripheral and vulnerable population is being registered in this "experiment" – that is, data is being collected on vulnerable and marginalised groups. As researchers have pointed out, biases are being increasingly incorporated into the variety of new technological tools.[16] These digital tools can act more quickly, on a larger scale, and with actions that can be hidden by a greater complexity, such as, for example, through profiling systems that use biased machine learning algorithms that police, profile and punish minorities. The struggle is still against the perpetuation of the same problem: the deprivation of civil rights of certain groups of society as a result of social inequalities and power relations in society.

It is not uncommon for technology to be used in a way in Brazil that suggests the possibility of a future Orwellian dystopia. The use of facial recognition technology during the Carnival of 2019 in the cities of Rio de Janeiro and Salvador, resulting in a number of arrests, drew the attention of Brazilians.[17] The apparent relativism of fundamental rights and the relaxation of laws that limit surveillance due to the magnitude of events,[18] and the lack of official information on the effectiveness of the new automated measures, as well as on the storage and sharing of biometric data, were just some of the various problems identified.

The need to manage large events with thousands of participants is not the only reason why surveillance technologies are used in Brazilian urban centres. As Marcelo Souza from the Federal University of Rio de Janeiro (UFRJ) explains,[19] the increase in punitive policies and the militarisation of the police are the main reasons behind the increasing use of dystopian and invasive AI devices and technologies, typical of combat zones.[20] Even after a series of changes that have taken place in Brazil since the promulgation of the 1988 Constitution, public security institutions have not been significantly modified. The culture of war against the "internal enemy", for example, remains as present as in the days of the military dictatorship.[21]

Given that the recently approved LGPD does not apply to data processing for public security purposes, it would be possible for authorities to argue that the biometric database from the facial recognition system in schools can be used to identify suspects and improve public security. This would place its use even further outside the remit of current legislation. As the LGPD states, data processing for public security purposes "shall be governed by specific legislation, which shall provide for proportionate and strictly necessary measures to serve the public interest" (Article 4, III, § 1, LGPD).[22]

However, a specific law does not exist so far. According to the Brazilian lawyer and privacy advocate Rafael Zanatta,[23] the civic battle in Brazil will be around the shared definition of "proportional measures" and "public interest”. The solution proposed by some researchers and activists for the time being is the protection offered by the constitution, such as the presumption of innocence, and the general principles of the LGPD itself, which guard against the improper use of collected data. On the other hand, it is believed that a tremendous effort will be needed to consolidate jurisprudence where these principles are applied in cases of state surveillance.

There is also a lack of easy access to public documents and information on the use of surveillance technologies. Often information related to the operation of these technologies depends on ad hoc statements made by public agents or private companies, whistleblowers or, when granted, requests for access to public information made possible by the Access to Information Law (LAI).

The discussion on the regulation of AI systems in Brazil is still very new. According to Brazilian researchers Bruno Bioni and Mariana Rielli,[24] the general debate on data protection around the globe has different degrees of maturity depending on the region and the specific issues faced. In addition, a paradigm shift has been observed through the transition from a focus on the individual's right to self-determination with regards to his or her private information to a model of risk prevention and management with respect to data-processing activities.

However, the use of AI for the collection and processing of sensitive personal data, such as the biometric data in question, makes these risks difficult to measure. In part this is because the general population does not have sufficient knowledge of the technology to recognise the extent of its impact on their personal lives.

In this way, an imbalance of power with regard to public awareness and the use of the technology has been created in Brazil.

The need for impact reports on the protection of personal data is an important requirement that has been gaining prominence in legislation, such as European legislation and the recently approved LGPD. However, Bioni and Rielli draw attention to the few requirements placed on developers of AI technologies in Brazil, as well as on the consumer who buys and implements the technology. In particular, there is no law in relation to the purchase and use of facial recognition devices for public service and public safety purposes in Brazil, unlike similar public projects elsewhere that seek an informed public debate and the inclusion of citizens' interests in decision-making processes (e.g. the ordinance on the acquisition of surveillance technology recently adopted in San Francisco, in the United States).[25]

Conclusion

A decrease in school drop-out rates: this is the main advantage of facial recognition technology in schools, according to the company that has developed the technology. But are we assigning to technology something that would be the responsibility of society and the state?

As the Brazilian educator and philosopher Paulo Freire[26] showed, many of the most common practices in the Brazilian educational system are dictated by the Brazilian elite. This includes the use of educational content that does not correspond to the reality of students from lower classes, but instead inhibits their ability to think critically of their reality, and in the end discourages them from attending school.

Easy and safe access to school is also an important consideration impacting on the student's educational performance. The Brazilian Statute of the Child and Adolescent, in chapter IV, art. 53, inc. V,[27] states that one of the rights of the child and adolescent is access to public and free schools near his or her home. However, when distance is not an impediment to school attendance, issues related to the safety of the student's route to school should also be considered. For example, in some communities in Rio de Janeiro,[28] there are frequent incidents of armed confrontation between police and drug traffickers, visible abuses of power by the authorities, stray bullets and other incidents endangering the lives of passers-by, and even street executions, all of which are daily threats to residents. In addition, the reality faced by marginalised populations in Brazil raises another important question: the need for children and adolescents from low-income families to leave school in order to work and help support the household.

The problem of the school drop-out rate in Brazil is neither a new issue nor something that can be solved only with the implementation of a new school attendance system using AI. It is necessary for society to seek more meaningful ways to mitigate the present crisis facing the educational system.

In addition to projects that raise public awareness about issues related to new technologies in Brazilian urban centres, there is also a need to strengthen legislation that governs how sensitive data is shared and used in public projects in order to maintain the quality of public services. When the fundamental rights and guarantees of citizens are understood and respected, a relationship of trust is established.

Cities in Brazil should strengthen privacy protection instruments and create legal certainty through the establishment of fundamental principles, rights and duties for the operation of an ever-changing array of technologies in society. Regulating the use of personal data in the public sphere has the power to promote the conscious, transparent and legitimate use of this information.

Action steps

The following advocacy priorities are suggested for Brazil:

  • Formulate regional policies and strategies for the use of AI in Latin America and the Caribbean.
  • Develop legally enforceable safeguards, including robust transparency and accountability measures, before any facial recognition technology is deployed.
  • Promote national campaigns and public debate over surveillance technology given the impact such technologies may have on civil rights and civil liberties.
  • Include a social, cultural and political understanding of the needs of vulnerable groups in the country's education strategies to make the learning environment more attractive for these groups.
  • Develop open source AI systems that enable wider community use, with appropriate privacy protections.

Footnotes

[1] www.pontoid.com.br/home

[2] https://www.youtube.com/watch?v=YRMihVh0cew&t=24s

[3] Folha de Pernambuco. (2017, 19 April). Tecnologia para registrar presença nas escolas de Jaboatão. Folha de Pernambuco. https://www.folhape.com.br/noticias/noticias/cotidiano/2017/04/19/NWS,24738,70,449,NOTICIAS,2190-TECNOLOGIA-PARA-REGISTRAR-PRESENCA-NAS-ESCOLAS-JABOATAO.aspx

[4] https://www.youtube.com/watch?v=YRMihVh0cew&t=24s

[5] https://en.wikipedia.org/wiki/Bolsa_Fam%C3%ADlia

[6] https://www.caixa.gov.br/programas-sociais/bolsa-familia/Paginas/default.aspx

[7] Lessig, L. (2006). Code: Version 2.0. New York: Basic Books. Monitoring is mostly related to observation in real time and tracking can be done afterwards based on certain information.

[8] Santos, N. (2017, 18 April). Jaboatão inicia reconhecimento facial nas escolas. LeiaJá. https://m.leiaja.com/carreiras/2017/04/18/jaboatao-inicia-reconhecimento-facial-nas-escolas

[9] https://www.youtube.com/watch?v=YRMihVh0cew&t=24s

[10] www.pontoid.com.br/politica_privacidade_education.jsp

[11] IAPP. (2018). Brazil's General Data Protection Law (English translation). https://iapp.org/resources/article/brazils-general-data-protection-law-english-translation

[12] Medida Provisória nº 869, de 27 de dezembro de 2018. www.planalto.gov.br/ccivil_03/_ato2015-2018/2018/Mpv/mpv869.htm

[13] https://legis.senado.leg.br/sdleg-getter/documento?dm=7945369&ts=1556207205600&disposition=inline

[14] Inquérito Civil Público n.08190.052289/18-94 Reconhecimento Facial. www.mpdft.mp.br/portal/pdf/noticias/março_2019/Despacho_Audiencia_Publica_2.pdf

[15] https://www.youtube.com/watch?v=pmzvXcevJr4

[16] Eubanks, V. (2018). Automating Inequality: How High-Tech Tools Profile, Police and Punish the Poor. New York: St Martin’s Press.

[17] Távora, F., Araújo, G., & Sousa, J. (2019, 11 March). Scanner facial abre alas e ninguém mais se perde no Carnaval (e fora dele). Agência Data Labe. https://tab.uol.com.br/noticias/redacao/2019/03/11/carnaval-abre-alas-para-o-escaner-facial-reconhece-milhoes-e-prende-seis.html

[18] Graham, S. (2011). Cities Under Siege: The New Military Urbanism. New York: Verso Books.

[19] Souza, M. (2008). Fobópole. Rio de Janeiro: Bertrand Brasil.

[20] Kayyali, D. (2016, 13 June). The Olympics Are Turning Rio into a Military State. Vice. https://www.vice.com/en_us/article/wnxgpw/the-olympics-are-turning-rio-into-a-military-state

[21] Machado, R. (2019, 19 February). Militarização no Brasil: a perpetuação da guerra ao inimigo interno. Entrevista especial com Maria Alice Rezende de Carvalho. Instituto Humanitas Unisinos. www.ihu.unisinos.br/159-noticias/entrevistas/586763-militarizacao-no-brasil-a-perpetuacao-da-guerra-ao-inimigo-interno-entrevista-especial-com-maria-alice-rezende-de-carvalho

[22] IAPP. (2018). Op. cit. 

[23] https://twitter.com/rafa_zanatta/status/1085583399186767875

[24] Bioni, B., & Rielli, M. (2019). Audiência Pública: uso de ferramentas de reconhecimento facial por parte de empresas e governos. Data Privacy Brasil. https://dataprivacy.com.br/wp-content/uploads/2019/04/Contribui%C3%A7%C3%A3o-AP-reconhecimento-facial-final.pdf

[25] Johnson, K. (2019, 14 May). San Francisco supervisors vote to ban facial recognition software. VentureBeat. https://venturebeat.com/2019/05/14/san-francisco-first-in-nation-to-ban-facial-recognition-software

[26] Freire, P. (1976). Education: The Practice of Freedom. London: Writers and Readers Publishing Cooperative.

[27] www.planalto.gov.br/ccivil_03/leis/l8069.htm 

[28] Brito, R. (2017, 2 October). Rio’s kids are dying in the crossfire of a wave of violence. AP News. https://www.apnews.com/efeeaed43c7b47a0ae4a6cfaa8b871e2

Notes:
This report was originally published as part of a larger compilation: “Global Information Society Watch 2019: Artificial intelligence: Human rights, social justice and development"
Creative Commons Attribution 4.0 International (CC BY 4.0) - Some rights reserved.
ISBN 978-92-95113-12-1
APC Serial: APC-201910-CIPP-R-EN-P-301
ISBN 978-92-95113-13-8
APC Serial: APC-201910-CIPP-R-EN-DIGITAL-302



          

Coming Soon: Explain IT Season 2

 Cache   

Coming in January 2019, Explain IT season 2 will cover topics such as AI and machine learning, the workspace of the future, 5G and SD-WAN.


Send us a photo showing that you've subscribed, ready for season 2, and we'll send you an exclusive Explain IT laptop sticker.


          

Dec 11, 2019: How to Hit HIV Where It Hurts - Gladstone Center for Cell Circuitry Seminar at Gladstone Institutes

 Cache   

Arup K. Chakraborty, PhD, MIT and Harvard

Vaccines have revolutionized modern medicine, and saved more lives than any other medical procedure. But vaccines have failed against some pathogens, among them HIV, a highly mutable virus.

Arup K. Chakraborty and his team combine theoretical and computational biology, rooted in statistical physics and machine learning, with basic and clinical immunology to understand how the virus’s ability to propagate infection depends on its sequence. They then validate the predictions emerging from this analysis using in vitro and clinical data.

In this seminar, Chakraborty will describe how a T cell–based HIV vaccine was designed based on these findings, and tested in pre-clinical studies. He will also discuss his work on affinity maturation, and understanding how it may be manipulated by antigens and vaccination protocols to elicit broadly neutralizing antibodies against highly mutable pathogens, such as HIV and influenza.

About the Speaker

Arup K. Chakraborty is currently the Robert T. Haslam Professor of Chemical Engineering, and Professor of Physics and Chemistry at MIT. After obtaining his PhD in chemical engineering and completing postdoctoral studies, he joined the faculty at UC Berkeley in 1988. In 2005, Chakraborty moved to MIT.

Chakraborty’s work has largely focused on bringing together immunology and the physical and engineering sciences. He is specifically focused on the intersection of statistical mechanics and immunology. His interests span T cell–signaling, T cell–development and repertoire, and a mechanistic understanding of HIV evolution, antibody evolution, and vaccine design.

He is a member of the National Academy of Sciences, the National Academy of Engineering, and the National Academy of Medicine.

Hosted By:

Gladstone Center for Cell Circuitry



          

Researchers use machine learning tools to reveal how memories are coded in the brain | Mind & Brain - Science Daily

 Cache   
NUS researchers have made a breakthrough in the field of cognitive computational neuroscience by discovering a key aspect of how the brain encodes short-term memories, notes Science Daily.

These findings indicate that stable short-term memory information exists within a population of neurons with dynamic activity.
Photo: National University of Singapore
The researchers working in The N.1 Institute for Health at the National University of Singapore (NUS), led by Assistant Professor Camilo Libedinsky from the Department of Psychology at NUS, and Senior Lecturer Shih-Cheng Yen from the Innovation and Design Programme in the Faculty of Engineering at NUS, discovered that a population of neurons in the brain's frontal lobe contain stable short-term memory information within dynamically-changing neural activity.

This discovery may have far-reaching consequences in understanding how organisms have the ability to perform multiple mental operations simultaneously, such as remembering, paying attention and making a decision, using a brain of limited size.
The results of this study were published in the journal Nature Communications on 1 November 2019...

The researchers are currently extending these studies to explore how multiple brain regions interact with each other to transfer and process different types of information.
Read more... 

Additional resources  
Journal Reference:
  1. Aishwarya Parthasarathy, Cheng Tang, Roger Herikstad, Loong Fah Cheong, Shih-Cheng Yen, Camilo Libedinsky. Time-invariant working memory representations in the presence of code-morphing in the lateral prefrontal cortex. Nature Communications, 2019; 10 (1). DOI: 10.1038/s41467-019-12841-y
Source: Science Daily

          

Top 15 Deep Learning Applications In 2020 | Deep Learning - Robots.net

 Cache   
Machine learning applications have gained popularity over the years, and now deep learning applications, which incorporate more advanced algorithms, have been introduced, as Robots.net reports.

Photo: Gerd Altmann on Pixabay
Deep learning may have evolved quickly, but its applications have been getting more attention than other machine learning applications. So what sets it apart from a machine learning application?

What Is Deep Learning? 
Deep learning is an artificial intelligence technique that mimics the workings of a human brain in processing data, creating patterns and interpreting information used for decision making. It is a subfield of machine learning in artificial intelligence. Its networks have the capability to learn, supervised or unsupervised, from data that is either structured or labelled...

Types Of Deep Learning
There are two types of deep learning: supervised and unsupervised. Supervised learning is when you give an AI a set of inputs and tell it the expected results. Basically, if the generated output is wrong, the model readjusts its calculations; this is repeated over the data set until it makes no more mistakes. Unsupervised learning is the process of machine learning using data sets with no structure specified.
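
As a rough illustration of the distinction, here is a minimal sketch on toy data (my own example, not from the Robots.net piece), fitting a supervised classifier with labels and an unsupervised clustering model without them:

    # Minimal sketch: supervised vs. unsupervised learning with scikit-learn.
    # The data is synthetic; only the contrast between the two settings matters.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 2))
    y = (X[:, 0] + X[:, 1] > 0).astype(int)   # labels: the "expected results"

    # Supervised: inputs plus expected results; errors drive adjustment.
    clf = LogisticRegression().fit(X, y)
    print("supervised accuracy:", clf.score(X, y))

    # Unsupervised: no labels; the model finds structure (here, 2 clusters).
    km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
    print("cluster sizes:", np.bincount(km.labels_))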

Deep learning has been applied to several fields including speech recognition, social network filtering, audio recognition, natural language processing, machine translation, bioinformatics, computer design, computer vision, drug design, medical image analysis, board game programs and material inspection, where it needs to produce results that are comparable or superior to those of human experts.

Let’s go over more details on applications of deep learning and what can deep learning do.

Source: Robots.net

          

Terraview lands in Spain with its intelligent vineyard management: artificial intelligence, augmented reality and machine learning for wine

 Cache   

Identifying soil moisture variations, the right moment for pruning, early detection of bacterial and fungal infections, and precise weather forecasting. These are just some of the possibilities offered by Terraview, a start-up built on artificial intelligence, machine learning and augmented reality...

The post Terraview lands in Spain with its intelligent vineyard management: artificial intelligence, augmented reality and machine learning for wine appeared first on Tecnovino.


          

Cards Against Humanity's Thanksgiving livestream pits a machine learning model against human joke writers

 Cache   

Cards Against Humanity asked Spencer Kelly to teach a computer to write mean, funny joke-cards for a new, AI-based expansion pack to the game; Kelly trained the popular GPT-2 generative language model (previously) on existing cards, and now the company is livestreaming a 16-hour competition between its AI and its human joke-writers, with a voting system to up/downvote the resulting jokes (at the end of the day, these votes will be "tallied up and thrown in the garbage"). You can choose to buy the resulting packs, and if the human team outsells the robots, it will receive a $5,000 bonus. If they fail, they will all be fired.

Presumably, the last part is a joke (the CAH folks are extremely good eggs and they pull weird pranky stunts every Black Friday).

CAH has also opened a board-game cafe in Chicago with two escape rooms, a full bar, and high-quality kitchen, which is pretty danged exciting.

Cards Against Humanity's Black Friday A.I. Challenge Read the rest


          

Hai Wang — Multi-Objective Online Ride-Matching, Nov 26

 Cache   
Abstract: We propose a general framework to study the on-demand shared ride-sourcing transportation systems, and focus on the multi-objective matching between demand and supply. The platforms match passengers and drivers in real time without observing future information, considering multiple objectives such as pick-up time, platform revenue, and service quality. We develop an efficient online matching policy that adaptively balances the trade-offs between multiple objectives in a dynamic setting and provide theoretical performance guarantees for the policy. We prove that the proposed adaptive matching policy can achieve the solution that minimizes the Euclidean distance to any pre-determined multi-objective target. Through numerical experiments and industrial testing using real data from a ride-sourcing platform, we demonstrate that our approach is able to obtain a delicate balance between multiple objectives and bring value to all the stakeholders in the ride-sourcing ecosystem. We also explore an advanced policy addressing ride-matching problems under non-stationary decision scenarios.
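
As a toy illustration of the stated idea of steering matches so that the achieved objective vector stays close to a target (a sketch of the general principle only, not the authors' algorithm; all numbers and names below are made up):

    # Toy greedy policy: for each arriving request, pick the candidate driver
    # whose match keeps the running average objective vector (pickup time,
    # revenue, service quality) closest in Euclidean distance to a preset target.
    import numpy as np

    target = np.array([5.0, 12.0, 4.5])   # hypothetical target vector
    running_sum = np.zeros(3)
    n_matched = 0

    def pick_driver(candidates):
        """candidates: (k, 3) array, one objective vector per feasible driver."""
        global running_sum, n_matched
        avgs = (running_sum + candidates) / (n_matched + 1)
        best = int(np.argmin(np.linalg.norm(avgs - target, axis=1)))
        running_sum += candidates[best]
        n_matched += 1
        return best

    rng = np.random.default_rng(1)
    for _ in range(3):
        candidates = rng.uniform([2, 8, 3], [10, 16, 5], size=(4, 3))
        print("chose driver", pick_driver(candidates))
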
Bio: Dr. Wang is now a Visiting Assistant Professor at the Heinz College of Information Systems and Public Policy at Carnegie Mellon University. He received a bachelor's degree from Tsinghua University, dual master's degrees in operations research and transportation from MIT, and a doctoral degree in operations research from MIT. Dr. Wang is also an Assistant Professor in the School of Information Systems at Singapore Management University. His research has focused on methodologies in operations research, analytics and optimization, data-driven modeling and machine learning algorithms, and their applications in smart cities and urban systems, including innovative transportation, advanced logistics, and intelligent healthcare systems. He has published papers in leading journals such as Transportation Science, American Economic Review Papers & Proceedings, Manufacturing & Service Operations Management, and Transportation Research Part B: Methodological. Dr. Wang serves as guest editor for the Special Issue on Innovative Shared Transportation in Transportation Research Part B and as a reviewer for over 25 academic journals, and was named a Chan Wui & Yunyin Rising Star Fellow in Transportation. He was nominated for MIT's top teaching award for graduate students, the Goodwin Medal, and won the excellent teaching award for junior faculty at Singapore Management University. During his PhD at MIT, he also served as co-President of the MIT Chinese Students & Scholars Association and Chair of the MIT-China Innovation and Entrepreneurship Forum.
          

BIDS Forum: Statistics and Machine Learning Forum, Dec 2

 Cache   
Full details about this meeting will be posted here: https://bids.berkeley.edu/events.

The Berkeley Statistics and Machine Learning Forum meets biweekly to discuss current applications across a wide variety of research domains and software methodologies. Hosted by UC Berkeley Physics Professor and BIDS Senior Fellow Uros Seljak, these active sessions bring together domain scientists, statisticians and computer scientists who are either developing state-of-the-art methods or are interested in applying these methods in their research. Practical questions about the meetings can be directed to BIDS Fellow Francois Lanusse. All interested members of the UC Berkeley and LBL communities are welcome and encouraged to attend. To receive email notifications about the meetings and upvote papers for discussion, please register here.
          

Multi-Scale Structure Control for Advanced Functional Materials, Dec 5

 Cache   
Abstract: Economic growth demands better products; hence, better materials. Specifically, product space in energy, defense, aerospace, and automotive industries requires materials that are multi-functional, lightweight, reliable, and tough. In addition, environmental regulations require sustainability. Comprehensive optimization of these requirements is possible through multi-scale structure control in multi-material systems. In this talk, I will discuss how we can control structure from nano-to-macro-scale using additive manufacturing and solution synthesis routes. Examples from hierarchical composites and ceramics will be provided with a design perspective. The use of kHz-range vibrations in local three-dimensional structure control will be introduced. Our recent work showed that vibration-assisted fused filament fabrication (VA-FFF) technique enhanced strength, toughness, and reliability of short-fiber-reinforced composites. We also reported improved mechanical, electrical, and optical properties in polymers via quantum dots and nano-scale structure control. The mechanistic origins of the toughness and reliability increases in additively manufactured biomimetic porous materials will be detailed. I will describe machine learning and high-throughput approaches towards the discovery of new electro-active material systems for the next-generation product innovation. In addition, the discussion will include evidence-based engineering education approaches and problems. For example, sustainable innovation of better products is strongly correlated to creativity at individual level, but how can we cultivate creative engineers? I will exemplify the use of virtual reality and artificial intelligence education approaches to enhance creativity and diversity.

Biography: Dr. Keles is an Assistant Professor of Chemical and Materials Engineering at San Jose State University. He received his B.S. and M.S. degrees from the Department of Metallurgical and Materials Engineering at Middle East Technical University, and his Ph.D. in Materials Engineering from Purdue University in 2013. He then joined Illinois Institute of Technology as a research associate, where he investigated the reliability of porous glasses and porous pharmaceutical compacts. His work on deviations from Weibull statistics in porous ceramics was highlighted at the Gordon Research Conferences and recognized with an award by the American Ceramic Society. His current research interests are multi-scale structure control, materials informatics, and engineering education. In 2019, Dr. Keles was selected as advisor of the year at SJSU for his contributions to materials student clubs. He is also a photographer and digital artist who uses aesthetically appealing images and computer visualizations to improve student engagement, aid student learning, and foster creativity in engineering students. His work at the intersection of engineering, education, and the arts was also highlighted in the Member Journal of TMS.
          

Safe Cities: securing at the pace of digital

 Cache   
The use of increasingly sophisticated technologies to ensure the security and smooth running of cities is now a reality, thanks to advances in artificial intelligence, machine learning, data, cloud computing and powerful video management systems.
          

Amazon Athena adds support for invoking machine learning models in SQL queries

 Cache   

Today, Amazon Athena released a new feature that allows users to easily invoke machine learning models for inference directly from their SQL queries. The ability to use machine learning models in SQL queries makes complex tasks such as anomaly detection, customer cohort analysis, and sales predictions as simple as invoking a function in a SQL query.  
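
Based on the feature as announced, a query declares an external function backed by a SageMaker endpoint and then calls it like any SQL function. A hedged sketch via the boto3 Athena client follows; the database, table, column, and endpoint names are hypothetical:

    # Sketch of invoking an ML model from Athena SQL (names are placeholders).
    import boto3

    query = """
    USING EXTERNAL FUNCTION detect_anomaly(reading_value DOUBLE)
        RETURNS DOUBLE
        SAGEMAKER 'my-anomaly-endpoint'   -- hypothetical SageMaker endpoint
    SELECT reading_id, detect_anomaly(reading_value) AS anomaly_score
    FROM sensor_readings
    """

    athena = boto3.client("athena", region_name="us-east-1")
    resp = athena.start_query_execution(
        QueryString=query,
        QueryExecutionContext={"Database": "iot_db"},
        ResultConfiguration={"OutputLocation": "s3://my-athena-results/"},
    )
    print(resp["QueryExecutionId"])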


          

Amazon Aurora Supports Machine Learning Directly from the Database

 Cache   

You can now use Amazon Aurora to add machine learning (ML) based predictions to your applications, using a simple, optimized, and secure integration with Amazon SageMaker and Amazon Comprehend. Aurora machine learning is based on the familiar SQL programming language, so you don’t need to build custom integrations, move data around, learn separate tools, or have prior machine learning experience.  
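
A hedged sketch of what this looks like from an application, following the statement shapes in AWS's Aurora MySQL examples at the time; treat the function, endpoint, table, and column names as assumptions for illustration:

    # One-time setup: bind a SQL function to a SageMaker endpoint (Aurora MySQL),
    # then call Comprehend-backed sentiment like any other SQL function.
    import pymysql

    ddl = """
    CREATE FUNCTION predict_churn (feature_blob VARCHAR(1024))
    RETURNS FLOAT
    ALIAS AWS_SAGEMAKER_INVOKE_ENDPOINT ENDPOINT NAME 'churn-endpoint'
    """

    query = """
    SELECT comment_id,
           aws_comprehend_detect_sentiment(comment_text, 'en') AS sentiment
    FROM customer_feedback
    """

    conn = pymysql.connect(host="my-aurora-cluster", user="app",
                           password="...", database="crm")
    with conn.cursor() as cur:
        cur.execute(ddl)     # run once
        cur.execute(query)
        for row in cur.fetchall():
            print(row)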


          

Add ML predictions using Amazon SageMaker models in Amazon QuickSight

 Cache   

You can now preview Amazon QuickSight’s integration with Amazon SageMaker: a new feature that makes it faster, easier, and more cost effective for customers to augment their business data with ML predictions. With just a few clicks, business analysts, data engineers, and data scientists can perform machine learning inferencing in QuickSight to make decisions on new data. Using SageMaker models, popular use cases include predicting likelihood of customer churn, scoring leads conversion, and assessing credit risk for loan applications.


          


A Crystal Ball For Predicting Terrorist Effectiveness? We Created One

 Cache   
In our latest research, we took an unconventional approach of thinking of terrorist groups as early-stage businesses to gauge their hidden capabilities and resources as predictors of their future impact. We need to use machine learning to better predict terrorism.
          

Google RankBrain

 Cache   

DATE GOOGLE CONFIRMED EXISTENCE OF RANKBRAIN: OCTOBER 26TH, 2015 RankBrain is a component of Google’s core algorithm which uses machine learning (the ability of machines to teach themselves from data inputs) to determine the most relevant results to search engine queries. Pre-RankBrain, Google utilized its basic algorithm to determine which results to show for a […]

The post Google RankBrain appeared first on David M. Higgins II.


          

Agricultural cropland extent and areas of South Asia derived using Landsat satellite 30-m time-series big-data using random forest machine learning algorithms on the Google Earth Engine cloud

 Cache   
South Asia (India, Pakistan, Bangladesh, Nepal, Sri Lanka and Bhutan) has a staggering 900 million people (~43% of the population) who face food insecurity or severe food insecurity according to the United Nations Food and Agriculture Organization's (FAO) Food Insecurity Experience Scale (FIES). The existing coarse-resolution (>250-m) cropland maps lack precision in the geolocation of individual farms and have low map accuracies. This also results in uncertainties in the cropland areas calculated from such products. The overarching goal of this study was therefore to develop a high spatial resolution (30-m or better) baseline cropland extent product of South Asia for the year 2015 using Landsat satellite time-series big data and machine learning algorithms (MLAs) on the Google Earth Engine (GEE) cloud computing platform. To eliminate the impact of clouds, ten time-composited Landsat bands (blue, green, red, NIR, SWIR1, SWIR2, thermal, EVI, NDVI, NDWI) were derived for each of 3 time periods over 12 months (monsoon: Julian days 151-300; winter: Julian days 301-365 plus 1-60; and summer: Julian days 61-150), using the every-8-day data from Landsat-8 and -7 for the years 2013-2015, for a total of 30 bands plus a global digital elevation model (GDEM) derived slope band. This 31-band mega-file data cube was composed for each of the 5 agro-ecological zones (AEZs) of South Asia and formed the baseline data for image classification and analysis. Knowledge bases for the random forest (RF) MLAs were developed using spatially well spread-out reference training data (N=2179) in the 5 AEZs. Classification was performed on GEE for each of the 5 AEZs using well-established knowledge-based and RF MLAs on the cloud. Map accuracies were measured using independent validation data (N=1185). The validation showed that the South Asia cropland product had a producer's accuracy of 89.9% (errors of omission of 10.1%), a user's accuracy of 95.3% (errors of commission of 4.7%) and an overall accuracy of 88.7%. The national and sub-national (district) areas computed from this cropland extent product explained 80-96% of the variability when compared with the national statistics of the South Asian countries. The imagery can be viewed at full resolution, by zooming in to any location in South Asia or the world, at www.croplands.org, and the cropland products of South Asia can be downloaded from the Land Processes Distributed Active Archive Center (LP DAAC) of the National Aeronautics and Space Administration (NASA) and the United States Geological Survey (USGS): https://lpdaac.usgs.gov/products/gfsad30saafgircev001/
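
For readers unfamiliar with the platform, the snippet below sketches the general pattern of random forest classification on the Google Earth Engine Python API in the spirit of this workflow; the asset IDs, band stack, and property names are placeholders, not the study's actual inputs:

    # Sketch of RF classification on GEE (asset names are hypothetical).
    import ee

    ee.Initialize()

    composite = ee.Image("users/example/south_asia_31band_composite")
    bands = composite.bandNames()
    points = ee.FeatureCollection("users/example/training_points")  # 'class' 0/1

    # Sample the band stack at the reference points (30 m pixels).
    training = composite.sampleRegions(collection=points,
                                       properties=["class"], scale=30)

    # Train a random forest and classify the composite.
    rf = ee.Classifier.smileRandomForest(numberOfTrees=100).train(
        features=training, classProperty="class", inputProperties=bands
    )
    cropland = composite.classify(rf)

    # Accuracy from an independent validation sample (confusion matrix).
    validation = composite.sampleRegions(
        collection=ee.FeatureCollection("users/example/validation_points"),
        properties=["class"], scale=30,
    ).classify(rf)
    print(validation.errorMatrix("class", "classification").accuracy().getInfo())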
          

Machine Learning magic for your web application with TensorFlow.js (Chrome Dev Summit 2019)

 Cache   


As a web developer, you may have felt that all the buzz and excitement about machine learning requires Python, and wondered how you, as a JavaScript developer, can jump in and use machine learning. I want to show you that now, machine learning in JavaScript is real, powerful, and useful, and you can do some amazing things with it.

Presented by: Sandeep Gupta

Learn more:
TensorFlow.js → https://goo.gle/2XLhMe0
Tensorflow.js Github → https://goo.gle/2DcgLCe

#ChromeDevSummit All Sessions → https://goo.gle/CDS19

Subscribe to the Chrome Developers channel → https://goo.gle/ChromeDevs

Event photos →  https://goo.gle/CDS19Photos
          

Entab Infotech Pvt Ltd

 Cache   
Entab Infotech Pvt Ltd

School Management Software is used by some of the best schools around the globe to automate and handle tedious administrative tasks. School management ERP helps in saving not only time but also money and human resources. When it comes to choosing the best school management ERP, no technology comes close to Entab’s fastest and most advanced school management ERP. Equipped with machine learning and data analytics, Entab’s school management ERP smartly handles diverse school operations, ranging from school admissions to online fee payments. Evolved over 20 years, it is capable of sending automatic reminders and instant notifications to parents.


Category: Enterprise Resource Planning Solutions
Address: B-227, Pocket B, Okhla I
City: New Delhi
State: National Capital Territory of Delhi
Country: India
Website: https://www.entab.in/school-management-software.html
          

Researchers train AI to map a person’s facial movements to any target headshot

 Cache   
What if you could manipulate the facial features of a historical figure, a politician, or a CEO realistically and convincingly using nothing but a webcam and an illustrated or photographic still image? A tool called MarioNETte that was recently developed by researchers at Seoul-based Hyperconnect accomplishes this, thanks in part to cutting-edge machine learning techniques. The researchers claim it […]
          

The best Black Friday 2019 course deals

 Cache   

The best Black Friday 2019 course deals

Black Friday 2019 is here, and with the famous day comes a total avalanche of deals, not only on the Friday itself but in the days before and after. At Genbeta we have been collecting some of the most interesting offers related to software and services, and now we will do the same with online courses in Spanish on computing, programming, hacking and cybersecurity.

While a lot can be learned online for free, more complete educational offerings, with certification and other benefits, are usually paid. But online education platforms do not miss the opportunity during these last days of November and the beginning of December to join in with their own promotions.

Courses at 9.99 euros on Udemy

Courses at 9.90 on Domestika

Courses with a 70% discount on Tutellus

Use the coupon BLACKFRIDAY before enrolling to save up to 70%.

25% discount on Securizame's online courses

Apple Coding Academy

Until 5 December you can get a 30% discount on the Swift 5.1 and App Development with SwiftUI courses. If you combine the offers and take both courses, you can get a 35% discount. Apple Coding's courses on Udemy are discounted by 52%.

More deals

  • 3 months of Amazon Kindle Unlimited, worth 29.97 euros, free.
  • 4 months of Amazon Music Unlimited for 0.99 euros.
  • 30 days of Amazon Prime free.

You can stay up to date and informed at all times about the main deals and news from Xataka Selección on our Telegram channel or on our Twitter and Facebook profiles and the Flipboard magazine. You can also take a look at the bargain-hunting posts from Xataka Móvil, Xataka Android, Xataka Foto, Vida Extra, Espinof and Applesfera, as well as from our colleagues at Compradicción. You can see all the bargains they publish on Twitter and Facebook, and even subscribe to their alerts via Telegram.

You can also find the best Black Friday 2019 deals here.

Xataka Selección offers: Discover the best Black Friday technology deals we have selected for you at Xataka Selección. Don't miss them!

Note: some of the links published here are affiliate links. Despite this, none of the courses mentioned have been proposed by the websites; their inclusion is solely a decision of the editorial team.

          

DH Data Engineer

 Cache   

DATA ENGINEER
Life takes energy. The *** Technology + Innovation Lab works with data that powers our products to improve safety and reliability. By working hands-on with ground-breaking technology, the lab pioneers the development of innovative products through small agile teams. Our teams incorporate a variety of multidisciplinary skills, including industrial predictive algorithms, machine learning, and sentiment analysis.
As a Data Engineer, you'll help ingest, transform and store clean, enriched data ready for business intelligence consumption.
WHO YOU ARE
• You'll have experience in a Data Engineer role (5+ years), with a Graduate degree in Computer Science, Statistics, Informatics, Information Systems or another quantitative field
• You build and maintain optimal data pipeline architecture.
• You assemble large, complex data sets that meet functional / non-functional business requirements.
• You identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, data quality checks, minimize Cloud cost, etc.
• You build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL, DataBricks, No-SQL
• You build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency and other key business performance metrics.
• You document and communicate standard methods and tools used.
• You work with other data engineers, data ingestion specialists, and experts across the company to consolidate methods and tool standards where practical.
• You're experienced using the following software/tools:
• Big data tools: Hadoop, HDI, & Spark
• Relational SQL and NoSQL databases, including COSMOS
• Data pipeline and workflow management tools: DataBricks (Spark), ADF, Dataflow
• Microsoft Azure
• Stream-processing systems: Storm, Streaming-Analytics, IoT Hub, Event Hub
• Object-oriented/object function scripting languages: Python, Scala, SQL


          

DH-Technology Lead

 Cache   

TECHNICAL LEAD


Life takes energy. The *** Technology + Innovation Lab works with data that powers our products to improve safety and reliability. By working hands-on with ground-breaking technology, the lab pioneers the development of innovative products through small agile teams. Our teams incorporate a variety of multidisciplinary skills, including industrial predictive algorithms, machine learning, and sentiment analysis.
As the Technical Lead, you will be responsible for the design and execution of front-end and back-end development. This involves designing and implementing the overall architecture of the application.
WHO YOU ARE
• You have 8+ years of full stack engineering experience (e.g. UWS, .NET, MERN stack) and have led other developers
• You understand data architecture and Platform as a Service (PaaS)
• You are familiar with data migration, transformation, and scripting
• You are experienced in management of hosting environment, including database administration and scaling an application to support load changes
• You implement automated testing platforms and unit tests
• You are proficient in source control tools and familiar with development aiding tools
• You are an ambitious, organized self-starter who is self-motivated, but also a great teammate with a professional presence and a passion for digital, notably around user experience and continuously improving status quo
• You bring a high-energy and passionate outlook to the job and can influence those around you
• You build a sense of trust and rapport that creates an effective workplace
• You are passionate for innovation with a “can do” attitude
• You hold a Bachelor’s or Master’s Degree in Information Technology, Computer Science, or a related quantitative discipline


          

Machine Learning Mortality-Classification in Clinical Documentation with Increased Accuracy in Visual-Based Analyses.

 Cache   
Related Articles

Machine Learning Mortality-Classification in Clinical Documentation with Increased Accuracy in Visual-Based Analyses.

Acta Paediatr. 2019 Nov 24;:

Authors: Slattery SM, Knight DC, Weese-Mayer DE, Grobman WA, Downey DC, Murthy K

Abstract
AIM: The role of machine learning on clinical documentation for predictive outcomes remains undefined. We aimed to compare three neural networks on inpatient providers' notes to predict mortality in neonatal hypoxic-ischemic encephalopathy (HIE).
METHODS: Using Children's Hospitals Neonatal Database, non-anomalous neonates with HIE treated with therapeutic hypothermia were identified at a single-centre. Data were linked with the initial seven days of documentation. Exposures were derived using the databases and applying convolutional and two recurrent neural networks. The primary outcome was mortality. The predictive accuracy and performance measures for models were determined.
RESULTS: The cohort included 52 eligible infants. Most infants survived (n=36, 69%) and 23 had severe HIE (44%). Neural networks performed above baseline and differed in their median accuracy for predicting mortality (p=0.0001): recurrent models with long short-term memory 69% (65, 73%) and gated-recurrent model units 65% (62, 69%) and convolutional 72% (64, 96%). Convolutional networks' median specificity was 81% (72, 97%).
CONCLUSION: The neural network models demonstrated fundamental validity in predicting mortality using inpatient provider documentation. Convolutional models had high specificity for (excluding) mortality in neonatal HIE. These findings provide a platform for future model training and ultimately tool development to assist clinicians in patient assessments and risk-stratifications.

PMID: 31762098 [PubMed - as supplied by publisher]


          

Data engineering specialist for Amazon sellers product at Newxel, Kyiv

 Cache   

Requirements
— 3+ years of experience in Java/Scala (desire to work with Scala) server-side development with cloud services (preferably AWS) and building distributed systems.
— Computer science background or university degree in related fields.
— Hands-on experience with data pipeline and data processing technologies (ex Apache Spark, Kafka).
— Experience with SQL-databases and BigData DBs
— Experience in Agile methodologies and working in distributed teams
— Good English communication skills — spoken and written

Will be a plus
— Experience with CI/CD.
— Knowledge of Python

Benefits
● Competitive salary
● Team buildings
● Free meals, fruits, sweets and cookies
● Flexible working schedule
● Medical insurance
● Modern and comfortable office near the Vystavkovyi center

In this role you will
— Create data and algorithmic processes with cutting-edge technology model as a SaaS.
— Collaborate with stakeholders and implement software development needs.
— Design and develop server-side software, including documentation and coding. Coordinate server-side development and UI development matching.
— Design and build data pipelines based on third-party integrations.

About the project

Are you buying on Amazon? There’s a good chance what you paid was calculated by our technology and created by our Top-Notch engineers.

Feedvisor is the 'AI-First' optimization and intelligence platform for large sellers and brands on Amazon — Feedvisor uses Big Data, Machine Learning and AI Algorithms to facilitate automatic business-critical processes and decisions for online retailers, helping them grow their business and win the competition. At Feedvisor, our technology makes a real and lasting impact on our customers’ businesses. We are seeking a passionate and talented Engineer to join our Top-Notch Engineering team.

At Feedvisor we care a great deal about our company culture. You will be a fit if:
You are taking extreme ownership
You seek to understand the world through other people’s eyes
You are constantly asking “How can I help?”
You see opportunity in demanding situations
You are trustful: you walk your talk
You are a proactive problem-solver who gets stuff done
You are humble, radically candid, and open to feedback


          

Fresh efforts at Google to understand why an AI system says yes or no launches Explainable AI product

 Cache   
Google has announced a new Explainable AI feature for its cloud platform, which provides more information about the features that cause an AI prediction to come up with its results. Artificial neural networks, which are used by many of today’s machine learning and AI systems, are modelled to some extent on biological brains. One of… Read more about Fresh efforts at Google to understand why an AI system says yes or no launches Explainable AI product[…]
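
The announcement gives no API detail, so as a generic illustration of the underlying idea (not Google's product), here is a hedged sketch of feature attribution via permutation importance in scikit-learn:

    # Generic feature-attribution sketch (not Google's Explainable AI API):
    # permutation importance scores how much each input feature drives the
    # model's predictions, the kind of signal such tools surface.
    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance
    from sklearn.model_selection import train_test_split

    data = load_breast_cancer()
    X_tr, X_te, y_tr, y_te = train_test_split(data.data, data.target,
                                              random_state=0)

    model = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
    result = permutation_importance(model, X_te, y_te,
                                    n_repeats=10, random_state=0)

    # Rank features by how much shuffling them hurts held-out accuracy.
    for idx in result.importances_mean.argsort()[::-1][:5]:
        print(f"{data.feature_names[idx]}: {result.importances_mean[idx]:.4f}")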
          

Full Stack Web Developer with AI & ML Integration

 Cache   

Full Stack Web Developer Course with Machine Learning and Artificial Intelligence Integration using Python and Django. What you'll learn in Full Stack Web Developer: learn web development from scratch; build real-life projects and fully functional web apps in no time; integrate machine learning models to build machine learning web apps; professional Full Stack Web […]

The post Full Stack Web Developer with AI & ML Integration appeared first on SaveItSafer.


          

Sustainability Solutions Take Center Stage at Demo Day for Black & Veatch Cleantech Accelerator

 Cache   

Seven U.S. entrepreneurs chosen to participate in the Black & Veatch IgniteX Cleantech Accelerator recently took the stage in downtown Kansas City to showcase their business models and demonstrate how they’re poised to rewrite how the enterprise focuses on sustainability.

Powered by LaunchKC, the Cleantech Accelerator’s Demo Day drew 250 engineers, entrepreneurs, investors, students and professionals who watched as Accelerator founders presented their compelling proofs of how they’re gaining traction in the market.

“When we launched the Accelerator at Black & Veatch, we announced we were looking for the boldest, brightest entrepreneurs and startups in the cleantech space,” said Hyleme George, program director for the Cleantech Accelerator and Demo Day host. “Combining the startups’ novel technology and nimbleness with Black & Veatch’s scale and expertise can move us further and faster towards our mission of building a world of difference through innovation in sustainable infrastructure.”

View this video to learn more about the IgniteX Cleantech Accelerator.

The Nov. 13 Demo Day forum provided the founders with a platform to share their business models via eight-minute pitches that spanned each company’s value proposition and market opportunity.

“It has been incredibly effective to focus on a target area with an established and forward-looking Kansas City company like Black & Veatch to be able to identify and sort new technologies that can create jobs and follow-on investment in that industry,” said Drew Solomon, competition chair of LaunchKC and senior vice president at the Economic Development Corp. of Kansas City, MO.

The IgniteX Cleantech Accelerator cohort entered the Demo Day fresh from a 12-week accelerator boot camp with Black & Veatch personnel and other industry experts, who assisted them in planning, designing and executing real-world projects in the clean technology space.

Cohort demos included:

AWARE VEHICLES, Kansas City, MO: Smart drones for precision agriculture and infrastructure.

P.J. Piper, president & CEO, introduced an automated, innovative mobile docking platform for drones – including launch, recovery, and data transmission and charging – combined with artificial intelligence (AI) for the real-time study of farmland – down to moisture levels in individual plants – and infrastructure often in remote locations. https://www.awarevehicles.com/

BUILT ROBOTICS, San Francisco, CA: Autonomous construction machinery for efficiency, safety and faster results.

“We’re developing artificial intelligence guidance systems that will enable heavy equipment to operate autonomously,” said Jimmy Kim, Business Operations Lead. “Our focus is on building the software… the brains of the robot… to allow it to operate (heavy equipment) intelligently on job sites, and equipping them with the same capabilities and sensibilities that a human operator would have to ensure safety and efficiency.” http://www.builtrobotics.com/

ecoSPEARS, Altamonte Springs, FL: Green remediation of PCBs and dioxins in water, soils, and sediments.

Co-founder Serg Albino is working with NASA and Black & Veatch to clean up the environment. “Imagine a world where every human being has access to clean food, clean soil, clean air and clean water,” said Albino, a former NASA aerospace engineer. ecoSPEARS has an exclusive license to leverage NASA’s proprietary technology to extract, contain and destroy PCBs and other contaminants in the environment. https://ecospears.com/

ELECTRIPHI, San Francisco, CA: Fleet electrification for reducing carbon emissions.

“We are on the cusp of the biggest energy transition in history. This is a multi-trillion-dollar transition from oil to electrons,” said Muffi Ghadiali, co-founder and CEO, who unveiled a fleet electrification planning tool. “If all of transportation, as we know it today, would electrify, that will mean we need to double or triple the grid capacity.

“This will disrupt the entire value chain from how energy is produced, how energy is delivered, how energy is consumed. And this is creating amazing opportunities for existing companies to reinvent themselves and for new companies, like us, to be created. The thread that ties all of this together is data, intelligence and an ecosystem of collaboration.” https://electriphi.ai/

EXTENSIBLE ENERGY, Berkeley, CA: Software to reduce demand charges in commercial buildings.

John Powers, Founder & CEO, has unlocked the software that makes solar energy more cost-effective for commercial building owners. It begins with making buildings smarter to better manage their use of energy through demand control.

“The U.S. generates about 2% of its electricity from solar today; in 10 years that will be 20 percent. That’s a $1 trillion opportunity,” Powers said. “It’s the largest energy infrastructure opportunity of the next decade.”

Powers closed his presentation by announcing work toward a reseller agreement with Black & Veatch to bring the Extensible Energy integrated solution, Demand X, to market. “When an innovator like Extensible Energy and an engineering powerhouse like Black & Veatch come together with a solution that addresses the biggest energy opportunity of the next decade, the result for all of us will be a cleaner world.” https://www.extensibleenergy.com/

INFRALYTIKS, Urbandale, IA: Artificial intelligence data analytics for sustainable infrastructure improvement.

“In the last two years more data has been created than in the entire history of mankind. However, only about half of 1 percent of all that data gets analyzed. And that’s where we come in,” said Kevin Prendergast, president. “Machine learning unlocks the value of the data and makes it work for us.”

Applying large-scale, real-time data analytics is the heartbeat of InfraLytiks, including a Black & Veatch assignment to automate the process of adding telecom lines and antennae to utility poles across the country – potentially all 180 million utility poles in the U.S. “There is huge demand for lines on the poles. This is valuable real estate,” he said. “The value for consumers, of course, is connectivity.” https://infralytiks.com/

NOVONUTRIENTS, Sunnyvale, CA: Converting industrial CO2 emissions into protein-rich feed.

Black & Veatch is committed to finding new technologies that will reshape the food and agriculture system as we know it. And, that’s where David Tze, CEO, enters the picture.

NovoNutrients has created a gas fermentation technology that will convert industrial carbon dioxide emissions into protein for aquafeed – the essential food in aquaculture and its farm-raised seafood – that is far more cost-effective and plentiful than aquafeed products now on the market.

With about 40 billion tons per year of greenhouse gases for NovoNutrients to “farm,” Tze said six sectors are well-suited for the NovoNutrients technology. https://www.novonutrients.com/

“These seven companies are on the front lines of the cleantech revolution,” George said. “Through our partnership with LaunchKC and by bringing the cohort together with our professionals at Black & Veatch, this program is an emphatic endorsement about how new technology can be used to solve our biggest resource and infrastructure challenges.”

---

Editor’s Note: A livestream video of the Black & Veatch Demo Day is available at https://youtu.be/JGIofiDgIhE

About Black & Veatch

Black & Veatch is an employee-owned engineering, procurement, consulting and construction company with a more than 100-year track record of innovation in sustainable infrastructure. Since 1915, we have helped our clients improve the lives of people in over 100 countries by addressing the resilience and reliability of our world's most important infrastructure assets. Our revenues in 2018 were US$3.5 billion. Follow us on www.bv.com and on social media.

About LaunchKC

LaunchKC has evolved from a 5-year-old grants competition into a tech accelerator platform that leverages grants, as well as creates new opportunities for investors, entrepreneurs, workers and the tech ecosystem of Kansas City. Its bottom line is to attract scalable companies to create more jobs and opportunities while growing the local economy. LaunchKC is an initiative of the Downtown Council (DTC) and the Economic Development Corporation (EDCKC) both of Kansas City, Missouri.

Media Contact Information:

CHRISTOPHER CLARK, Black & Veatch, 913-458-2778

MIKE HURD, LaunchKC, 816-447-2136


          

The Nest I/O to host fireside chat with Faraz Hoodbhoy on 3rd December

 Cache   

By Rafiq Vayani KARACHI: The Nest I/O is hosting a fireside chat with Faraz Hoodbhoy, Director, Ecosystem & Outreach at AT&T, on Technology Entrepreneur’s Journey and the upcoming opportunities in Artificial Intelligence and Machine Learning on Tuesday 3rd Dec 2019, from 5.30 – 7.30 pm at The Nest I/O. Faraz was the Founder & CEO …

The post The Nest I/O to host fireside chat with Faraz Hoodbhoy on 3rd December appeared first on Biz Today.


          

4 paradoxes of Digital Transformation, and how to address them

 Cache   
There is no doubt that the mantra of businesses and organisations in this era is digital transformation. It is on everyone's lips. Artificial Intelligence, Cloud, Machine Learning, Big Data, etc. are shaping up as the big organisational bet for those who do not want to miss the train and be left behind in this kind of race that [...]
          

Yeungnam University Medical Center holds 'International Symposium on Big Data Precision Medicine'

 Cache   

[Break News Daegu] Reporter Lee Sung-hyun: The Big Data Precision Medicine Research Group of Yeungnam University Medical Center (medical center director Kim Tae-nyeon) announced on 2 December that it had successfully held the 'International Symposium on Big Data Precision Medicine' on 27 November.

▲ Yeungnam University Medical Center holds 'International Symposium on Big Data Precision Medicine'     © Yeungnam University Medical Center

Commemorating the 40th anniversary of the medical center's founding, the international symposium was organised by the Medical Science Research Center of Yeungnam University Medical Center and hosted by the Big Data Precision Medicine Research Group in the Jukseong lecture hall of the Yeungnam University College of Medicine. Under the theme 'Big Data and Precision Medicine Research Strategies in the Era of the Fourth Industrial Revolution', the event examined the current state of big data research and its potential applications.

Following the opening address by Professor Jang Byung-ik (Gastroenterology), co-chair of the Big Data Precision Medicine Research Group, congratulatory remarks by Yeungnam University President Seo Gil-soo, and a welcome address by medical center director Kim Tae-nyeon, three sessions of invited lectures and lively discussion covered big data management strategy and utilisation, and the application of machine learning and artificial intelligence in healthcare services.

In the first session, on 'Current Status of Big Data Management Strategy', Professor Naoki Nakashima of Kyushu University School of Medicine, president of the Japan Association for Medical Informatics, and Professor Lee Young-sung of Chungbuk National University College of Medicine, chairman of the Korean Society of Medical Informatics and former head of Korea's health and medical research institute, presented on the current situation and future outlook of big data utilisation strategies in Korea and Japan.

In the second session, Professor Kim Hyeon-chang, who leads ICT at Yonsei University Health System, presented 'Improving Disease Prediction with Big Data Analysis', and Professor Jinsang Park of Kyushu University School of Medicine, who conducts active big data research in Japan, was invited to speak on 'Current Status and Issue of Real World Data in Japan', demonstrating the vast potential of combining big data with precision medicine.

In the third session, Jeon Jong-su, head of healthcare technology strategy in Microsoft's public and education business division, presented 'AI in Healthcare', and Kang Hee-jae, research director at BRFrame, presented 'Predicting Depression Based on Machine Learning Using Bio-Signal Data Collected by Wearable Device', covering applications of artificial intelligence and machine learning in the medical field.

The sessions were chaired by Professor Kim Jong-yeon of Yeungnam University College of Medicine, Professor Lee Jung-jeong of Keimyung University School of Medicine, and Professor Jung Seung-pil of Yeungnam University College of Medicine.

Professor Lee Kyeong-soo (Preventive Medicine) and Professor Jang Byung-ik (Gastroenterology), co-chairs of the Big Data Precision Medicine Research Group, said: "We hope this symposium, which brought together experts from Korea and abroad, serves as an occasion to map out the future direction of the big data precision medicine field and to find breakthroughs. Going forward, we will play a leading role in medical big data research in the Daegu and Gyeongbuk region. We also plan to pursue joint Korea-Japan big data research."

          

How AI and ML will drive increased use of software defined storage

 Cache   

Few technologies have been as hyped as artificial intelligence and machine learning in recent years, but then few technologies have the same potential for transformative change. You won’t find a CMO anywhere on the planet unmoved by the possibility of increasing sales and customer loyalty through personalisation at scale: the right offer to the right […]

The post How AI and ML will drive increased use of software defined storage appeared first on SUSE Communities.


          

The U.S. Army’s Worst Tradition: Never Ready for the Next War

 Cache   

The U.S. Army’s Worst Tradition: Never Ready for the Next WarGettySince the end of the Cold War, the U.S. Army has been consistently ranked as the most capable land force on the globe by defense analysts of all stripes. So why are so many people in the American military community today worried about the Army’s ability to deter conflicts with likely adversaries or prevail against those adversaries in future wars?The short answer is that warfare, always a mysterious amalgam of art, science, and guts, has become an increasingly complicated and unpredictable enterprise. America’s leading potential adversaries, China and Russia, have shown no small measure of imagination and dexterity in identifying the U.S. armed forces’ vulnerabilities, and exploiting them through the development of subtle yet aggressive geopolitical strategies, and increasingly lethal armed forces.Both “near peer competitors” may well be ahead of the U.S. military in applying newly emerging technologies—artificial intelligence, machine learning, autonomous systems, hypersonic weapons, and nanotechnology—to the ancient military problems of constricting an adversary’s maneuver, neutralizing its offensive weapons, and disrupting its command and control.These cutting-edge technologies, writes Christian Brose, Senior Fellow at the Carnegie Endowment for International Peace, “will enable new battle networks of sensors and shooters to rapidly accelerate the process of detecting, targeting, and striking threats, what the military calls the ‘kill chain.’”Mattis: ‘No Enemy’ Has Done More Harm to Military Readiness Than CongressHow is it that “the most lethal land force in world history” finds itself in this unenviable position? While the Army exhausted itself fighting two frustrating and inconclusive wars in Afghanistan and Iraq over the last 19 years, both Russia and China embarked on grand strategies of regional hegemony designed to undermine the rules-based international order that emerged after World War II under American leadership. Both of these rising powers have developed myriad ways to sew discord and dissent in America’s network of alliances and to expand their spheres of influence.Beijing presents its ambitious Belt and Road Initiative (BRI) as the best path for underdeveloped countries in Asia and Africa to gain access to modern infrastructure, capital, and prosperity. In practice, it’s plain that under the guise of building ports, roads, and communications infrastructure around the globe, China is engaged in predatory lending practices meant to gain political leverage and privileged access to foreign assets.In the South China Sea, Beijing has militarized seven hotly disputed islets, and is attempting to pinch the U.S. forces out of this strategically sensitive area entirely, even though international courts have declared China’s claims to these waters to be without foundation.Meanwhile, Vladimir Putin has run rings around the Obama and Trump administrations in the chess game of international politics. He successfully annexed the Crimea in 2014 from Ukraine, and interfered in the presidential election of 2016 via “active measures,” i.e., information warfare aimed at creating confusion and conflict in the American body politic. 
Moscow also successfully intervened on behalf of the brutal Assad regime in Syria, and Russia is now a major player in the Middle East.As demonstrated in the Ukraine, the Russians are the master practitioners of “hybrid warfare,” in which conventional military operations—and the threat of such operations—are closely integrated with propaganda, proxy campaigns, cyber warfare, coercive diplomacy, and economic threats.Both Russia and China have revitalized creaky and obsolete military establishments into first-class warfighting organizations. The consensus among Western military analysts is that in their respective spheres of influence, both countries have sufficiently sophisticated “anti-access area denial” (A2AD) capabilities to inflict severe punishment on American forces attempting to penetrate those spheres in order to challenge aggression or come to the aid of an ally. According to Army General Mark Milley, chair of the Joint Chiefs of Staff, both Russia and China are “deploying capabilities to fight the United States through multiple levels of standoff in all domains—space, cyber, air, sea, and land. The military problem we face is defeating multiple levels of standoff… in order to maintain the coherence of our operations.”Gen. Milley and the rest of the Army’s top brass are well aware that their service is currently a rusty instrument for carrying out high intensity operations warfare against either potential adversary. The Army Strategy, an 11-page, single-spaced document published in October 2018, provides a rough blueprint for the service’s plan to transform itself from a counterinsurgency-oriented organization into the leading practitioner of high intensity war by 2028.It won’t be easy. The Army Strategy calls for truly sweeping, even revolutionary, changes in doctrine, training, and organization of forces.For the first time since the Cold War, the Army has to reconfigure itself to be able to fight and win in a contested environment, where it will not have undisputed control over the air and sea. At the same time, it must prepare to engage potential adversaries more or less continuously in “gray zone conflict.” General Joseph Votel, the recently retired head of Special Operations Command, succinctly defines this concept as “conflicts characterized by intense political, economic, informational, and military competition more fervent in nature than normal diplomacy, yet short of conventional war.”The Army Strategy describes four lines of effort to reach the service’s chief objective by 2028, in this order of priority: Readiness, modernization, department reform, and building alliances and partnerships. The last two lines are more or less pro forma in every American military strategy document I’ve read over the last 30 years: reduce waste and inefficiency, and work with allies to insure military interoperability. The first two lines are worth a close look, for they illuminate the broad contours of the service’s quest to regain its pre-eminence in great power conflict. The quest to enhance readiness begins with plans to increase the size of the regular army to over half a million men from its current level of 476,000. In a departure from recent practice, all units earmarked for contingency operations and overseas deployments will be fully manned and given state of the art equipment before deploying. 
In order to increase the size of the service, the quality and quantity of recruiters and instructors will be increased.The focus of Army unit training will shift from counterinsurgency operations to high intensity fighting, where the adversary is assumed to have cutting edge A2AD, offensive weapons, and cyber systems.Deployments of Army units around the world will be less predictable and more rapid that they’ve been to date, as the Army and the other armed services begin to put the “Dynamic Force Deployment” concept to work. This concept is closely associated with former Secretary of Defense James Mattis. It’s also classified, and few details have been released for public consumption. But the core idea, as Mattis explained in 2018, is for the U.S. military to “stop telegraphing its punches.” Combat forces and their support units will be moving in and out of potential flashpoint areas more frequently and at unpredictable intervals in order to proactively shape the strategic environment.Improving readiness also involves important upgrades in the Army’s defensive missile systems to counter China and Russia’s formidable A2AD systems. A new lower-tier air and missile defense sensor project will enhance the ability of Patriot missiles to identify and track targets at long range by 2022. Beginning in 2021, Stryker light armored vehicles will be equipped with a new air defense system to protect mechanized battalions and brigades as they maneuver in harm’s way.Missile system upgrades, coupled with an entirely new generation of combat vehicles, both manned and unmanned, will allow the Army of the future to penetrate adversary defenses with an acceptable degree of loss.Ensuring readiness to fight is the top priority of the Army until 2022. After that date, the service plans to turn close attention to implementing entirely new operational concepts and “technologically mature” systems that are currently in the research and development phase.The overarching goal is to be able to conduct sustained “multi-domain operations” against either potential adversary, and win, by 2028. In the modernization phase, the Army plans to introduce a host of new long-range precision weapons, including hypersonic missiles that travel at more than five times the speed of sound. An entirely new generation of combat vehicles and vertical lift aircraft, i.e., new helicopters and aircraft with capabilities similar to those of the V-22 Osprey, both manned and unmanned, are currently in the works.The new Army Network will be an integrated system of hardware, software, and infrastructure capable of withstanding formidable cyber assaults.The leading war-fighting concept at the foundation of the Army’s modernization effort, though, is clearly “multi-domain operations (MDO).” The first thing to be said about the concept is that it’s very much inchoate. Discussions with several active-duty Army officers suggest even those “in the know” about this classified concept have only a hazy idea of how such operations will work in the field, for the simple reason that many of the systems such operations hope to integrate are still in the early stages of development.The Army has only one experimental MDO unit on active duty. It is deployed in the Indo-Pacific Command and built around a conventional rocket and missile brigade. The brigade contains a unique battalion devoted to intelligence, information, cyber, electronic warfare and space operations (I2CEWS). According to Sydney J. 
Freedberg Jr., an editor at Breaking Defense, the I2CEWS battalion “appears to not only pull together data from outside sources—satellites, drones, spy planes—to inform friendly forces of threats and targets, it also wages war in cyberspace and across the electronic spectrum, hacking and jamming the sensors and networks that tell the enemy where to shoot.”

The commander of Army forces in the Indo-Pacific, Gen. Robert Brown, recently told reporters that his experimental brigade has performed brilliantly “in at least ten war games” against what are presumably Chinese and Russian forces. Before the advent of the new unit, American forces in war games repeatedly failed to penetrate either rival’s anti-access area denial systems with acceptable casualties. Another experimental brigade is expected to enter service in Europe soon.

The U.S. Army has a long and unenviable history of being ill-prepared to fight the next war. The French and British had to train U.S. Army units before they were deployed in World War I. The Army entered World War II as the 17th largest army in the world, with underpowered tanks and airplanes and ancient rifles. The Army that went to Vietnam, Iraq, and Afghanistan had trained long and hard to engage in conventional operations against nation states, but was ill-prepared, psychologically or organizationally, for counter-insurgency war. The Army’s ability to adapt to new developments has long been hampered by infighting and excessive conservatism in the upper reaches of the service’s hierarchy.

To remedy this problem, in July 2018 the Army created the Futures Command (AFC). Its purpose is to unify the service-wide modernization effort under a single command, and to oversee the development of new doctrine, equipment, organization, and training. According to Gen. John Murray, its head, the AFC “will conduct war-fighting and technology experimentation together, producing innovative, field-informed war-fighting concepts and working prototypes of systems that have a low risk of… being rejected by future war fighters. There are no game-changing technologies. There are only game-changing combinations of war-fighting concepts, technologies and organizations.”

To say that General Murray has his work cut out for him is a massive understatement. He surely has one of the most difficult and important assignments in modern military history. Read more at The Daily Beast.



          

AWS extends Alexa voice controls to low-powered devices

 Cache   

Las Vegas: To the delight of third-party developers, Amazon's cloud arm, Amazon Web Services (AWS), has decided to bring Alexa voice control capabilities to low-powered devices.

Currently, Alexa Voice Service (AVS) requires at least 100MB of on-device RAM and an ARM Cortex "A" class microprocessor, reports SiliconANGLE.

Amazon is also expanding the capabilities of its AWS IoT Greengrass service, which extends AWS functions to connected devices.

"When you know the state of your physical assets, you can solve a lot of things," AWS Vice President of IoT Dirk Didascalou told SiliconANGLE.

"You can also create a lot of new services. A lot of our customers have this need," he added.

To cut back on costs, Amazon is transferring tasks with heavy processing requirements, such as retrieving, buffering, decoding and mixing audio, from devices to the cloud, making voice control, and potentially biometrics, possible even for light switches, the report added.

AWS customers can also create their own machine learning image analysis thanks to a new feature added to Amazon Rekognition called Amazon Rekognition Custom Labels, available from December 3.

AWS is also introducing more connectivity and control services to make life easier for IoT developers.

These include Fleet Provisioning for AWS IoT Core, which makes it simpler to onboard a wide range of connected products, be it vacuum cleaners or construction excavators, the report mentioned.

AWS is set to kick off its flagship annual re:Invent conference here on Dec 2.



          

IT / Software / Systems: Staff Software Engineer - Particle Counting - Loveland, Colorado

 Cache   
Staff Software Engineer - Particle Counting
North America-North America-United States-CO-Loveland

Beckman Coulter's Particle Counting and Characterization business is growing and needs intelligent, hardworking, skilled software engineers to work as part of our team to develop and deliver new products. You will apply modern development methodologies and new software technologies to create world-class products, and work with cross-functional project teams to develop new products and sustain existing product lines. Are you an expert in building scalable systems, leading technical discussions, participating in code reviews and guiding the team in engineering best practices? Are you a highly technical, hands-on coder who will also provide technical guidance and thought leadership to team members?

* Architect, design, implement and maintain software systems in a collaborative team environment to deliver products that change the world.
* Ability to architect and build software from scratch as well as improve existing code.
* Provide strong technical leadership to the team. Lead architecture efforts to develop the technical road map of projects.
* Work closely with less experienced software staff to design new software components.
* Ensure design and code reviews, analysis of code components and code coverage of unit test cases.
* Drive end-to-end quality with effective automation of unit-level, component-level and system-level testing.
* Use user-centered design to deliver products that delight our customers.
* Cultivate innovation.
* Challenge the status quo while supporting a world-class product development life cycle.
* Mentor and be mentored by your team members in development techniques and technologies.
* Participate in the testing process through test review and analysis, test witnessing, and certification of software. Understand and apply automated test strategies to the entire development life cycle.
* Gain experience with a broad range of software technologies and the ability to make technical choices objectively.
* Participate in global development efforts.
* Build relationships with peers across the business and use these relationships to drive innovation.
* Actively track developments in the software community and incorporate them where appropriate.
* Provide input to the organization for strategic planning and technology.
* Become savvy in new technologies by taking courses, learning from peers, or self-driven education.
* BS with 9+ years' experience or MS with 7+ years' experience in Computer Science, Computer Engineering or a related field.
* Familiarity with developing production software for desktop computing or embedded devices.
* Exceptional problem solving, critical thinking and communication skills.
* Relentless focus on optimizing time to market while increasing quality.
* Ability to find and share tools that optimize your work.
* We are agile, we inspire and embrace change, and we expect that from you too.

Your experience includes a good set of these crucial skills:

* Experience architecting and building software from scratch
* Experience in designing and developing complex framework and platform solutions with practical use of design patterns
* Experience mentoring others
* Ability to communicate software and system design
* Automated testing
* Automated software delivery pipelines
* Continuous Integration
* Ability to work with customers to understand their needs and translate them into successful solutions
* User Experience (UX) / User Interface design and implementation
* Excellent communication and presentation skills
* Modern software development methodologies including Agile and Scrum
* 5+ years of experience with at least two of: C++, C#, Python or TypeScript
* Experience with the following is a plus: Ruby, Python, MVVM, WPF, MVC, Golang
* Experience with technologies used on: Linux OS, Windows IoT and desktop OS
* Understanding of localization and internationalization

Tools we use:

* Container services (Docker, Kubernetes, etc.)
* Maven, Jenkins, Git, Jira, Confluence
* C#, C++, TypeScript, Ruby, Python, CSS, HTML5
* Windows, Linux, AWS, .NET, Mono, Qt
* Frameworks such as Angular, React, NodeJS or GraphQL
* Relational DBs: MySQL, SQL Server, PostgreSQL
* Machine learning
* Numerical analysis

Diversity & Inclusion: At Danaher, we are dedicated to building and sustaining a truly diverse and inclusive culture. These are not just words on a page; Diversity and Inclusion is a top priority for the company, and it ties deeply to each of our core values. Danaher Corporation and all Danaher Companies are equal opportunity employers that evaluate applicants without regard to race, color, national origin, religion, sex, age, marital status, disability, veteran status, sexual orientation, gender identity, or other characteristics protected by law. #LI-SM1 Full-time
          

Can AI Predict the Stock Market? No, But the Attempt Was Interesting

 Cache   
"We all want to be rich by having a computer just generate piles of money for us," writes long-time Slashdot reader TekBoy. "Here's one man's attempt at using AI to predict the market. From the article (by tinkerer/writer/network guy Jason Bowling): Models that did great during their initial training and validation runs might do ok during runs on later data, but could also fail spectacularly and burn all the seed money. Half the time the simulation would make money, and half of the time it would go broke. Sometimes it would be just a few percentage points better than a coin toss, and other times it would be far worse. What had happened? It had looked so promising. It finally dawned on me what I had done. The results cycling around 50% was exactly what you'd expect if the stock price was a random walk. By letting my program hunt through hundreds of stocks to find ones it did well on, it did stumble across some stocks that it happened to predict well for the validation time frame. However, just a few weeks or months later, during a different slice of the random walk, it failed. There was no subtle underlying pattern. The model had simply gotten lucky a few times by sheer chance, and I had cherry picked those instances. It was not repeatable. Thus, it was driven home -- machine learning is not magic. It can't predict a random sequence, and you have to be very careful of your own biases when training models. Careful validation is critical. I am sure I will not be the last to fall victim to the call of the old treasure map in the attic, but exercise caution. There are far less random time series to play with if you are looking to learn. Simulate, validate carefully, and be aware of your own biases.

Read more of this story at Slashdot.


          

'Pre-Crime' AI Is Driving 'Industrial-Scale Human Rights Abuses' In China's Xinjiang Province

 Cache   
Long-time Slashdot reader clawsoon writes: Among Sunday's releases from the International Consortium of Investigative Journalists on leaked Chinese documents about the detention of Xinjiang Uighurs — which they are calling the largest mass internment of an ethnic-religious minority since World War II — is a section on detention by algorithm, which "is more than a 'pre-crime' platform, but a 'machine-learning, artificial intelligence (AI), command and control' platform that substitutes artificial intelligence for human judgment...." "The Chinese have bought into a model of policing where they believe that through the collection of large-scale data run through AI and machine learning that they can, in fact, predict ahead of time where possible incidents might take place, as well as identify possible populations that have the propensity to engage in anti-state anti-regime action," says James Mulvenon, director of intelligence integration at SOS International LLC, an intelligence and information technology contractor for several U.S. government agencies. "And then they are preemptively going after those people using that data." The Chinese government responded by calling the leaked documents "fake news."

Read more of this story at Slashdot.


          

Research Scientist, Hydrology - NOKIA - Austin, TX

 Cache   
Nokia Analytics and IoT is building enterprise class solutions where machine learning is applied to natural process modeling to determine dynamic and multi…
From Nokia - Mon, 21 Oct 2019 21:22:39 GMT - View all Austin, TX jobs
          

Bell Labs Intern Augmented Human Sensing - NOKIA - Murray Hill, NJ

 Cache   
Present research findings through internal oral presentation. Have a deep understanding of machine learning/IP networking and expertise in related areas such as…
From Nokia - Thu, 24 Oct 2019 03:25:22 GMT - View all Murray Hill, NJ jobs
          

Tapping NGINX for AI-Powered Insight into API Traffic

 Cache   

New insights into your API traffic are made available by leveraging data science and applying machine learning to data derived from your API traffic. To obtain such data, you need to tap into the network or obtain metadata indirectly from a source that has visibility into the API traffic, such as a gateway or load [...]

Read More...

The post Tapping NGINX for AI-Powered Insight into API Traffic appeared first on NGINX.


          

VA – Springfield – Data Scientist, Senior, Financial Management (EM45-10) - RTI Consulting, LLC. - Springfield, VA

 Cache   
Demonstrated experience of machine learning techniques and algorithms. Creating meaningful data visualizations that communicate findings and potential for…
From RTI Consulting, LLC. - Wed, 22 May 2019 15:09:29 GMT - View all Springfield, VA jobs
          

VA – Springfield – Data Scientist, MidLevel, Financial Management (EM45-11) - RTI Consulting, LLC. - Springfield, VA

 Cache   
Demonstrated experience with machine learning techniques and algorithms. Creating meaningful data visualizations that communicate findings and potential for…
From RTI Consulting, LLC. - Wed, 22 May 2019 15:09:25 GMT - View all Springfield, VA jobs
          

Internal Audit - Analytics Manager - Deloitte - Dallas, TX

 Cache   
Knowledge of machine learning and statistical concepts a plus. Advising clients on process efficiency, fraud detection, operational quality, internal control…
From Deloitte - Fri, 22 Nov 2019 01:34:43 GMT - View all Dallas, TX jobs
          

Internal Audit - Analytics Consultant - Deloitte - Dallas, TX

 Cache   
Knowledge of machine learning and statistical concepts a plus. Advising clients on process efficiency, fraud detection, operational quality, internal control…
From Deloitte - Tue, 19 Nov 2019 01:34:56 GMT - View all Dallas, TX jobs
          

Internal Audit - Analytics Senior Consultant - Deloitte - Dallas, TX

 Cache   
Knowledge of machine learning and statistical concepts a plus. Advising clients on process efficiency, fraud detection, operational quality, internal control…
From Deloitte - Fri, 20 Sep 2019 19:39:27 GMT - View all Dallas, TX jobs
          

Medical Unmet Needs NLP/Text Analytics Analyst - GSK - Philadelphia, PA

 Cache   
2+ years of unstructured data analysis/text analytics/natural language processing and/or machine learning application for critical business decisions.
From GlaxoSmithKline - Sat, 05 Oct 2019 03:53:32 GMT - View all Philadelphia, PA jobs
          

Medical Unmet Needs NLP/Text Analytics Analyst (Oncology) - GSK - Philadelphia, PA

 Cache   
2+ years of unstructured data analysis/text analytics/natural language processing and/or machine learning application for critical business decisions.
From GlaxoSmithKline - Sat, 05 Oct 2019 03:53:32 GMT - View all Philadelphia, PA jobs
          

Internal Audit - Analytics Consultant - Deloitte - New York, NY

 Cache   
Knowledge of machine learning and statistical concepts a plus. Advising clients on process efficiency, fraud detection, operational quality, internal control…
From Deloitte - Tue, 19 Nov 2019 01:34:56 GMT - View all New York, NY jobs
          

Internal Audit - Analytics Senior Consultant - Deloitte - New York, NY

 Cache   
Knowledge of machine learning and statistical concepts a plus. Advising clients on process efficiency, fraud detection, operational quality, internal control…
From Deloitte - Fri, 20 Sep 2019 19:39:27 GMT - View all New York, NY jobs
          

Internal Audit - Analytics Manager - Deloitte - New York, NY

 Cache   
Knowledge of machine learning and statistical concepts a plus. Advising clients on process efficiency, fraud detection, operational quality, internal control…
From Deloitte - Mon, 16 Sep 2019 13:35:15 GMT - View all New York, NY jobs
          

Webinar: High-fidelity search and knowledge management

 Cache   

Enterprise Search with iManage RAVN

Are you struggling to share knowledge effectively within your business? Finding, curating, and assimilating content can take significant amounts of human effort. This is why iManage RAVN has developed AI and machine learning solutions—to remove much of the manual pain—empowering you to focus on where you can add value to the firm. In our new webinar we answer many of the most common questions.

Publication date: Thu, 28/11/2019 - 9:55am


          

Machine Learning, Predictive Analytics, and Clinical Practice

 Cache   
Can the Past Inform the Present? Predictive Analytics – Physicians’ minds, no matter how bright or experienced, are fallible—unable to adequately store, recall, and correctly analyze the millions of pieces of medical information needed to optimally care for patients. The promise of machine learning (ML) and predictive analytics is that clinicians’ decisions can be augmented…
Read more
          

Technology And Trends Shaping Insurtech In 2019 And Beyond

 Cache   
InsurTech – Multiple disruptive forces are reshaping the global insurance sector. Technology such as artificial intelligence (AI), machine learning (ML), blockchain and internet of things (IoT) solutions stand on one end, and nimble, innovative insurtech startups sit on the other. These technology advancements are forcing traditional incumbents to rethink their business models and accelerate their…
Read more
          

Juan Pablo Vielma — Mixed Integer Programming Methods for Machine Learning and Statistics, Dec 2

 Cache   
Abstract: More than 50 years of development have made mixed integer programming (MIP) an extremely successful tool. MIP's modeling flexibility allows it to describe a wide range of business, engineering and scientific problems, and, while MIP is NP-hard, many of these problems are routinely solved in practice thanks to state-of-the-art solvers that nearly double their machine-independent speeds every year. In this talk we show how a careful application of MIP modeling techniques can lead to extremely effective MIP-based methods for three problems in machine learning and statistics.

The first problem concerns causal inference of treatment effects in observational studies [1]. For this problem we introduce a MIP-based matching method that directly balances covariates for multi-variate treatments and produces samples that are representative of a target population. We show how using the right MIP formulation for the problem is critical for large data sets, and illustrate the effectiveness of the resulting approach by estimating the effect that the different intensities of the 2010 Chilean earthquake had on educational outcomes. The second problem concerns the design of adaptive questionnaires for consumer preference elicitation [2]. For this problem we introduce an approximate Bayesian method for the design of the questionnaires, which can significantly reduce the variance of the estimates obtained for certain consumer preference parameters. We show how carefully modeling the associated question selection using MIP is crucial to achieving the required near-real-time selection of the next question asked to the consumer. The third problem concerns certifying that a trained neural network is robust to adversarial attacks [3]. For this problem we introduce strong MIP formulations that can significantly reduce the computational time needed to achieve the certification.
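To give a concrete flavor of the third problem, here is the standard textbook big-M MIP encoding of a single ReLU unit $y = \max(0, w^\top x + b)$, assuming precomputed bounds $L \le w^\top x + b \le U$; this baseline sketch is ours, not the stronger formulations contributed in [3]:

\[
y \ge w^\top x + b, \qquad y \ge 0, \qquad y \le w^\top x + b - L(1 - z), \qquad y \le U z, \qquad z \in \{0, 1\}.
\]

When $z = 1$ the constraints pin $y = w^\top x + b$; when $z = 0$ they force $y = 0$. Robustness certification then optimizes an adversarial objective subject to one such block per neuron, and tighter formulations shrink the relaxation gap that drives solve time.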

[1] Building Representative Matched Samples with Multi-valued Treatments in Large Observational Studies. M. Bennett, J. P. Vielma and J. R. Zubizarreta. Submitted for publication, 2019. arXiv:1810.06707

[2] Ellipsoidal methods for adaptive choice-based conjoint analysis. D. Saure and J. P. Vielma. Operations Research 67, 2019. pp. 295-597.

[3] Strong mixed-integer programming formulations for trained neural networks. R. Anderson, J. Huchette, C. Tjandraatmadja and J. P. Vielma. In A. Lodi and V. Nagarajan, editors, Proceedings of the 20th Conference on Integer Programming and Combinatorial Optimization (IPCO 2019), Lecture Notes in Computer Science 11480, 2019. pp. 27-42.

Bio: Juan Pablo Vielma is the Richard S. Leghorn (1939) Career Development Associate Professor at MIT Sloan School of Management and is affiliated with MIT’s Operations Research Center. Dr. Vielma has a B.S. in Mathematical Engineering from University of Chile and a Ph.D. in Industrial Engineering from the Georgia Institute of Technology. His current research interests include the theory and practice of mixed-integer mathematical optimization and applications in energy, natural resource management, marketing and statistics. In January of 2017 he was named by President Obama as one of the recipients of the Presidential Early Career Award for Scientists and Engineers (PECASE). Some of his other recognitions include the NSF CAREER Award and the INFORMS Computing Society Prize. He is currently an associate editor for Operations Research and Operations Research Letters, a member of the board of directors of the INFORMS Computing Society, and a member of the NumFocus steering committee for JuMP.
          

Machine learning for Java developers, Part 2: Deploying your machine learning model

 Cache   

My previous tutorial, "Machine Learning for Java developers," introduced setting up a machine learning algorithm and developing a prediction function in Java. I demonstrated the inner workings of a machine learning algorithm and walked through the process of developing and training a machine learning model. This tutorial picks up where that one left off. I'll show you how to set up a machine learning data pipeline, introduce a step-by-step process for taking your machine learning model from development into production, and briefly discuss technologies for deploying a trained machine learning model in a Java-based production environment.
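As a minimal illustration of that last step, the sketch below shows one common way to serve a trained model from Java using Weka's serialization helper. It is an assumption-laden example, not the tutorial's code: the model file name ("model.bin"), the two-feature schema, and the class labels are all hypothetical.

import weka.classifiers.Classifier;
import weka.core.Attribute;
import weka.core.DenseInstance;
import weka.core.Instance;
import weka.core.Instances;
import weka.core.SerializationHelper;

import java.util.ArrayList;

public class ModelServer {
    public static void main(String[] args) throws Exception {
        // Load a model trained and saved earlier (hypothetical path).
        Classifier model = (Classifier) SerializationHelper.read("model.bin");

        // Rebuild the schema the model was trained on: two numeric
        // features plus a nominal class, purely for illustration.
        ArrayList<Attribute> attrs = new ArrayList<>();
        attrs.add(new Attribute("x1"));
        attrs.add(new Attribute("x2"));
        ArrayList<String> labels = new ArrayList<>();
        labels.add("no");
        labels.add("yes");
        attrs.add(new Attribute("class", labels));
        Instances schema = new Instances("request", attrs, 0);
        schema.setClassIndex(schema.numAttributes() - 1);

        // Score one incoming request against the loaded model.
        Instance req = new DenseInstance(schema.numAttributes());
        req.setDataset(schema);
        req.setValue(0, 1.5);
        req.setValue(1, -0.3);
        double idx = model.classifyInstance(req);
        System.out.println("prediction: " + schema.classAttribute().value((int) idx));
    }
}

In production this scoring code would typically sit behind an HTTP endpoint or message consumer; the key point is that the serving side must reconstruct exactly the attribute schema used at training time.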



          

Machine learning for Java developers, Part 1: Algorithms for machine learning

 Cache   

Self-driving cars, face detection software, and voice-controlled speakers are all built on machine learning technologies and frameworks, and these are just the first wave. Over the next decade, a new generation of products will transform our world, initiating new approaches to software development and the applications and products that we create and use.

As a Java developer, you want to get ahead of this curve, especially because tech companies are beginning to seriously invest in machine learning. What you learn today, you can build on over the next five years, but you have to start somewhere.

This article will get you started. You will begin with a first impression of how machine learning works, followed by a short guide to implementing and training a machine learning algorithm. After studying the internals of the learning algorithm and features that you can use to train, score, and select the best-fitting prediction function, you'll get an overview of using a JVM framework, Weka, to build machine learning solutions. This article focuses on supervised machine learning, which is the most common approach to developing intelligent applications.
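For a taste of the Weka workflow the article builds toward, here is a minimal, hypothetical sketch (not code from the tutorial): load a labeled ARFF dataset, train a decision tree, and estimate its accuracy with cross-validation. The file name "training.arff" is a placeholder.

import java.util.Random;

import weka.classifiers.Evaluation;
import weka.classifiers.trees.J48;
import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;

public class WekaQuickstart {
    public static void main(String[] args) throws Exception {
        // Load an ARFF dataset (placeholder path) and mark the class attribute.
        Instances data = DataSource.read("training.arff");
        data.setClassIndex(data.numAttributes() - 1);

        // Train a C4.5-style decision tree on the labeled examples.
        J48 tree = new J48();
        tree.buildClassifier(data);

        // Score the learner with 10-fold cross-validation.
        Evaluation eval = new Evaluation(data);
        eval.crossValidateModel(tree, data, 10, new Random(1));
        System.out.println(eval.toSummaryString());
    }
}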



          

IT services, tech startups put India on route to AI: ETILC

 Cache   
For artificial intelligence and machine learning algorithms to run on big data, Indian companies require technology that will digitise and store data. Industry is now increasingly adopting this infrastructure. Since legacy systems are slow to change, they are the laggards.
          

Sustainability Solutions Take Center Stage at Demo Day for Black & Veatch Cleantech Accelerator

 Cache   

Seven U.S. entrepreneurs chosen to participate in the Black & Veatch IgniteX Cleantech Accelerator recently took the stage in downtown Kansas City to showcase their business models and demonstrate how they’re poised to rewrite how the enterprise focuses on sustainability.

Powered by LaunchKC, the Cleantech Accelerator’s Demo Day drew 250 engineers, entrepreneurs, investors, students and professionals who watched as Accelerator founders presented their compelling proofs of how they’re gaining traction in the market.

“When we launched the Accelerator at Black & Veatch, we announced we were looking for the boldest, brightest entrepreneurs and startups in the cleantech space,” said Hyleme George, program director for the Cleantech Accelerator and Demo Day host. “Combining the startups’ novel technology and nimbleness with Black & Veatch’s scale and expertise can move us further and faster towards our mission of building a world of difference through innovation in sustainable infrastructure.”

View this video to learn more about the IgniteX Cleantech Accelerator.

The Nov. 13 Demo Day forum provided the founders with a platform to share their business models via eight-minute pitches that spanned each company’s value proposition and market opportunity.

“It has been incredibly effective to focus on a target area with an established and forward-looking Kansas City company like Black & Veatch to be able to identify and sort new technologies that can create jobs and follow-on investment in that industry,” said Drew Solomon, competition chair of LaunchKC and senior vice president at the Economic Development Corp. of Kansas City, MO.

The IgniteX Cleantech Accelerator cohort entered the Demo Day fresh from a 12-week accelerator boot camp with Black & Veatch personnel and other industry experts, who assisted them in planning, designing and executing real-world projects in the clean technology space.

Cohort demos included:

AWARE VEHICLES, Kansas City, MO: Smart drones for precision agriculture and infrastructure.

P.J. Piper, president & CEO, introduced an automated, innovative mobile docking platform for drones (including launch, recovery, data transmission and charging), combined with artificial intelligence (AI) for the real-time study of farmland (down to moisture levels in individual plants) and of infrastructure, often in remote locations. https://www.awarevehicles.com/

BUILT ROBOTICS, San Francisco, CA: Autonomous construction machinery for efficiency, safety and faster results.

“We’re developing artificial intelligence guidance systems that will enable heavy equipment to operate autonomously,” said Jimmy Kim, Business Operations Lead. “Our focus is on building the software… the brains of the robot… to allow it to operate (heavy equipment) intelligently on job sites, and equipping them with the same capabilities and sensibilities that a human operator would have to ensure safety and efficiency.” http://www.builtrobotics.com/

ecoSPEARS, Altamonte Springs, FL: Green remediation of PCBs and dioxins in water, soils, and sediments.

Co-founder Serg Albino is working with NASA and Black & Veatch to clean up the environment. “Imagine a world where every human being has access to clean food, clean soil, clean air and clean water,” said Albino, a former NASA aerospace engineer. ecoSPEARS has an exclusive license to leverage NASA’s proprietary technology to extract, contain and destroy PCBs and other contaminants in the environment. https://ecospears.com/

ELECTRIPHI, San Francisco, CA: Fleet electrification for reducing carbon emissions.

“We are on the cusp of the biggest energy transition in history. This is a multi-trillion-dollar transition from oil to electrons,” said Muffi Ghadiali, co-founder and CEO, who unveiled a fleet electrification planning tool. “If all of transportation, as we know it today, would electrify, that will mean we need to double or triple the grid capacity.

“This will disrupt the entire value chain from how energy is produced, how energy is delivered, how energy is consumed. And this is creating amazing opportunities for existing companies to reinvent themselves and for new companies, like us, to be created. The thread that ties all of this together is data, intelligence and an ecosystem of collaboration.” https://electriphi.ai/

EXTENSIBLE ENERGY, Berkeley, CA: Software to reduce demand charges in commercial buildings.

John Powers, Founder & CEO, has unlocked the software that makes solar energy more cost-effective for commercial building owners. It begins with making buildings smarter to better manage their use of energy through demand control.

“The U.S. generates about 2% of its electricity from solar today; in 10 years that will be 20%. That’s a $1 trillion opportunity,” Powers said. “It’s the largest energy infrastructure opportunity of the next decade.”

Powers closed his presentation by announcing work toward a reseller agreement with Black & Veatch to bring the Extensible Energy integrated solution, Demand X, to market. “When an innovator like Extensible Energy and an engineering powerhouse like Black & Veatch come together with a solution that addresses the biggest energy opportunity of the next decade, the result for all of us will be a cleaner world.” https://www.extensibleenergy.com/

INFRALYTIKS, Urbandale, IA: Artificial intelligence data analytics for sustainable infrastructure improvement.

“In the last two years more data has been created than in the entire history of mankind. However, only about half of 1 percent of all that data gets analyzed. And that’s where we come in,” said Kevin Prendergast, president. “Machine learning unlocks the value of the data and makes it work for us.”

Applying large-scale, real-time data analytics is the heartbeat of InfraLytiks, including a Black & Veatch assignment to automate the process of adding telecom lines and antennae to utility poles across the country – potentially all 180 million utility poles in the U.S. “There is huge demand for lines on the poles. This is valuable real estate,” he said. “The value for consumers, of course, is connectivity.” https://infralytiks.com/

NOVONUTRIENTS, Sunnyvale, CA: Converting industrial CO2 emissions into protein-rich feed.

Black & Veatch is committed to finding new technologies that will reshape the food and agriculture system as we know it. And, that’s where David Tze, CEO, enters the picture.

NovoNutrients has created a gas fermentation technology that will convert industrial carbon dioxide emissions into protein for aquafeed – the essential food in aquaculture and its farm-raised seafood – that is far more cost-effective and plentiful than aquafeed products now on the market.

With about 40 billion tons per year of greenhouse gases for NovoNutrients to “farm,” Tze said six sectors are well-suited for the NovoNutrients technology. https://www.novonutrients.com/

“These seven companies are on the front lines of the cleantech revolution,” George said. “Through our partnership with LaunchKC and by bringing the cohort together with our professionals at Black & Veatch, this program is an emphatic endorsement about how new technology can be used to solve our biggest resource and infrastructure challenges.”

---

Editor’s Note: A livestream video of the Black & Veatch Demo Day is available at https://youtu.be/JGIofiDgIhE

About Black & Veatch

Black & Veatch is an employee-owned engineering, procurement, consulting and construction company with a more than 100-year track record of innovation in sustainable infrastructure. Since 1915, we have helped our clients improve the lives of people in over 100 countries by addressing the resilience and reliability of our world's most important infrastructure assets. Our revenues in 2018 were US$3.5 billion. Follow us on www.bv.com and on social media.

About LaunchKC

LaunchKC has evolved from a 5-year-old grants competition into a tech accelerator platform that leverages grants, as well as creates new opportunities for investors, entrepreneurs, workers and the tech ecosystem of Kansas City. Its bottom line is to attract scalable companies to create more jobs and opportunities while growing the local economy. LaunchKC is an initiative of the Downtown Council (DTC) and the Economic Development Corporation (EDCKC) both of Kansas City, Missouri.

Media Contact Information:

CHRISTOPHER CLARK, Black & Veatch, 913-458-2778

MIKE HURD, LaunchKC, 816-447-2136


          

MIT Technology Review Article

 Cache   

The Shakespeare Conference: SHK 30.520  Friday, 29 November 2019

 

[1] From:        Al Magary <al@magary.com>

     Date:         November 27, 2019 at 3:23:07 PM EST

     Subj:         Re: SHAKSPER: MIT Tech Review 

 

[2] From:        Gabriel Egan <mail@gabrielegan.com>

     Date:         November 29, 2019 at 3:20:34 AM EST

     Subj:         Re: SHAKSPER: MIT Tech Review

 

 

[1]-----------------------------------------------------------------

From:        Al Magary <al@magary.com>

Date:         November 27, 2019 at 3:23:07 PM EST

Subject:    Re: SHAKSPER: MIT Tech Review

 

On 11/26/2019 12:35 PM, Ros Barber wrote:

 

I am troubled that a respected news outlet like MIT Technology Review should be trumpeting pre-prints from arXiv. These articles are essentially self-published and have not yet passed peer review. Journalists picking up articles from this source are effectively bypassing the peer-review process by giving oxygen to work that may not deserve it. A friend of mine who is not even in the field sent me a copy of this article yesterday in great excitement not knowing I had already read it in great detail and found it seriously wanting. She was shocked when I told her this, saying “MIT Tech Review is a source I trust”. 

 

I think Ros Barber’s alarm about the article on Fletcher/Henry VIII attribution (https://www.technologyreview.com/s/614742/machine-learning-has-revealed-exactly-how-much-of-a-shakespeare-play-was-written-by-someone/) is misplaced. First, MIT Technology Review is not a scholarly publication but a tech news magazine. Its mission: “MIT Technology Review adheres to the traditional best practices of journalism. The guiding principles are based on our responsibility to the reader to produce accurate, fair, and independent editorial...” (https://www.technologyreview.com/about/ethics-statement/) One need only go to the home page to prove it is a journalistic medium, with headlines like “This girl’s TikTok ‘makeup’ video went viral for discussing the Uighur crisis”; “Why we should be far more afraid of climate tipping points”; “A falling rocket booster just completely flattened a building in China”; and so forth.

 

Second, the offending article included a gently worded proviso: “Enter Petr Plecháč at the Czech Academy of Sciences in Prague, who says he has solved the problem using machine learning to identify the authorship of more or less every line of the play...” That is scarcely “trumpeting pre-prints from arXiv.” Actually, doesn’t the article put into circulation a theory Shakespeare specialists can knock down before it gets near peer review? 

 

In defense of journalism,

Al Magary

Former practitioner of WWWWW&H 

 

 

[2]-----------------------------------------------------------------

From:        Gabriel Egan <mail@gabrielegan.com>

Date:         November 29, 2019 at 3:20:34 AM EST

Subject:    Re: SHAKSPER: MIT Tech Review

 

Dear SHAKSPERians

 

I share Ros Barber’s general preference for peer review as a gate-keeping process to ensure high quality in published research. But it is by no means a guarantee of excellence. The ‘Journal of Early Modern Studies’ describes itself as a “peer-reviewed international journal”. In its volume for 2016 (on “The Many Lives of William Shakespeare Biography, Authorship and Collaboration”), one of its articles reads:

 

<<

The following exchange occurs in the Q2 (1603) edition of

'Hamlet' when the player is reciting a speech on

Priam's slaughter (2.2.505-7):

 

1st PLAYER: 'But who O who had seen the mobled queen-'

CORAMBIS: Mobled Queene is good, faith very good.

>> (p. 112)

 

The journal peer-review process failed to detect the following errors in the above:

  1. There was no “Q2 (1603)” edition of the play: Q1 was published in 1603 and Q2 in 1604-5. The use of the name “CORAMBIS” in the above quotation must mean that the Q1 edition is intended. This is not a one-off slip as the edition containing Corambis is again called “Q2” on page 113.
  2. The quoted speech prefixes are from neither the Q1 nor the Q2 edition, but the characters’ words are clearly from the Q1 edition, mislabeled as Q2.
  3. In the quotation from Q1, the author has (aside from altering both speech prefixes) added quotation marks around the first line, omitted the comma before “O”, changed “seene” to “seen”, changed “Queene” to “queen” and added a dash at the end of the first line.

The author goes on to quote the Folio even more inaccurately:

 

<<

In the First Folio (1623), which inserts a questioning

line from Hamlet, the word is 'inobled':

 

1st PLAYER: 'But who, O who had seen the inobled queen-'

HAMLET: 'The inobled queen?'

POLONIUS: That's good; 'inobled queen' is good.

>> (p. 112)

 

Here the author has again altered the speech prefixes, added quotation marks to each of the three lines, omitted the comma after “O who”, changed “Queene” to “queen” twice and “Queen” to “queen” once, added a dash to the end of the first line, changed a colon to a semicolon after “good”, and changed “Inobled” to “inobled” in the last line. That is an awful lot of mistakes to make in a quotation of just 22 words.

 

In case SHAKSPERians think I have exaggerated the errors in this author’s quotations of the early editions, I have put the page in question at http://gabrielegan.com/scratch/JEMS.pdf

 

And the author of the peer-reviewed article quoted above?

 

It’s Ros Barber.

 

Regards

Gabriel Egan

 

 

 

 


          

Factual, a Location Data Company Leverages Machine Learning to Update Its Data Insights Solution

 Cache   

New Segments Identify Affinity or Intent Using Actionable Insights From Real-World Consumer Behavior to Identify, Reach and Engage Customers.
          

How TechStyle Uses Machine Learning for Personalization: Q&A With Danielle Boeglin

 Cache   

Danielle Boeglin, VP of data analytics, TechStyle, shares the most useful lessons around machine learning in the retail space. Dive in!
          

TikTok-ing Through the Apocalypse (Part 1)

 Cache   

In this two-part series we explore the international phenomenon and emerging social media platform: TikTok. From the failing Shanghai tech company that created Musical.ly to ByteDance’s $1b acquisition in 2017, we explore how TikTok became the highest-valued startup to date now totalling over $75b. In part 1 we’ll break down the app and its powerful machine learning technology, and how its Chinese roots are critical to understanding the future of the company.

Eating For Free is a bi-weekly gossip podcast reporting from the edge of the internet! We're a new wave of celebrity reporters at a time when pop culture is increasingly chaotic and media lacks the ability or moral direction to make sense of this capitalist nightmare!

Do you have a tip for us? Got some hot gossip? Need to get any questions off your chest? Call our hotline at Call 1-810-EAT-FREE (1-810-328-3733) or send us an email at questions@eatingforfree.com!

Become a Patreon backer for exclusive access to weekly bonus episodes and more! You can also find us on our website, Twitter, and Instagram. For behind-the-scenes gossip and access, join our exclusive Facebook group: Girls & Gays (G.A.G.S.)

Sources: 


          

Modern Hiring Guide

 Cache   

This guide will look at how data, automation, AI, and machine learning will change the quality of your recruiting, hiring, and onboarding efforts. Let the machines do the work while you reap the benefits. :)



Request Free!

          

Blog Post: D365Tour Press Review – November 2019

 Cache   
D365Tour Newsletter - November 2019

Microsoft
* Preview features in Platform update 31 for Finance and Operations apps (January 2020)

Microsoft blogs
* Announcing RPA, enhanced security, no-code virtual agents, and more for Microsoft Power Platform

#MSDyn365FO / #MSDyn365FIN / #MSDyn365SCM – blogs
* Intercompany project cost analysis – Part 2
* Number sequence scope (Portée des souches de numéro) [FR]
* Preview features in Platform update 31 for Finance and Operations apps (January 2020)
* Salesforce signs a big new deal with Microsoft's cloud to power one of its core products
* Dynamics 365 Accounts Payable: Invoice Register Q&A
* Project Stage Workflow
* D365FO and Azure Machine Learning: ML for Demand Forecasting – part 1, Prerequisites
* How to set up easy automatic role assignment: find more exciting details about this feature in Microsoft Dynamics 365 for Finance and Operations
* Revenue recognition – Scheduling
* Container Management in D365FO
* Managing Consolidated Batch Orders in Process Manufacturing in Dynamics 365
* Gaps in the Security Diagnostics for Task Recordings Feature in D365FO

The article "D365Tour Press Review – November 2019" appeared first on D365Tour.
          

417: Machine Learning Magic

 Cache   
We explore the rapid adoption of machine learning, its impact on computer architecture, and how to avoid AI snake oil. Plus so-so SSD security, and a new wireless protocol that works best where the Wi-Fi sucks.
          

AI – Helping Brands Manage Their Online Reputation

 Cache   

The use of Artificial Intelligence (AI) and Machine Learning is on the rise but it’s important to know which reputation management processes should and shouldn’t...

The post AI – Helping Brands Manage Their Online Reputation appeared first on Telecoms Business.


          

Adafruit IoT Monthly: Machine Learning 101, PWNing the ESP32, and more!

 Cache   

IoT Projects

Running ML on Particle Hardware, ML 101

Tensorflow Lite now runs on newer Particle devices! Brandon Satrom published a very detailed tutorial about running TFLite on Particle devices. - Particle

Do your chores or else I’ll cut off the internet!

AccidentalRebel is creating a device “that monitors and logs if my kids have done their chores and daily tasks. If not, their devices won’t have access to the internet.” When their chores for the day are complete, their device will automatically re-connect them to the internet. - Hackaday.io

ESPRing Clock

ESPRing is a NeoPixel ring with an onboard ESP module. The ESP connects to WiFi and fetches the NTP time. - Hackaday.io

AutoHome - Universal Home Automation with Raspberry Pi

rirozizo is building a home automation system powered by a Raspberry Pi “to remotely control any possible home appliances without the use of proprietary hardware and apps”. They’re using the (free) Adafruit IO service as the MQTT broker and for data visualization. - Github

Voice-Controlled PyPortal Smart Switch

Dan the Geek is improving their PyPortal-based Smart Switch. They connected it to Adafruit IO’s IFTTT integration so they can turn a light on or off using voice commands over Alexa or Google Assistant. - Twitter

Evaluating Motion Sensors, Microwave v.s. PIR

Akarush wrote a detailed log of his evaluation of two motion sensors - an RCWL-0516 and a PIR motion sensor. The results? Each sensor has unique advantages and disadvantages. - Hackaday.io

Bluetooth-based Costume Props using Arduino and ESP32

Juan Carlos Jiménez hosted a costume party and integrated their costume with the house decorations. This BLE-powered costume prop is spooky. - JCJC-Dev

RGB Weather Strip

This RGB LED Strip changes color based on the weather forecast outside. - Hackaday

Code-less IoT Projects with Node-RED on Raspberry Pi

Les Pounder posted a tutorial about using the Node-RED development tool…

Node-RED is an awesome tool and anyone, yes anyone, can make something with it. All you need is a web browser and a device with Node-RED. Node-RED uses JavaScript syntax, but we do not have to write any code; rather, we link nodes together.

Around the Internet - IoT News

PWNing MBEDTLS on ESP32

LimitedResults found vulnerabilities in the ESP32 which allow an attacker to compromise MbedTLS, the cryptographic library on the ESP32. It’s important to note that an adversary needs physical access to the ESP32 module, as it was compromised using a voltage-glitching attack. While this doesn’t impact hobbyists, it is an attack on the hardware module (you cannot roll out new software to patch it). If you have an ESP32 module in the field, it is potentially vulnerable to this type of attack, given an attacker’s resources and time. It looks like Espressif is following this report. They tweeted after ESP32 was pwned a couple of months ago: “We have upgraded the hardware; stay tuned for ESP32v3 with improved security and performance!” We are unsure if this impacts the ESP8266 or the upcoming ESP32-S2 module.

Adafruit joins the Zephyr Project

The Zephyr Project is a scalable real-time operating system (RTOS) supporting multiple hardware architectures, optimized for resource constrained devices, and built with safety and security in mind, and we’re thrilled to announce we’ve joined the project. - Adafruit

Mozilla is building a “Web of Things”

Mozilla is building an “open platform for monitoring and controlling devices over the web”.

The idea of the Web of Things is to create a decentralized Internet of Things by giving things URLs on the web to make them linkable and discoverable, and defining a standard data model and APIs to make them interoperable.

Read more on Mozilla IoT…

Amazon’s long-term plan for Alexa

An interview with Rohit Prasad, Alexa’s head scientist, revealed details about where Amazon wants to head with their powerful voice assistant. - TechnologyReview

Hackable Smart Watch powered by Espruino

Bangle.js is a hackable, open-source smartwatch that can be easily customized. It’s currently crowdfunding on Kickstarter and may fill the space left on our wrists since Pebble’s acquisition by Fitbit. The Bangle packs more of a punch than a Pebble, with an nRF52832, 64kB RAM, heart rate monitor, accelerometer, magnetometer and a 350mAh battery. - Kickstarter

Best Buy discontinues Insignia IoT Products

Insignia, Best Buy’s generic hardware brand, has shut down every product which relies on its app (including a freezer). Each time this happens, we think about how many products in our lives rely on “other people’s servers”. Do you have a contingency plan for the IoT devices in your life? - Hackaday

Recognizing AI Snake Oil

AI has been intertwined with IoT (AIoT). But, “Much of what’s being sold as ‘AI’ today is snake oil — it does not and cannot work.” This paper addresses the important questions of “Why is this happening? How can we recognize flawed AI claims and push back?” - Princeton

Analyzing NB-IoT and LoRaWAN Sensor Battery Life

Low power wide area network (LPWAN) technologies like NB-IoT and LoRaWAN are perfect for your projects requiring small packets, long battery life, and long distances. But how long will the batteries in your IoT project really last? - Semtech Developer Journal

Adafruit IoT Updates

Promotion: 1 Year of Adafruit IO Plus Free with $250 Adafruit Purchase

We’re running a special promotion! As of November 20th, 2019 5:30pm, if you place an order of $250 or more at Adafruit, you’ll receive a 1-year subscription to Adafruit IO+. You’ll receive a minimal yet elegant Adafruit IO+ Subscription Card! This card comes with a code on the back that, when typed into your Adafruit IO account, will activate a full year of Adafruit IO+ service for all the IoT projects you can dream up.

Promotion: Google AIY Voice Kit for Black Girls CODE

For a limited time, whenever you buy a Google AIY Voice Full Kit at the regular price of $59.95, Google will automatically donate one to Black Girls CODE. Black Girls CODE’s goal is to empower young women of color ages 7-17 to embrace the current tech marketplace as builders + creators. Check out the bundle on Adafruit’s website.

What is Adafruit.IO?

Adafruit.io has 14,000+ active users in the last 30 days and 850+ Adafruit IO Plus subscribers. Sign up for Adafruit IO (for free!) by clicking this link. Ready to upgrade? Click here to read more about Adafruit IO+, our subscription-based service. We don’t have investors and we’re not going to sell your data. When you sign up for Adafruit IO+, you’re supporting the same Adafruit Industries whose hardware and software you already know and love. You help make sure we’re not going anywhere by letting us know we’re on the right track.


          

International Russian-French workshop "Actual problems of artificial intelligence"

 Cache   

The international Russian-French workshop "Actual problems of artificial intelligence" was held on November 18, 2019. The workshop gathered French experts in AI and AI specialists from the Faculty of Computational Mathematics and Cybernetics, Moscow State University. The French experts and representatives of the chairs of Mathematical Physics, Mathematical Methods for Forecasting, General Mathematics, Nonlinear Dynamic Systems and Control Processes, and Algorithmic Languages, and of the Mathematical Methods of Image Processing Lab, presented reports.

Alexander RAZGULIN, CMC MSU intro
Nicolas BOUSQUET, The French national strategy on AI: Towards hybrid AI for French industry
Archil MAYSURADZE, Data Driven Analysis and Control of Technical Systems
Romaric REDON, Overview of AI R&T roadmap for AIRBUS / Certifiable AI, open issues and research
Ilya SMIRNOV, Trajectory analysis and its applications for turbine balancing
Fabien MANGEANT, Big data & AI for autonomous vehicles / BI, analytics & data management: lessons learnt @ RENAULT
Oleg GONCHAROV, Application of artificial intelligence in control theory
Cédric BUCHE, Interactive Machine Learning
Alexander KHVOSTIKOV and Andrey KRYLOV, Active contour + CNN: segmentation of histological images
Jean-Michel LOUBES, Understanding and removing bias for a fair and explainable AI
Natalia LOUKACHEVITCH and Boris DOBROV, Ontology-based Natural Language Processing
Xavier VIGOUROUX, Computing resources, AI and trends
Yulia KORUKHOVA, Information retrieval for music and mathematics
Mireille REGNIER, AI research activities at INRIA


          


Snipfeed personalised content platform targets Gen Zs and promises to reward creators | Mobile Marketing Magazine

 Cache   
Snipfeed has launched its one-stop platform for personalized content. It uses machine learning and AI to curate a customized version of the web for each user, with podcasts, videos, articles and other online content aggregated in one place. The platform is rolling out on an invitation-only basis initially, with a full launch early next year. […]
          

New Amazon capabilities put machine learning in reach of more developers

 Cache   
Today, Amazon announced a new approach that it says will put machine learning technology in reach of more developers and line of business users. Amazon has been making a flurry of announcements ahead of its re:Invent customer conference next week in Las Vegas. While the company offers plenty of tools for data scientists to build […]
          

A super-fast machine learning model for finding user search intent

 Cache   
Here's a speedy, low-cost, scaleable way to estimate search intent for SEOs and content marketers.
          

L Catterton-backed Il Makiage buys NeoWize

 Cache   
Makeup brand Il Makiage has acquired NeoWize, a data science startup that develops advanced active machine learning algorithms. No financial terms were disclosed. Il Makiage is backed by L Catterton.
          

Meet Tiny Sorter, a DIY experiment connecting Arduino + Teachable Machine

 Cache   
Getting started with physical computing and machine learning can be pretty intimidating. But it doesn’t have to be! Meet Tiny Sorter, a DIY experiment that teaches you how to...
          

SAS Integration With ArcGIS Online in 3-Tier Architecture

 Cache   

Introduction

SAS has powerful tools for statistical computing, along with its Consumer and Business Intelligence suite of products, which have been used in fields such as clinical research, finance, and healthcare. SAS also has fraud detection products based on machine learning and artificial intelligence, which are proving useful for preventing fraud across multiple industries.

On the other hand, ArcGIS Online is a cloud-based software-as-a-service (SaaS) offering for web mapping and Geographic Information Systems. ArcGIS is the market-leading Geographic Information System for arranging geographical data in order to create and use maps in workflow-specific apps. When the spatial data analysis capability of ArcGIS is added to SAS's textual and numeric data analysis, users become more efficient because they can immediately recognize and understand the results.


          

Udacity plans to double headcount in India

 Cache   
New Delhi, Dec 2 : Silicon Valley-based learning platform Udacity on Monday said it plans to double the headcount at its New Delhi office to support its growing student and enterprise client base in India. The company trains workers on next generation skills such as Artificial Intelligence, Machine Learning, automation, deep learning, data analytics, through […]
          

IT / Software / Systems: Senior Software Engineer - San Francisco, California

 Cache   
Description

Senior Software Engineer - Data Engineering

Overview

First Republic is an ultra-high-touch bank that provides extraordinary client service. We believe that one-on-one interactions build lasting relationships. We move quickly to serve our clients' needs so that their financial transactions are handled with ease and efficiency. Client trust and security are paramount in our line of business. Ultimately, our goal is unsurpassed client satisfaction which will lead to personal referrals - our number one source of new business. We recognize that our competitive advantage starts with our people and our culture. At First Republic, we work hard and move quickly as a very coordinated team. If you are looking for an opportunity to grow and contribute in a fun, fast-paced environment, First Republic is the place for you. We have exceptional people focused on providing extraordinary service.

The Senior Software Engineer is responsible for the development and maintenance of the FRB Data Engineering platform and all the processes supporting the management, ingestion and integration of data. The role will also contribute to the design of new data lake components and data marts. Each team member is responsible for the quality of all of the integration processes and the software development cycle.

Be part of a digital transformation. In the last six months, we've:
- Built three AWS cloud environments from scratch
- Built CI/CD pipelines where none existed before
- Proven the agile development mindset, and gotten buy-in from the C-suite
- Built an in-house team of high-impact developers from scratch
- Built bridges with the "legacy" Operations and InfoSec teams to win hearts and change minds
- Discovered countless places to partner with existing teams to deliver more value at higher quality, higher velocity, and lower cost than the "all vendor provided" mode of development we're moving away from

All of this in a $100 billion company, and we're just getting started. I think you'd be hard-pressed to find a more rapid change story anywhere. Because of our success, our scope has been expanding, and we need good developers to help take ownership and drive the right kinds of change across many domains. Some areas where our scope has expanded to, and stuff we'll be doing:
- Event sourcing across all transactional lines of business (Kafka, SQS, etc.)
- Banking CORE abstraction (hiding the details of how the bank core works, so we can do cool stuff with data as it flows in and out)
- Cross-cutting backend dev using SNS, SQS, Kafka, Kubernetes, DynamoDB, Twilio, etc.
- Backend services to support in-house data science efforts
- Machine learning: fraud prevention, analytics, etc.

Technologies we're working with:
- Data: DynamoDB, PostgreSQL, SQL Server, Redis, Elasticsearch, SQS, Kafka
- App layer: ASP.NET Core, Node.js, AWS Lambda, React Native
- DevOps: Docker, Kubernetes, Terraform
- Programming languages: C#, Python, Java

Responsibilities
- Build software and data engineering expertise and own data quality for our processes.
- Drive technical excellence and implementation of best software engineering practices.
- Design and deliver large-scale, 24-7, mission-critical data pipelines and features using modern cloud and big data architectures.
- Develop batch and stream processing services with technologies such as Kafka, AWS Kinesis, AWS Glue, Apache Storm and Spark Streaming.
- Deliver solutions in any big data and database technologies - Hadoop, EMR, Amazon Redshift, Snowflake, or advanced analytics tools.
- Oversee the design, scoping, implementation, and testing in short agile release cycles of in-house development and vendor implementations end-to-end.
- Demonstrated experience working in large-scale data environments which include real-time and batch processing requirements.
- Strong understanding of ETL processing with large data stores. Strong data modeling skills (relational, dimensional and flattened). Strong analytical and SQL skills, with attention to detail.

Qualifications

Required people skills:
- Patience with how the environment is, with an eye towards refactoring the environment into what it should be
- An ability to win friends and influence people on both the technology and business sides
- Clear and concise communication skills
- Bias towards action, an ability to work autonomously, and navigate uncertainty with good humor
- Empathy for our clients and stakeholders on both the technology and business side

Tech skills:
- Track record of delivery in a highly-functional tech environment, preferably a cloud-first environment
- Familiarity with cloud architectural patterns - microservices, message queues, container orchestration, etc.
- A strong preference for infrastructure-as-code
- Deep familiarity with one or more mainstream programming languages
- Experience with both SQL and NoSQL as well as their relevant data modeling approaches (relational, dimensional, flattened) and profiling tools
- An ability to articulate the pros and cons of various cloud data management strategies
- You needn't be a DBA, but you should have mechanical sympathy with respect to data-centric workloads and workflows, and be able to teach others how to approach and reason about their data layer performance regardless of storage philosophy or technology
- Work at the intersection of InfoSec and feature delivery would be a huge plus
- Experience creating software in highly-regulated environments is also a big plus
- Familiarity with ETL tools and architectural approaches
- Hands-on experience building real-time or near-real-time data pipelines

Our platform:
- Data layer: Kinesis, Glue, RDS, DynamoDB, Redshift, Snowflake, SQL Server, Oracle, and similar
- Application layer: Docker, Lambda, etc.
- Code: Node (TypeScript and JavaScript), Python, C#, Java, etc.
- Infrastructure: Docker, Terraform, AWS, OpenShift, S3, etc.

Qualifications:
- 5-7 years of experience
- 2+ years of building and administering distributed applications using a cloud platform
- Deep familiarity with the intersection of one or more cloud platforms and data management (e.g. Redshift, Athena, Lambda, Snowflake, etc.)

Mental/Physical Requirements:
- The ability to learn and comprehend basic instructions; understand the meanings of words and respond effectively; and perform basic arithmetic accurately and quickly.
- Vision must be sufficient to read data reports, manuals and computer screens.
- Hearing must be sufficient to understand a conversation at a normal volume, including telephone calls and in person.
- Speech must be coherent to clearly convey or exchange information, including the giving and receiving of assignments and/or directions.
- Position involves sitting most of the time, but may involve walking or standing for brief periods of time.
- Must be able to travel in a limited capacity.

Own your work and your career - apply now. Are you willing to go the extra mile because you love what you do and how you can contribute as a team? Do you want the freedom to grow and the opportunity to take charge of your own career? If so, then come join us. We want hard-working team players. You'll have the independence to learn, lead and drive change. A culture of extraordinary service, empowerment and stability - that's the First Republic way.

Pursuant to the San Francisco Fair Chance Ordinance, we will consider for employment qualified applicants with arrest and conviction records, to the extent consistent with applicable federal and/or state law.

Associated topics: back end, c#, design pattern, devops, java, perl, php, senior software developer, senior software engineer, software engineer lead
          

Other: DFM Intern - Santa Clara, California

 Cache   
Skills: Communication Skills, Python, Machine Learning. The objective of the internship/Co-Op project is to develop code that utilizes machine learning for DFM applications. Specifically, new methodologies for fast critical area analysis will be developed. The intern/Co-Op will work directly with DFM applications for advanced technologies, and will be working with DFM engineers to create and verify updates to experimental code. The intern will have the opportunity to learn about DFM methodologies, advanced technology layouts, and advanced lithography methods like double patterning. The assignment would involve getting exposure to leading-edge foundry EDA tools, learning about design flows, and yield/design interaction. MS student enrolled in an accredited program in Electrical Engineering, Computer Engineering, Software Engineering, Computer Science, Physics, or related fields. Minimum of 3 months programming experience with Python. Comfortable working in UNIX environments, knowledge of UNIX commands. Preferred qualifications: VLSI design courses/experience, semiconductor process technology courses/experience, experience using Python-based machine learning algorithms, excellent academic standing, strong written and oral communication skills, and attention to detail. Self-motivated; able to take ownership of assignments, develop work plans and proactively seek feedback to ensure objectives are aligned and met. Team player; able to succeed in a dynamic, fast-paced environment. If you need a reasonable accommodation for any part of the employment process, please contact us by email at ************************************ and let us know the nature of your request and your contact information. Requests for accommodation will be considered on a case-by-case basis. Please note that only inquiries concerning a request for reasonable accommodation will be responded to from this email address. An offer of employment with GLOBALFOUNDRIES is conditioned upon the successful completion of a background check and drug screen, as applicable and subject to applicable laws and regulations. GLOBALFOUNDRIES is fully committed to equal opportunity in the workplace and believes that cultural diversity within the company enhances its business potential. GLOBALFOUNDRIES' goal of excellence in business necessitates the attraction and retention of highly qualified people. Artificial barriers and stereotypic biases detract from this objective and may be illegally discriminatory. All policies and processes which pertain to employees including recruitment, selection, training, utilization, promotion, compensation, benefits, extracurricular programs, and termination are created and implemented without regard to age, ethnicity, ancestry, color, marital status, medical condition, mental or physical disability, national origin, race, religion, political and/or third-party affiliation, sex, sexual orientation, gender identity or expression, veteran status, or any other characteristic or category specified by local, state or federal law.
          

Other: Data Scientist - Los Altos, California

 Cache   
COMPANY INFORMATION - ALTO NEUROSCIENCE

Alto Neuroscience is a private company headquartered in the San Francisco Bay Area. Alto is developing a new generation of neural biomarker-based diagnostic tests and personalized treatments for use in psychiatry, based on a platform built from data aggregated at scale and proprietary artificial intelligence analysis tools. The team combines world-leading neuroscientists and business executives with experience commercializing products for the brain.

There currently is no objective way to diagnose psychiatric illness, nor to identify which treatment is best for an individual patient. Treatment selection is presently done by trial-and-error, as there are no reliable blood tests nor objective brain measures, for example, that can inform these decisions. This results in clinical care that is highly inefficient and costly. Alto is focused on addressing this need by finding novel brain biomarkers that can define actionable subtypes of psychiatric illnesses, or inform the development and deployment of new treatments. To that end, we are building a platform capable of identifying a pipeline of new brain-based candidate biomarkers that can underpin new product opportunities.

ROLE SUMMARY - DATA SCIENTIST

Under direct supervision of the chief technology and medical officers, the Data Scientist will complete programming tasks within a developing IT infrastructure surrounding 1) large-scale clinical and biomarker datasets and 2) incoming data from prospective clinical studies. Work will include developing data processing workflows, primarily in Python, for the following tasks:
- A cloud-based, real-time system for data management, quality control, pre-processing, feature extraction, and machine learning from a multimodal set of brain signals
- Automated deidentification and extraction of relevant signals from electronic medical records (EMRs)
- Correlating brain signals with clinical information using machine learning
- Working closely with Alto's data science and medical teams to deliver novel diagnostic and therapeutic solutions for brain health

ROLES AND RESPONSIBILITIES
- Build and deploy machine learning models for brain signal-based prediction tasks
- Develop visualization tools to interpret results
- Daily responsibilities will include algorithmic design and implementation, data analysis, coding/debugging, troubleshooting, and health monitoring/alerting

QUALIFICATIONS AND EDUCATION REQUIREMENTS
- Degree in Computer Science, Information Systems Management, Engineering, or other quantitative fields
- 2+ years of Python programming/development experience
- 1+ years of machine learning experience
- Be a team player while being able to work independently
- Share a desire to improve mental healthcare delivery
- Coding experience in Linux/CentOS environments
- Deep understanding of predictive modeling, machine learning, clustering and classification techniques, and algorithms
- Fluency in a programming language (Python, C, C++, Java, SQL)
- Good to have: cloud development experience, MATLAB, and EEG data analysis experience
          

Dr. rer. nat. Physics & Algorithm Development (m/f/d)

 Cache   
These tasks await you: - Algorithm development to qualify existing processes - Evaluation using process-analysis and pattern-recognition algorithms - Further development and implementation of algorithms for the machining of complex components - Carrying out theoretical and experimental investigations - Optimization of processes using statistical evaluation methods - Strategy development to avoid process fluctuations - Close collaboration with internal and external partners, as well as expanding the network. Our requirements: - Very good degree in physics, mathematics, or a similar field, ideally followed by a doctorate - Very good knowledge of algorithm development and the optimization of nonlinear systems of equations - Good programming skills, ideally in C++, C# and Matlab - Ideally, initial experience in the analysis of large data sets (e.g. machine learning approaches, data mining, pattern recognition methods, SVMs, neural networks) - A high degree of initiative and commitment - Strong teamwork and communication skills - Very good German and English skills
          

Physicist / Mathematician (m/f/d) Image Processing & Algorithm Development

 Cache   
These tasks await you: - Creating and evaluating innovative and efficient image processing algorithms - Monitoring the current state of the science on image processing - Developing automated tests - Identifying and deriving solution approaches, as well as contributing to the redesign of the software architecture - Working independently in an agile development environment - Close collaboration with other development departments. Our requirements: - Successfully completed degree in computer science, physics or engineering - Several years of practical experience, or a doctorate, in image processing - Experience in algorithm development and ideally in machine learning or deep learning - Very good programming skills in C or C++ - Familiarity with git, Bitbucket or JIRA desirable - Team spirit and commitment - A solution-oriented, goal-directed way of working - Very good German and English skills
          

Disney+ Accounts Compromised, Ring IoT Doorbell Hacked, GitHub Preserves Code Beyond the Year 3000 - The Category5.TV Newsroom - Episode 633

 Cache   

The Category5.TV Newsroom

Here are the stories we're covering this week:
- Thousands Of Disney+ Accounts Are Already Up For Sale On Hacking Forums.
- A new AI is combining machine learning and computer vision to detect drowning people in real time. By using object recognition, it's able to tell if a person is swimming normally, or if they are at risk of drowning. What's best is that Drowning-Detector is open source, and can run on a single-board computer such as the Raspberry Pi.
- The IoT doorbell Ring had a bug in its configuration app which sent Wi-Fi setup information unencrypted to some doorbell devices, exposing customers' home networks.
- GitHub Will Preserve Open Source Code In An Arctic Vault.
- A secretive clean energy company backed by Bill Gates has created a way to use mirrors and artificial intelligence to harness the heat of the sun, replacing the need to use fossil fuels in industrial heat applications and cutting back on CO2 emissions. Their invention creates concentrated solar energy so hot that they can manufacture steel, glass and cement with a carbon-free source that had not been available before.

Read the complete show notes, comment or rate this episode, view pictures and obtain links from this episode at https://category5.tv/shows/newsroom/episode/633/

Running time: 22 Minutes 57 Seconds


          

Using customer lifetime value (CLV) for ecommerce success

 Cache   

30-second summary:

  • Focusing on CLV as a topline metric has a significant impact on a company’s bottom line. Despite this, one UK study found that only 34% of marketers surveyed were aware of the term “customer lifetime value” and its implications.
  • In order to make use of the wealth of data that’s been collected, you need a CLV model that utilizes machine learning methods to make predictions.
  • As AI becomes more rooted in ecommerce, the gap between the capabilities of retailers utilizing AI vs those without it will grow too wide for a company without AI to stay competitive.
  • Write openly and conversationally; address your customers like living, breathing people, and you’ll keep them coming back.

Exponea is a customer data and experience platform with predictive analytics capabilities. They work with many notable online retailers including Sofology, FitFlop, and Benefit. Exponea’s recently published white paper, “The Formula for E-Commerce Success,” highlights the role of customer lifetime value (CLV) in keeping acquisition costs down and improving overall ecommerce success.

The 38-page report provides a simple formula that retailers can use to determine CLV, focuses on why this metric is important for growth and customer retention, and provides an ecommerce optimization guide which drills down into the specific ways that retailers can improve customer retention and conversions.

Content produced in collaboration with Exponea.

How is customer lifetime value useful to marketers?

Customer lifetime value represents a customer’s value to a company over a period of time.

The formula for determining the lifetime value of a customer is simple: average annual customer profit multiplied by average duration of customer retention.
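
In code, the arithmetic is a one-liner; the figures below are made-up illustrations, not numbers from the report:

    # The CLV formula above as a minimal sketch. The figures are hypothetical
    # illustrations, not numbers from the Exponea report.
    average_annual_profit = 120.0    # average profit per customer per year, in dollars
    average_retention_years = 3.5    # average duration of customer retention

    clv = average_annual_profit * average_retention_years
    print(f"Customer lifetime value: ${clv:,.2f}")   # -> Customer lifetime value: $420.00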

Simply put, focusing on CLV as a topline metric has a significant impact on a company’s bottom line. Despite this, one UK study found that only 34% of marketers surveyed were aware of the term “customer lifetime value” and its implications.

Here are four ways CLV can be useful for marketers.

  • CLV helps inform what’s appropriate to spend on customer acquisition. Figuring out if you’re making money in the long run, versus the short term, can help you mitigate costs and allocate your marketing dollars appropriately.
  • CLV allows you to segment customers based on value. And segmentation allows for a personalized experience, which many customers now expect.
  • CLV is key for long-term company growth. CLV helps you focus on retaining customers by demonstrating their total value, thus motivating you to improve the overall customer experience.
  • CLV is an important learning process. Determining your CLV forces you to think beyond individual conversions or sales, evaluating the entire customer journey. This can help inform everything from how you reach out to new prospects to how you manage the sales process, to improving customer retention by streamlining your customers’ experience.
Data unification is critical for determining CLV

A key challenge that many companies face when assessing CLV is siloed or fragmented data. This can be a symptom of rapid company growth, complex technology stacks, or even a reflection of internal company culture—if each department operates independently toward certain goals, data can become fractured.

Exponea writes, “without unified customer profile data, it’s nearly impossible to get the sort of actionable results you want. In order to make use of the wealth of data that’s been collected, you need a CLV model that utilizes machine learning methods to make predictions. And that’s just not possible with siloed data.”
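
To make that concrete, here is a hedged sketch of what such a predictive CLV model might look like once data is unified; the features, numbers, and model choice are illustrative assumptions, not Exponea's actual method:

    # A hedged sketch of a machine-learning CLV model trained on unified
    # customer data. The feature names are hypothetical, the data is
    # synthetic, and the model choice is illustrative, not Exponea's.
    import numpy as np
    from sklearn.ensemble import GradientBoostingRegressor
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n = 1000
    # Hypothetical unified features: order count, average order value,
    # days since last purchase, support tickets opened.
    X = rng.random((n, 4)) * [20, 200, 365, 5]
    y = X[:, 0] * X[:, 1] * 0.1 - X[:, 2] * 0.05 + rng.normal(0, 10, n)  # synthetic CLV

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    model = GradientBoostingRegressor().fit(X_train, y_train)
    print(f"Holdout R^2: {model.score(X_test, y_test):.2f}")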

Data fragmentation is exacerbated by the fact that today’s customer makes purchases on multiple devices. This makes it difficult to glean meaningful insights from the various data streams. The use of a main dashboard which synthesizes data from multiple sources (i.e. a customer data platform) is critical for retailers who don’t want to be left behind compared with industry leaders who are taking a forward-thinking approach to data unification and analysis.

Another issue that companies face regarding data management and unification is a lack of in-house expertise.

Per Exponea, “Many companies who have not yet begun tracking CLV are dealing with a lack of qualified personnel to follow the data and produce actionable plans based on it. This, coupled with the need for an in-house dashboard for qualified personnel to use, creates a strong barrier to entry.”

Using customer lifetime value optimally

Once a company has addressed the above issues — namely, all customer data is in one place, you have a main dashboard with predictive analytics capabilities to synthesize and communicate this data, and experienced personnel to monitor everything — you can move on to leveraging CLV.

CLV can be used to:

  • Improve customer acquisition and retention
  • Prevent and reduce churn
  • Plan your marketing budget
  • Measure the performance of your ads
  • Acquire higher value customers
  • Secure future VIPs
  • Practice value-tier segmentation

The report concludes with some concrete steps that retailers can take to optimize conversions and increase a customer’s CLV. This section is extremely tactical, emphasizing customer retention over customer acquisition.

Exponea writes, “It’s not just about selling anymore; it’s about building a place for your customers to return to, again and again. Convert your first-time purchasers into repeat shoppers, and move them along the path to VIP.”

The ecommerce optimization guide provides details on four technological tactics that online retailers must focus on to improve customer retention. At the top of the list? Personalization.

The four tactics include:

  • Delivering personalized customer experiences
  • Automated email campaigns
  • Value segmentation
  • Converting customers into VIPs

Says Exponea, “As AI becomes more rooted in ecommerce, the gap between the capabilities of retailers utilizing AI vs those without it will grow too wide for a company without AI to stay competitive.”

There is much more information available in the e-book, including tactics focused on improving authenticity (and integrity) in communication between the retailer and the customer. Examples of authenticity include what to do—and what not to do. “Write openly and conversationally; address your customers like living, breathing people, and you’ll keep them coming back,” explains Exponea.

To learn more about how marketers can use CLV, check out the full white paper, The Formula for E-Commerce Success.

The post Using customer lifetime value (CLV) for ecommerce success appeared first on ClickZ.


          

Podcast #75: Toward Ubiquitous Digital Twins - A Conversation with Alfonso Velosa

 Cache   

Alfonso Velosa has been following the Industrial IoT market as an analyst for many years, and our conversation covered a wide range of topics. We discuss a number of the key use cases and best practices for success that he has seen, along with some of the key industries leading the way. We also cover the challenges involved in integrating business outcomes, technology and organizational culture, and some of the obstacles still commonly faced in the market. He shares his views on IoT platforms, AI and Machine Learning, Blockchain and the evolution of Digital Twins. Lastly, he shares his vision for a future where Digital Twins become integrative and anticipatory parts of business and daily life.


          

Director of Machine Learning

 Cache   

We’re working on an exciting Director of Machine Learning opportunity with a growing online media start-up in the Los Angeles area.

 

This is a highly collaborative role, working closely with internal teams to build new data products and machine learning applications to support the massive amounts of real-time data coming in on a daily basis. This new Director will help to produce machine learning services that will be integrated into other products, including NLP, recommendation engines, etc.

 

While previous people management experience is preferred, very strong technical skills are more important than management experience. This is a great opportunity for someone who is excited about joining a well-funded, late-stage start-up and about building out a team over the next several years.

 

Requirements:

  • 5+ years’ experience and a history of leading successful advanced analytics and data science projects
  • Strong technical skills and proficiency in Python
  • Fluency in a range of Machine Learning toolkits
  • Experience designing and employing production quality ML algorithms and code
  • Advanced degree in Statistics, Computer Science or Math (PhD preferred, Master's required)
  • Prior people management experience preferred
  • Interest in the media industry and streaming services

This role is located in the Los Angeles, CA area and has a generous compensation package (up to $200K). Strong preference for candidates already located in the L.A. area.

Green Card holders or U.S. Citizens only please.


Keywords: Statistics, Machine Learning, Data Science, ML engineering, Machine Learning Engineering, recommendation engines, NLP, Natural Language Processing, analytics, R, Python, SAS, SQL, statistical modeling, predictive analytics, regression modeling, Java, Scala


          

UCLA SCHOOL OF LAW PULSE FELLOWSHIP

 Cache   

UCLA School of Law PULSE Fellowship in Artificial Intelligence, Law, and Policy

UCLA School of Law’s Program on Understanding Law, Science, and Evidence, or PULSE, is now accepting applications for the PULSE Fellowship in Artificial Intelligence, Law, and Policy for the academic years 2018-2020. This fellowship is a full-time, two-year faculty position with a start date of July 1, 2018. The position primarily involves sustained research and writing on the social, economic, and legal implications of artificial intelligence and machine learning. The position will also involve teaching and assisting with PULSE projects, such as conferences and workshops.

Artificial intelligence and machine learning have advanced rapidly in recent years, and additional advances may proceed at an accelerating pace. Automated translation, face and voice recognition, anticipation of criminal sentencing consequences, and automated radiological diagnosis are just a few among many salient examples of recent progress. Similar technologies will alter many aspects of human life, yielding societal disruption and a need for governance.

Rather than focusing on colorful fictional treatments or relatively immediate consequences, the PULSE fellow will engage in careful, critical advance thinking about large-scale potential impacts. The fellow will evaluate methods for assessment and prediction, as well as legal, economic, institutional, regulatory, and other forms of preparation and response. The fellow’s research will culminate in the authorship of papers suitable for publication in law journals or other respected legal, scholarly, and policy outlets. Throughout, the fellow will work in collaboration with Professor of Law Edward A. Parson and PULSE Co-Director Richard M. Re, among other UCLA faculty.

PULSE explores the complex connections between law, evidence, science, and technology. PULSE engages in cutting-edge and interdisciplinary research and programming to examine how basic “facts” about our world, provided through science and credited as evidence, influence venues of law and policy making. PULSE is co-directed by UCLA School of Law Dean Jennifer L. Mnookin and Assistant Professor of Law Richard M. Re.

Candidates for the PULSE fellowship should possess a J.D. or other advanced degree, a strong academic record, excellent analytical and writing skills, and demonstrated interest or background in the fields of law and science, artificial intelligence, or social risk assessment. Candidates with previous academic, research, or professional experience in artificial intelligence, machine learning, computer science, or related fields of science and technology are especially encouraged to apply. The salary is anticipated to be approximately $90,000 per year plus a competitive benefits package. UCLA School of Law has a special interest in enriching its intellectual environment through further diversifying the range of perspectives represented within the faculty.

Applicants should apply online at https://recruit.apo.ucla.edu/apply/JPF03499. Please submit a letter discussing your qualifications, scholarly and professional aims, and the interests you would wish to pursue while holding the fellowship; a resume; a transcript of studies in law school or graduate school; a writing sample of no more than ten pages; and contact information for three references.

To ensure full consideration, applications should be received by Wednesday, February 28, 2018 but will be considered thereafter through March 26, 2018 or until the position is filled.

Visit our website at http://www.law.ucla.edu/pulse for more information about our program.


          

Associate

 Cache   
NY-New York, Associate with Goldman Sachs & Co. LLC in New York, NY. Research, evaluate, implement, and validate various machine learning models to conduct predictive analytics. Review, evaluate, and approve code changes and new script submissions to production environment, and provide suggestions to improve code performance and quality. Requires: Master’s degree (U.S. or equivalent) in Computer Science, Opera
          

DSS 8440: Flexible Machine Learning for Data Centers

 Cache   
More high-performance machine learning possibilities

Dell EMC is adding support for NVIDIA's T4 GPU to its already powerful DSS 8440 machine learning server. This introduces a new high-performance, high-capacity, reduced-cost inference choice for data centers and machine learning service providers. It is the purpose-designed, open PCIe architecture of the DSS 8440 that enables us to readily expand accelerator options for our customers as the market demands. This latest addition to our powerhouse machine learning server is further proof of Dell EMC's commitment to supporting our customers as they compete in the rapidly emerging AI ...
          

HPR2956: HPR Community News for November 2019

 Cache   

New hosts

Welcome to our new hosts:
Nihilazo, Daniel Persson.

Last Month's Shows

Id Day Date Title Host
2935 Fri 2019-11-01 The work of fire fighters, part 3 Jeroen Baten
2936 Mon 2019-11-04 HPR Community News for October 2019 HPR Volunteers
2937 Tue 2019-11-05 Lord D's Film Reviews: His Girl Friday lostnbronx
2938 Wed 2019-11-06 Naming pets in space game tuturto
2939 Thu 2019-11-07 Submit a show to Hacker Public Radio in 10 easy steps b-yeezi
2940 Fri 2019-11-08 Better Social Media 05 - Mastodon Ahuka
2941 Mon 2019-11-11 Server Basics 107: Minishift and container management klaatu
2942 Tue 2019-11-12 Why I love lisps Nihilazo
2943 Wed 2019-11-13 Music as Life brian
2944 Thu 2019-11-14 ONICS Basics Part 4: Network Flows and Connections Gabriel Evenfire
2945 Fri 2019-11-15 Saturday at OggCamp Manchester 2019 Ken Fallon
2946 Mon 2019-11-18 Sunday at OggCamp Manchester 2019 Ken Fallon
2947 Tue 2019-11-19 The Mimblewimble Protocol mightbemike
2948 Wed 2019-11-20 Testing with Haskell tuturto
2949 Thu 2019-11-21 Grin and Beam: The 2 major mimblewimble blockchains mightbemike
2950 Fri 2019-11-22 NotPetya and Maersk: An Object Lesson Ahuka
2951 Mon 2019-11-25 A walk through my PifaceCAD Python code – Part 2 MrX
2952 Tue 2019-11-26 Publishing your book using open source tools Jeroen Baten
2953 Wed 2019-11-27 How I got started in Linux Archer72
2954 Thu 2019-11-28 Wrestling As You Like It episode 1 TheDUDE
2955 Fri 2019-11-29 Machine Learning / Data Analysis Basics Daniel Persson

Comments this month

These are comments which have been made during the past month, either to shows released during the month or to past shows. There are 16 comments in total.

Past shows

There are 2 comments on 1 previous show:

  • hpr1585 (2014-08-29) "36 - LibreOffice Calc - Financial Functions - Loan Payments" by Ahuka.
    • Comment 1: timttmy on 2019-11-30: "Thanks"
    • Comment 2: Ahuka on 2019-11-30: "I'm glad it helped"

This month's shows

There are 14 comments on 8 of this month's shows:

  • hpr2935 (2019-11-01) "The work of fire fighters, part 3" by Jeroen Baten.
    • Comment 1: Ken Fallon on 2019-11-05: "That sucks"
    • Comment 2: Ken Fallon on 2019-11-05: "That blows"
    • Comment 3: Ken Fallon on 2019-11-05: "You're Fired"

  • hpr2936 (2019-11-04) "HPR Community News for October 2019" by HPR Volunteers.
    • Comment 1: lostnbronx on 2019-11-04: "Ken's Voice Is Better Than espeak"
    • Comment 2: Jon Kulp on 2019-11-05: "Pots"
    • Comment 3: clacke on 2019-11-19: "Release order or episode order?"

  • hpr2939 (2019-11-07) "Submit a show to Hacker Public Radio in 10 easy steps" by b-yeezi.
    • Comment 1: Ken Fallon on 2019-11-07: "Clarification"

  • hpr2940 (2019-11-08) "Better Social Media 05 - Mastodon" by Ahuka.
    • Comment 1: ClaudioM on 2019-11-08: "Simple Mastodon Timeline View Option"

  • hpr2942 (2019-11-12) "Why I love lisps" by Nihilazo.

  • hpr2943 (2019-11-13) "Music as Life" by brian.
    • Comment 1: Carl on 2019-11-21: "Interesting Episode"

  • hpr2944 (2019-11-14) "ONICS Basics Part 4: Network Flows and Connections" by Gabriel Evenfire.
    • Comment 1: Dave Morriss on 2019-11-27: "This is wonderful"

  • hpr2955 (2019-11-29) "Machine Learning / Data Analysis Basics" by Daniel Persson.
    • Comment 1: b-yeezi on 2019-11-29: "Great first episode"

Mailing List discussions

Policy decisions surrounding HPR are taken by the community as a whole. This discussion takes place on the Mail List which is open to all HPR listeners and contributors. The discussions are open and available on the HPR server under Mailman.

The threaded discussions this month can be found here:

http://hackerpublicradio.org/pipermail/hpr_hackerpublicradio.org/2019-November/thread.html

Events Calendar

With the kind permission of LWN.net we are linking to The LWN.net Community Calendar.

Quoting the site:

This is the LWN.net community event calendar, where we track events of interest to people using and developing Linux and free software. Clicking on individual events will take you to the appropriate web page.

Any other business

Stand at FOSDEM

Our proposal for a “Free Culture Podcasts” stand at FOSDEM was accepted for the Sunday 2nd February. This is fantastic news as this is the largest FLOSS event in Europe and is absolutely thronged the whole day.

https://fosdem.org/2020/news/2019-11-19-accepted-stands/

Anyone going to FOSDEM, and who would like to help staff the booth on Sunday please get in touch.

Tags and Summaries

Thanks to the following contributor for sending in updates in the past month: Dave Morriss

Over the period tags and/or summaries have been added to 5 shows which were without them.

If you would like to contribute to the tag/summary project visit the summary page at https://hackerpublicradio.org/report_missing_tags.php and follow the instructions there.


          

HPR2955: Machine Learning / Data Analysis Basics

 Cache   

In this episode, I talk about different techniques that we can use to predict the outcome of some question depending on input features.

The different techniques I will go through are the ZeroR and OneR that will create a baseline for the rest of the methods.

Next up, we have the Naive Bayes classifier, which is simple but powerful for some applications.

After that come nearest neighbor and decision trees, which require more training but are very efficient when you infer results.
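
To make these techniques concrete, here is a minimal scikit-learn sketch (an illustration, not code from the episode) comparing a ZeroR-style baseline with the classifiers discussed so far; a depth-1 decision tree stands in for OneR:

    # A minimal sketch (assuming scikit-learn is installed) comparing a
    # ZeroR-style baseline with the classical techniques described above,
    # on a toy dataset. A depth-1 decision tree approximates OneR.
    from sklearn.datasets import load_iris
    from sklearn.dummy import DummyClassifier            # ZeroR: predict the majority class
    from sklearn.model_selection import train_test_split
    from sklearn.naive_bayes import GaussianNB           # Naive Bayes
    from sklearn.neighbors import KNeighborsClassifier   # nearest neighbor
    from sklearn.tree import DecisionTreeClassifier      # decision trees

    X, y = load_iris(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    models = {
        "ZeroR baseline": DummyClassifier(strategy="most_frequent"),
        "OneR-style stump": DecisionTreeClassifier(max_depth=1),
        "Naive Bayes": GaussianNB(),
        "Nearest neighbor": KNeighborsClassifier(n_neighbors=1),
        "Decision tree": DecisionTreeClassifier(max_depth=3),
    }
    for name, model in models.items():
        accuracy = model.fit(X_train, y_train).score(X_test, y_test)
        print(f"{name}: {accuracy:.2f}")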

Multi-layer perceptron (MLP) is the first technique that is close to the ones we usually see in Machine Learning frameworks used today. But it is just a precursor to the Convolutional Neural Network (CNN) because of its size requirements: MLPs have the same size for all the hidden layers, which makes them unfeasible for larger networks.

CNNs, on the other hand, use subsampling, which shrinks the layer maps to reduce the size of the network without reducing the accuracy of the predictions.
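
To see the subsampling idea in code, here is a minimal Keras sketch (the episode does not prescribe a framework, so this choice is an assumption) in which each pooling layer shrinks the feature maps:

    # A minimal sketch (assuming TensorFlow/Keras) of pooling-based
    # subsampling; the comments track how the feature maps shrink.
    import tensorflow as tf
    from tensorflow.keras import layers

    model = tf.keras.Sequential([
        layers.Conv2D(16, 3, activation="relu", input_shape=(28, 28, 1)),  # 26x26x16 maps
        layers.MaxPooling2D(2),                    # subsampled to 13x13x16
        layers.Conv2D(32, 3, activation="relu"),   # 11x11x32
        layers.MaxPooling2D(2),                    # subsampled to 5x5x32
        layers.Flatten(),
        layers.Dense(10, activation="softmax"),    # class probabilities
    ])
    model.summary()   # the summary shows the maps shrinking at each pooling step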


          

Why Every Developer Needs to be a Generalist

 Cache   

Developer Generalist vs Specialist

Context, as they say, is king.

The age-old question of exactly what a software developer should focus on learning has been crossing my mind a lot lately. More than ever, our technology is evolving at a furious pace - and the coding world is definitely feeling the pressure. It can be overwhelming to choose where to pay attention and what to dismiss as a passing fad.

So what are you to do? Let’s look at what the next decade has in store for the development world.

Past Predictions

Cory House spoke convincingly on the merits of specializing in one area to become a known and trusted voice. A few years ago, Forbes came out with a high-level article proclaiming the opposite. More recently, I stumbled upon this post on Hacker Noon embracing the notion of both specialist and generalist. Which way is a developer supposed to go? The answer to this question can feel largely opinion-based, but there are some logical ways to examine it. Let's get started.

How Do I Choose the Right Tech to Focus On

This is the question for a specialist: how to leapfrog from one framework lilypad to another. It's easy to fall in love with a specific area of coding and become obsessed - I've certainly done it. However, it can truly lead to a head-in-the-sand position when the world moves on without you (my condolences to Windows Phone developer friends, for example).

Are You Saying I Should be a Full Stack Developer

Great question! “Generalist” doesn’t always mean “Full Stack”; they aren’t interchangeable. The traditional view of a full stack programmer referred to the web (back end and front end) but there are many different places where code plays a role!

Personally, I've coded for voice, IoT, APIs, timer jobs, mobile apps, intranet sites, external websites, ETLs, and the list goes on. Is any of that knowledge evergreen? Some of it! Mostly the ways in which I interacted with my team and product owner - not how I specifically customized a SharePoint page.

You can carry a cross-section of evergreen knowledge with you as a software engineer. A specific part of those “years of experience” is still applicable to whatever you need to work on now. And that part fits neatly into being a generalist developer.

Why Does Future Tech Require a Generalist Approach

Regardless of what kind of coding you do now, areas of our industry are developing in impossible-to-ignore ways. Remember when JavaScript started taking over the world? If you wanted to do anything in the browser, you had to learn it. Now, not only does it influence the browser - Node.js and React Native have hugely influenced API and mobile app development as well.

Remember when AI was just another fad? With advancements in Machine Learning, Deep Learning and Big Data analytics - it doesn’t look so dismissable anymore.

Even if your main gig is maintaining a legacy code base, you owe it to your future self to know what tools are out there, different than what you use today. This knowledge doesn’t have to be super deep to be powerful, but you do need to know enough about current and future industry techniques to understand where your experience can fit.

What Should All Developers Learn Right Now

I’m glad you asked! There are a few areas in particular that developers really can’t afford to ignore anymore.

1. Security

Naturally, the developer relations team here at Okta cares a lot about this topic! Often, developers are content to make a system ‘just work’ well enough to get out the door for a deadline. The result is company after company coming forward and admitting to their users that their data was not securely stored or collected. This is an area you HAVE to educate yourself on.

Get started with the basics like the OWASP Top 10 security vulnerabilities. The Cheat Sheet Series is another excellent resource for app devs looking to become knowledgeable on security quickly.

Next, make sure you are coding securely from the very beginning, from how you store API keys to the way you deploy your code. This is one area you cannot afford to cut corners. We’ve got lots of blog posts here at Okta to get you up to speed on user security specifically.
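
One small, concrete example of that habit is keeping secrets out of source code; a minimal sketch (the environment variable name is hypothetical):

    # A minimal sketch of one basic practice mentioned above: keep API keys
    # out of source code by reading them from the environment. The variable
    # name is hypothetical.
    import os

    api_key = os.environ.get("MY_SERVICE_API_KEY")
    if api_key is None:
        raise RuntimeError("MY_SERVICE_API_KEY is not set; refusing to start.")
    # ...pass api_key to your HTTP client instead of hard-coding the secret...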

2. Machine Learning

Automation will come in many forms and affect all areas of technology. You should have at least a cursory understanding of how your data is fed into various algorithms and the decisions those algorithms can make.

You’ll need to use coding languages like Python and/or R to get started in this area and there are great tutorials on using Jupyter Notebooks to help. However, if you are interested in using machine learning as a service, Microsoft has come a long way with Cognitive Services, which will allow you to use REST APIs to do basic machine learning tasks like image recognition or text analysis. No matter what business you are part of, AI is here to stay in some capacity and you will probably need to interact with it in some way.
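
As a rough illustration of the ML-as-a-service approach, here is a hedged sketch of calling such a REST API; the endpoint URL and response shape are placeholders rather than the exact Cognitive Services contract, though the subscription-key header follows the Azure convention:

    # A hedged sketch of consuming a machine-learning REST service. The
    # endpoint and response shape are illustrative placeholders, not the
    # exact Cognitive Services contract.
    import requests

    endpoint = "https://example.cognitive.service/vision/analyze"   # hypothetical URL
    headers = {"Ocp-Apim-Subscription-Key": "YOUR_KEY"}             # Azure-style auth header
    payload = {"url": "https://example.com/photo.jpg"}              # image to analyze

    response = requests.post(endpoint, headers=headers, json=payload)
    response.raise_for_status()
    print(response.json())   # e.g. tags, captions, or detected objects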

3. DevOps/TechOps

Even if you aren’t the keeper of the big red deployment button, it’s crucial for all developers to understand how code gets to production. From mastering pull-request procedures to knowing how your application architecture impacts hosting costs, development work is intrinsically tied to ops work. This is especially true with microservices architecture, which often impact the bottom line.

Reach out to your local DevOps or TechOps meetup or user group and get acquainted with a few people who really know the practice. If that’s not an option in your area, watch a few video courses on the subject. Look into scripting tools for DevOps automation like Terraform or Pulumi. Playing with infrastructure as code can actually be quite enjoyable and a nice change of pace coming from application development. Whether you are the only technical person both building and deploying code, or you are one part of a large department with a separate DevOps team, take the time to become educated on this flow.
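
For a taste of what infrastructure as code looks like, here is a minimal sketch using Pulumi's Python SDK, one of the tools named above (assuming the pulumi and pulumi_aws packages are installed and an AWS-backed stack is configured):

    # A minimal infrastructure-as-code sketch with Pulumi's Python SDK.
    # Run with `pulumi up` inside a configured Pulumi project.
    import pulumi
    from pulumi_aws import s3

    # Declaring a bucket as code means it can be reviewed, versioned, and
    # reproduced across environments instead of clicked together by hand.
    bucket = s3.Bucket("app-artifacts")
    pulumi.export("bucket_name", bucket.id)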

What Do You Think Developers Should Learn

Being a specialist can be rewarding, but being a generalist is a necessity. You truly do need a bit of both; just remember not to sacrifice general knowledge in order to focus on your preference. Security, Machine Learning, and Dev/Tech Ops are the top 3 topics I believe have strong merit at the moment, but that list is certainly not exhaustive. Comment below with what you believe every coder needs to add to their ever-growing toolbox!

Learn more about Developer Careers, Tools, and Security

If you’d like to continue reading our thoughts on developer careers, we’ve published a number of posts that might interest you:

For other great content from the Okta Dev Team, follow us on Twitter and Facebook!


          

If You Want to Grow Fast, You Need to do This | Ep. #1215

 Cache   

In episode #1215, we discuss what you need to do to rapidly increase your growth. Providing your audience with innovative, value-adding freebies is a great way to get people talking about your business. Tune in to hear why you have to be willing to take risks to reap big rewards!

TIME-STAMPED SHOW NOTES:

  • [00:25] Today’s topic: If You Want to Grow Fast, You Need to do This
  • [00:36] Think about what you can give away for free, and we don’t mean an eBook or tool.  
  • [00:57] Examples of the kinds of value-adding freebies you could give people.
  • [01:10] Why eBooks are no longer a good option. 
  • [01:36] The value of giving something that others would normally charge for. 
  • [02:14] The shift towards machine learning and AI, and staying ahead of the curve. 
  • [03:30] Considering how you can be 10 times better than your competition. 
  • [03:53] Why these free offerings do not have to cost you a fortune. 
  • [04:22] The importance of being willing to take risks.  
  • [05:23] To stay updated with events and learn more about our mastermind, go to the Marketing School site for more information.

Links Mentioned in Today’s Episode:

 

  • What should we talk about next? Please let us know in the comments below
  • Did you enjoy this episode? If so, please leave a short review.

 

Connect with Us: 


          

Engineering: Image Analysis Scientist/Engineer - Las Vegas, Nevada

 Cache   
US CITIZENSHIP/GREEN CARD REQUIRED. Summary: We are searching for exceptional software developers with an image analysis background. Join our team of brilliant mathematicians, physicists and engineers on the forefront of imaging and image analysis, with tools from machine learning, image analysis and pattern recognition for the aviation security and medical arenas. Knowledge of recent advances in deep learning, support vector machines, image reconstruction, and volume rendering, together with deep knowledge of software engineering, is a huge plus. Job Description: The job involves the development of advanced imaging and image processing/recognition algorithms. The ability to analyze the imaging system in detail for the selection/development of appropriate algorithms will be highly valued. The successful applicant will be assigned to any one or more of the following tasks: (1) recognition of objects (e.g., threats) in cluttered images, including X-ray projection and volumetric CT images, (2) advanced 3-D volume rendering workstations, (3) development of related grants and proposals, and (4) presenting/publishing papers in conferences and journals. The application software will be developed in a combination of C++ and Python. Knowledge of modern software tools such as Visual Studio, Qt and others will be required. Experience with, or a desire to learn, TensorFlow, Keras, etc. is a huge plus. Qualifications: The applicant will have a degree in Engineering, Computer Science, Physics, or Mathematics, preferably a Ph.D. (B.S./M.S. also acceptable). The ideal candidate will have expert knowledge in one or more of the following areas: (1) application programming, (2) machine learning, (3) statistical image/signal processing, and (4) X-ray and CT physics. Recent graduates as well as experienced senior-level engineers will be considered. Experienced candidates must have 5-10 years of experience in one of the areas above. Submit your resumes to: ()
          

Future-proofing skills through agile learning programmes

 Cache   
Features

Elliot Gowans looks at the importance of social learning in L&D.

Reading time: 6 minutes

Everything we know about employment and the workplace is changing. Technological innovation is the key driving force behind industry disruption and increasingly agile work.

With the development of AI and automation, businesses are undergoing complex transformations, bringing profound change to their services and overall operations. Such trends are redefining job roles and employment expectations.

As AI and automation technology continues to develop, manual, repetitive tasks are becoming increasingly automated.

This is not to say that only the simplest jobs are becoming the preserve of machines, as complex tasks in professional sectors such as financial services are increasingly being automated.

Reports suggest that in the next 10 years AI technology could potentially replace up to 50% of people currently working in banking and finance alone.

The half-life of skills is shortening, and the disciplines required to succeed within the workplace are shifting year to year.

According to the World Economic Forum, 35% of the skills workers need will have changed by 2020 and as such, approximately 10% of an employee’s time will need to be devoted to upskilling or retraining to ensure their skills are in keeping with the needs of the job market.

Such an environment calls for the retraining of employees, but more importantly, the need to teach durable skills that are not easily replicated by technology.

Soft skills such as emotional intelligence, leadership, creativity and communication will be of utmost importance for the future workforce.

However, these skills are not easily accounted for in traditional teaching models. Employees will require continuous mentoring and feedback; flexible social-learning programmes that account for personal development.


The reskilling challenge

For its part, the UK Government has recently announced a £100m investment in the National Retraining Scheme (NRS) as part of its industrial strategy to address the skills mismatch posed by technology.

While this shows that the Government recognises the need to improve productivity and prepare for future changes to the workplace and the economy, there is a misdiagnosis of the severity of the issue: a tacit assumption that automation will only affect certain professions.

For example, the NRS currently only caters for those who do not have a degree qualification. While academic qualifications are respected, employment often requires on-the-job training. The future workforce is no exception.

This is an issue that will need to be addressed in a three-pronged approach – education, government and industry.

Curriculums must expand and evolve to meet the demands of the market, to better equip future employees with the skills necessary for employment.

Educators need to take more responsibility in preparing students for their future and instilling soft and hard skills that will come to define the digital workplace.

Likewise, employers will need to invest in new avenues for development, supplying more in-work vocational training for their staff, offering alternative learning pathways, that allow individuals to develop the necessary skills and ultimately, instil a culture of lifelong learning and continuous professional development.

Education – whether through university, an apprenticeship or technical training programme during employment – will need to be as fluid and flexible as the roles students are applying for.


Unfortunately, from the business side of things, the majority of employers are not providing an adequate learning culture to achieve this.

According to a recent survey by ATD Research, only 31% of organisations offer a well-developed learning culture, and the average employee is provided a mere 24 minutes of vocational training per week.

Likewise, a 2018 research piece from Fosway found that 60% of L&D departments are failing to systematically drive the development of mastery and expertise.

Changing this requires a shift away from purely providing learning resources, transitioning away from just thinking about what people know, into thinking about what people need to be able to do.

In order to meet the requirements of the future workplace and skills challenges, there needs to be more emphasis on continual development.

As automation will inevitably replace the human element in monotonous business processes, it is important that employees are encouraged to develop an evolved skillset and an open mind.

They must acknowledge the need to continuously upskill and retrain, which must be nurtured by their employer, who must provide innovative and more compelling systems of learning.

Digital learning solutions

Given the nature of the reskilling challenge, a demand for technical knowledge as well as a call for personal skills, businesses need to reconsider their systems of learning and development.

Traditional learning models are no longer viable. Instead, business leaders must opt for a system that inspires the continuous development of their workforces. They must be fully invested in their upskilling.

Interestingly, technology – the driver behind the reskilling challenge – can offer a practical solution. The skills required for the 21st-century workplace are complex and require equally complex learning strategies.


Consider the number of employees across an organisation: the need to instil durable skills around their busy schedules, on such a large scale, can only be met with effective EdTech solutions.

Digital applications that enable agile and remote learning around work and busy lives, complemented by effective machine-learning diagnostics and learning analytics, will ensure employers can more accurately measure and actualise real learner data and account for each individual learner's education cycle.

Most importantly, providing a range of content within a programmatic learning programme will enable real change:

  • Blended learning in a variety of forms can effectively change employee attitudes and behaviour.
  • Adding social learning to the L&D programme and offering real-life application as part of the learning cycle can allow employees to build on experience, particularly customer service, leadership and presentations.
  • Incorporating AR/VR into a digital training programme, given the flexibility it enables and its content-driven delivery, can allow for the microlearning necessary for continuous cycles.


Corporate/education partnership

In order to tackle the reskilling issue, it needs to be addressed from every angle. There needs to be collaboration between industry, government and educational institutions – all of which have the responsibility of equipping today’s workforce with the durable skills of tomorrow. A partnership where both academia and industry allow for vocational training is the ideal.

With the flexibility of all parties, this can be established through technology. Consider a degree apprenticeship unified by a single learning platform, where a student, personal tutor and potential employer can communicate through an online portal, evaluate and monitor the learner's progress, and maintain a constant dialogue.

In doing so, there is a customised learning pathway that nurtures skills and encourages personal growth.

The fourth industrial revolution will require us to rethink the best practices for learning, and it is in these partnerships, with the application of technology, that society can address these issues and allow for continuous re-education.

 

About the author

Elliot Gowans is SVP International at D2L


          

Introduction to Machine Learning and Data Science with Python and TensorFlow (DT017)

 Cache   
none
          

AI in retail: Survival depends on getting smart

 Cache   

The retail sector is the poster child for the use of artificial intelligence. Self-driving delivery robots, automated warehouses, intelligent chatbots, personalized recommendations, and deep supply chain analytics have been making significant impact on the bottom line — if you’re Amazon.com.

Other retailers, however, are struggling to adapt. In fact, only 19 percent of large retailers in the U.S., UK, Canada and Europe have deployed AI and are using it in production, according to Gartner.

          

The Lighter Side Of The Cloud – Machine Learning

 Cache   
none
          

Data Science and Star Science

 Cache   
I recently got a review copy of Statistics, Data Mining, and Machine Learning in Astronomy. I’m sure the book is especially useful to astronomers, but those of us who are not astronomers could use it as a survey of data analysis techniques, especially using Python tools, where all the examples happen to come from astronomy. […]
          

The Possible Minds Conference

 Cache   

I am puzzled by the number of references to what AI “is” and what it “cannot do” when in fact the new AI is less than ten years old and is moving so fast that references to it in the present tense are dated almost before they are uttered. The statements that AI doesn’t know what it’s talking about or is not enjoying itself are trivial if they refer to the present and undefended if they refer to the medium-range future—say 30 years.  —Daniel Kahneman

From left: W. Daniel Hillis, Neil Gershenfeld, Frank Wilczek, David Chalmers, Robert Axelrod, Tom Griffiths, Caroline Jones, Peter Galison, Alison Gopnik, John Brockman, George Dyson, Freeman Dyson, Seth Lloyd, Rod Brooks, Stephen Wolfram, Ian McEwan. In absentia: Andy Clark, George M. Church, Daniel Kahneman, Alex "Sandy" Pentland, Venki Ramakrishnan


INTRODUCTION
by Venki Ramakrishnan

The field of machine learning and AI is changing at such a rapid pace that we cannot foresee what new technical breakthroughs lie ahead, where the technology will lead us or the ways in which it will completely transform society. So it is appropriate to take a regular look at the landscape to see where we are, what lies ahead, where we should be going and, just as importantly, what we should be avoiding as a society. We want to bring a mix of people with deep expertise in the technology as well as broad thinkers from a variety of disciplines to make regular critical assessments of the state and future of AI. 

Venki Ramakrishnan, President of the Royal Society and Nobel Laureate in Chemistry, 2009, is Group Leader & Former Deputy Director, MRC Laboratory of Molecular Biology; Author, Gene Machine: The Race to Decipher the Secrets of the Ribosome.  


[ED. NOTE: In recent months, Edge has published the fifteen individual talks and discussions from its two-and-a-half-day Possible Minds Conference held in Morris, CT, an update from the field following on from the publication of the group-authored book Possible Minds: Twenty-Five Ways of Looking at AI. As a special event for the long Thanksgiving weekend, we are pleased to publish the complete conference—10 hours plus of audio and video, as well as a downloadable PDF of the 77,500-word manuscript. Enjoy.] 
 
Editor, Edge


          

Cindicator Price Prediction and Analysis in December 2019

 Cache   

For our Cindicator price prediction, we will be looking at the past price trends and market opinions of the CND coin and estimate what value it will have in December 2019. Cindicator Overview Cindicator is a platform that uses machine learning and market analysis to allow users to manage and analyze financial assets. Cindicator’s Hybrid […]

The post Cindicator Price Prediction and Analysis in December 2019 appeared first on Coindoo.


          

Alibaba Cloud open-sources Alink, a library bundling machine learning algorithms into a single package

 Cache   
none
          

Senior Software Engineer - Web Security (remote-working, contractor position)

 Cache   
Home Office, Senior Software Engineer - Web Security (remote-working, contractor position) zyProtect is a web security company delivering a web application firewall (WAF) product that applies artificial intelligence and machine learning to protect enterprise-scale web sites from malicious attacks. We seek a Senior Software Engineer to contribute to our on-going development efforts. This is a full-time, work re
          

Four short links: 28 November 2019

 Cache   
Raspberry Pi Recovery Kit — Pi for Preppers. Machine Learning on Encrypted data without Decrypting it — an intro to homomorphic encryption, with examples in Julia. Reverse Engineering for Beginners (PDF) — a solid introduction to reading assembly language from decompiles, to understand wtf is going on. Learning Data Structure Alchemy — Harvard paper on […]
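For readers who want a taste of what "machine learning on encrypted data" means in practice, here is a minimal Python sketch using the python-paillier (phe) library, an additively homomorphic cryptosystem. The library choice and the numbers are illustrative assumptions on my part, not taken from the linked article (whose examples are in Julia):

    # pip install phe  (python-paillier: additively homomorphic encryption)
    from phe import paillier

    # The public key encrypts; only the private key can decrypt.
    public_key, private_key = paillier.generate_paillier_keypair()

    enc_a = public_key.encrypt(3.5)
    enc_b = public_key.encrypt(1.5)

    # Compute on ciphertexts without decrypting: Paillier supports adding
    # two encrypted values and multiplying an encrypted value by a plaintext.
    enc_sum = enc_a + enc_b   # encrypts 5.0
    enc_dot = enc_a * 2       # encrypts 7.0

    print(private_key.decrypt(enc_sum))  # 5.0
    print(private_key.decrypt(enc_dot))  # 7.0

Those two operations (ciphertext addition and plaintext scaling) are exactly what a linear model's score w·x + b needs, which is why additive schemes already enable simple encrypted inference.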
          

Education Landscape from Brick & Mortar to Click & Mortar

 Cache   

The education landscape of today and the future is “click and mortar”. It not only appeals to those born in the new millennium or Generation Z as they are fondly referred to, but it also focuses on acquiring skill sets through a virtual medium. The new generation expects educational institutions to provide knowledge through mobile digital devices, platforms and other similar channels. As technology has made it possible for educators to reach a broader spectrum of students, it can safely be predicted that the strongest educational institutions will be the ones whose brand is founded on the “brick and mortar” model with the adoption of virtual education, making them a “click and mortar” as well.

The advent of the “click and mortar” education system has brought several changes with it. These changes, which Elearning has initiated, have sparked a revolution in the way education is imparted and received. The query that comes to mind here is, “What exactly are these changes?”

Elearning has influenced, and continues to influence, education in three important arenas. First, by easing access to education, Elearning gives students the luxury of learning from within the comfort of their homes by using internet-connected devices like tablets, computers, and smartphones. It is essentially self-paced.

Second, eLearning provides quality education to students. This student-centric mode of learning assures learners of updated knowledge. Online guided discussions with professors and lecturers are a hallmark of eLearning, allowing a sharing of expertise from which both the teacher and the student gain simultaneously.

Third, eLearning gives students access to specialized courses that may not be available locally, enabling access to education that is not readily available in certain geographic locations.

It is a fact that the brick and mortar framework is still the mainstay of our education system as there are undeniable advantages to learning in a shared physical space. However, online education is progressively gaining popularity. The autonomy and flexibility that these courses offer make them extremely popular with working professionals and students. The entire eLearning industry landscape is changing rapidly. It is characterized by emerging trends which will be more evident in the future.

Artificial intelligence is emerging as an integral part of the eLearning ecosystem. We see several AI educational solutions coming to the fore. It is predicted that AI can fill the need-gaps in learning and teaching, and it is expected to broaden the purview of schools and teachers. For example, Machine Learning, a subfield of AI, helps assess a student’s ability to answer questions correctly, their performance on related topics, and more.

Microlearning is another trend that is catching on quickly. Also referred to as bite-sized learning, it consists of short learning nuggets of 3 to 5 minutes, each designed to meet a specific learning outcome. The accessibility of microlearning on multiple devices like smartphones, desktops, tablets, and laptops, combined with its brevity, makes it extremely popular in the education system.

The importance of personalized education is slowly sinking in. Personalized attention to each student becomes possible with Elearning. It offers the freedom of customizing the lesson according to the needs of the student. The learners of today want a learning experience that fits their learning speed, personal needs, preferred learning style, and their learning pathway.

With the increasing adoption of edtech in educational institutions and an influx of educational technologies, it is very likely that the future years will see further integration of technologies like Blockchain, Cloud computing, Virtual Reality, Augmented Reality, ICT (Information and Communication Technology) classrooms and Edge Computing in the field of education.

In the 5-24 age bracket, India has the world’s largest population, at about 500 million. With paid online subscribers in the edtech sector surging from 1.6 million in 2016 to almost six times that number in the current year, the country is set to become the second-largest market for eLearning after the US.

“Click and mortar” education has triggered the concept of blended education where students can enjoy a combination of the traditional “brick and mortar” learning with the benefits of eLearning. This is in fact a boon to the education industry with both students and teachers reaping the best of both worlds.

The article is authored by Prof. Aindril De, Director – Academics, Amity University Online.


          

Retail commercial supervisor at Wefarm

 Cache   

Wefarm, the world’s largest farmer-to-farmer digital network, enables farmers to connect with each other and key partners over SMS to solve problems, share ideas, obtain vital products and services, and spread innovation, through utilising the latest machine learning technology.

Small-scale agriculture is the biggest industry on earth, with more than a billion farmers globally supplying 70% of the world's food and commodities, yet remaining digitally unconnected. Until Wefarm, no one had built a digital platform for these farmers to share their vital insights without having to go online!

Since the launch in 2015 we have grown to serve 1.9 million farmers across the world, who share more than 30,000 Qs & As per day. Wefarm has recently secured $13 million in Series A funding from some of the world’s leading VCs, including True Ventures in Silicon Valley, and we are looking to add to a world-class team based across London, Nairobi, Kampala and Dar-es-Salaam.

Join Wefarm and be a part of the mission to build an ecosystem for global agriculture, with the farmer at the centre!

The role

The role of the Retail Commercial Supervisor is to ensure that farmers can easily access affordable and quality products and services from retailers.

Responsibilities will include:

  • Support pitching and recruitment of retailers in the designated region
  • Support onboarding and account management of retailers in the designated region
  • Manage Retail Commercial Representatives to deliver on agreed monthly and weekly targets
  • Resolve business challenges with retailers within the designated region
  • Prepare weekly and daily activity reports for the designated regions as per defined reporting templates
  • Improve engagement and performance of retailers in the designated region
  • Manage data collection processes as required 

Requirements

  • Minimum of a High School Diploma. A degree in Business Administration will be an added advantage
  • Knowledge and experience in sales
  • Marketing skills
  • Experience in managing teams
  • Experience working with retail stores/networks will be an added advantage
  • Experience working with a CRM tool would be an added advantage

Application process

Click here to access the original job post at Wefarm then apply through the portal.


          

Work orders - Value from structureless text in the era of digitisation

 Cache   
Title: Work orders - Value from structureless text in the era of digitisation
Author(s): Salo, Erik; McMillan, David; Connor, Richard
Abstract: Free text and hand-written reports are losing ground to digitisation fast; however, many hours of effort are still lost across the industry to the manual creation and analysis of these data types. Work orders in particular contain valuable information, from failure rates to asset health, but at the same time present operators with such analytical difficulty and lack of structure that many are missing out on the value completely. This research challenges the current mainstream practice of manual work order analysis by presenting a methodology fit for today’s context of efficiency and digitisation. A prototype text mining software for work order analysis was developed and tested in a user-oriented approach in cooperation with industrial partners. The final prototype combines classical machine learning methods, such as hierarchical clustering, with the operator’s expert knowledge obtained via an active learning approach. A novel distance metric in this context was adapted from information-theoretical research to improve clustering performance. Using the prototype tool in a case study with real work order data, analytical effort for certain datasets was reduced by 90% - from two working weeks to a day. In addition, the active learning framework resulted in an approach that end users described as "practical" and "intuitive" during testing. An in-depth review was also conducted regarding the uncertainty of the results – a key factor for implementation in a decision-making context. The outcomes of this work showcase the potential of machine learning to drive the digitisation of not only new installations, but also older assets, where as a result the large amount of unstructured historical data becomes an advantage rather than a hindrance. User testing results encourage a wider uptake of machine learning solutions in the industry, and particularly a shift towards more accessible in-house analytical capabilities.
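As a rough sketch of the kind of pipeline the abstract describes (hierarchical clustering of free-text work orders under an information-theoretic distance), here is a minimal Python illustration. The normalized compression distance below is a stand-in assumption; the paper's actual metric, data and tooling are not specified in this summary:

    # Minimal sketch: cluster free-text work orders with hierarchical
    # clustering over an information-theoretic distance (here: normalized
    # compression distance, NCD -- a stand-in, not the paper's exact metric).
    import itertools
    import zlib
    import numpy as np
    from scipy.cluster.hierarchy import linkage, fcluster
    from scipy.spatial.distance import squareform

    work_orders = [
        "replace leaking hydraulic seal on pump 3",
        "hydraulic pump 3 seal leak - replaced",
        "inspect corrosion on deck handrail",
        "handrail corrosion found during inspection",
    ]

    def ncd(x, y):
        """Normalized compression distance between two strings."""
        cx = len(zlib.compress(x.encode()))
        cy = len(zlib.compress(y.encode()))
        cxy = len(zlib.compress((x + " " + y).encode()))
        return (cxy - min(cx, cy)) / max(cx, cy)

    n = len(work_orders)
    dist = np.zeros((n, n))
    for i, j in itertools.combinations(range(n), 2):
        dist[i, j] = dist[j, i] = ncd(work_orders[i], work_orders[j])

    # Average-linkage clustering on the condensed distance matrix,
    # cut into two clusters for this toy example.
    labels = fcluster(linkage(squareform(dist), method="average"),
                      t=2, criterion="maxclust")
    print(labels)  # similar work orders should share a cluster id

In the active-learning step the abstract mentions, an operator would then confirm or correct a few cluster assignments, and those labels would guide re-clustering or a supervised refinement.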
          

These science-backed mindfulness meditation exercises are built just for you

 Cache   


  • 83% of Americans suffer from work-related stress.
  • Aura Premium uses groundbreaking AI to tailor meditation exercises.
  • Mood-tracking technology adjusts your meditations to serve your specific needs.




If you find yourself nervous, anxious or generally stressed out at work, you are most definitely not in the minority.

In fact, the American Institute of Stress found 83% of U.S. workers suffer from work-related stress. Those stressors lead to $300 billion in business losses and almost 120,000 deaths each year.

While daily stresses stack up like traffic during the morning commute, the Aura Health app uses cutting-edge AI technology to help clear that road and get you on top of your stress.

Aura Premium takes advantage of groundbreaking AI advances to intuitively tailor short, science-backed mindfulness meditation exercises to your needs. You choose a 3-to-10-minute meditation, answer a few questions about the experience, then Aura begins contouring sessions to best serve your emotional state.

Aura uses its machine learning mood tracking tech to adjust to your patterns as you progress, even serving up relaxation suggestions when you’re most in need. Your responses are tracked with visual representations, letting you actually see how much your stress level is improving.

Buy now: You can send stress and anxiety packing with a lifetime of Aura Premium service, normally a $499 package now on sale for only $79.99. In case you’d like to try Aura on a more limited basis, you can also pick up a one-year subscription for just $39.99 or a three-year plan for only $59.99.

Prices are subject to change.

Aura Meditation App Premium Subscriptions - $79.99

Get zen for $79.99



When you buy something through a link in this article or from our shop, Big Think earns a small commission. Thank you for supporting our team's work.



          

Technical Support Engineer - AdLightning - Ad Lightning - Seattle, WA

 Cache   
Interface with operations and business partners to resolve important customer issues. Perform data analysis and utilize machine learning techniques to…
From Ad Lightning - Thu, 24 Oct 2019 00:48:00 GMT - View all Seattle, WA jobs
          

AI, Machine Learning and Data Science Roundup: November 2019

 Cache   

A roundup of news about Artificial Intelligence, Machine Learning and Data Science. This is an eclectic collection of interesting blog posts, software announcements and data applications from Microsoft and elsewhere that I've noted recently.

Open Source AI, ML & Data Science News

Python 3.8 is now available. From now on, new versions of Python will be released on a 12-month cycle, in October of each year.

Python takes the #2 spot in Github's annual ranking of programming language popularity, displacing Java and behind JavaScript.

PyTorch 1.3 is now available, with improved performance, deployment to mobile devices, "Captum" model interpretability tools, and Cloud TPU support.

The Gradient documents the growing dominance of PyTorch, particularly in research.

Keras Tuner, hyperparameter optimization for Keras, is now available on PyPI.
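As a quick illustration of what Keras Tuner does, a minimal random-search sketch based on the package's documented API; the hyperparameter names and ranges here are arbitrary examples, not from the announcement:

    # pip install keras-tuner
    from tensorflow import keras
    from kerastuner.tuners import RandomSearch

    def build_model(hp):
        # The tuner calls this repeatedly with different hyperparameter values.
        model = keras.Sequential([
            keras.layers.Dense(hp.Int("units", 32, 256, step=32), activation="relu"),
            keras.layers.Dense(10, activation="softmax"),
        ])
        model.compile(
            optimizer=keras.optimizers.Adam(hp.Choice("lr", [1e-2, 1e-3, 1e-4])),
            loss="sparse_categorical_crossentropy",
            metrics=["accuracy"],
        )
        return model

    tuner = RandomSearch(build_model, objective="val_accuracy", max_trials=10)
    # With data in hand:
    # tuner.search(x_train, y_train, epochs=5, validation_data=(x_val, y_val))
    # best_model = tuner.get_best_models(num_models=1)[0]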

ONNX, the open exchange format for deep learning models, is now a Linux Foundation project.

AI Inclusive, a newly-formed worldwide organization to promote diversity in the AI community.

Industry News

Databricks announces the MLflow Model Registry, to share and collaborate on machine learning models with MLflow.

Flyte, Lyft's cloud-native machine learning and data processing platform, has been released as open source.

RStudio introduces Package Manager, a commercial RStudio extension to help organizations manage binary R packages on Linux systems.

Exploratory, a new commercial tool for data science and data exploration, built on R.

GCP releases Explainable AI, a new tool to help humans understand how a machine learning model reaches its conclusions.

Google proposes Model Cards, a standardized way of sharing information about ML models, based on this paper.

GCP AutoML Translation is now generally available, and the GCP Translation API is now available in Basic and Advanced editions.

GCP Cloud AutoML is now integrated with the Kaggle data science competition platform.

Amazon Rekognition adds Custom Labels, allowing users to train the image classification service to recognize new objects with as few as 10 training images per label.
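In boto3, calling a trained Custom Labels model looks roughly like the following; the ARN, bucket and file names are placeholders, and this sketch assumes a project version has already been trained and started:

    import boto3

    rekognition = boto3.client("rekognition", region_name="us-east-1")
    response = rekognition.detect_custom_labels(
        # Placeholder ARN for an already-trained, running model version.
        ProjectVersionArn="arn:aws:rekognition:us-east-1:123456789012:project/demo/version/demo.1/1",
        Image={"S3Object": {"Bucket": "my-bucket", "Name": "widget.jpg"}},  # placeholders
        MinConfidence=80,
    )
    for label in response["CustomLabels"]:
        print(label["Name"], label["Confidence"])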

Amazon Sagemaker can now use hundreds of free and paid machine learning models offered in Amazon Marketplace.

The AWS Step Functions Data Science SDK, for building machine learning workflows in Python running on AWS infrastructure, is now available.

Microsoft News

Azure Machine Learning service has released several major updates, including:

Visual Studio Code adds several improvements for Python developers, including support for interacting with and editing Jupyter notebooks.

ONNX Runtime 1.0 is now generally available, for embedded inference of machine learning models in the open ONNX format.

Many new capabilities have been added to Cognitive Services, including:

Bot Framework SDK v4 is now available, and a new Bot Framework Composer has been released on Github for visual editing of conversation flows.

SandDance, Microsoft's interactive visual exploration tool, is now available as open source.

Learning resources

An essay about the root causes of problems with diversity in NLP models: for example, "hers" not being recognized as a pronoun. 

Videos from the Artificial Intelligence and Machine Learning Path, a series of six application-oriented talks presented at Microsoft Ignite.

A guide to getting started with PyTorch, using Google Colab's Free GPU offer.

Public weather and climate datasets, provided by Google.

Applications

The Relightables: capture humans in a custom light stage, drop video into a 3-D scene with realistic lighting.

How Tesla builds and deploys its driving automation models with PyTorch (presentation at PyTorch DevCon).

OpenAI has released the full GPT-2 language generation model.
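The release is the 1.5B-parameter model weights themselves; one common way to try them (an assumption on my part, not part of OpenAI's announcement) is through Hugging Face's transformers package, where the full model is published as "gpt2-xl":

    # pip install transformers torch
    from transformers import GPT2LMHeadModel, GPT2Tokenizer

    tokenizer = GPT2Tokenizer.from_pretrained("gpt2-xl")  # full 1.5B GPT-2
    model = GPT2LMHeadModel.from_pretrained("gpt2-xl")

    # Sample a short continuation from a prompt.
    input_ids = tokenizer.encode("Machine learning is", return_tensors="pt")
    output = model.generate(input_ids, max_length=40, do_sample=True, top_k=40)
    print(tokenizer.decode(output[0], skip_special_tokens=True))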

Spleeter, a pre-trained model to separate a music track into vocal and instrument audio files.

Detectron2, a PyTorch reimplementation of Facebook's popular object-detection and image-segmentation library.

Find previous editions of the AI roundup here.


          

Machine learning for Java developers, Part 2: Deploying your machine learning model

 Cache   

My previous tutorial, "Machine Learning for Java developers," introduced setting up a machine learning algorithm and developing a prediction function in Java. I demonstrated the inner workings of a machine learning algorithm and walked through the process of developing and training a machine learning model. This tutorial picks up where that one left off. I'll show you how to set up a machine learning data pipeline, introduce a step-by-step process for taking your machine learning model from development into production, and briefly discuss technologies for deploying a trained machine learning model in a Java-based production environment.

To read this article in full, please click here


          

Machine learning for Java developers, Part 1: Algorithms for machine learning

 Cache   

Self-driving cars, face detection software, and voice-controlled speakers are all built on machine learning technologies and frameworks--and these are just the first wave. Over the next decade, a new generation of products will transform our world, initiating new approaches to software development and the applications and products that we create and use.

As a Java developer, you want to get ahead of this curve, especially because tech companies are beginning to seriously invest in machine learning. What you learn today, you can build on over the next five years, but you have to start somewhere.

This article will get you started. You will begin with a first impression of how machine learning works, followed by a short guide to implementing and training a machine learning algorithm. After studying the internals of the learning algorithm and features that you can use to train, score, and select the best-fitting prediction function, you'll get an overview of using a JVM framework, Weka, to build machine learning solutions. This article focuses on supervised machine learning, which is the most common approach to developing intelligent applications.

To read this article in full, please click here


          

PhD Positions in Computational Engineering at the University at Buffalo for Fall 2020

 Cache   

Multiple Ph.D. students are being sought to fill openings in the Predictive Computational Engineering (PCE) Lab in the Department of Mechanical and Aerospace Engineering. The PCE Lab conducts multidisciplinary research at the intersection of multiscale modeling of materials, physics-based machine learning, and scientific computing.

The projects aim at developing new theoretical and computational methods for predictive modeling of complex materials systems. Of particular interest is predicting the responses of metal additive manufactured components as well as biomaterials, using a combination of enhanced continuum theories, finite element solutions, and Bayesian inference. Candidates should possess a master's degree in mechanical, civil, or other related engineering fields at the time of enrollment at UB. A strong background in computational and applied mechanics is desired. If interested, contact Dr. Faghihi directly at danialfa@buffalo.edu. Please include a CV along with a brief description of prior research experiences.


          

AWS announces DeepComposer, a machine-learning keyboard for developers

 Cache   
Today, as AWS re:Invent begins, Amazon announced DeepComposer, a machine learning-driven keyboard aimed at developers. “AWS DeepComposer is a 32-key, 2-octave keyboard designed for developers to get hands on with Generative AI, with either pretrained models or your own,” AWS’ Julien Simon wrote in a blog post introducing the company’s latest machine learning hardware. The […]
          

Amazon debuts automatic speech recognition service, Amazon Transcribe Medical

 Cache   
Amazon is expanding its automatic transcription service for AWS, Amazon Transcribe, to include support for medical speech, the company announced this morning at its AWS re:Invent conference. The new machine learning-powered service, Amazon Transcribe Medical, will allow physicians to quickly dictate their clinical notes and speech into accurate text in real time, without any human […]
          

Machine Learning Engineer - Cosmose - Warszawa, mazowieckie

 Cache   
Cosmose is a fast-growing software company on a path to revolutionize the $600BN advertising industry. Its OMNIcookie, a cookie for the physical world, reaches…
From Cosmose - Tue, 05 Nov 2019 16:31:31 GMT - View all jobs in Warszawa, mazowieckie
          

Have a chat with us – how to create effective conversations for chatbots

 Cache   
  • Grüezi, Bonjour and hello out there! So nice to have you here 😃 I am Liipbot and I am happy to share my expertise on how to create effective conversations for chatbots with you. Ready? So here is what I have for you:
  • Chatbots intro
    Writing tips
    Pros and Cons
    Project insight
  • Let’s start with a short intro about chatbots first. And let’s keep it crispy. Here’s what you need to know: There are two main types of chatbots: (1) chatbots that learn and (2) chatbots that don’t.
  • (1) Chatbots that learn are based on artificial intelligence – uh, buzzword! When they get an answer, they create algorithms out of this answer. With every further question, they verify or falsify the algorithms constantly. And that’s how those machine learning-powered chatbots can participate in complex conversations.
  • (2) Chatbots that don't learn are rule-based. They have a set of questions and answers at their disposal. Thus they follow a predefined path. This plannable set-up makes it easier from a content perspective. Usually, they are the go-to solution for businesses as they are relatively simple to implement and serve a very clear purpose. (Curious how such a predefined path looks in code? A tiny sketch follows right after our conversation.)
  • So far, so good?
  • Yes. Are there any other differences?
  • Oh yes, there are! Chatbots can be found built in a messenger like Whatsapp or Slack or as a standalone application. We also distinguish between graphic interfaces like chatbots and voice assistants. Well… surprisingly I prefer chatbots 😬
  • So yes, I guess that’s it in a nutshell. Shall we continue with my ingenious writing tips?
  • Take me there.
  • Chatbots intro
    Writing tips
    Pros and Cons
    Project insight
  • Straightforward, I like that. So let’s dive right in. Ready for my tips on how to write for chatbots?
  • Yes! Let’s do this.
  • Alright. So here we go:
    Ingenious tip #1: Know your users.
    Who is interacting with your chatbot? In which context? And on which device? This is crucial because you need to know who you’re addressing your content to and in which situation. Do some user research and maybe even outline a user journey – possibly with the help of your fellow UX team members.
  • Knowing your users will have an impact on the complexity of your content as well as your tone of voice. Need an example? Be trustworthy for e-banking clients vs. speaking everyday language with lots of emojis for teenage fashion shoppers. Also, the context is essential: users in a crowded train on their way to work need succinct and quick messages vs. relaxed users on a Sunday morning who even love some prose, like what I’m doing here.
  • Ready for the second tip?
  • Yes, please!
  • Well, here we go:
    Ingenious tip #2: Use conversational language.
    This might seem rather obvious but is often done wrong. When writing for chatbots, think of a real conversation and follow a conversational structure: Start with a greeting, end with a closing, ask questions in between, acknowledge and comment on the answers like yes, yay, true, no way, you name it and – very important: Don’t forget to introduce the chatbot and its purpose. It’s always nice to know to whom you are talking, right? It will lead to less awkward human-machine interaction.
  • But most of all: Keep it short. As we know from web writing in general, none of us fancies reading super-long copy. And again, think of a real conversation: Balance listening and active talking. D’accord?
  • Not really. You also made a very long point right now 😃
  • Gotcha 😅 But I am sure you too will stumble upon more complex content that can’t be explained in one sentence. In that case, there is a secret to making your text more compact. Ha, curious?
  • Tell me!
  • Be implicit. As in spoken human-to-human conversations, it is not necessary to explicitly name every detail, as we have all kinds of codes for situations. Of course, clarity comes first, but to create a dynamic flow, this so-called conversational implicature is essential. Subconsciously, our brains will decode the message based on context, experience and a cooperative mindset.
  • 🤔 Can you give an example for this?
  • Of course. Here comes one by James Giangola from the Google Conversations Design Team:

  • But why the heck should I use those conversational impli-somethings?
  • You mean conversational implicatures 🤓 They make an interaction more fun as it plays with our minds. You humans usually like that.
  • If you say so 😉 What else is useful to know?
  • Well, besides implicit phrases, there are some other elements of a smooth conversation, for example, abbreviations, exclamations, informal expressions. Even though it is a written piece, chatbot conversations shouldn’t aim at formal standards for written texts. They should feel natural. Always remember it is a conversation, right?
  • That said also don’t forget to include a sense of humour. Why? It makes conversations more lively, and it’s much more fun to interact with. But keep in mind 🚨: Only if it fits your purpose and your company’s tone of voice.
  • Sounds reasonable.
  • Exactly. This also leads to another tip:
    Ingenious tip #3: Use consistent language
    A chatbot becomes implausible when the language differs during the conversation. Cheeky and with a wink at the beginning, serious and dry at the end? Come on, that’s not comprehensible. Try to create a chatbot persona and link the conversation to all the attributes of that persona. The same applies to specific terms and vocabulary. Always use the same, consistent wording – otherwise, you will confuse your counterpart unnecessarily.
  • True. What else is relevant to know?
  • Ingenious tip #4: Avoid dead ends. Technologically and language-wise.
    As a copywriter who writes the chatbot conversation, always consider the flow and different paths of your storytelling. The worst thing to happen in a chatbot conversation: dead ends. Make sure to set up a valuable error message or define trigger words that lead to another part of the conversation. Please don’t use the usual «Sorry for the inconvenience bla bla», because again: You are in a conversation. Therefore it should feel like one.
  • Better go for «Damn! I am really sorry. That doesn’t seem to work.» And always provide a possible solution like «May I report that to my colleagues at the office?» so that language-wise there are no dead ends, too. Alright?
  • Alright. I am slowly getting familiar with it. What else do you have for me?
  • Cool! I take that as a compliment 😃 I still have three more tips for you.
    Ingenious tip #5: Use visual content.
    Emojis, Gifs, you name it 🤪🤑🤩🤫🤐🤤😷🥳 Whatever eases the conversation and makes it more appealing to read. Especially in longer conversations, it makes sense to sprinkle some visual content to entertain your counterpart.
  • But come on, that’s not conversational. In a face-to-face conversation, there are no emojis or gifs.
  • True. But in face-to-face you have mimics and gestures. That’s what all these visual assets are about: To add another layer of expression to your chatbot conversation. But of course only if it’s appropriate for the topic. If your chatbot is about sensitive data like medical information or your bank account one would probably be very irritated by a poo emoji 💩.
  • Hm, okay, convinced. What’s next?
  • The next tip is more about how to organise yourself when writing a chatbot conversation:
    Ingenious tip #6: Use a conversational template.
    I can tell by experience: It helps so much if you don’t just write paragraph by paragraph in your usual doc file but use a diagram to visualise the conversational flow. It quickly gets confusing when you provide several options to choose from. A template keeps you on track – and helps to identify dead ends quickly.
  • I’ve never thought of this. That’s a good tip, thanks.
  • And now, get ready for the grand finale:
    Ingenious tip #7: Do user testing.
    A proof of concept always helps, right? After drafting your conversation, it’s worth spending some time (and 💰) on user testing. Let a group of future users have a chat with the bot. Or at least some of your colleagues or friends. You will be surprised how easily and effectively you will detect shortcomings.
  • If there is no time (and 💰) for user testing, at least read the conversation out loud. Not quite the same as user testing, but at least a first indicator of barriers in the conversation flow.
  • Okay, now you know all my ingenious tips. I hope I didn’t promise too much?
  • Nope. Thanks for all the tips. I want to dig even deeper into the do’s and don’ts of conversational language. Can you elaborate on that?
  • Oh, I would love to! But as said before, chatbots should be very to the point. And we have two more topics coming up. But let me recommend this Guide to Conversational Design for more details. But now - are you ready to move on?
  • Okay, let’s continue.
  • Chatbots intro
    Writing tips
    Pros and Cons
    Project insight
  • It is always important to check potentials and risks, right? There are two perspectives on the topic: the user and the business perspective. What interests you the most?
  • The user perspective, of course.
  • Good choice 👍 From a user perspective, a chatbot offers lots of advantages. First of all: It is always accessible, 24 hours, seven days a week. That’s nice, isn’t it? Compared to its human counterpart that’s a big plus if you’re looking for information, support or guidance.
  • Ah, talking about guidance: A chatbot also navigates the users much easier to their specific goal than conventional navigation. No need of clicking through endless pages, no need for skim reading, yay! Just an effective guided search. What a relief in our busy times.
  • And – last but not least – a chatbot adds a human touch to the plain and often mundane presentation of information on the web. It’s much nicer to use a question like «where do you live?» instead of just «address» in conversational forms. This matches with the human need for socialising and interaction and users will feel much more delighted to browse, what else can we wish for, right?
  • Absolutely! But how about the business perspective?
  • It’s all about the money, I knew it. Okay, just kidding 😂 The business perspective is important, too, and of course, your investment in a chatbot needs to pay off. And it really does. I’ll tell you why.
  • If you use the chatbot for support purposes, for example in CRM or as a sales assistant, it will save you time and money. Chatbots can handle unlimited requests at a time. In contrast, a human employee can only focus on one or maybe two requests. So service costs will be reduced, and employees can focus on the more complex and tricky cases.
  • What else? It’s also another great opportunity to strengthen your brand identity and convey your tone of voice. This can be done much easier in a conversation than in a plain text – but it is also much harder to achieve 😃 Either you create a chatbot persona that personifies a certain aspect of your brand values or one that is aiming at a specific target group.
  • Not to forget: The fame and fortune! Conversational interfaces are still a hot topic. Let’s push your innovative tech reputation 💪
  • Okay, but those were only advantages, right? Didn’t you mention cons before, too?
  • Smart you 🤓 Yes, you are right. Of course, there are some downsides or risks to keep in mind. First of all, the question should be – as in every other UX-project: Does this solution add value to my purpose? If you just want to display your opening hours, there’s no need to dive into a conversation with a chatbot.
  • The same applies to your specific business case. As mentioned before it might be inappropriate in some cases to have a chat with a chatbot when a real human is needed.
  • Another shortcoming is the limited interaction palette for rule-based chatbots. Man, that can be annoying 🙄 The solution is as simple as it is expensive: an AI-driven chatbot. Really smart chatbots are not cheap.
  • And finally, as this is a conversation about how to write for chatbots, I have to mention the potential risk that goes with writing conversations for it. If a chatbot is not well written, meaning not user-centred, it can evoke very awkward interactions. Don’t try to be funny and cool if you are not. But thank goodness, there are experts for that 😬
  • Hehe, I got your hint 😬
  • 😃 Sorry for the hidden agenda. But my colleagues are amazing! Don’t trust me? Let me give you an example, a little sneak peek, of a real cool project they did.
  • Chatbots intro
    Writing tips
    Pros and Cons
    Project insight
  • I prefer hands-on over theory so let’s sneak into a real case. The project in a nutshell: My colleagues at Liip created a conversational chatbot called Kafibot (which translates to coffee-bot) for the Zurich-based energy provider Energie 360°. It was an internal communication project and aimed at bringing employees with common interests together to drink coffee. Pretty nice of e360, isn’t it?
  • Wow, that’s really cool. I would love to have that in my company, too. Why did you choose a chatbot for this?
  • Well, as the whole project encourages exchange between employees, my colleagues thought of an equally interactive and conversational approach for the process.
  • Sounds reasonable. Can I have a look at it?
  • Yes, yes, yes, so excited! Here you go 🚀

  • Isn’t that funky? Did I make you even more curious?
  • Indeed. That looks pretty cool already.
  • Well nice, then stay tuned. There will be more about the project coming soon, I promise 😃
  • Now that you got an insight into a chatbot conversation project, you know about chatbots in general and the pros and cons, and you gained some good tips on how to write for a chatbot… I think I am done with my job here. Are there any questions left?
  • Uh, I guess you answered a lot. Thanks 🙏 I feel much more confident in starting my own conversational interface project now.
  • Oh wow! I am flattered 🤗 If you want to know even more, here are some excellent resources that I used for my recommendations:

    Learn how to build your own chatbot (written by my fellow colleague Thomas Ebermann)
    Logic and conversation
    Guide to Conversational Design
    Applying Built-in Hacks of Conversation to Your Voice UI
  • Awesome. So courteous and user-centred! 😃
  • Yup, that’s my purpose. Thanks a lot, fellow chatbot friend 🙏 It was great to have a conversation with you. Arrivederci 👋
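As promised above, here is a tiny, generic Python sketch of the rule-based pattern Liipbot describes: a predefined conversation graph with trigger words, plus a conversational fallback so the dialogue never hits a dead end. It is an illustration of the concept, not Liip's implementation.

    # A minimal rule-based chatbot: a predefined conversation graph plus
    # a friendly fallback so the conversation never hits a dead end.
    FLOW = {
        "start": {
            "message": "Hi! I'm a demo bot. Ask me about 'tips' or 'pros'.",
            "triggers": {"tips": "tips", "pros": "pros"},
        },
        "tips": {
            "message": "Tip #1: know your users. Say 'pros' for benefits, or 'bye'.",
            "triggers": {"pros": "pros", "bye": "end"},
        },
        "pros": {
            "message": "Chatbots are available 24/7. Say 'tips' or 'bye'.",
            "triggers": {"tips": "tips", "bye": "end"},
        },
        "end": {"message": "Thanks for chatting. Arrivederci!", "triggers": {}},
    }

    def reply(state, user_text):
        """Return (next_state, bot_message); unmatched input gets a fallback."""
        for trigger, nxt in FLOW[state]["triggers"].items():
            if trigger in user_text.lower():
                return nxt, FLOW[nxt]["message"]
        # A conversational error message instead of a dead end.
        return state, "Damn! That doesn't seem to work. " + FLOW[state]["message"]

    state = "start"
    print(FLOW[state]["message"])
    for text in ["tips please", "what are the pros?", "bye"]:
        state, message = reply(state, text)
        print(">", text)
        print(message)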

The experts behind this article
Big thanks to Pedro who magically created this chatbot out of plain text. Thanks to Plotti, Caro, Jenny and Janina for your valuable inputs and sharp eyes regarding copy and content. This article would not have been possible without you!


          

PhD Position – Climate and the carbon cycle: identifying responses and impacts using satellite remote sensing and machine learning

 Cache   
Summary: You will develop approaches to describe the direct and lagged effects of climate on the terrestrial carbon cycle, informed by decades of remote sensing observations and state-of-the-art machine learning methods.
Project background: Large-scale climate variations, e.g., El Niño, can have immediate effects...
          

Remote Machine Learning Developer (Junior)

 Cache   

Location: Anywhere

Scopic Software is seeking a Remote Machine Learning Developer (Junior) to join our team of 250+ professionals across 40+ countries. The successful candidate will work with a team of talented developers, designers, and project managers to develop industry-leading applications with the latest technologies. Current projects include a hair-loss simulator, gender change simulator, and voice recognition processor.

 

Requirements:

  • 1+ years of professional experience in software development
  • Advanced C++ and/or Python programming skills, both preferred
  • Solid foundation in algorithms and mathematics
  • Demonstrable machine learning expertise – both theory and in practice
  • Proficiency in one or more deep learning frameworks
  • Author on one or more published machine/deep learning papers, preferred
  • Master's degree or higher
  • Intermediate+ written and spoken English

 

This is a full-time, home-based position.

 

Compensation: Depending on skills and experience.


          

Other: Delivery Lead - Irvine, California

 Cache   
Product Lead - Smart Energy Water
Full time, permanent - Irvine, CA

Who we are
SEW is the #1 Energy and Water Cloud Platform, providing cloud-based Software-as-a-Service (SaaS) solutions for Digital Customer Engagement, Field Workforce Engagement, and Smart AI / Machine Learning to the Energy and Utility sector. We help utilities improve their customer service and operational efficiency, and maximize return on investment, through SEW platform applications leveraging mobile, AI, Machine Learning and Cloud technologies. Searching for your dream job? At SEW, we strive to help our employees find passion and purpose. Join us in changing the way the world uses energy & water and become part of our innovative and talented team!

What you'll do (Roles & Responsibilities)

  • Manage a SEW product line that has the opportunity to help millions of people globally to engage, empower and educate about energy and water saving
  • Plan and execute the product roadmap features, integrations and releases
  • Work cross-functionally with dynamic and passionate teams, including Engineering, to sustain, build, and scale a rapidly growing product
  • Move rapidly in an ever-changing environment while balancing multiple priorities
  • Work with customers and all customer-facing organizations to plan and prioritize features and product iterations
  • Build relationships of empathy, trust, and respect with other team members and our customers and partners
  • Inspire and lead future team members to live and breathe the SEW mission
  • Build business cases and influence teams and stakeholders to invest in the product vision and goals
  • Integrate usability studies, research, and market analysis into product requirements to enhance customer satisfaction
  • Define and analyze metrics that inform the success of products
  • Maximize efficiency in a fast-moving environment where creative solutions are appreciated
  • Work with the product team and lead product features from idea to release and beyond
  • Work with product stakeholders to develop the product roadmap for a suite of portfolio management features for energy & water customer service and operations teams, and a web / mobile app for their customers
  • Work with and lead a cross-functional team of engineers and designers through an agile development process
  • Understand customer pain points through product delivery and qualitative and quantitative research, come up with solutions, and then prototype, iterate, and launch frequently
  • Help design simple but feature-rich customer experiences that delight energy & water customer service and operational groups
  • Be an accountable owner of features and products, end to end - from concept to build, release to user adoption and support
  • Work within aggressive timelines to prioritize your work for maximum impact

What you'll need to have (Qualifications)

  • Min 5+ years of hands-on product management experience with both web and mobile products
  • Proven track record of building products that users love
  • Experience with customer research and product analytics
  • Experience solving complex business problems in a scalable manner
  • Entrepreneurial attitude and a deep empathy for our customers
  • Prior experience building SaaS products for public-facing customers and SMBs
  • Familiarity with customer-facing industry products and analytics products, or willingness to learn
  • Strong written and interpersonal communication skills and ability to interact persuasively with engineering and design teams
  • Affinity for a fast-paced, high-growth environment
  • Deep passion for working on a mission-driven product that is designed to solve a very real problem
  • Experience working with B2B2C products
  • Deeply analytical mindset and metric driven
  • Strong technical background
  • Excitement to roll up sleeves and make a big impact on a rapidly growing product
  • Experience and success working cross-functionally between departments, particularly engineering

Perks @ SEW Family

  • Be a member of an exceptional team - we're growing and your career and opportunities with us will, too!
  • Rich suite of benefit plans - employee premiums paid
  • Generous Paid Time Off plan
  • Retirement savings - 401k plan with a Company match
  • Beautiful Irvine / Newport Beach office in a great location, with stunning views
  • Conveniently located close to public transportation
  • Open, transparent culture that includes weekly All Hands meetings, Lunch-and-Learns, all-company offsites, etc.
  • Commuter and Parking monthly subsidy
  • Access to corporate gym membership rates and other discounts and employee perks!

Smart Energy Water is proud to be an equal opportunity employer. We celebrate diversity and we are committed to creating an inclusive environment for all employees.
          

QA software test Engineer

 Cache   
QA software test Engineer - Engineering

At StepStone we take great strides forward to stay the leading online recruitment marketplace. The answer lies in a combination of two important factors: market-leading products and truly exceptional, driven and highly talented people. Do you want to be part of a company you can be proud of and help it grow? Then keep reading. As QA / software test Engineer you will be part of an expanding international R&D team working in a fast-paced and innovative environment. The team you will join oversees building the search engines behind every site of the StepStone Group. This involves Machine Learning, Data Mining, Information Retrieval, and NLP technologies that are deployed on more than 25 platforms. You’ll work inside a Scrum development team directly among the developers and you will take ownership of all QA activities within your team. Help the team analyze user stories and/or use cases/requirements for validity and feasibility. Execute different levels of testing (System, Integration, a...

    Company: StepStone
    Job type: Full-time

          


Lead Data Scientist | EC1 Partners

 Cache   
London, United Kingdom, As the Lead Data Scientist, you will be responsible for delivering successful projects using machine learning and other advanced techniques while demonstrating a strong understanding of the various p
          

Senior Data Scientist | Bank of Montreal

 Cache   
Toronto, Canada, Summary: Machine learning and Artificial Intelligence (ML and AI) is a strategic initiative in risk management. The candidate will participate in new projects and be hands-on alongside internal expe
          

Comments on: Machine Learning course (theory - practical) - Part One, with Hossein Shafahi

 Cache   
Like Mr. Shirafkan's other courses, this course is very instructive and excellent.
          

How Google Is Stealing Your Personal Health Data

 Cache   

Expert Review by Maryam Heinen

Google is by far one of the greatest monopolies that has ever existed, and it poses a unique threat to anyone concerned about health, supplements, food and your ability to obtain truthful information about these and other issues.

This year, we’ve seen an unprecedented push to implement censorship across all online platforms, making obtaining and sharing crucial information about holistic health increasingly difficult.

As detailed in “Stark Evidence Showing How Google Censors Health News,” Google’s June 2019 update, which took effect June 3, effectively removed Mercola.com and hundreds of other natural health sites from Google search results. Google is also building a specific search tool for medical and health-related searches.1

And, while not the sole threat to privacy, Google is definitely one of the greatest. Over time, Google has positioned itself in such a way that it’s become deeply embedded in your day-to-day life, including your health.

In recent years, the internet and medicine have become increasingly intertwined, giving rise to “virtual medicine” and self-diagnosing — a trend that largely favors drugs and costly, invasive treatments — and Google has its proverbial fingers in multiple slices of this pie.

Health Data Mining Poses Unique Privacy Risks

For example, in 2016, Google partnered with WebMD, launching an app allowing users to ask medical questions.2 The following year, Google partnered with the National Alliance on Mental Illness, launching a depression self-assessment quiz which turned out to be little more than stealth marketing for antidepressants.3,4

Google and various tech startups have also been investigating the possibility of assessing mental health problems using a combination of electronic medical records and tracking your internet and social media use.

In 2018, Google researchers announced they’d created an artificial intelligence-equipped retinal scanner that can appraise your risk for a heart attack.5

According to a recent Financial Times report,6 Google, Amazon and Microsoft collect data entered into health and diagnostic sites, which is then shared with hundreds of third parties — and this data is not anonymized, meaning it’s tied specifically to you, without your knowledge or consent.

What this means is DoubleClick, Google’s ad service, will know which prescriptions you’ve searched for on Drugs.com, thus providing you with personalized drug ads. Meanwhile, Facebook receives information about what you’ve searched for in WebMD’s symptom checker.

“There is a whole system that will seek to take advantage of you because you’re in a compromised state,” Tim Lebert, a computer scientist at Carnegie Mellon University told Financial Times.7 “I find that morally repugnant.”

While some find these kinds of technological advancements enticing, others see a future lined with red warning flags. As noted by Wolfie Christl, a technologist and researcher interviewed by Financial Times:8

“These findings are quite remarkable, and very concerning. From my perspective, this kind of data are clearly sensitive, has special protections …”

The following graphic, created by Financial Times, illustrates the flow of data from BabyCenter.com, a site that focuses on pregnancy, children’s health and parenting, to third parties, and the types of advertising these third parties then generate.

user data sent to third parties

Tech Companies Are Accessing Your Medical Records

As described in the featured Wall Street Journal video,9 a number of tech companies, including Amazon, Apple and the startup Xealth, are diving into people’s personal electronic medical records to expand their businesses.

Xealth has developed an application that is embedded in your electronic health records. Doctors who use the Xealth application — which aims to serve most health care sectors and is being rapidly adopted as a preferred “digital formulary”10 — give the company vast access to market products to their patients. The app includes lists of products and services a doctor believes might be beneficial for certain categories of patients.

When seeing a patient, the doctor will select the products and services he or she wants the patient to get, generating an electronic shopping list that is then sent to the patient. The shopping links direct the patient to purchase these items from Xealth’s third-party shopping sites, such as Amazon.

As noted in the video, “Some privacy experts worry that certain Xealth vendors can see when a patient purchased a product through Xealth, and therefore through their electronic health record.” In the video, Jennifer Miller, assistant professor at Yale School of Medicine says:

”In theory, it could boost adherence to physician recommendations, which is a huge challenge in the U.S. health care system. On the other side, there are real worries about what type of information Amazon in particular is getting access to.

So, from what I understand, when a patient clicks on that Xealth app and is taken to Amazon, the data are coded as Xealth data, which means Amazon likely knows that you purchased these products through your electronic health records.”

Amazon Is Mining Health Records

Amazon, in turn, has developed software, called Amazon Comprehend Medical, which uses artificial intelligence (AI) to mine people’s electronic health records. This software has been sold to hospitals, pharmacies, researchers and various other health care providers.

The software reveals medical and health trends that might otherwise go unnoticed. As one example, given in the video, a researcher can use this software to mine tens of thousands of health records to identify candidates for a specific research study.

While this can certainly be helpful, it can also be quite risky, due to potential inaccuracies. Doctors may enter inaccurate data for a patient, for example, data that, were it accurate, would render that patient a poor test subject.

Apple is also getting in on the action through its health app. It facilitates access to electronic medical records by importing all your records directly from your health care provider. The app is meant to be “helpful” by allowing you to pull up your medical records on your iPhone and present them to any doctor, anywhere in the world.

What Does This Mean for Your Privacy?

While tech companies like Amazon and Apple claim your data are encrypted (to protect it from hacking) and that they cannot view your records directly, data breaches have become so common that such “guarantees” are next to worthless.

As noted in the video by Dudley Adams, a data use expert at the University of California, San Francisco, “No encryption is perfect. All it takes is time for that encryption to be broken.” One very real concern about having your medical records hacked into is that your information may be sold to insurance companies and your employer, which they can then use against you, either by raising your rates or denying employment.

After all, sick people cost insurance companies and employers more money, so both have a vested interest in avoiding chronically ill individuals. So, were your medical records to get out, you could potentially become uninsurable or unemployable.

Google Collects Health Data on Millions of Americans

Getting back to Google, a whistleblower recently revealed the company amassed health data from millions of Americans in 21 states through its Project Nightingale,11,12 and patients have not been informed of this data mining. As reported by The Guardian:13

“A whistleblower who works in Project Nightingale … has expressed anger to the Guardian that patients are being kept in the dark about the massive deal.

The anonymous whistleblower has posted a video on the social media platform Daily Motion that contains a document dump of hundreds of images of confidential files relating to Project Nightingale.

The secret scheme … involves the transfer to Google of healthcare data held by Ascension, the second-largest healthcare provider in the U.S. The data is being transferred with full personal details including name and medical history and can be accessed by Google staff. Unlike other similar efforts it has not been made anonymous through a process of removing personal information known as de-identification …

Among the documents are the notes of a private meeting held by Ascension operatives involved in Project Nightingale. In it, they raise serious concerns about the way patients’ personal health information will be used by Google to build new artificial intelligence and other tools.”

The anonymous whistleblower told The Guardian:

“Most Americans would feel uncomfortable if they knew their data was being haphazardly transferred to Google without proper safeguards and security in place. This is a totally new way of doing things. Do you want your most personal information transferred to Google? I think a lot of people would say no.”

On a side note, the video the whistleblower uploaded to Daily Motion has since been taken down, with a note saying the “video has been removed due to a breach of the Terms of Use.”

According to Google and Ascension, the data being shared will be used to build a search tool with machine-learning algorithms that will spit out diagnostic recommendations and suggestions for medications that health professionals can then use to guide them in their treatment.

Google claims only a limited number of individuals will have access to the data, but just how trustworthy is Google these days? Something tells me that since the data includes full personal details, they’ll have no problem figuring out a way to eventually make full use of it.

Google Acquires Fitbit

In November 2019, the company also acquired Fitbit for $2.1 billion, giving Google access to the health data of Fitbit’s 25.4 million active users14 as well. While Google says it won’t sell or use Fitbit data for Google ads, some users have already ditched their devices for fear of privacy breaches.15 As reported by The Atlantic on November 14, 2019:16

“Immediately, users voiced concern about Google combining fitness data with the sizeable cache of information it keeps on its users. Google assured detractors that it would follow all relevant privacy laws, but the regulatory-compliance discussion only distracted from the strange future coming into view.

As Google pushes further into health care, it is amassing a trove of data about our shopping habits, the prescriptions we use, and where we live, and few regulations are governing how it uses these data.”

How HIPAA Laws Actually Allow This Data Mining

The HIPAA Security Rule is supposed to protect your medical records, preventing access by third parties — including spouses — unless you specifically give your permission for records to be shared. So, just how is it that Google and other tech companies can mine them at will?

As it turns out, the Google-Ascension partnership that gives Google access to medical data is covered by a “business associate agreement” or BAA. HIPAA allows hospitals and medical providers to share your information with third parties that support clinical activities, and according to Google’s interpretation of the privacy laws and HIPAA regulations, the company is not in breach of these laws because it’s a “business associate.” 

The Department of Health and Human Services’ Office for Civil Rights has opened an investigation into the legality of this arrangement.17 As reported by The Atlantic:18

“If HHS determines that Google and its handling of private information make it something more akin to a health care provider itself (because of its access to sensitive information from multiple sources who aren’t prompted for consent), it may find Google and Ascension in violation of the law and refer the matter to the Department of Justice for potential criminal prosecution.

But whether or not the deal goes through, its very existence points to a larger limitation of health-privacy laws, which were drafted long before tech giants started pouring billions into revolutionizing health care.”

Patients Bear the Risk While Third Parties Benefit

BAAs only allow for the disclosure of protected health information to entities that help the medical institution perform its health care functions. The third party is not permitted to use the data for its own purposes or in any independent way.

I personally find it hard to believe that Google would not find a way to profit from this personal health data, considering its web-like business structure that ties into countless other for-profit parties. Even if it doesn’t, there do not appear to be any distinct advantages for patients whose records are being shared. As reported by STAT News:19

“Jennifer Miller, a Yale medical school professor who studies patient privacy issues, said the way health information is being shared, whether legal or not, is far from ideal. Patients — whose data are shared without their knowledge or specific consent — end up with all the risks, she said, while the benefits, financial or otherwise, go to Google, Ascension, and potentially future patients.”

As reported by Health IT Security20 in March 2019, Catherine Cortez Masto, a Democratic senator from Nevada, has also introduced a data privacy bill “that would require companies not covered by HIPAA to obtain explicit consent from patients before sharing health and genetic data.”

“The bill covers the collecting and storing of sensitive data, such as biometrics, genetics, or location data,” Health IT Security writes.21 “The consent form must outline how that data will be used.

And the bill will also let consumers request, dispute the accuracy of their records, and transfer or delete their data “without retribution” around price or services offered.

Further, organizations would need to apply three standards to all data collection, processing, storage, and disclosure. First, collection must be for a legitimate business or operation purpose, without subjecting individuals to unreasonable risks to their privacy.

Further, the data may not be used to discriminate against individuals for protected characteristics, such as religious beliefs. Lastly, companies may not engage in deceptive data practices.”

Google Partnership Spurs Class-Action Lawsuit

The fact that patients don’t want Google to access their medical records is evidenced by a class-action lawsuit filed in the summer of 2019 against the University of Chicago Medical Center which, like Ascension, allowed Google access to identifiable patient data through a partnership with the University of Chicago. As reported by WTTW News June 28, 2019:22

“All three institutions are named as defendants in the suit, which was filed … by Matt Dinerstein, who received treatment at the medical center during two hospital stays in 2015.

The collaboration between Google and the University of Chicago was launched in 2017 to study electronic health records and develop new machine-learning techniques to create predictive models that could prevent unplanned hospital readmissions, avoid costly complications and save lives …

The tech giant has similar partnerships with Stanford University and the University of California-San Francisco. But that partnership violated federal law protecting patient privacy, according to the lawsuit, by allowing Google to access electronic health records of ‘nearly every patient’ at the medical center from 2009 to 2016.

The suit also claims Google will use the patient data to develop commercial health care technologies … The lawsuit claims the university breached its contracts with patients by ‘failing to keep their medical information private and confidential.’ It also alleges UChicago violated an Illinois law that prohibits companies from engaging in deceptive practices with clients.”

Like Ascension, the University of Chicago claims no confidentiality breaches have been made, since Google is a business associate. However, the lawsuit claims HIPAA was still violated because medical records were shared that “included sufficient information for Google to re-identify patients.”

The lawsuit also points out that Google does indeed have a commercial interest in all of this information, and can use it by combining it with its AI and advanced machine learning.

According to the plaintiffs, Google’s acquisition of DeepMind “has allowed for Google to find connections between electronic health records and Google users’ data.” The news report also points out that:23

“In 2015, Google and DeepMind obtained patient information from the Royal Free NHS Trust Foundation to conduct a study, which a data protection watchdog organization said ‘failed to comply with data protection law.’”

Health-Tracking Shoes and Other Privacy Abominations

Google is also investing in other wearable technologies aimed at tracking users’ health data, including:24

  • Shoes designed to monitor your weight, movement and falls
  • “Smart” contact lenses for people with age-related farsightedness and those who have undergone cataract surgery25 (a glucose-sensing contact lens for diabetics was canceled in 2018 after four years of development26)
  • A smartwatch to provide information for clinical research27
  • An all-in-one insulin patch pump for Type 2 diabetics that is prefilled and connected to the internet28

Google also has big plans for expanding the use of AI in health care. According to CB Insights,29 “The company is applying AI to disease detection, new data infrastructure, and potentially insurance.”

As mentioned earlier, insurance companies can jack up premiums based on your health. So, what could possibly go wrong by having Google’s AI wired into the insurance market?

Google has also partnered with drugmaker Sanofi, which “will leverage Google’s cloud and AI technologies and integrate them into its biological innovations and scientific data which in turn will accelerate the medicine discovery process,” according to a Yahoo! Finance report.30

According to Yahoo! Finance, “the collaboration will aid in the identification of various type of treatments suitable for patients. Additionally, Google’s AI tools are likely to be utilized by Sanofi in improving marketing and supply efforts and in forecasting sales.”

In plain English, this partnership will help Sanofi sell more drugs, which can hardly be said to be for the patients’ best interest, but rather that of Sanofi and Google. As mentioned earlier, Verily, Google’s health care division, is also collaborating with Sanofi, Novartis, Otsuka and Pfizer to help them identify suitable patients for clinical drug trials.31

To boost drug sales even further, Verily is working with Walgreens to deploy a “medication adherence” project, in which patients are equipped with devices to ensure they’re taking their medication as prescribed.32

Amazon also plays a part in the drug adherence scheme with its recent buyout of Pillpack, an online pharmacy that offers prepackaged pill boxes with all the different medications you’re taking.

According to Yahoo! Finance, Amazon is also planning to develop at-home medical testing devices, and is rolling out the option to make medical-related purchases from Amazon using your health savings account. All of these things generate health-related data points that can then be used for other purposes, be it personalized marketing or insurance premium decisions.

Have You Had Enough of Google’s Privacy Intrusions Yet? 

Add to all of this data mining the fact that Google is actively manipulating search results and making decisions about what you’re allowed to see and what you’re not based on its own and third party interests — a topic detailed in a November 15, 2019 Wall Street Journal investigation.33 The dangers ahead should be self-evident.

Now more than ever we must work together to share health information with others by word-of-mouth, by text and email. We have built in simple sharing tools at the top of each article so you can easily email or text interesting articles to your friends and family.

My information is here because all of you support and share it, and we can do this without Big Tech’s support. It’s time to boycott and share! Here are a few other suggestions:

Become a subscriber to my newsletter and encourage your friends and family to do the same. This is the easiest and safest way to make sure you’ll stay up to date on important health and environmental issues.

If you have any friends or relatives that are seriously interested in their health, please share important articles with them and encourage them to subscribe to our newsletter.

Consider dumping any Android phone the next time you get a phone. Android is a Google operating system that will seek to gather as much data as it can about you for Google’s benefit. iPhone, while not perfect, appears to have better privacy protections.

Use the internal Mercola.com search engine when searching for articles on my site.

Boycott Google by avoiding any and all Google products:

  • Stop using Google search engines and recognize that even engines that honor privacy, like Start Page, still use Google as their back end and provide censored results. Alternatives include DuckDuckGo34 and Qwant35
  • Uninstall Google Chrome and use Brave or Opera browser instead, available for all computers and mobile devices.36 From a security perspective, Opera is far superior to Chrome and offers a free VPN service (virtual private network) to further preserve your privacy
  • If you have a Gmail account, try a non-Google email service such as ProtonMail,37 an encrypted email service based in Switzerland
  • Stop using Google docs. Digital Trends has published an article suggesting a number of alternatives38
  • If you’re a high school student, do not convert the Google accounts you created as a student into personal accounts

Sign the “Don’t be evil” petition created by Citizens Against Monopoly


          

STFC Machine Learning Group Deploys Elastic NVMe Storage to Power GPU Servers

 Cache   

At SC19, Excelero announced that the Science and Technology Facilities Council (STFC) has deployed a new HPC architecture to support computationally intensive analysis including machine learning and AI-based workloads using the NVMesh elastic NVMe block storage solution. "Done in partnership with Boston Limited, the deployment is enabling researchers from STFC and the Alan Turing Institute to complete machine learning training tasks that formerly took three to four days, in just one hour – and other foundational scientific computations that researchers formerly could not perform."

The post STFC Machine Learning Group Deploys Elastic NVMe Storage to Power GPU Servers appeared first on insideHPC.


          

What I talk about when I talk about AI x-risk: 3 core claims I want machine learning researchers to address.

 Cache   
Published on December 2, 2019 6:18 PM UTC

Recently, as part of PCSOCMLx, I (co-)hosted a session with the goal of explaining, debating, and discussing what I view as "the case for AI x-risk". Specifically, my goal was/is to make the case for the "out-of-control AI killing everyone" type of AI x-risk, since many or most ML researchers already accept that there are significant risks from misuse of AI that should be addressed.

I'm sharing my outline, since it might be useful to others, and in order to get feedback on it. Please tell me what you think it does right/wrong!

Some background/context

I estimate I've spent ~100-400 hours discussing AI x-risk with machine learning researchers during the course of my MSc and PhD. My current impression is that rejection of AI x-risk by ML researchers is mostly due to a combination of:

  • Misunderstanding of what I view as the key claims (e.g. believing "the case for x-risk hinges on short-timelines and/or fast take-off").
  • Ignorance of the basis for AI x-risk arguments (e.g. no familiarity with the argument from instrumental convergence).
  • Different philosophical groundings (e.g. not feeling able/compelled to try and reason using probabilities and expected value; not valuing future lives very much; an unexamined apparent belief that current "real problems" should always take precedence over future "hypothetical concerns", resulting in "whataboutism").

I suspect that ignorance about the level of support for AI x-risk concerns among other researchers also plays a large role, but it's less clear... I think people don't like to be seen to be basing their opinions on other researchers'. Underlying all of this seems to be a mental move of "outright rejection" based on AI x-risk failing many powerful heuristics. AI x-risk is thus commonly viewed as a Pascal's mugging: "plausible" but not plausible enough to compel any consideration or action. A common attitude is that AI take-over has a "0+epsilon" chance of occurring.

I'm hoping that being more clear and modest in the claims I/we aim to establish can help move discussions with researchers forward. I've recently been leaning heavily on the unpredictability of the future and making ~0 mention of my own estimates about the likelihood of AI x-risk, with good results.

The 3 core claims:

1) The development of advanced AI increases the risk of human extinction (by a non-trivial amount, e.g. 1%), for the following reasons:

  • Goodhart's law (a toy sketch follows this list)
  • Instrumental goals
  • Safety-performance trade-offs (e.g. capability control vs. motivation control)
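
To make the Goodhart's law bullet concrete, here is a toy illustration ("when a measure becomes a target, it ceases to be a good measure"). Everything in it — the objective, the proxy, and the numbers — is invented for this sketch and is not from the original post:

    import numpy as np

    rng = np.random.default_rng(0)

    def true_value(x):
        # What we actually care about: best at x = (1, 1, 1).
        return -np.sum((x - 1.0) ** 2)

    def proxy_value(x):
        # An imperfect metric: correlated with true_value, but it also
        # rewards an unintended feature (large x[0]) that the metric's
        # designer never meant to incentivize.
        return true_value(x) + 3.0 * x[0]

    def hill_climb(score, steps=2000, step_size=0.05):
        # Naive local search: accept any random perturbation that raises score.
        x = np.zeros(3)
        for _ in range(steps):
            cand = x + step_size * rng.standard_normal(x.shape)
            if score(cand) > score(x):
                x = cand
        return x

    print(true_value(hill_climb(true_value)))   # ~  0.0
    print(true_value(hill_climb(proxy_value)))  # ~ -2.25: optimizing the
                                                # proxy hurt the true goal

The harder the proxy is optimized, the further the optimizer drifts from the intended target — which is why the worry scales with optimization power.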

2) To mitigate this existential risk (x-risk), we need progress in 3 areas:

  • Knowing how to build safe systems ("control problem")
  • Knowing that we know how to build safe systems ("justified confidence")
  • Preventing people from building unsafe systems ("global coordination")

3) Mitigating AI x-risk seems like an ethical priority because it is:

  • high impact
  • neglected
  • challenging but tractable

Reception:

Unfortunately, only 3 people showed up to our session (despite something like 30 expressing interest), so I didn't learn too much about the effectiveness of this presentation. My 2 main take-aways are:

  • Somewhat unsurprisingly, claim 1 had the least support. While I find this claim and the supporting arguments quite compelling and intuitive, there seem to be inferential gaps that I struggle to address quickly/easily. A key sticking point seems to be the lack of a highly plausible concrete scenario. I think it might also require more discussion of epistemics in order to move people from "I understand the basis for concern" to "I believe there is a non-trivial chance of an out-of-control AI killing everyone".
  • The phrase "ethical priority" raises alarm bells for people, and should be replaced or clarified. Once I clarified that I meant it in the same way as "combating climate change is an ethical priority", people seemed to accept it.

Some more details on the event:

The title for our session was: The case for AI as an existential risk, and a call for discussion and debate.

Our blurb was: A growing number of researchers are concerned about scenarios in which machines, instead of people, control the future. What is the basis for these concerns, and are they well-founded? I believe they are, and we have an obligation as a community to address them. I can lead with a few minutes summarizing the case for that view. We can then discuss nuances, objections, and take-aways.

I also started with some basic background to make sure people understood the topic:

  • X-risk = risk of human extinction
  • The 3 kinds of risk (misuse, accident, structural)
  • The specific risk scenario I'm concerned with: out of control AI




Discuss
          

DOE Announces $15 Million for Development of AI and Machine Learning Tools

 Cache   

According to a recent press release, “Today, the U.S. Department of Energy’s (DOE’s) Advanced Research Projects Agency-Energy (ARPA-E) announced $15 million in funding for 23 projects to accelerate the incorporation of machine learning and artificial intelligence into the energy technology and product design processes as part of the Design Intelligence Fostering Formidable Energy Reduction (and) […]
The post DOE Announces $15 Million for Development of AI and Machine Learning Tools appeared first on DATAVERSITY.


          

Seattle Seahawks Select AWS as Its Cloud, Machine Learning, and AI Provider

 Cache   

A recent press release reports, “Today, Amazon Web Services, Inc. (AWS), an Amazon.com company, announced that AWS is now a cloud, machine learning (ML), and artificial intelligence (AI) provider for the Seattle Seahawks. In addition to moving the vast majority of its infrastructure to AWS, the National Football League (NFL) team will use the breadth […]
The post Seattle Seahawks Select AWS as Its Cloud, Machine Learning, and AI Provider appeared first on DATAVERSITY.


          

Coloring WW2DB Images with Machine Learning

 Cache   

Having been a minor contributor to WW2DB for a few years, I've been itching to apply some visualization, data analysis, and machine learning to the data amassed on this site. I recently started a project to create machine learning models to automatically color black and white images. I used image...

          

It should be the goal of every business to protect our planet

 Cache   

Today, at the start of the 25th annual United Nations Climate Change Conference, Google is joining 70 other companies and union leaders to call for the United States to stay in the Paris Agreement. We’re also sharing what Google is doing as a global innovator in renewable energy markets, and to build responsible supply chains and products that use AI to drive sustainability. 

We firmly believe that every business has the opportunity and obligation to protect our planet. To that end, we’re focused on building sustainability into everything that we do—from designing efficient data centers to creating sustainable workplaces to manufacturing better devices and creating more efficient supply chains. But our goal is much bigger: to enable everyone—businesses, policy makers and consumers—to create and live in a more sustainable world. 

Catalyzing the market for renewable energy

Google has been a carbon-neutral company since 2007 and we’ve matched our entire annual electricity consumption with renewable energy since 2017. Purchasing at Google’s scale helps grow the market for renewable energy, makes it easier for other corporate buyers to follow suit and supports a future where everyone has access to 24x7 carbon-free energy.  

  • Following Sundar’s September announcement of our biggest renewable energy purchase to date, we now have a portfolio of 52 wind and solar projects totaling more than 5 gigawatts, driving some $7 billion in expected new investments and thousands of related jobs around the world. Once these projects come online, they will produce more electricity than cities the size of Washington, D.C. or countries such as Lithuania or Uruguay use each year—all with renewable energy. 

  • We insist that all projects add new renewable energy sources to the grid—which catalyzes new  wind and solar projects. This approach also drives economic growth in the regions where we operate. For example, in Europe alone, Google’s purchases of renewable energy have generated €2.3 billion in capital investment in new renewable projects.

  • Google’s renewable energy purchases have helped make significant progress towards our long-term aspiration to power our operations with carbon-free energy in all places, at all times. Reaching 24x7 carbon-free energy will require innovations across policy, technology and business models and we are working hard to advance progress in these areas. For example, we recently signed a hybrid solar-wind agreement in Chile, which will increase our hourly carbon-free energy match from 75 percent to more than 95 percent.

  • As a founding member of the Renewable Energy Buyers Alliance (REBA), we are leading an effort to bring together more than 300 renewable energy buyers, developers, and service providers to pave the way for any company to access and purchase renewable energy. Collectively this group has committed to purchasing 60 gigawatts of renewable energy by 2025; that’s more than six times the amount of solar and wind installed in the U.S. in 2018. 

  • We’re also partnering with businesses to drive policy change to create broad access to renewable energy purchasing for everyone. For example, in the state of Georgia, we worked with Walmart, Target and Johnson & Johnson to establish the first corporate renewable energy purchasing program with Georgia Power, the local utility.

Building responsible supply chains and products

In areas where we manufacture hardware products, we view it as our responsibility to make sure our suppliers and the surrounding communities have access to clean energy. We’re also committed to integrating sustainability into every step of our hardware process, from design to manufacturing to shipping: 

  • In October, we committed to invest approximately $150 million into renewable energy projects in key regions where our Made by Google products are manufactured. Our investment commitment, alongside partners, aims to catalyze roughly $1.5 billion of capital into renewable energy. With these investments, we expect to help generate renewable energy that is equivalent to the amount of electricity used to manufacture Google consumer hardware products. 

  • One-hundred percent of this year’s Nest products include recycled content plastic. 

  • One-hundred percent of all shipments to and from customers of Made by Google products are carbon neutral. 

  • On an individual level, our products and services help consumers reduce their own environmental impact on the planet. For example, the Nest Learning Thermostats have helped people save more than 41 billion kilowatt hours of energy—enough to power all of Estonia's electricity needs for six years.

  • We’re also making it easier for people to give their old devices a second life. Customers can responsibly recycle devices for free—whether made by Google or not—via our take-back program for all products, available in 16 countries, and via our U.S. Pixel trade-in program.

Using AI to build a more sustainable world

Google’s expertise in AI is a key part of how we think about sustainability. Here are just a few of the ways AI is helping to tackle some of the world’s most challenging environmental problems:

  • We built an AI-powered efficiency recommendation system that directly controls data center cooling. This first-of-its-kind cloud-based system is delivering energy savings of roughly 30 percent. We’re now working to give our Cloud customers access to this same technology. (A minimal sketch of this predict-then-choose pattern follows this list.)

  • We’re using AI to optimize wind farms in our global fleet of renewable energy projects. After DeepMind and Google started applying machine learning algorithms to 700 megawatts of wind power in the central U.S., the value of that wind energy has been boosted by roughly 20 percent.

  • AI powers Global Fishing Watch, a platform we launched in partnership with Oceana and SkyTruth that promotes ocean sustainability by visualizing, tracking and sharing data about global fishing activity in near real-time and for free.

  • We’re also working to reduce the impact of our changing climate on vulnerable people. It’s estimated that every year, 250 million people around the world are affected by flooding. Our flood forecasting initiative in the Patna region of India is aimed at providing accurate real-time flood forecasting information and alerts to those in affected regions.
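
As promised above, here is a minimal sketch of the data-center-cooling recommendation pattern, under invented assumptions — a least-squares linear model and made-up telemetry ranges standing in for Google's (undisclosed) neural networks and real sensor data. Each control cycle, the system predicts energy use for every candidate action and picks the safe setpoint predicted to be cheapest:

    import numpy as np

    rng = np.random.default_rng(1)

    # Invented historical telemetry: server load (0-1), outdoor temp (C),
    # cooling setpoint (C) -> total power draw (kW).
    X = rng.uniform([0.2, 5.0, 16.0], [1.0, 35.0, 27.0], size=(5000, 3))
    y = 40 * X[:, 0] + 0.8 * X[:, 1] - 1.5 * X[:, 2] + 60 + rng.normal(0, 1, 5000)

    # Fit a least-squares linear model (a stand-in for the real system's
    # neural networks).
    A = np.column_stack([X, np.ones(len(X))])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)

    def recommend_setpoint(load, outdoor_temp):
        # Score every candidate setpoint; keep only those a separate safety
        # check allows (a hard cap here, standing in for thermal limits).
        candidates = np.arange(16.0, 27.5, 0.5)
        safe = candidates[candidates <= 26.0]
        preds = [coef @ np.array([load, outdoor_temp, s, 1.0]) for s in safe]
        return safe[int(np.argmin(preds))]

    print(recommend_setpoint(load=0.7, outdoor_temp=22.0))  # picks 26.0 C

In this toy model, raising the setpoint always saves energy, so the recommender pushes to the safety cap; the value of an ML layer in a real deployment is capturing nonlinear interactions a linear fit cannot.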

Providing resources to accelerate action beyond Google

Many organizations doing the most important work to address environmental challenges lack the funding and internal expertise to achieve their goals. That’s why we’re committed to empowering businesses, nonprofits, researchers and policy makers to take action:

  • Our first-ever Google AI Impact Challenge awarded $25 million in Google.org funding, product credits and mentorship from Google experts. Winners include organizations that are driving critical work in climate, conservation and energy. For example, WattTime is working to replace expensive, on-site power plant emissions monitors with a globally accessible, open-source monitoring platform. This will help make critical emissions reduction initiatives more accessible to communities that might not otherwise be able to afford them. 

  • The Google for Startups Accelerator will support social impact startups addressing climate, poverty and inequality. It gives startups access to expertise on technology, monetization of a social impact business and capital. 

  • More than 70 percent of global emissions are generated by cities. Our Environmental Insights Explorer (EIE) makes it easier for cities to access and act upon new climate-relevant datasets. 

Climate change is one of the most significant global challenges of our time and Google is committed to doing its part. We’re aggressively building sustainability into our operations and supply chains—efforts that are detailed in our annual Environmental Report and Responsible Supply Chain Report. We’ll continue to lead and encourage others to join us in improving the health of our planet. 


          

Sales: Inside Sales Representative - Reston, Virginia

 Cache   
Job Summary
The Inside Sales Representative (ISR) is an accomplished enterprise sales professional and demand generation specialist. The ISR works closely with Marketing team members, Regional Account Managers, and Solutions Architects to develop new business pipeline opportunities within a geographic territory and to support revenue growth objectives. Our ideal candidate is smart, analytical, and has experience in high-growth, early-stage technology organizations. We currently work with new and exciting technologies in cybersecurity, machine learning, artificial intelligence and DevOps, where we are building intelligent and deep learning data analytics as a service to change the world.

Responsibilities
  • Leverage open internet and Fractal-sourced prospecting tools to identify new contacts and organizations in assigned geographic territory
  • Develop and maintain a comprehensive understanding of the technical and business value of QOMPLX's solutions to communicate most effectively to prospective customers
  • Develop and execute timely and effective campaigns to drive traffic to online and in-person events and to successfully progress opportunity development objectives
  • Maintain professional and positive engagement with prospects through all aspects of targeting and qualification (BANT), with ultimate successful handoff to Regional Account Managers
  • Perform all job responsibilities in alignment with QOMPLX's core values, mission, and objectives
  • Achieve or exceed monthly, quarterly and annual demand generation and new opportunity performance objectives
  • Accurately and timely administer CRM for daily activities, new opportunity related information, and overall sales activity management

Qualifications
  • Reside in the greater Washington D.C. area, with the ability to work daily in QOMPLX's Reston, VA HQ
  • Bachelor's degree preferred; a combination of relevant experience and education may be considered
  • 1-2 years of quota-carrying technology sales, focused on cybersecurity, data services and/or IaaS preferred
  • Proven track record of consistently meeting and exceeding sales quota
  • Experience in establishing and maintaining relationships at VP and CXO level in a customer organization
  • Outstanding relationship building skills
  • High degree of integrity
  • Energetic, tenacious team player
  • Passion for helping customers solve complex problems with innovative technology
  • Strong attention to detail
  • Ability to work independently and collaborate with other team members
  • Proficient with Mac applications, MSFT Office, CRM, and social networking tools
  • Excellent communication, interpersonal and organizational skills

Desirable
  • Top-performer track record of successful lead generation and opportunity generation leveraging modern social media and outreach platforms
  • SaaS and on-premise delivery model experience
  • Career path and growth oriented - a strong desire to take on additional responsibilities
  • Experience selling cybersecurity, AI/ML, and/or data platform as-a-service solutions
          

Amazon ends creepy program that sent samples based on purchase history

 Cache   

It's normally a bad thing when companies take freebies away, but you might not mind quite so much in this case. Amazon is ending a Product Sampling program that sent free samples of cosmetics, protein bars and other goods based on your shopping habits. While the company didn't explain why it was closing the machine learning-based program in a statement to CNBC, it did say the initiative would shut down sometime in 2020. It's not hard to see reasons why Amazon might shutter the program, mind you.

Via: CNBC

Source: Business Insider (sub. required)


          

CIC a unicorn in its own right after wooing £1bn investment to Cambridge

 Cache   

Funding powerhouse Cambridge Innovation Capital has now attracted £1 billion of investment into Cambridge companies and managing partner Andrew Williamson has hired more Silicon Valley talent to ensure the technology cluster maintains its upward trajectory on the global stage.

The £1bn funding landmark has come within the last five years of CIC’s six-year existence but Williamson tells Business Weekly that Cambridge can leverage even more international capital as it builds on AI, deep learning, life science and therapeutic market leads.

CIC’s leading position as a gateway to accessing world-leading innovation was underlined by its performance in the six months to the end of September.

CIC has simultaneously unveiled Vin Lingathoti, a 10-year Valley veteran in the deep technology sector, as a partner to focus on enterprise software. Williamson himself spent 20 years in the US technology heartland while investment director Michael Anstey excelled at The Boston Consulting Group’s office in Toronto and has advised multinational healthcare businesses across North America, Europe, India, and Japan.

Williamson said the team was wired to help steer a new wave of growth for the Cambridge technology sector. He said: “The high quality of opportunities afforded to CIC as a result of our preferential access to IP from the University of Cambridge and our superior network through the Cambridge ecosystem ensures we are the gateway for accessing world-leading innovation.”

In the six months under review, CIC invested £22.8 million into three new and 11 existing portfolio companies; Riverlane, Sense Biodetection and PredictImmune joined CIC’s portfolio.

CMR Surgical closed a £195m Series C to commercialise its next-generation surgical robotic system, clinching unicorn status at the same time.

Gyroscope Therapeutics raised £50.4m of Series B funding round including from CIC and lead investor Syncona for the development of gene therapies and surgical delivery systems for retinal diseases.

Cytora closed a £25m Series B financing round to develop its artificial intelligence-powered insurance technology platform and PROWLER.io raised $24m to support product expansion and growth in artificial intelligence decision-making. CIC also figured in prominent deals for Storm Therapeutics, Audio Analytic and Bicycle Therapeutics.

Investing from its £275 million first fund, CIC has injected capital into 29 companies to date. Williamson stressed that international big hitters had invested alongside CIC to accumulate the landmark total. 

Thrilled with what he called a show of confidence in Cambridge, Williamson said the local deep technology market was set for further unprecedented growth because the cluster was so uniquely endowed with IP-rich, transformative businesses.


Cambridge Innovation Capital managing partner, Andrew Williamson

In an exclusive interview with Business Weekly, Williamson said Cambridge gloried in companies creating differentiated technologies. These were prolific, thanks in no small measure to Cambridge University and its exponential success in nurturing and spinning out transformational life science and hi-tech companies.

CIC’s close relationship with the university and follow-up companies is evidenced by the fact that 18 of the 29 businesses in whom it has invested to date are Cambridge University spin-outs.

What is less well known is that the astute and highly forensic CIC team has seen around  1500 investment opportunities to date and engaged closely with around 1000 of those without committing investment. So the portfolio businesses are in an elite minority. The common denominator is that besides having great technology they also have the capability to scale rapidly on a global basis, says Williamson.

This Solomon-esque insight makes CIC more than just another investor and more like an anchor institution within the burgeoning cluster.

Williamson told me: “Not every investment opportunity converts immediately; sometimes we bide our time and look at investments over a number of years: 97 per cent of companies we see we don’t invest in but by engaging with them closely we are able to suggest how they can get themselves investment ready. 

“So in addition to capital, we invest a lot of time building relationships. And the model works: The quality and number of high-potential companies is growing year by year. We have acknowledged the reality that some companies may be exceptionally IP rich but take a little bit longer to develop than ventures based on more traditional business models.”

This is where CIC’s globally experienced team comes in once more. 

Williamson says: “CIC has a lot of PhDs on its teams with deep tech backgrounds. Every business we invest in is a global business and a significant amount of capital we have invested has been globally secured, so we are seen as a safe pair of hands by investing entrepreneurs and funds across the planet.

“A lot of promising deep tech businesses are too small to build into $1bn businesses as things stand which is why it is vital to incorporate a US, Asian and European growth strategy.

“While all our portfolio businesses tend to have started in Cambridge and developed disruptive Science & Technology within the cluster, many have opened up markets on the West Coast of the US or in Asia, for example. 

“We see some ventures that are fantastic in terms of their own specific propositions but because of their business model they remain too small currently to build into a global business. For example, we don’t get involved in consumer related brands - it is just not our model. We and our co-investors require that businesses we back have the ability to scale as rapidly as possible.”

Williamson makes the point that the quality of talent emanating from Cambridge University – principally the calibre of its engineers – is second to none on the worldwide stage.


CMR Surgical CEO Martin Frost with the company’s surgical robotic system –Versius®

And the best of our companies – such as CMR Surgical and PROWLER.io – have learned how to attract and retain top global talent by offering stimulating work environments and employment packages that encourage good people to stay and grow with the business.

Williamson says: “In the US engineers can move to new roles almost on an annual basis depending on the packages on offer. In Cambridge, engineers are attracted by the quality of the work, they can become shareholders – personally and professionally they are in a very good place here and they tend to stay loyal to a progressive, switched on employer.

“Swim.ai - which started with commercial operations in San Jose – came to Cambridge because of access to the high quality engineering talent and brainpower available through the university. Their model is sustainable: Swim.ai is already hiring big and filling all their slots. 

“Hiring top talent to sustain scalability on an international basis is clearly a challenge for the cream of our technology companies locally but businesses like PROWLER.io, CMR Surgical and Swim.ai provide highly productive workplaces, challenging working environments and great incentives to be part of a business that can transform technology sectors globally.

“The ability of our top life science and technology companies to grow sustainably and consistently recruit top people is possibly the greatest cultural change in the Cambridge science & technology environment in the last 20 years.”

Similarly, while many tech entrepreneurs and investors continue to look to Silicon Valley as an exemplar, Cambridge is no longer a generation behind in terms of maturation compared to the West Coast ecosystem. 

“Our own ecosystem now compares favourably,” says Williamson. He praised the energy and growing global influence of Cambridge Enterprise, the university’s commercialisation arm, and the prodigious input of financial and business building expertise from Cambridge Angels.


Vin Lingathoti, partner, Cambridge Innovation Capital

New executive recruit Vin Lingathoti is a software engineer by training and has held roles across multiple functions including engineering, product management, corporate strategy, private equity and corporate development. 

Most recently he was regional head of Venture Investments and Acquisitions at Cisco Europe, where he led multiple direct and fund-of-fund investments and played a vital role in helping Cisco’s executive leadership team develop its European investment strategy.

He says: “The Cambridge cluster has many similarities to Silicon Valley. The University of Cambridge produces some of the best engineering talent in the world, on a par with Stanford and MIT. 

“It has one of the most active angel and seed investor communities in the UK and is home to prominent deep tech companies such as PROWLER.io and Riverlane. Many of the global software giants such as Microsoft and Amazon have opened R & D centres in Cambridge in pursuit of hard-to-acquire engineering talent.

“CIC is uniquely positioned to leverage this Cambridge advantage. We have a close relationship with the University of Cambridge and deep connections to the local startup community. 

“We have an exceptional team of investment professionals with deep domain expertise and global perspective. All of us have lived in multiple countries and held roles in large corporates and startups and understand the challenges faced by early-stage founders. We take a collaborative approach in helping founders navigate through their journey of building world-class companies.”

Stars in the CIC firmament

Riverlane
Riverlane, where CIC led the £3.3m seed round in which Cambridge Enterprise also participated, is a quantum computing software developer transforming the discovery of new materials and drugs. 

Riverlane’s software leverages the capabilities of quantum computers, which operate using the principles of quantum mechanics. In the same way that graphics processing units accelerate machine learning workloads, Riverlane uses quantum computers to accelerate the simulation of quantum systems. 

Riverlane is working with leading academics and companies on critical early use cases for its software, such as developing new battery materials and drug treatments. The company will use its seed funding to demonstrate its technology across a range of quantum computing hardware platforms, focused on early adopters in materials design and drug discovery. It will also expand its team of quantum software researchers and computational physicists.

Sense Biodetection
Sense Biodetection, where CIC co-led the £12.3m Series A funding round alongside Earlybird, and which is developing a portfolio of instrument-free, point-of-care molecular diagnostic tests, is pioneering a new class of diagnostic product. 

Sense Biodetection plans to invest the new funds in the development and manufacture of a range of tests utilising its novel and proprietary rapid molecular amplification technology, targeting in the first instance infectious disease applications such as influenza. 

PredictImmune  
PredictImmune, in which CIC participated in a £10m Series B round alongside Cambridge Enterprise and other new and existing investors, is developing pioneering prognostic tools for guiding treatment options and improving patient outcomes in immune-mediated diseases. The Series B round cements PredictImmune’s strong financial position, enabling it to build on the successful launch of its first product, PredictSURE IBD™, with a major focus on continued commercial expansion across Europe, the US and other territories.

CMR Surgical
CMR Surgical closed a £195m Series C funding round, Europe’s largest ever private financing round in the medical technology sector, to commercialise its next-generation surgical robotic system, Versius®. 

CIC was an early investor in CMR Surgical, having first backed the company’s Series A round in 2016, and has continued to provide financial support and guidance, enabling the realisation of the potential of the Versius® system. CMR Surgical has launched initially in hospitals in India, with further expansion across the NHS and elsewhere globally expected in short order.

Gyroscope Therapeutics
Gyroscope Therapeutics, in which CIC participated in a £50.4m Series B funding round alongside lead investor Syncona, is developing gene therapies and surgical delivery systems for retinal diseases. With this new round, Gyroscope Therapeutics will continue to advance the clinical development of the company’s investigational gene therapy GT005 for dry age-related macular degeneration (dry-AMD), the leading cause of permanent vision impairment for people aged 65 and older.

Cytora
Cytora closed a £25m Series B financing round to continue developing its artificial intelligence-powered insurance technology platform that enables insurers to underwrite more accurately, reduce frictional costs and achieve profitable growth. 

Cytora’s underwriting platform applies Machine Learning and Natural Language Processing techniques to public and proprietary data sets, including property construction features, company financials and local weather.

PROWLER.io
PROWLER.io, where CIC participated in the $24m funding round to support product expansion and growth, continues to define the artificial intelligence decision-making market, developing the world’s first technology that can help businesses and organisations make better decisions in processing dynamic, real-time data in complex and uncertain environments. 

Storm Therapeutics
Storm Therapeutics closed a £14m extension to its Series A financing, bringing the total Series A financing to £30m. Storm is a drug discovery company that is tackling disease through modulating RNA modifying enzymes. 

Audio Analytic
Audio Analytic, in which CIC participated in a $12m Series B funding round, has developed cutting-edge AI sound recognition technology which can be embedded into consumer devices to make them more helpful to people, by understanding and reacting to the contextual information provided by sounds. 

Bicycle Therapeutics
CIC also participated in Bicycle Therapeutics’ Nasdaq IPO to progress the company’s lead candidate, BT1718, through the clinic and continue to advance its preclinical programmes, including toxin drug conjugates and immune modulators to treat cancer and other debilitating diseases. Bicycle is the first company in CIC’s portfolio to conduct an IPO.


          

Iotic raises €7.5m to target new technology at $260bn market

 Cache   

Cambridge-based Iotic has secured investment of €7.5 million to accelerate growth and meet increasing demand for its pioneering digital twin technology. 

Iotic enables enterprises and their ecosystems of assets, objects, companies and people to interact automatically and securely. It operates in a $260 billion industry that’s expected to double by 2021.

The digital software company provides the secure operating environment and tools to create digital twins of any thing, enabling their secure interactions and building true interoperable ecosystems. 

The investment, from leading European VCs IQ Capital, Talis Capital and Breed Reply, will drive rapid deployment, deepen channel partnerships and expand market adoption of its patented Iotic Operating Environment, Twin technology and Event Analytics. 

The funding will allow Iotic to capitalise on its patented technology and unique market position.

The Iotic vision is a world where any thing can interact with any other thing - from the smallest sensor, to the largest power station, engine, train and plane along with people, suppliers and customers. 

The digital version of a thing, the Twin, has access to all its data and controls throughout its entire life, converting those end points into meaningful events – empowering enterprises to deliver on the promise of AI and Machine Learning, and to truly be digital. 
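
For readers unfamiliar with the pattern, here is a minimal sketch of a digital twin — with invented class, field, and event names, not Iotic's actual API: the twin mirrors a physical asset's state, and converts raw readings into meaningful events that interested parties subscribe to.

    from dataclasses import dataclass, field
    from typing import Callable, Dict, List

    @dataclass
    class Twin:
        """Software stand-in for one physical thing (sensor, engine, train...)."""
        name: str
        state: Dict[str, float] = field(default_factory=dict)
        subscribers: List[Callable[[str, dict], None]] = field(default_factory=list)

        def ingest(self, reading: Dict[str, float]) -> None:
            # Update the mirrored state and turn raw readings into events.
            self.state.update(reading)
            if self.state.get("temperature", 0.0) > 90.0:  # invented threshold
                self.emit("overheat", {"temperature": self.state["temperature"]})

        def emit(self, event: str, payload: dict) -> None:
            # Push a meaningful event to every interested party (a person, a
            # service, or another twin) without exposing the raw feed itself.
            for callback in self.subscribers:
                callback(f"{self.name}:{event}", payload)

    engine = Twin("engine-42")
    engine.subscribers.append(lambda evt, data: print("EVENT", evt, data))
    engine.ingest({"temperature": 95.2})  # -> EVENT engine-42:overheat {...}

Interoperability in this model comes from twins exchanging events rather than raw end-point data, so any twin can react to any other regardless of the underlying hardware.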

The investment enables Iotic to build on its global pipeline of enterprise customers, including Rolls-Royce Power Systems and BAM Nuttall, who have deployed Iotic’s technology to overcome fractured, inflexible IT infrastructure and data management problems to solve significant business challenges and create new services and better customer experiences.

Robin Brattel, CEO of Iotic, said: “This investment is a further major endorsement of our operating environment and tools and the business strategy behind them. 

“Having already secured a number of high-profile clients, we are focused on further development and scaling – initially targeting high-value manufacturing, construction and infrastructure sectors. 

“Our longer-term vision is for our interoperable Twins and their Event Streams to be incorporated into every single technology stack that will help to underpin digital transformation and to deliver a strong return on investment for our customers.”

Founded in Cambridge in 2014, Iotic has seen growing enterprise and channel demand globally, which has opened up new markets served by its new North American operations hub in Raleigh, North Carolina. 

This has been supported by the expansion of the management team with the hiring of new COO Hans Weinberg (previously CIO at ABB, North America) and Kathy Reppucci (who joins Iotic as VP Marketing from IBM) to deliver global integrated marketing strategies.

Ed Stacey, Partner, IQ Capital, said: “Iotic is a leader in interoperable technology, which is the biggest evolution of data management since relational databases. 

“This technology will underpin future digital transformation projects in manufacturing and many other industries, enabling companies to integrate their data streams much more easily, securely and flexibly and at any level of scale.”


          

Sampling can be faster than optimization

 Cache   
Optimization algorithms and Monte Carlo sampling algorithms have provided the computational foundations for the rapid growth in applications of statistical machine learning in recent years. There is, however, limited theoretical understanding of the relationships between these 2 kinds of methodology, and limited understanding of relative strengths and weaknesses. Moreover, existing results have been obtained primarily in the setting of convex functions (for optimization) and log-concave functions (for sampling). In this setting, where local properties determine global properties, optimization algorithms are unsurprisingly more efficient computationally than sampling algorithms. We instead examine a class of nonconvex objective functions that arise in mixture modeling and multistable systems. In this nonconvex setting, we find that the computational complexity of sampling algorithms scales linearly with the model dimension while that of optimization algorithms scales exponentially.
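
The contrast is easy to reproduce numerically. The sketch below uses a one-dimensional double well rather than the paper's mixture models, with an invented temperature and step size: plain gradient descent commits to whichever basin it starts in, while unadjusted Langevin dynamics (a gradient step plus Gaussian noise) crosses the barrier and spends most of its time in the deeper mode.

    import numpy as np

    rng = np.random.default_rng(2)

    # Nonconvex toy objective: a tilted double well with minima near x = +/-1;
    # the x < 0 minimum is the deeper (better) one.
    def U(x):     return (x**2 - 1.0)**2 + 0.3 * x
    def gradU(x): return 4.0 * x * (x**2 - 1.0) + 0.3

    # Gradient descent: deterministic, stuck in the basin it starts in.
    x = 0.9                       # starts in the shallow basin
    for _ in range(1000):
        x -= 0.01 * gradU(x)
    print("gradient descent ends at %+.2f" % x)   # ~ +0.96, the worse minimum

    # Unadjusted Langevin algorithm: the same gradient plus injected noise,
    # approximately sampling exp(-U(x)/T); the noise allows barrier crossings.
    T, step, x = 0.3, 0.01, 0.9
    samples = np.empty(200_000)
    for i in range(samples.size):
        x += -step * gradU(x) + np.sqrt(2 * step * T) * rng.standard_normal()
        samples[i] = x
    print("time spent in deeper basin: %.0f%%" % (100 * (samples < 0).mean()))

In higher dimensions the same noise term is what lets a sampler mix between well-separated modes, which is the mechanism behind the complexity separation the abstract reports.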
          

New European Patent Office guidelines protect AI and machine learning 'inventions'

 Cache   

Withers & Rogers' Karl Barnfather examines the European Patent Office's 'Guidelines for Examination', which took effect on 1st November


          

OpenWorld 2019 Enterprise and Cloud Manageability Sessions Now Available For Download

 Cache   

We’re excited to share with you the many Enterprise and Cloud Manageability presentations from Oracle OpenWorld.  These presentations feature a variety of best practices, customer stories, feature deep-dives and roadmap information as presented at our largest annual event. 

For an overview of our product and strategy roadmap, we recommend:

We’ve provided the list of sessions sorted both by target area (managing Database, Applications, etc.) and by product (Enterprise Manager or OMC) so you can zero in on your interests.  We hope you enjoy the sessions.

SPECIAL CONTENT FOR QUEST MEMBERS:  If you are members of the QUEST Oracle User Group, we also wanted to call your attention to two sessions that were delivered during QUEST Experience Week as a special program for User Group Members.  You can visit the QUEST web site (http://www.questoraclecommunity.com) to see the recordings of “Innovations in Enterprise and Cloud Manageability: Roadmap for Oracle Enterprise Manager and Oracle Management Cloud” (https://questoraclecommunity.org/events/webinars/innovations-in-enterprise-and-cloud-manageability-roadmap-for-oracle-enterprise-manager-and-oracle-management-cloud/) and “Achieving Database Patching Success: Fleet Maintenance Best Practices” (https://questoraclecommunity.org/events/webinars/achieving-database-patching-success-fleet-maintenance-best-practices/) web events.  Click the “Register Now” link on the right hand side to access the recorded webinars.

So, without further ado, here are the Enterprise and Cloud Manageability Sessions from Oracle OpenWorld 2019.

 

Enterprise and Cloud Manageability OpenWorld 2019 Sessions by Target Area

Database Management Sessions

Applications/Middleware Management Sessions

Cross-Target Best Practices Sessions

 

Enterprise and Cloud Manageability OpenWorld 2019 Sessions by Product/Service

Oracle Enterprise Manager Sessions

 Oracle Management Cloud Sessions

 


          

Postdoctoral Research Fellow in Data Science and Artificial Intelligence

 Cache   

Hi

I am sharing this position on behalf of Prof. Tim Dodwell, University of Exeter:

 

The position is a 2-year postdoctoral research fellowship within the Institute of Data Science and Artificial Intelligence at the University of Exeter. The role will work with Turing AI Fellow Prof Tim Dodwell.

 

Summary of the role

 

The position will be initially held in Exeter for the first year, but the research fellow will be offered to spend an extended period to be based at the Alan Turing Institute within their second year. There is an opportunity for the position to be extended beyond the 2 years, subject to performance.

 

The position is split between two parts, which will run side-by-side for the length of the contract:

 

(1)  Role on Turing AI Fellowship Project - 50% of the role will be spent working directly on aspects specific to Prof Dodwell's fellowship. In particular, for this role, we are looking to appoint someone to work on 'Bayesian Multiscale Methods' (further details can be found here). This work is an exciting collaboration between Prof. Girolami (Cambridge), Prof. Marzouk (MIT) and the Henry Royce Institute.

 

(2)  Individually Led Research Challenge - for 50% of the role, the fellow, in collaboration with Prof Tim Dodwell, will work on an area of their choice. This should be an ambitious research topic which complements wider activities in Data Centric Engineering, Uncertainty Quantification and Machine Learning research. The application should include a short description (less than a page) highlighting the open research challenge, potential new collaborations, and opportunities for scientific and wider impact. An appropriate budget will be made available to the fellow to support their research and collaborations.

 

Please see the official advertisement for more information:

 

https://jobs.exeter.ac.uk/hrpr_webrecruitment/wrd/run/ETREC107GF.open?VACANCY_ID=455065QyIy&WVID=3817591jNg&LANG=USA

 

Best wishes

Amir


          

A postdoctoral position or a Ph.D position for data-driven computational dynamics at Kyung Hee University (2020 fall)

 Cache   

The M&S Lab in the Department of Mechanical Engineering, Kyung Hee University (South Korea) is recruiting a highly motivated postdoctoral researcher (starting summer 2020) and a Ph.D. student (starting September 2020). Candidates are expected to be highly self-motivated and independent, with high-level coding skills (for example, Fortran, Java, C/C++, Python, MATLAB). An excellent publication record is additionally required for the postdoctoral position. High-performance computing skills, such as parallel computing and/or domain decomposition, are an asset for both positions. We ask those interested to send their curriculum vitae, representative publications and contact details of at least two references to jingyun.kim@khu.ac.kr. Positions are available in the following topics: 

  • Data driven computational dynamics (Vibration, flexible multibody dynamics, multiphysics, etc) using machine learning (DNN, CNN, RNN, etc),
  • Nonlinear reduced-order modeling for transient analysis (POD, DEIM, modal derivatives, etc.; see the sketch after this list),
  • Uncertainty quantification / Stochastic finite element method
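
As a pointer for prospective applicants, the following is a minimal sketch of the POD step named in the list above, applied to synthetic snapshot data; the snapshot construction and the 99% energy tolerance are illustrative assumptions, not the lab's codes.

import numpy as np

rng = np.random.default_rng(0)
n_dof, n_snap = 1000, 50
x_grid = np.linspace(0.0, 1.0, n_dof)

# Synthetic snapshots: five sinusoidal modes with random amplitudes plus noise.
modes = np.stack([np.sin((k + 1) * np.pi * x_grid) for k in range(5)], axis=1)
snapshots = (modes @ rng.normal(size=(5, n_snap))
             + 0.01 * rng.normal(size=(n_dof, n_snap)))

# Thin SVD of the mean-centered snapshot matrix yields the POD basis.
mean = snapshots.mean(axis=1, keepdims=True)
U, s, _ = np.linalg.svd(snapshots - mean, full_matrices=False)

# Keep enough modes to capture 99% of the snapshot energy.
energy = np.cumsum(s ** 2) / np.sum(s ** 2)
r = int(np.searchsorted(energy, 0.99)) + 1
basis = U[:, :r]                                # reduced basis, n_dof x r

# Project a full-order state to r coordinates and reconstruct it.
x = snapshots[:, :1]
q = basis.T @ (x - mean)
x_rec = mean + basis @ q                        # rank-r approximation of x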

We will consider all disciplines from engineering to science: Mechanical/Civil/Aerospace engineering, Computer Science, Applied mathematics, etc.

Please contact the PI or visit our website for more information on current research opportunities: 

https://sites.google.com/site/modelingnsimulation/.


          

Session on "Data driven materials science" at the DPG Spring Meeting (Dresden, Germany)

 Cache   

Dear colleagues, 

we would like to make you aware of the topical session 

"Data driven materials science"

which is part of the MM program during the DPG Spring Meeting 2020. The latter takes place March 15-20, 2020, in Dresden.  

If you are performing experiments or simulations in this emerging field, you are most welcome to contribute your abstract. You can find the session at the bottom of the list "Themenbereiche" on the abstract submission webpage 
https://www.dpg-tagung.de/dd20/submission.html.

In addition, we will have some outstanding invited talks given by (some not yet confirmed):
- Tilmann Beck (TU Kaiserslautern)
- Cecilie Hebert (EPFL Lausanne, Switzerland)
- Jan Janssen (MPIE Düsseldorf)
- Marcus J. Neuer (BFI Düsseldorf)
- Stefan Sandfeld (TU Freiberg)

Please note that this session is not identical to the symposium "Big data driven materials science (SYBD)", which is by invitation only. Our session in MM is more focused on structure-composition-property relationships in materials science. Please see the abstract below for details. 

We are looking forward to a lively session on innovative developments!

Abstract:
----------
This session covers innovative high-throughput and materials-informatics approaches for the discovery, description and design of materials. The contributions should address recent developments in the fields of data mining, machine learning, and artificial intelligence for the identification of structure-composition-property relationships in the highly diverse, but often sparse, materials data space. Contributions from experiment, such as diffraction and various tomography techniques and materio-graphic feature identification, as well as simulation results from the atomistic up to the continuum level, are foreseen. A particular focus will be on the consideration of extended materials defects (grain boundaries, stacking faults, dislocation cores) and microstructures. Furthermore, contributions on accumulating, analyzing, interpreting, storing, and sharing fundamental knowledge about materials are solicited. Contributions may range, and preferably bridge, from physics-based materials understanding to data-driven and application-oriented development.


          

Post-doctoral Positions at Jiangsu University, Zhenjiang, China

 Cache   

Postdoc positions are available at Jiangsu University, Zhenjiang, China. Candidates with PhDs in the areas of computational mechanics, experimental mechanics and materials science are sought. We are particularly interested in candidates with experience in advanced numerical methods, surrogate modeling/surrogate-based optimization, machine learning methods, uncertainty quantification, data-driven design and optimization, etc.

 

Job responsibilities (2-year recruitment)

Publish at least 2 papers indexed by SCI (JCR Q1 or Q2).

 

Application requirements

A doctoral degree (or current doctoral candidacy), with age below 35.

 

Remuneration package

Basic salary: RMB 250,000 Yuan/year before tax for those whose PhD was awarded by a university ranked in the world Top 300, or RMB 150,000 Yuan/year before tax for those whose PhD was awarded by a university ranked outside the world Top 300.

Health insurance will be covered by Jiangsu University.

A post-doctoral apartment can be provided by the university for full-time post-docs, or alternatively a monthly rent allowance of RMB 700 Yuan.

Postdocs whose final evaluation is excellent may be given priority for promotion to a faculty position at Jiangsu University.

Other preferential treatment as provided for in the “Post-doctoral Management Stipulations of China”.

 

Application materials

Personal CV (including basic information, education background, work experience and main research achievements).

Five representative papers from the past five years, in PDF format.

 

Contact information

Professor Jian Zhang, email: jianzhang@ujs.edu.cn, please indicate the subject of the email: “Postdoc candidate - Name”.

 


          

And the Beep Goes On

 Cache   

image of a heart rate monitor

Artur Dubrawski is not a critical care physician, but his best friend is. Dubrawski, a research professor in Carnegie Mellon University's Robotics Institute, loves talking about disease symptoms with Michael Pinsky, a professor of critical care medicine, cardiovascular disease, bioengineering and more at the University of Pittsburgh Medical Center. They also love talking about data and, more important, what the data means.

The pair have been collaborating for a decade, combining the power of CMU artificial intelligence research with the clinical expertise at UPMC to unravel medical mysteries. Their skilled clinician-investigator team, including University of Pittsburgh professors Gilles Clermont and Marilyn Hravnak, has been leading the way in bedside predictive analytics. In 2014, they began to identify and predict which patients would go into shock. They hung around in intensive care units, building algorithms to make sense of the data contained in thousands of patients’ medical records.

Spending so much time in ICUs, they couldn’t help but notice the "acoustic wallpaper" of constant beeps from machines trying to alert staff to a problem.

"Very often," Dubrawski said, "those alerts, even in the best clinical hospitals, do not really reflect a medically relevant change in health status." In other words, 85% of the time, nurses are listening to false alarms.

Whether the cause is a real emergency, an oxygen sensor that slipped from someone’s finger or a tangled ECG electrode, the machines not only signal into the semi-consciousness of beep-fatigued employees, but they also stop collecting data until the problem is rectified. Pinsky and Dubrawski analyzed the alerts and categorized the urgency of response, which led them to design a protocol to help save lives.

Along the way, they found something else unexpected: they could predict tachycardia.

A cardiac condition as dangerous as it is difficult to spell, tachycardia is defined as a rapid heartbeat. Left unchecked, it can lead to any number of heart conditions for a "regular" patient. Dubrawski, in collaboration with colleague Lujie Chen, became interested specifically in tachycardia for ICU patients, where a rapid heartbeat leads to longer hospital stays, and can be the precursor to heart failure, coronary artery disease or even death.

Dubrawski and Pinsky paired up again to see if they could build another algorithm to predict which ICU patients might experience tachycardia and whether they could adapt nursing protocols to better help these patients.

They studied patient data collected over 25,000 ICU visits from a nationwide database and, together with their partners at UPMC, identified 42 vital sign features to use when developing their predictive algorithms. Working backward from the time of the tachycardia events in one-minute intervals, they looked for a "magic window" in which the condition could be identified and prevented.
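
In code, that look-back-window idea might be sketched as follows. This is not the CMU/UPMC pipeline: the synthetic vitals, the 30-minute window, and the logistic-regression classifier are stand-in assumptions.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_patients, n_minutes, n_vitals = 200, 240, 4    # toy stand-in for 42 features

vitals = rng.normal(size=(n_patients, n_minutes, n_vitals))
event = rng.integers(0, 2, size=n_patients)      # 1 = tachycardia episode
# Give the positive class an upward drift before the event so there is signal.
vitals[event == 1, -60:, 0] += np.linspace(0.0, 2.0, 60)

def window_features(x):
    # Summarize a 30-minute look-back window ending 30 minutes pre-event.
    w = x[:, -60:-30, :]
    return np.concatenate([w.mean(axis=1), w.std(axis=1),
                           w[:, -1, :] - w[:, 0, :]], axis=1)

X, y = window_features(vitals), event
clf = LogisticRegression(max_iter=1000).fit(X, y)
print("training accuracy:", clf.score(X, y))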

They looked at everything from blood pressure to the use of norepinephrine to see what combination of factors might lead to rapid heartbeat.

"Some of these patients are at risk hours before the event," Dubrawski said. "And some of these signs are likely to be missed by nurses."

While patients might not demonstrate visible symptoms of an impending, massive crisis, the researchers found, with reasonable accuracy, some patterns in the data to predict its emergence. Pinsky and Dubrawski created algorithms that allowed a computer to "learn" the signals that an injured patient’s cardio-respiratory system is deteriorating before the damage becomes irreversible. They also created a mechanism to tell a robotic system to administer the best treatments and therapies to save that person’s life.

And the friends are just getting started. Their bedside data exploration also led to discoveries around cardio-respiratory insufficiency, a grave condition Dubrawski says is more general—meaning their work in this area can be more impactful. After tackling acute crises in the ICU, they’re exploring internal bleeding—another common and dangerous occurrence in intensive care. Pinsky said the team recently discovered this new secret hiding within the data recorded from commonly used intravenous catheters placed in the superficial vein to draw blood, deliver fluids or medications.

While it’s in there, the catheter records central venous pressure (CVP), which Pinsky said is "useless in defining the onset of bleeding." But when the data was compared to the CVP data collected when the patient was stable, the researchers were able to identify bleeding as accurately as the most advanced monitoring devices. "That finding is simply amazing for all of medicine," Pinsky said, "because it opens up a window into instability without the need for more invasive monitoring."
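
A toy version of that baseline comparison can be sketched as follows; the simulated CVP trace, window length, and threshold rule are our assumptions, not the published method.

import numpy as np

rng = np.random.default_rng(0)
baseline = 8.0 + rng.normal(0.0, 0.8, size=600)   # stable-period CVP, mmHg
current = 8.0 + rng.normal(0.0, 0.8, size=300)
current[150:] -= np.linspace(0.0, 3.0, 150)       # simulated slow bleed

mu, sigma = baseline.mean(), baseline.std()

def deviates(window, k=3.0):
    # Flag windows whose mean drifts k standard errors from the baseline mean.
    se = sigma / np.sqrt(len(window))
    return abs(window.mean() - mu) > k * se

window_len = 30
flags = [deviates(current[i:i + window_len])
         for i in range(0, len(current) - window_len + 1, window_len)]
print("windows flagged:", flags)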

Dubrawski explained that routine clinical care and simple data review by a human could hardly identify all the patterns and symptoms indicating that something medically urgent is about to happen. The data is simply too complex, and it comes at the clinicians too rapidly. Artificial intelligence and machine learning can handle these complexities and paint a much more reliable picture.

"These tools are within our grasp," Dubrawski said, "and we are limited only by our imagination."

Learn more about Dubrawski's work with tachycardia


          

Other: Data Scientist - Applied Research - Bedford, Massachusetts

 Cache   
Why choose between doing meaningful work and having a fulfilling life? At MITRE, you can have both. That's because MITRE people are committed to tackling our nation's toughest challenges-and we're committed to the long-term well-being of our employees. MITRE is different from most technology companies. We are a not-for-profit corporation chartered to work for the public interest, with no commercial conflicts to influence what we do. The Research & Development centers we operate for the government create lasting impact in fields as diverse as cybersecurity, healthcare, aviation, defense, and enterprise transformation. We're making a difference every day-working for a safer, healthier, and more secure nation and world.

Join the Data Analytics team, where you will bring your machine learning, advanced analytics, and applied research skills to bear on solving problems of critical national importance. MITRE's diverse work program provides opportunities to apply your expertise and creative thinking in challenging domains such as healthcare, transportation, and national security. Employees may also participate in MITRE's internal research and development program, which provides funding for innovative applied research that addresses our sponsors' hardest problems.

The successful candidate for this position is an experienced data scientist who combines a solid theoretical and technical background with the ability to formulate problems, develop and evaluate solutions, and communicate results. Experience with advanced analytic techniques and methods (e.g., supervised and unsupervised machine learning, deep learning, data visualization) as well as hands-on software development skills are a must. Our organization values innovation and lifelong learning and believes that in today's fast-paced technological environment keeping up with the latest research and technologies is an essential part of working in this field.

Responsibilities include:

  • Apply a variety of analytical techniques to tackle customer challenges, including data mining, statistical models, predictive analytics, optimization, risk analysis, and data visualization
  • Perform original research, development, test and evaluation, and demonstration of advanced analytic capabilities
  • Build and test prototypes in MITRE, government labs, and commercial cloud environments
  • Perform independent reviews of contractor-proposed architectures, designs and products
  • Apply state-of-the-art techniques, using multiple programming languages, development environments and open-source code to drive advances in mission capabilities

Basic Qualifications:

  • Bachelor's Degree in Data Science, Computer Science or a related field
  • At least 5 years of professional experience

Required Qualifications:

  • Must be a U.S. citizen with the ability to possess and maintain a DoD clearance
  • Proficiency in use of Microsoft Office including Outlook, Excel, and Word
  • Demonstrated proficiency and strength in verbal, written, PC, presentation, and communications skills
  • Experience conducting original research using data science techniques, including machine learning, deep learning, statistical modeling, and data visualization
  • Passion for working with data and solving real-world business problems or creatively advancing business operations
  • Hands-on software development skills (Python, R, C++, C#, Java, JavaScript)
  • Strong technical writing and presentation skills
  • Proven ability to work effectively in a collaborative teaming environment

Preferred Qualifications:

  • Advanced degree in a related field of study
  • A current/active US Government Secret clearance
  • Demonstrated technical leadership
  • Knowledge of Cloud Computing technologies, particularly in the AWS environment
  • Understanding of Big Data tools (e.g. NoSQL, Spark, Hadoop, ElasticSearch)

MITRE's workplace reflects our values. We offer competitive benefits, exceptional professional development opportunities, and a culture of innovation that embraces diversity, inclusion, flexibility, collaboration, and career growth. If this sounds like the choice you want to make, then choose MITRE-and make a difference with us. For more information please visit https://www.mitre.org/careers/working-at-mitre. U.S. Citizenship is required for most positions.
          

All-optical diffractive neural networks process broadband light

 Cache   
A diffractive deep neural network is an optical machine learning framework that blends deep learning with optical diffraction and light-matter interaction to engineer diffractive surfaces that collectively perform optical computation at the speed of light. A diffractive neural network is first designed in a computer using deep learning techniques, followed by the physical fabrication of the designed layers of the neural network using, e.g., 3-D printing or lithography. Since the connection between the input and output planes of a diffractive neural network is established via diffraction of light through passive layers, the inference process and the associated optical computation do not consume any power except the light used to illuminate the object of interest.
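
For readers who want to experiment numerically, a single diffractive layer can be modeled as a phase mask followed by free-space propagation via the angular spectrum method. The sketch below is monochromatic for simplicity (the broadband case repeats such propagations across wavelengths); the grid size, pixel pitch, wavelength, and random stand-in phases are illustrative assumptions, not a trained network.

import numpy as np

n, pitch, wavelength, z = 256, 4e-4, 7.5e-4, 0.04   # grid, meters (THz regime)

def angular_spectrum(field, z):
    # Propagate a complex field a distance z with the angular spectrum method.
    fx = np.fft.fftfreq(n, d=pitch)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 / wavelength ** 2 - FX ** 2 - FY ** 2
    kz = 2.0 * np.pi * np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(1j * kz * z) * (arg > 0)             # drop evanescent components
    return np.fft.ifft2(np.fft.fft2(field) * H)

rng = np.random.default_rng(0)
phase_mask = np.exp(1j * rng.uniform(0.0, 2.0 * np.pi, (n, n)))  # stand-in layer

aperture = np.zeros((n, n))
aperture[96:160, 96:160] = 1.0                      # simple input object
field = angular_spectrum(aperture, z)               # object plane -> layer
field = angular_spectrum(field * phase_mask, z)     # layer -> output plane
intensity = np.abs(field) ** 2                      # what a detector measures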



