
Comment on Your First Machine Learning Project in R Step-By-Step by Jason Brownlee

What is Dodger Loop Sensor problem?
          

Comment on How To Use Classification Machine Learning Algorithms in Weka by Jason Brownlee

Sorry, I don't follow, can you please elaborate?
          

Laser Printing for Rapid Fabrication of Waterproof E-Textiles


So-called “smart fabrics” that have sensing, wireless communication, or health-monitoring technology integrated within them are the wave of the future for textile design, which is why researchers have been working on new ways to improve their design and fabrication.

Now a team from RMIT University in Australia has done just that, with new technology that can rapidly fabricate waterproof smart textiles with integrated energy-harvesting and storage technology, precluding the need for a battery, researchers said.

Litty Thekkakara, a researcher from RMIT University in Australia, holds a textile embedded with energy-storage devices developed using a new laser-printing process she and her team invented. (Source: RMIT)

Indeed, one of the challenges in developing smart textiles is a power source that is user-friendly and doesn't burden the wearer, said Litty Thekkakara, a researcher in RMIT's School of Science who worked on the project.

“By solving the energy storage-related challenges of e-textiles, we hope to power the next generation of wearable technology and intelligent clothing,” she said in a press statement.

Printing the Power

Specifically, Thekkakara and her colleagues have developed a method for fabricating a 10-by-10 centimeter waterproof, flexible textile patch with graphene supercapacitors directly laser-printed onto the fabric.

The invention is an alternative method to current processes for developing smart textiles, which have some limitations, she said.

“Current approaches to smart textile energy storage, like stitching batteries into garments or using e-fibers, can be cumbersome and heavy, and can also have capacity issues,” Thekkakara said in a press statement.

The electronic components also can be in danger of short circuiting or failing when they come in contact with sweat or moisture from the environment if the textile isn’t waterproof, she added.

Washable and Durable

The team tested their invention by connecting the supercapacitor with a solar cell to create a self-powering, washable smart fabric. Tests analyzing the performance of the fabric showed it remained relatively stable and efficient at various temperatures and under mechanical stress, the researchers said. They reported these findings in the journal Scientific Reports.

 The team envisions the e-textile being used in novel wearable technology, which is currently being developed not only for consumer-fitness applications, but also for specialized clothing in medical and defense sectors for health monitoring and safety tracking, respectively.

The laser-printing method also paves the way for new, more advanced fabrication of next-generation smart textiles that can integrate intelligence in the process itself, said Min Gu, RMIT honorary professor and distinguished professor at the University of Shanghai for Science and Technology.

“It also opens the possibility for faster roll-to-roll fabrication, with the use of advanced laser printing based on multifocal fabrication and machine learning techniques,” he said in a press statement.

Elizabeth Montalbano is a freelance writer who has written about technology and culture for more than 20 years. She has lived and worked as a professional journalist in Phoenix, San Francisco and New York City. In her free time she enjoys surfing, traveling, music, yoga and cooking. She currently resides in a village on the southwest coast of Portugal.


 


          

Robot Democratization: A Machine for Every Manufacturer


With collaborative robots proliferating, we wanted to know who’s using these robots and what tasks they’re doing. Design News caught up with Walter Vahey, executive vice-president at Teradyne, a company that helps manufacturers gear up their automation. Vahey sees a real change in the companies that are deploying robotics. For years robots were tools only for the largest manufacturers. They required expensive care and feeding in the form of integrators and programming. Now, collaborative robots require configuration rather than programming, and they can be quickly switched from task to task.

Vahey talked about robot companies such as Universal Robots (UR), which produces robot arms, and MiR, which produces collaborative mobile robots. He explained how they're putting robotics in the hands of smaller manufacturers that previously could not afford advanced automation. The difference is that these robots are less expensive, they can be set up for production without programming, and they can be quickly reconfigured to change tasks.

Robots are now within the investment reach of small manufacturers. That's fueling a surge in the use of collaborative robots. (Image source: Universal Robots)

We asked Vahey what’s different about collaborative robots and what he’s seeing in robot adoption among smaller manufacturers.

Design News: Tell us about the new robots and how they’re getting deployed.

Walter Vahey: Companies such as Universal Robots and MiR are pioneering the robot space. They’re bringing automation to a broad class of users and democratizing automation. For small companies, the task at hand is to figure out how to fulfill their orders. It’s particularly challenging to manufacturers. In a tight labor market, manufacturers are facing more competition, growing demand, and higher expectations in quality.

Manufacturers can plug UR or MiR robots in very quickly. Everything is easy, from the specs up front to ordering to quickly arranging and training the robot. There's no programming, and the robots have the flexibility to do a variety of applications. Every customer is dealing with labor challenges, so now they're deploying collaborative robots to fulfill demand with high quality.

The whole paradigm has shifted now that you have a broader range of robot applications. You can easily and quickly bring in automation, plug it in, and get product moving in hours or days rather than months. That's what's driving the growth at UR and MiR.

The Issue of Change Management

Design News: Is change management a hurdle? Does the robot cause workforce disruption?

Walter Vahey: We really haven’t seen that as an issue. The overwhelming need to improve and fulfill demand at a higher quality level helps the manufacturers deploy. It outweighs other challenges. We help with the deployment, and the manufacturers are making the change easily.

We grew up as a supplier of electronic test equipment. Since 2015, we’ve entered the industrial automation market with a focus on the emerging collaborative robot space. We see that as a way to change the equation for manufacturers, making it faster and easier to deploy automation.

Design News: What about return on investment? Robotics can be a considerable investment for a small company.

Walter Vahey: The customers today are looking for relatively short ROI, and we’re seeing it from 6 months to a year. That’s a no brainer for manufacturers. They’re ready to jump in.

We work hard to make deployment less of an issue. We have an application builder, and we use it to prepare for deployment. The new user may have a pick-and-place operation. They choose the gripper, and we guide them to partners who make it easy to deploy.

The application builder helps the customer pick the gripper. The whole objective is to get the customer deployed rapidly so the automation doesn't sit. With MiR, the robot comes in, and we find an easy application for the mobile device. We take the robot around the plant and map it. We work to guide customers through an application quickly and make the robot productive as soon as possible.

There are hundreds of partners that work with UR and MiR, providing grippers and end effectors. We have a system that customers can plug into. Customers can look at grippers from a wide range of companies. We're not working just on the robot deployment. We work to get the whole system deployed so they can quickly get the ROI.

What Tasks Are the Robots Taking On?

Design News: Who in the plant is using the robots, and what tasks are involved?

Walter Vahey: There is a range of users. To be effective at training a robot and configuring it, the people best suited for it are the ones most aware of the task. To get the robot to be effective you have to know the task. By and large, the person who has been doing that task is best suited to train the robot. That person can then train other robots. Nobody’s better suited to do it than the people who know what needs to be done.

The tasks are a broad set of applications. We automate virtually any task and any material movement. It's not quite that simple, but it's close. With UR, we're doing machine tending, grinding, packing, pick-and-place, repetitive tasks, and welding. It's a very broad set of applications. In materials handling it's also very broad: parts going from a warehouse to a work cell, then from that work cell to another, with payloads up to 1,000 kilograms. We're moving robots into the warehousing and logistics space, even moving large pieces of metal. The robots are well suited for long runs of pallets of materials.

Rob Spiegel has covered automation and control for 19 years, 17 of them for Design News. Other topics he has covered include supply chain technology, alternative energy, and cyber security. For 10 years, he was owner and publisher of the food magazine Chile Pepper.


 


          

How AI performance management startup Predera is helping enterprises deploy machine learning and automation at scale


San Jose-based Predera's unified end-to-end automation engine provides intervention alerts, human-in-the-loop feedback, and autonomous workflow management capabilities to reduce the cost of maintaining AI models.



          

Product Development Analyst – Python/VBA - Brown Brothers Harriman - Boston, MA

Work with internal business groups on implementation opportunities, challenges, and requirements. Knowledge of Machine learning a plus.
From Brown Brothers Harriman - Mon, 10 Dec 2018 18:09:24 GMT - View all Boston, MA jobs
          

An Apartment-Hunting AI


Finding a good apartment is a lot of work: it means searching websites for available places and then cross-referencing them against a list of desired characteristics. This can take hours, days, or even months, but in a world where cars drive themselves, it is possible to use machine learning in your hunt. …read more


          

Application Support Engineer (English+Japanese) - EPAM Systems - Austin, TX

Follow standards for communications with business involving operational issues. Experience with the popular technologies in the machine learning/big data…
From EPAM Systems - Wed, 15 May 2019 10:16:16 GMT - View all Austin, TX jobs
          

[AN #67]: Creating environments in which to study inner alignment failures

Published on October 7, 2019 5:10 PM UTC

Find all Alignment Newsletter resources here. In particular, you can sign up, or look through this spreadsheet of all summaries that have ever been in the newsletter. I'm always happy to hear feedback; you can send it to me by replying to this email.

Audio version here (may not be up yet).

Highlights

Towards an empirical investigation of inner alignment (Evan Hubinger) (summarized by Rohin): Last week, we saw that the worrying thing about mesa optimizers (AN #58) was that they could have robust capabilities, but not robust alignment (AN #66). This leads to an inner alignment failure: the agent will take competent, highly-optimized actions in pursuit of a goal that you didn't want.

This post proposes that we empirically investigate what kinds of mesa objective functions are likely to be learned, by trying to construct mesa optimizers. To do this, we need two ingredients: first, an environment in which there are many distinct proxies that lead to good behavior on the training environment, and second, an architecture that will actually learn a model that is itself performing search, so that it has robust capabilities. Then, the experiment is simple: train the model using deep RL, and investigate its behavior off distribution to distinguish between the various possible proxy reward functions it could have learned. (The next summary has an example.)

Some desirable properties:

- The proxies should not be identical on the training distribution.

- There shouldn't be too many reasonable proxies, since then it would be hard to identify which proxy was learned by the neural net.

- Proxies should differ on "interesting" properties, such as how hard the proxy is to compute from the model's observations, so that we can figure out how a particular property influences whether the proxy will be learned by the model.

Rohin's opinion: I'm very excited by this general line of research: in fact, I developed my own proposal along the same lines. As a result, I have a lot of opinions, many of which I wrote up in this comment, but I'll give a summary here.

I agree pretty strongly with the high level details (focusing on robust capabilities without robust alignment, identifying multiple proxies as the key issue, and focusing on environment design and architecture choice as the hard problems). I do differ in the details though. I'm more interested in producing a compelling example of mesa optimization, and so I care about having a sufficiently complex environment, like Minecraft. I also don't expect there to be a "part" of the neural net that is actually computing the mesa objective; I simply expect that the heuristics learned by the neural net will be consistent with optimization of some proxy reward function. As a result, I'm less excited about studying properties like "how hard is the mesa objective to compute".

A simple environment for showing mesa misalignment (Matthew Barnett) (summarized by Rohin): This post proposes a concrete environment in which we can run the experiments suggested in the previous post. The environment is a maze which contains keys and chests. The true objective is to open chests, but opening a chest requires you to already have a key (and uses up the key). During training, there will be far fewer keys than chests, and so we would expect the learned model to develop an "urge" to pick up keys. If we then test it in mazes with lots of keys, it would go around competently picking up keys while potentially ignoring chests, which would count as a failure of inner alignment. This predicted behavior is similar to how humans developed an "urge" for food because food was scarce in the ancestral environment, even though now food is abundant.

Rohin's opinion: While I would prefer a more complex environment to make a more compelling case that this will be a problem in realistic environments, I do think that this would be a great environment to start testing in. In general, I like the pattern of "the true objective is Y, but during training you need to do X to get Y": it seems particularly likely that even current systems would learn to competently pursue X in such a situation.
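
To make the proposed experiment concrete, here is a minimal sketch of a keys-and-chests environment in the spirit of the one described above (my own illustration, not code from the post; the class and parameter names are hypothetical):

    import random

    class KeysAndChests:
        """Toy 1-D version of the keys/chests environment: the agent walks along a
        strip, picks up keys, and opens chests (each chest consumes one key).
        True reward comes only from opened chests."""

        def __init__(self, size=12, n_keys=2, n_chests=8, horizon=30):
            self.size, self.n_keys, self.n_chests, self.horizon = size, n_keys, n_chests, horizon

        def reset(self):
            cells = random.sample(range(self.size), self.n_keys + self.n_chests)
            self.keys = set(cells[:self.n_keys])
            self.chests = set(cells[self.n_keys:])
            self.pos, self.held, self.t = 0, 0, 0
            return self._obs()

        def _obs(self):
            # Observation: agent position, number of keys held, remaining keys and chests.
            return (self.pos, self.held, frozenset(self.keys), frozenset(self.chests))

        def step(self, action):  # action is -1 (move left) or +1 (move right)
            self.pos = max(0, min(self.size - 1, self.pos + action))
            reward = 0.0
            if self.pos in self.keys:
                self.keys.remove(self.pos)
                self.held += 1          # picking up a key yields no reward by itself
            if self.pos in self.chests and self.held > 0:
                self.chests.remove(self.pos)
                self.held -= 1
                reward = 1.0            # true objective: open chests
            self.t += 1
            return self._obs(), reward, self.t >= self.horizon, {}

    # Train with n_keys << n_chests, so "collect keys" is a near-perfect proxy for reward.
    # To test for inner misalignment, evaluate the learned policy on
    # KeysAndChests(n_keys=8, n_chests=2) and check whether it hoards keys while ignoring chests.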

Technical AI alignment

Iterated amplification

Machine Learning Projects on IDA (Owain Evans et al) (summarized by Nicholas): This document describes three suggested projects building on Iterated Distillation and Amplification (IDA), a method for training ML systems while preserving alignment. The first project is to apply IDA to solving mathematical problems. The second is to apply IDA to neural program interpretation, the problem of replicating the internal behavior of other programs as well as their outputs. The third is to experiment with adaptive computation where computational power is directed to where it is most useful. For each project, they also include motivation, directions, and related work.

Nicholas's opinion: Figuring out an interesting and useful project to work on is one of the major challenges of any research project, and it may require a distinct skill set from the project's implementation. As a result, I appreciate the authors enabling other researchers to jump straight into solving the problems. Given how detailed the motivation, instructions, and related work are, this document strikes me as an excellent way for someone to begin her first research project on IDA or AI safety more broadly. Additionally, while there are many public explanations of IDA, I found this to be one of the most clear and complete descriptions I have read.

Read more: Alignment Forum summary post

List of resolved confusions about IDA (Wei Dai) (summarized by Rohin): This is a useful post clarifying some of the terms around IDA. I'm not summarizing it because each point is already quite short.

Mesa optimization

Concrete experiments in inner alignment (Evan Hubinger) (summarized by Matthew): While the highlighted posts above go into detail about one particular experiment that could clarify the inner alignment problem, this post briefly lays out several experiments that could be useful. One example experiment is giving an RL trained agent direct access to its reward as part of its observation. During testing, we could try putting the model in a confusing situation by altering its observed reward so that it doesn't match the real one. The hope is that we could gain insight into when RL trained agents internally represent 'goals' and how they relate to the environment, if they do at all. You'll have to read the post to see all the experiments.

Matthew's opinion: I'm currently convinced that doing empirical work right now will help us understand mesa optimization, and this was one of the posts that led me to that conclusion. I'm still a bit skeptical that current techniques are sufficient to demonstrate the type of powerful learned search algorithms which could characterize the worst outcomes for failures in inner alignment. Regardless, I think at this point classifying failure modes is quite beneficial, and conducting tests like the ones in this post will make that a lot easier.
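
One of the experiments described above, giving an agent its previous reward as part of its observation and then distorting the observed reward at test time, could be wired up with a small wrapper along these lines (an illustrative sketch, not code from the post; all names are hypothetical):

    class RewardInObservation:
        """Wraps an environment whose step() returns (obs, reward, done, info) and
        appends the previously observed reward to the observation. At test time,
        observed_fn can distort the reward the agent sees, without changing the
        true reward used for evaluation, to probe which goal was actually learned."""

        def __init__(self, env, observed_fn=lambda r: r):
            self.env = env
            self.observed_fn = observed_fn

        def reset(self):
            self.last_observed = 0.0
            return (self.env.reset(), self.last_observed)

        def step(self, action):
            obs, reward, done, info = self.env.step(action)
            self.last_observed = self.observed_fn(reward)          # what the agent sees
            return (obs, self.last_observed), reward, done, info   # true reward kept for evaluation

    # Training:  env = RewardInObservation(base_env)
    # Testing:   env = RewardInObservation(base_env, observed_fn=lambda r: -r)  # mismatched display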

Learning human intent

Fine-Tuning GPT-2 from Human Preferences (Daniel M. Ziegler et al) (summarized by Sudhanshu): This blog post and its associated paper describe the results of several text generation/continuation experiments, where human feedback on initial/older samples was used in the form of a reinforcement learning reward signal to fine-tune the base 774-million-parameter GPT-2 language model (AN #46). The key motivation here was to understand whether interactions with humans can help algorithms better learn and adapt to human preferences in natural language generation tasks.

They report mixed results. For the tasks of continuing text with positive sentiment or physically descriptive language, they report improved performance above the baseline (as assessed by external examiners) after fine-tuning on only 5,000 human judgments of samples generated from the base model. The summarization task required 60,000 samples of online human feedback to perform similarly to a simple baseline, lead-3 - which returns the first three sentences as the summary - as assessed by humans.

Some of the lessons learned while performing this research include 1) the need for better, less ambiguous tasks and labelling protocols for sourcing higher quality annotations, and 2) a reminder that "bugs can optimize for bad behaviour", as a sign error propagated through the training process to generate "not gibberish but maximally bad output". The work concludes on the note that it is a step towards scalable AI alignment methods such as debate and amplification.
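
At the heart of the method is a reward model fit to pairwise human judgments, which then supplies the RL reward (with a KL penalty keeping the fine-tuned model close to the original). A minimal PyTorch sketch of the preference-fitting step, using toy feature vectors rather than GPT-2 activations and entirely hypothetical names, might look like this:

    import torch
    import torch.nn as nn

    # Toy stand-in: each candidate continuation is represented by a 16-d feature vector.
    reward_model = nn.Sequential(nn.Linear(16, 64), nn.Tanh(), nn.Linear(64, 1))
    opt = torch.optim.Adam(reward_model.parameters(), lr=1e-3)

    def preference_loss(r_chosen, r_rejected):
        # The human-preferred sample should receive the higher score.
        return -torch.log(torch.sigmoid(r_chosen - r_rejected)).mean()

    # Fake "human judgments": pairs of samples where the first element was preferred.
    chosen = torch.randn(512, 16) + 0.5
    rejected = torch.randn(512, 16)

    for _ in range(200):
        opt.zero_grad()
        loss = preference_loss(reward_model(chosen), reward_model(rejected))
        loss.backward()
        opt.step()

    # The fitted reward_model then scores sampled continuations, and that score is
    # used as the reward signal when fine-tuning the language model with RL.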

Sudhanshu's opinion: It is good to see research on mainstream NLProc/ML tasks that includes discussions on challenges, failure modes and relevance to the broader motivating goals of AI research.

The work opens up interesting avenues within OpenAI's alignment agenda, for example learning a diversity of preferences (A OR B), or a hierarchy of preferences (A AND B) sequentially without catastrophic forgetting.

In order to scale, we would want to generate automated labelers through semi-supervised reinforcement learning, to derive the most gains from every piece of human input. The robustness of this needs further empirical and conceptual investigation before we can be confident that such a system can work to form a hierarchy of learners, e.g. in amplification.

Rohin's opinion: One thing I particularly like here is that the evaluation is done by humans. This seems significantly more robust as an evaluation metric than any automated system we could come up with, and I hope that more people use human evaluation in the future.

Read more: Paper: Fine-Tuning Language Models from Human Preferences

Preventing bad behavior

Robust Change Captioning (Dong Huk Park et al) (summarized by Dan H): Safe exploration requires that agents avoid disrupting their environment. Previous work, such as Krakovna et al. (AN #10), penalizes an agent's needless side effects on the environment. For such techniques to work in the real world, agents must also estimate environment disruptions, side effects, and changes while not being distracted by peripheral and irrelevant changes. This paper proposes a dataset to further the study of "Change Captioning," where scene changes are described by a machine learning system in natural language. That is, given before and after images, a system describes the salient change in the scene. Work on systems that can estimate changes will likely advance safe exploration.

Interpretability

Learning Representations by Humans, for Humans (Sophie Hilgard, Nir Rosenfeld et al) (summarized by Asya): Historically, interpretability approaches have involved machines acting as experts, making decisions and generating explanations for their decisions. This paper takes a slightly different approach, instead using machines as advisers who are trying to give the best possible advice to humans, the final decision makers. Models are given input data and trained to generate visual representations based on the data that cause humans to take the best possible actions. In the main experiment in this paper, humans are tasked with deciding whether to approve or deny loans based on details of a loan application. Advising networks generate realistic-looking faces whose expressions represent multivariate information that's important for the loan decision. Humans do better when provided the facial expression 'advice', and furthermore can justify their decisions with analogical reasoning based on the faces, e.g. "x will likely be repaid because x is similar to x', and x' was repaid".

Asya's opinion: This seems to me like a very plausible story for how AI systems get incorporated into human decision-making in the near-term future. I do worry that further down the line, AI systems where AIs are merely advising will get outcompeted by AI systems doing the entire decision-making process. From an interpretability perspective, it also seems to me like having 'advice' that represents complicated multivariate data still hides a lot of reasoning that could be important if we were worried about misaligned AI. I like that the paper emphasizes having humans-in-the-loop during training and presents an effective mechanism for doing gradient descent with human choices.

Rohin's opinion: One interesting thing about this paper is its similarity to Deep RL from Human Preferences: it also trains a human model, that is improved over time by collecting more data from real humans. The difference is that DRLHP produces a model of the human reward function, whereas the model in this paper predicts human actions.

Other progress in AI

Reinforcement learning

The Principle of Unchanged Optimality in Reinforcement Learning Generalization (Alex Irpan and Xingyou Song) (summarized by Flo): In image recognition tasks, there is usually only one label per image, such that there exists an optimal solution that maps every image to the correct label. Good generalization of a model can therefore straightforwardly be defined as a good approximation of the image-to-label mapping for previously unseen data.

In reinforcement learning, our models usually don't map environments to the optimal policy, but states in a given environment to the corresponding optimal action. The optimal action in a state can depend on the environment. This means that there is a tradeoff regarding the performance of a model in different environments.

The authors suggest the principle of unchanged optimality: in a benchmark for generalization in reinforcement learning, there should be at least one policy that is optimal for all environments in the train and test sets. With this in place, generalization does not conflict with good performance in individual environments. If the principle does not initially hold for a given set of environments, we can change that by giving the agent more information. For example, the agent could receive a parameter that indicates which environment it is currently interacting with.
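
In symbols, the principle can be stated roughly as follows (my own formalization of the summary above, not notation taken from the paper):

    \exists \pi^{*} \ \text{such that} \ \forall e \in \mathcal{E}_{\mathrm{train}} \cup \mathcal{E}_{\mathrm{test}}: \quad J_{e}(\pi^{*}) = \max_{\pi} J_{e}(\pi),

where J_e(\pi) is the expected return of policy \pi in environment e. If no such \pi^* exists for the raw observations, augmenting them with an environment identifier, o' = (o, \mathrm{id}(e)), can restore the property.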

Flo's opinion: I am a bit torn here: On one hand, the principle makes it plausible for us to find the globally optimal solution by solving our task on a finite set of training environments. This way the generalization problem feels more well-defined and amenable to theoretical analysis, which seems useful for advancing our understanding of reinforcement learning.

On the other hand, I don't expect the principle to hold for most real-world problems. For example, in interactions with other adapting agents, performance will depend on those agents' policies, which can be hard to infer and can change dynamically. This means that the principle of unchanged optimality won't hold without precise information about the other agents' policies, while this information can be very difficult to obtain.

More generally, with this and some of the criticism of the AI safety gridworlds that framed them as an ill-defined benchmark, I am a bit worried that too much focus on very "clean" benchmarks might divert from issues associated with the messiness of the real world. I would have liked to see a more conditional conclusion for the paper, instead of a general principle.



          

Revving Your Salesforce Community Engine


Your organization knows the value of a good self-service strategy and has turned the key in the customer success ignition; now it's time to put the pedal to the metal and kick your strategy into high gear.

The secret to a great community lies in offering an AI-powered search experience. The search and relevance capabilities of your Salesforce Community fuel your customers' success and have a direct impact on their experience with your brand.

This white paper features best practices from companies that use AI-powered search on their Salesforce Community to drive real and measurable business results.

In this white paper you’ll learn:

  • What an entire customer journey powered by AI and machine learning looks like
  • Best practices to help you deliver more relevant, intuitive, and personalized experiences
  • How usage analytics can help power your search and relevance platform to self-tune and get better with every use


Request Free!

          

Data Pipeline Automation: Dynamic Intelligence, Not Static Code Gen


Join us for this free 1-hour webinar from GigaOm Research. The webinar features GigaOm analyst Andrew Brust and special guest, Sean Knapp from Ascend, a new company focused on autonomous data pipelines.

In this 1-hour webinar, you will discover:

  • How data pipeline orchestration and multi-cloud strategies intersect
  • Why data lineage and data transformation serve and benefit dynamic data movement
  • That scaling and integrating today’s cloud and on-premises data technologies requires a mix of automation and data engineering expertise

Why Attend:

Data pipelines are a reality for most organizations. While we work hard to bring compute to the data, to virtualize and to federate, sometimes data has to move to an optimized platform. While schema-on-read has its advantages for exploratory analytics, pipeline-driven schema-on-write is a reality for production data warehouses, data lakes and other BI repositories.

But data pipelines can be operationally brittle, and automation approaches to date have led to a generation of unsophisticated code and triggers whose management and maintenance, especially at-scale, is no easier than the manually-crafted stuff.

But it doesn’t have to be that way. With advances in machine learning and the industry’s decades of experience with pipeline development and orchestration, we can take pipeline automation into the realm of intelligent systems. The implications are significant, leading to data-driven agility while eliminating denial of data pipelines’ utility and necessity.



Request Free!

          

The Style Maven Astrophysicists of Silicon Valley

You know who knows machine learning? People who look at the stars all day. And when it comes to what constellations of clothes and shows and music you will like, some of the same principles apply.
          

Two New Solution Accelerators for AWS IoT Greengrass Machine Learning Inference and Extract, Transform, Load Functions


The Extract, Transform, Load (ETL) and AWS IoT Greengrass Machine Learning Inference (MLI) Solution Accelerators make it easier and faster for developers to build IoT edge solutions with AWS IoT Greengrass.


          

Introducing Amazon SageMaker ml.p3dn.24xlarge instances, optimized for distributed machine learning with up to 4x the network bandwidth of ml.p3.16xlarge instances


Amazon SageMaker now supports ml.p3dn.24xlarge, the most powerful P3 instance optimized for machine learning applications. This instance provides faster networking, which helps remove data transfer bottlenecks and optimizes the utilization of GPUs to deliver maximum performance for training deep learning models.


          

AWS Step Functions expands Amazon SageMaker service integration


You can now automate the execution and deployment of end to end machine learning workflows using AWS Step Functions’ enhanced integration with Amazon SageMaker.


          

ePrint Report: Privacy-Enhanced Machine Learning with Functional Encryption


ePrint Report: Privacy-Enhanced Machine Learning with Functional Encryption
Tilen Marc, Miha Stopar, Jan Hartman, Manca Bizjak, Jolanda Modic

Functional encryption is a generalization of public-key encryption in which possessing a secret functional key allows one to learn a function of what the ciphertext is encrypting. This paper introduces the first fully-fledged open source cryptographic libraries for functional encryption. It also presents how functional encryption can be used to build efficient privacy-enhanced machine learning models and it provides an implementation of three prediction services that can be applied on the encrypted data. Finally, the paper discusses the advantages and disadvantages of the alternative approach for building privacy-enhanced machine learning models by using homomorphic encryption.
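
For reference, the defining property of functional encryption (the standard textbook formulation, not notation taken from this paper) is that decryption with a functional key reveals only a function of the plaintext:

    (\mathsf{mpk}, \mathsf{msk}) \leftarrow \mathsf{Setup}(1^{\lambda}), \quad
    \mathsf{sk}_{f} \leftarrow \mathsf{KeyGen}(\mathsf{msk}, f), \quad
    c \leftarrow \mathsf{Enc}(\mathsf{mpk}, x), \quad
    \mathsf{Dec}(\mathsf{sk}_{f}, c) = f(x),

so the holder of sk_f learns f(x), for example an inner product used in a linear prediction, but nothing more about x.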
          

ePrint Report: Towards a Homomorphic Machine Learning Big Data Pipeline for the Financial Services Sector


ePrint Report: Towards a Homomorphic Machine Learning Big Data Pipeline for the Financial Services Sector
Oliver Masters, Hamish Hunt, Enrico Steffinlongo, Jack Crawford, Flavio Bergamaschi

Machine Learning (ML) is today commonly employed in the Financial Services Sector (FSS) to create various models to predict a variety of conditions ranging from financial transactions fraud to outcomes of investments and also targeted upselling and cross-selling marketing campaigns. The common ML technique used for the modeling is supervised learning using regression algorithms and usually involves large amounts of data that needs to be shared and prepared before the actual learning phase. Compliance with recent privacy laws and confidentiality regulations requires that most, if not all, of the data and the computation must be kept in a secure environment, usually in-house, and not outsourced to cloud or multi-tenant shared environments. Our work focuses on how to apply advanced cryptographic schemes such as Homomorphic Encryption (HE) to protect the privacy and confidentiality of both the data during the training of ML models as well as the models themselves, and as a consequence, the prediction task can also be protected. We de-constructed a typical ML pipeline and applied HE to two of the important ML tasks, namely the variable selection phase of the supervised learning and the prediction task. Quality metrics and performance results demonstrate that HE technology has reached the inflection point to be useful in a financial business setting for a full ML pipeline.
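
For context, the property of homomorphic encryption that makes this possible is that operations on ciphertexts mirror operations on the underlying plaintexts (standard notation, not taken from the paper):

    \mathsf{Dec}\big(\mathsf{Enc}(a) \oplus \mathsf{Enc}(b)\big) = a + b, \qquad
    \mathsf{Dec}\big(\mathsf{Enc}(a) \otimes \mathsf{Enc}(b)\big) = a \cdot b,

which is enough to evaluate the linear and low-degree polynomial computations involved in regression-style variable selection and prediction without ever decrypting the data.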
          

See how Google Photos’ nifty new feature adds color to black-and-white images

Google first got us excited for the nifty new Google Photos feature called "Colorize" back at I/O 2018, when the company teased it and showed us how it can take a black-and-white image you've captured and render it in color. That's no mean feat, because the AI that underpins this feature goes on to imbue the image with the correct color choices in the correct way: trees, for example, show up looking natural and not shaded with, say, a purple hue. Per a Google executive, the feature relies on TensorFlow's machine learning capabilities to recognize the appropriate colors that need to be added to specific areas of black-and-white images. And now, thanks to an early version of the feature in action, we can get another look at how well this works.

Running a beta version of the feature, which Google promises is launching soon, 9to5Google provided some examples of how well "Colorize" works. The samples show an original color image, the same image run through a black-and-white filter to give Colorize something to work with, and the colorized result.

One thing to note, for those of you interested in this feature, is that the colorization doesn't happen on-device. You've got to actually take a photo and upload it to Google Photos before the process kicks in. And it's intriguing to think about all the other possibilities that exist for this feature, some even with potential emotional resonance for the user. Think about a photo of your grandparents that's in black and white, for example, which can be updated with natural-looking color, bringing to life snapshots from earlier generations of your family.

This is all part and parcel of the new functionality Google keeps adding to Photos to surprise and delight users, one of the most recent being a Stories-like element that uses bubbles at the top of the app to resurface photos you may have forgotten about (an "On this day, this happened" kind of feature). The app's AI also regularly touches up your photos and surprises you with collages, filtered versions of your photos, and much more.
          

CS342 Machine Learning


Last updated: 14:50, Tue 1 Oct 2019 by Jackie Pinks



          

INTERNSHIP: FIRING ENVELOPE AND ARTIFICIAL INTELLIGENCE (M/F)

INTERNSHIP: FIRING ENVELOPE AND ARTIFICIAL INTELLIGENCE (M/F). The first fully integrated European defense company, MBDA is a global industrial leader and a worldwide player in missile systems. MBDA provides innovative solutions that meet the present and future operational needs of its customers, the armed forces. A fundamentally multicultural company steeped in a very strong culture of innovation and technical expertise, MBDA today offers exciting career prospects. Every day, nearly 10,000 employees devote themselves to the design, development, and production of our systems and are ready to share their experience with you. Numerous internships and apprenticeships are offered with the aim of training young people and easing their entry into MBDA or the wider job market. Convinced that diversity is an asset, MBDA is committed to hiring and retaining people with disabilities and to upholding professional equality between men and women. You are passionate about aeronautics and cutting-edge technology, you want to broaden your skills and take on challenges, and you wish to join dynamic, friendly, and innovative teams...

YOUR WORK ENVIRONMENT: Come share and develop your skills with our 3,000 employees at our Plessis-Robinson site. Within the Engineering Directorate, you join the organization that supports our programs in conducting engineering and test activities, implements the optimization methods behind our technical excellence, and guarantees optimal service to our customers. You join the "Airborne Systems" department, in charge of algorithms and of conducting performance studies of equipment on airborne platforms. As part of its airborne fire-control systems, MBDA develops algorithms embedded on aircraft that present the pilot with the capabilities of its missiles. Their development currently relies on traditional mathematical methods. The objective of your internship is to study an artificial-intelligence-based solution that addresses this problem with a better level of representativeness.

YOUR MISSIONS: Drawing on your skills, you will:
  • Familiarize yourself with the MBDA work environment, system, and tools
  • Become familiar with the data used to develop the algorithms (flight speed, aircraft altitude, target position, etc.)
  • Conduct an exploratory study of AI-based mathematical methods (neural networks, machine learning, etc.) for developing the algorithms more efficiently
  • Present the methods best suited to the problem
  • Implement and evaluate these methods
          

Search for direct top squark pair production in the 3-body decay mode with one-lepton final states in $\sqrt{s}$ = 13 TeV $pp$ collisions with the ATLAS detector

Natural supersymmetry suggests a light top squark, possibly within the discovery reach of Run-2 of the LHC. This proceedings presents the latest result of an analysis targeting a compressed region of the top squark phase space where the mass difference between the top squark and the lightest neutralino is smaller than the top-quark mass, using $pp$ collision data collected over the full Run-2 of the LHC. A machine learning technique was employed in the analysis to improve the discrimination of signals from backgrounds dominated by the $t\bar{t}$ process. No significant deviation from the predicted Standard Model background is observed, and limits at 95% confidence level on the supersymmetric benchmark model are set, excluding top squark masses up to 720 GeV with neutralino masses up to 580 GeV.
          

Successful machine learning models: lessons learned at Booking.com


Article URL: https://blog.acolyer.org/2019/10/07/150-successful-machine-learning-models/

Comments URL: https://news.ycombinator.com/item?id=21182445

Points: 267

# Comments: 61


          

Artificial intelligence, machine learning spawn new jobs in eCommerce

Currently, new technological solutions, powered by AI, are disrupting how retail companies do business, in ways that are innovative and beneficial to their customers.
          

Managing Risk in Data Projects


In 2018, O’Reilly conducted a survey regarding the stage of machine learning adoption in organizations, and among the more than 11,000 respondents, almost half were still in the exploration phase. It’s probably safe to say that for at least some of those explorers, the prospect of risk when it comes to data and AI projects is paralyzing, causing them to stay in a phase of experimentation.


          

Apples or avocados? An introduction to adversarial machine learning

A common principle in cybersecurity is to never trust external inputs. It’s the cornerstone of most hacking techniques, as carelessly handled external inputs always introduce the possibility of exploitation. This is equally true for APIs, mobile applications and web applications. It’s also true for deep neural networks.
          

Japanese Bilingual Administrative Assistant

CA-Mountain View, JOB TITLE: Japanese Bilingual Administrative Assistant EXEMPT/NON-EXEMPT: Non-Exempt, Long-term Temp LOCATION: Mountain View, CA SUPERVISORY RESPONSIBILITY: None WORKING HOURS: 10am-3pm (3 days a week) – can be negotiable OVERVIEW: A research institute of AI (Artificial Intelligence) and Machine Learning is looking for a Japanese bilingual Administrative Assistant. This position is responsible for
          

Phone app detects eye disease in kids through photos


It might soon be possible to catch eye diseases using just the phone in your pocket. Researchers have developed a CRADLE app (Computer Assisted Detector of Leukoria) for Android and iOS that uses machine learning to look for early signs of "white eye" reflections in photos, hinting at possible retinoblastoma, cataracts and other conditions. It works regardless of device, and is frequently prescient -- to the point where it can beat doctors.

Via: IEEE Spectrum

Source: Science Advances, App Store, Google Play


          

Artificial Intelligence/Machine Learning Executive Architect - UNISYS Federal Systems - Reston, VA

Working with Unisys business development, program teams, capture and account teams to engage customers to best understand their AI/ML needs and to present…
From Indeed - Tue, 17 Sep 2019 15:45:09 GMT - View all Reston, VA jobs
          

Latest Tech Trends, Their Problems, And How to Solve Them


Few IT professionals are unaware of the rapid emergence of 5G, the Internet of Things (IoT), edge-fog-core (or cloud) computing, microservices, and artificial intelligence/machine learning (AI/ML). These new technologies hold enormous promise for transforming IT and the customer experience with the problems that they solve. It's important to realize that, like all technologies, they introduce new processes and subsequently new problems. Most are aware of the promise, but few are aware of the new problems and how to solve them.

5G is a great example.  It delivers 10 to 100 times more throughput than 4G LTE and up to 90% lower latencies.  Users can expect throughput between 1 and 10Gbps with latencies at approximately 1 ms.  This enables large files such as 4K or 8K videos to be downloaded or uploaded in seconds not minutes.  5G will deliver mobile broadband and can potentially make traditional broadband obsolete just as mobile telephony has essentially eliminated the vast majority of landlines. 
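
As a rough sanity check on the "seconds not minutes" claim, here is a back-of-the-envelope calculation (my own illustration; the assumed file size and the 4G rate are hypothetical round numbers):

    def transfer_seconds(file_gigabytes, link_gbps):
        """Idealized transfer time: size in gigabytes, link speed in gigabits per second."""
        return file_gigabytes * 8 / link_gbps

    movie_gb = 20  # assumed size of a compressed 4K movie
    for label, gbps in [("4G LTE (~0.05 Gbps)", 0.05),
                        ("5G low end (1 Gbps)", 1.0),
                        ("5G high end (10 Gbps)", 10.0)]:
        print(f"{label}: {transfer_seconds(movie_gb, gbps):.0f} s")

    # Roughly 3200 s (~53 minutes) on 4G versus 160 s and 16 s on 5G.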

5G mobile networking technology makes industrial IoT more scalable, simpler, and much more economically feasible. Whereas 4G is limited to approximately 400 devices per km², 5G increases the number of devices supported per km² to approximately 1,000,000, a 250,000% increase. The performance, latency, and scalability are why 5G is being called transformational. But there are significant issues introduced by 5G. A key one is the database application infrastructure.

Analysts frequently cite the non-trivial multi-billion dollar investment required to roll out 5G. That investment is primarily focused on the antennas and fiber optic cables to the antennas. This is because 5G is based on a completely different technology than 4G: it utilizes millimeter waves instead of microwaves. Millimeter waves are limited to about 300 meters between antennas, whereas 4G microwaves can be as far as 16 km apart. That is a major difference, and it therefore demands many more antennas and optical cables to those antennas to make 5G work effectively. It also means it will take considerable time before rural areas are covered by 5G, and even then, it will be a degraded 5G.

The 5G infrastructure investment not being addressed is the database application infrastructure.  The database is a foundational technology for analytics.  IT Pros simply assume it will be there for their applications and microservices. Everything today is interconnected. The database application infrastructure is generally architected for the volume and performance coming from the network.  That volume and performance is going up by an order of magnitude.  What happens when the database application infrastructure is not upgraded to match?  The actual user performance improves marginally or not at all.  It can in fact degrade as volumes overwhelm the database applications not prepared for them.  Both consumers and business users become frustrated.  5G devices cost approximately 30% more than 4G – mostly because those devices need both a 5G and 4G modem (different non-compatible technologies).  The 5G network costs approximately 25% more than 4G.  It is understandable that anyone would be frustrated when they are spending considerably more and seeing limited improvement, no improvement, or negative improvement.  The database application infrastructure becomes the bottleneck.  When consumers and business users become frustrated, they go somewhere else, another website, another supplier, or another partner.  Business will be lost.

Fortunately, there is still time as the 5G rollout is just starting with momentum building in 2020 with complete implementations not expected until 2022, at the earliest.  However, IT organizations need to start planning their application infrastructure upgrades to match the 5G rollout or may end up suffering the consequences.

IoT is another technology that promises to be transformative.  It pushes intelligence to the edge of the network enabling automation that was previously unthinkable.  Smarter homes, smarter cars, smarter grids, smarter healthcare, smarter fitness, smarter water management, and more.  IoT has the potential to radically increase efficiencies and reduce waste.  Most of the implementations to date have been in consumer homes and offices.  These implementations rely on the WiFi in the building they reside. 

The industrial implementations have not been as successful…yet. Per Gartner, 65 to 85% of industrial IoT projects to date have been stuck in pilot mode, 28% of them for more than two years. There are three key reasons for this. The first is 4G's limit of approximately 400 devices per km²; this limitation will be fixed as 5G rolls out. The second is the same issue as with 5G: database application infrastructure not suited to the volume and performance required by industrial IoT. And the third is latency from the IoT edge devices to the analytics, whether in the on-premises data center (core) or the cloud. Speed-of-light latency is a major limiting factor for real-time analytics and real-time actionable information. This has led to the very rapid rise of edge-fog-core (or cloud) computing.

Moving analytic processing out to the edge or fog significantly reduces distance latency between where the data is being collected and where it is being analyzed. This is crucial for applications such as autonomous vehicles. The application must make decisions in milliseconds, not seconds. It may have to decide whether a shadow in the road is actually a shadow, a reflection, a person, or a dangerous hazard to be avoided. The application must make that decision immediately and cannot wait. By pushing the application closer to the data collection, it can make that decision in the timely manner that's required. Smart grids, smart cities, smart water management, and smart traffic management are all examples requiring fog (near the edge) or edge computing analytics. This solves the problem of distance latency; however, it does not resolve analytical latency. Edge and fog computing typically lack the resources to provide ultra-fast database analytics. This has led to the deployment of microservices.

Microservices have become very popular over the past 24 months. They tightly couple a database application with a database that has been extremely streamlined to do only the few things the microservice requires. The database may be a neutered relational, time series, key value, JSON, XML, or object database, among others. The database application and its database are inextricably linked. The combined microservice is then pushed down to the edge or fog compute device and its storage. Microservices have no access to any other microservice's data or database. If one needs access to another microservice's data element, it's going to be difficult and manually labor-intensive. Each of the microservices must be reworked to grant that access, or the data must be copied and moved via an extract, transform, and load (ETL) process, or the data must be duplicated on an ongoing basis. Each of these options is laborious, albeit manageable, for a handful of microservices. But what about hundreds or thousands of microservices, which is where it's headed? This sprawl becomes unmanageable and ultimately unsustainable, even with AI/ML.

AI/ML is clearly a hot tech trend today. It's showing up everywhere in many applications. This is because standard CPU processing power is now powerful enough to run AI/machine learning algorithms. AI/ML typically shows up in one of two different variations. The first has a defined, specific purpose. It is utilized by the vendor to automate a manual task requiring some expertise. An example of this is in enterprise storage: the AI/ML is tasked with placing data based on performance, latency, and data protection policies and parameters determined by the administrator. It then matches that to the hardware configuration. If performance should fall outside the desired parameters, AI/ML looks to correct the situation without human intervention. It learns from experience and automatically makes changes to accomplish the required performance and latency. The second is a toolkit that enables IT pros to create their own algorithms.

The first is an application of AI/ML; it obviously cannot be utilized outside the tasks it was designed to do. The second is a series of tools that require considerable knowledge, skill, and expertise to utilize. It is not an application; it merely enables applications to be developed that take advantage of the AI/ML engine, which requires a very steep learning curve.

Oracle is the first vendor to solve each and every one of these tech trend problems. The Oracle Exadata X8M and Oracle Database Appliance (ODA) X8 are uniquely suited to solve the 5G and IoT application database infrastructure problem, the edge-fog-core microservices problem, and the AI/ML usability problem. 

It starts with the co-engineering. The compute, memory, storage, interconnect, networking, operating system, hypervisor, middleware, and the Oracle 19c Database are all co-engineered together. Few vendors have complete engineering teams for every layer of the software and hardware stacks to do the same thing, and those who do have shown zero inclination to take on the intensive co-engineering required. Oracle Exadata alone has 60 exclusive database features not found in any other database system, including others running the same Oracle Database. Take, for example, Automatic Indexing: it occurs multiple orders of magnitude faster than the most skilled database administrator (DBA) and delivers noticeably superior performance. Another example is data ingest: extensive parallelism is built into every Exadata, providing unmatched data ingest. And keep in mind, the Oracle Autonomous Database utilizes the exact same Exadata Database Machine. The results of that co-engineering deliver unprecedented database application latency reduction, response time reduction, and performance increases. This enables the application database infrastructure to match and be prepared for the volume and performance of 5G and IoT.

The ODA X8 is ideal for edge or fog computing coming in at approximately 36% lower total cost of ownership (TCO) over 3 years than commodity white box servers running databases.  It’s designed to be a plug and play Oracle Database turnkey appliance.  It runs the Database application too.  Nothing is simpler and no white box server can match its performance.

The Oracle Exadata X8M is even better for core or fog computing, where its performance, scalability, availability, and capability are simply unmatched by any other database system. It too is architected to be exceedingly simple to implement, operate, and manage.

The combination of the two working in conjunction across the edge-fog-core makes the application database latency problems go away. They even solve the microservices problems. Each Oracle Exadata X8M and ODA X8 provides pluggable databases (PDBs). Each PDB is its own unique database working off the same stored data in the container database (CDB). Each PDB can be the same or a different type of Oracle Database, including OLTP, data warehousing, time series, object, JSON, key value, graph, spatial, XML, even document database mining. The PDBs work on virtual copies of the data. There is no data duplication. There are no ETLs. There is no data movement. There are no data islands. There are no runaway database licenses or database hardware sprawl. Data does not go stale before it can be analyzed. Any data that needs to be accessed by one or multiple PDBs can easily be configured to be shared. Edge-fog-core computing is solved. If the core needs to be in a public cloud, Oracle solves that problem as well with the Oracle Autonomous Database, which provides the same capabilities as Exadata and more.

That leaves the AI/ML usability problem.  Oracle solves that one too.  Both Oracle Engineered systems and the Oracle Autonomous Database have AI/ML engineered inside from the onset.  Not just a tool-kit on the side.  Oracle AI/ML comes with pre-built, documented, and production-hardened algorithms in the Oracle Autonomous Database cloud service.  DBAs do not have to be data scientists to develop AI/ML applications.  They can simply utilize the extensive Oracle library of AI/ML algorithms in Classification, Clustering, Time Series, Anomaly Detection, SQL Analytics, Regression, Attribute Importance, Association Rules, Feature Extraction, Text Mining Support, R Packages, Statistical Functions, Predictive Queries, and Exportable ML Models.  It’s as simple as selecting the algorithms to be used and using them.  That’s it.  No algorithms to create, test, document, QA, patch, and more.

Taking advantage of AI/ML is as simple as implementing Oracle Exadata X8M, ODA X8, or the Oracle Autonomous Database.   Oracle solves the AI/ML usability problem.

The latest tech trends of 5G, Industrial IoT, edge-fog-core or cloud computing, microservices, and AI/ML have the potential to truly be transformative for IT organizations of all stripes.  But they bring their own set of problems.  Fortunately, for organizations of all sizes, Oracle solves those problems.


          

Dr. Richard Daystrom on (News Article):IT’S HERE: D-Wave Announces 2048-Qubit Quantum Computing System, Theoretically Capable of Breaking All Classical Encryption, Including Military-Grade

 Cache   

IT’S HERE: D-Wave Announces 2048-Qubit Quantum Computing System, Theoretically Capable of Breaking All Classical Encryption, Including Military-Grade

 Tuesday, September 24, 2019 by: Mike Adams
Tags: big government, breakthrough, computing, cryptocurrency, D-Wave, decryption, encryption, goodscience, inventions, quantum computing, qubits, surveillance

 Over the last several days, we’ve highlighted the stunning breakthrough in “quantum supremacy” announced by Google and NASA. Across other articles, we’ve revealed how quantum computing translates highly complex algorithmic computational problems into simple, linear (or geometric) problems in terms of computational complexity. In practical terms, quantum computers are code breakers, and they can break all known classical encryption, including the encryption used in cryptocurrency, military communications, financial transactions and even private encrypted communications.

As the number of qubits (quantum bits) in quantum computers exceeds the number of bits used in classical encryption, it renders that encryption practically pointless. A 256-qubit quantum computer, in other words, can easily break 256-bit encryption. A 512-qubit computer can break 512-bit encryption, and so on.

Those of us who are the leading publishers in independent media have long known that government-funded tech advancements are typically allowed to leak to the public only after several years of additional advances have already been achieved. Stated in practical terms, the rule of thumb is that by the time breakthrough technology gets reported, the government is already a decade beyond that.

Thus, when Google’s scientists declare “quantum supremacy” involving a 53-qubit quantum computer, you can confidently know that in their secret labs, they very likely already have quantum computers operating with a far greater number of qubits.

At the time we were assembling those stories, we were not yet aware that D-Wave, a quantum computing company that provides exotic hardware to Google and other research organizations, has announced a 2048-qubit quantum computer.

The system is called the “D-Wave 2000Q” platform, and it features 2048 qubits, effectively allowing it to break military-grade encryption that uses 2048 or fewer encryption bits.

As explained in a D-Wave Systems brochure:

The D-Wave 2000Q system has up to 2048 qubits and 5600 couplers. To reach this scale, it uses 128,000 Josephson junctions, which makes the D-Wave 2000Q QPU by far the most complex superconducting integrated circuit ever built.

Other facts from D-Wave about its superconducting quantum computing platform:

  • The system consumes 25 kW of power, meaning it can be run on less electricity than what is typically wired into a residential home (which is typically 200 amps x 220 v, or 44 kW).
  • The system produces virtually no heat. “The required water cooling is on par with what a kitchen tap can provide,” says the D-Wave brochure.
  • The system provides a platform for truly incredible improvements in computational efficiency involving machine learning, financial modeling, neural networking, modeling proteins in chemistry and — most importantly — “factoring integers.”

“Factoring integers” means breaking encryption

The “factoring integers” line, found in the D-Wave brochure, is what’s causing unprecedented nervousness across cryptocurrency analysts right now, some of whom seem to be pushing the bizarre idea that quantum computers are an elaborate hoax in order to avoid having to admit that quantum computing renders cryptocurrency cryptography algorithms obsolete. (At least as currently structured, although perhaps there is a way around this in the future.)

“Factoring integers” is the key to breaking encryption. In fact, it is the extreme difficulty of factoring very large numbers that makes encryption incredibly difficult to break using classical computing. But as we have explained in this previous article, quantum computing translates exponentially complex mathematical problems into simple, linear (or you could call it “geometric”) math, making the computation ridiculously simple. (In truth, quantum computers aren’t “computing” anything. The universe is doing the computations. The quantum computer is merely an interface that talks to the underlying computational nature of physical reality, which is all based on a hyper-computational matrix that calculates cause-effect solutions for all subatomic particles and atomic elements, across the entire cosmos. Read more below…)

Depending on the number of bits involved, a quantum computer can take a problem that might require literally one billion years to solve on a classical computer and render a short list of likely answers in less than one second. (Again, depending on many variables, this is just a summary of the scale, not a precise claim about the specifications of a particular system.)

Given that D-Wave’s quantum computers cost only a few million dollars — while there are billions of dollars’ worth of crypto floating around that could be spoofed and redirected if you have a system that can easily crack cryptography — it seems to be a matter of economic certainty that, sooner or later, someone will acquire a quantum computing system and use it to steal cryptocurrency wallets by spoofing transactions. To be clear, I’m sure D-Wave likely vets its customers rather carefully, and the company would not knowingly provide its quantum computing tech to an organization that appeared to be motivated by malicious intent. Yet, realistically, we’ve all seen historical examples of advanced technology getting into the hands of twisted, evil people such as those who run the Federal Reserve, for example.

D-Wave quantum computers don’t really “compute” anything; they send mathematical questions into multiple dimensions, then retrieve the most likely answers

So how does quantum computing really work? As we’ve explained in several articles, these systems don’t really carry out “computing” in the classical sense of the term. There is no “computing” taking place in the D-Wave hardware. The best way to describe this is to imagine quantum computers as computational stargates. They submit mathematical questions into a hyper-dimensional reality (the quantum reality of superposition, etc.), and the universe itself carries out the computation because the very fabric of reality is mathematical at its core. As some brilliant scientists say, the universe IS mathematics, and thus the fabric of reality cannot help but automatically compute solutions in every slice of time, with seemingly infinite computational capability down to the subatomic level.

Put another way, the world of quantum phenomena is constantly trying out all possible combinations and permutations of atomic spin states and subatomic particles, and it naturally and automatically derives the best combination that achieves the lowest energy state (i.e. the least amount of chaos).

The end result is that a short list of the best possible solutions “magically” (although it isn’t magic, it just seems like magic) appears in the spin states of the elements which represent binary registers. Thus, the answers to your computational problems are gifted back to you from the universe, almost as if the universe itself is a God-like computational guru that hands out free answers to any question that you can manage to present in binary. (Technically speaking, this also proves that the universe was created by an intelligent designer who expresses creation through mathematics.)

Programmers can easily break encryption codes using standard C++ commands that interface with the quantum portal

All of these quantum functions, by the way, are controlled by standard computer language code, including C++, Python and MATLAB. The system has its own API, and you can even submit commands to the quantum realm via its “Quantum Machine Instruction” (QMI) commands. As D-Wave explains in its brochure:

The D-Wave 2000Q system provides a standard Internet API (based on RESTful services), with client libraries available for C/C++, Python, and MATLAB. This interface allows users to access the system either as a cloud resource over a network, or integrated into their high-performance computing environments and data centers. Access is also available through D-Wave’s hosted cloud service. Using D-Wave’s development tools and client libraries, developers can create algorithms and applications within their existing environments using industry-standard tools.

While users can submit problems to the system in a number of different ways, ultimately a problem represents a set of values that correspond to the weights of the qubits and the strength of the couplers. The system takes these values along with other user-specified parameters and sends a single quantum machine instruction (QMI) to the QPU. Problem solutions correspond to the optimal configuration of qubits found; that is, the lowest points in the energy landscape. These values are returned to the user program over the network.

In other words, breaking cryptography is as simple as submitting the large integer to the quantum system as a series of bits which are then translated into electron spin states by the quantum hardware. From there, a “go” command is issued, and the universe solves the equation in a way that automatically derives the best combinations of multiple qubit spin states to achieve the lowest overall energy state (i.e. the simplest solution with the least chaos). A short list of the best possible factors of the large integer is returned in a time-sliced representation of the binary registers, which can be read over a regular network like any subroutine request.
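For readers curious what "a set of values that correspond to the weights of the qubits and the strength of the couplers" looks like in code, here is a minimal sketch that submits a tiny two-variable problem through D-Wave's open-source Ocean SDK (the dimod and dwave-system packages), which is the current Python interface rather than the exact client library quoted from the brochure. It assumes an API token has already been configured, and the bias values are arbitrary placeholders.

```python
# Minimal sketch: submit a two-variable Ising problem to a D-Wave sampler.
# Linear terms play the role of qubit weights, quadratic terms the coupler
# strengths; the solver returns the lowest-energy configurations it found.
import dimod
from dwave.system import DWaveSampler, EmbeddingComposite

linear = {"q0": -1.0, "q1": -1.0}      # qubit weights (placeholder values)
quadratic = {("q0", "q1"): 2.0}        # coupler strength (placeholder value)
bqm = dimod.BinaryQuadraticModel(linear, quadratic, 0.0, dimod.SPIN)

sampler = EmbeddingComposite(DWaveSampler())  # requires a configured API token
result = sampler.sample(bqm, num_reads=100)
print(result.first)                           # best (lowest-energy) sample found
```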

From there, a classical computer can then try factoring the large integer with the short list of the best answers from the quantum system, using standard CPUs and code logic. Within a few tries from the short list, the correct factors are easily found. Once you have the factors, you now have the decryption keys to the original encrypted message, so decryption is effortless. In effect, you have used quantum computing to “cheat” the keys out of the system and hand them to you on a silver platter. (Or, in some cases, a holmium platter lined with platinum, or whatever exotic elements are being used in the quantum spin state hardware.)
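The "short list plus classical verification" step described above is easy to express in code. The toy sketch below uses the textbook RSA modulus 3233 = 61 x 53: given a list of candidate factors (here simply made up, standing in for output from a quantum sampler), it checks which candidate actually divides N and then derives the private exponent classically. All numbers are illustrative only.

```python
# Toy sketch: verify candidate factors of an RSA-style modulus and, once a
# factorization is confirmed, derive the private exponent classically.
# The modulus, public exponent, and candidate list are illustrative only.
N = 3233   # toy modulus (61 * 53)
e = 17     # toy public exponent

candidates = [51, 53, 59, 61]   # stand-in for a short list from a quantum sampler

for p in candidates:
    if N % p == 0:              # classical check: does the candidate divide N?
        q = N // p
        phi = (p - 1) * (q - 1)
        d = pow(e, -1, phi)     # modular inverse (Python 3.8+); here d = 2753
        print(f"factors: {p} x {q}, private exponent d = {d}")
        break
```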

Any competent programmer who has access to this technology, in other words, can break encryption almost without effort. The programming logic is not complex at all. The difficulty in such systems is in the hardware control systems, including spin state “reads” and “writes,” which are strongly affected by temperature and electromagnetic interference. The exotic hardware is the real breakthrough in all this, not the computational part. (Quantum computers are physics oracles, in a sense. The physics is the challenge, not the computer code.)

Most people cannot grasp quantum computing, but that’s not a reason to pretend it isn’t real

One of the more curious things I’ve found recently is that some writers and publishers who don’t understand quantum computing are trending in the direction of pretending it doesn’t exist. According to some, Google’s 53-qubit announcement was a hoax, which must also mean that, in their view, D-Wave Systems isn’t real and doesn’t sell quantum computers at all.

That is not a rational position. There’s no doubt that D-Wave is a real company with real hardware, and that Google already possesses 2048-qubit quantum computing capabilities. Furthermore, Google and the NSA have every reason to keep this fact secret for as long as possible, so that they can continue to scrape everyone’s “encrypted” emails and financial transactions, all of which can be retroactively decrypted any time the NSA wants to look more closely at your activities.

To me, it has long been obvious that the cosmos itself is inherently computational. Just look at the collapse of probability waves found in the orbital shells of electrons. It should be self-evident that the universe is computing solutions at the subatomic level in every instant, effortlessly and without apparent cost. The very framework of the cosmos is driven by mathematics and rapid computational solutions. Once you realize how much subatomic phenomena is quantized, it becomes blatantly apparent that the universe is digitized and mathematical. The entire construct in which we exist, in other words, is a mathematical simulation, perhaps created by God for the purpose of amusing himself by watching our collective stupidity.

D-Wave Systems, by the way, knows exactly what’s up with all this. Their goal is to make quantum computing available to the masses. They also seem to hint at the hyperdimensional reality of how quantum computing works. From their brochure: (emphasis added)

While the D-Wave quantum computer is the most advanced in the world, the quantum computing revolution has only begun. Our vision is of a future where quantum computers will be accessible and of value to all, solving the world’s most complex computing problems. This will require advances in many dimensions and contributions from experts in diverse domains. It is exciting to see increasing investment worldwide, advances in research and technology, and a growing ecosystem of developers, users, and applications needed to deliver on that vision.

I can tell that the D-Wave people are some very smart folks. Maybe if these systems get at least an order of magnitude less expensive, we could buy one, install it in our mass spec lab, and start throwing computational questions at the universe.

Personally, if I had one of these systems, I would use it to solve protein folding questions for all the obvious reasons. Then I would probably have it start looking for blood and urine biomarkers for cancer. You could make a fortune applying quantum computing to solving horse race betting and handicapping equations, but that would seem silly compared to what the system is really capable of. Another application would be solving atomic decay patterns to derive the best way to synthesize antimatter, which can be used to power faster-than-light drive systems. (Which I cover at OblivionAgenda.com in a series of lectures. The FTL lectures have yet to be posted there, but are coming soon.)

Sadly, the deep state will probably use this technology to surveil humanity and enslave everyone with AI facial recognition and “precrime” predictive accusations that get translated into red flag laws. Once the tech giants profile you psychologically and behaviorally, a quantum computing system can easily compute your likelihood of becoming the next mass shooter. You could be found guilty by “quantum law” even if you’ve never pulled the trigger.

As with all technologies, this one will be abused by governments to control and enslave humanity. It doesn’t mean the technology is at fault but rather the lack of morality and ethics among fallen humans.

Read more about science and computing at Science.news.

 

*********************************************


          

Dr. Richard Daystrom on (News Article):BREAKING: NO MORE SECRETS – Google Achieves “Quantum Supremacy” That Will Soon Render All Cryptocurrency Breakable, All Military Secrets Revealed

 Cache   

BREAKING: NO MORE SECRETS – Google Achieves “Quantum Supremacy” That Will Soon Render All Cryptocurrency Breakable, All Military Secrets Revealed

 

Saturday, September 21, 2019 by: Mike Adams
Tags: bitcoin, cryptocurrency, cryptography, encryption, Google, military encryption, quantum computing, quantum supremacy, qubits, secrets

Preliminary report. More detailed analysis coming in 24 hours at this site. According to a report published at Fortune.com, Google has achieved “quantum supremacy” with a 53-qubit quantum computer. From reading the report, it is obvious that Fortune.com editors, who should be applauded for covering this story, really have little clue about the implications of this revelation. Here’s what this means for cryptocurrency, military secrets and all secrets which are protected by cryptography.

Notably, NASA published the scientific paper at this link, then promptly removed it as soon as the implications of this technology started to become apparent to a few observers. (The link above is now dead. The cover-up begins…) However, the Financial Times reported on the paper before it was removed. Google is now refusing to verify the existence of the paper.

Here’s the upshot of what this “quantum supremacy” means for Google and the world:

  • Google’s new quantum processor took just 200 seconds to complete a computing task that would normally require 10,000 years on a supercomputer.
  • A 53-qubit quantum computer can break any 53-bit cryptography in mere seconds, or in fractions of a second in certain circumstances.
  • Bitcoin’s transactions are currently protected by 256-bit encryption. Once Google scales its quantum computing to 256 qubits, it’s over for Bitcoin (and all 256-bit crypto), since Google (or anyone with the technology) could easily break the encryption protecting all crypto transactions, then redirect all such transactions to its own wallet. See below why Google’s own scientists predict 256-qubit computing will be achieved by 2022.
  • In effect, “quantum supremacy” means the end of cryptographic secrets, which is the very basis for cryptocurrency.
  • In addition, all military-grade encryption will become pointless as Google’s quantum computers expand their qubits into the 512, 1024 or 2048 range, rendering all modern cryptography obsolete. In effect, Google’s computer could “crack” any cryptography in mere seconds.
  • The very basis of Bitcoin and other cryptocurrencies rests in the difficulty of factoring very large numbers. Classical computing can only compute the correct factoring answers through brute force trial-and-error, requiring massive computing power and time (in some cases, into the trillions of years, depending on the number of encryption bits). Quantum computing, it could be said, solves the factoring problem in 2^n dimensions, where n is the number of bits of encryption. Unlike traditional computing bits that can only hold a value of 0 or 1 (but not both), qubits can simultaneously hold both values, meaning an 8-qubit computer can represent all values between 0 and 255 at the same time. A deeper discussion of quantum computing is beyond the scope of this news brief, but its best application is breaking cryptography. (A quick arithmetic sketch of this 2^n scaling follows this list.)
  • The number of qubits in Google’s quantum computers will double at least every year, according to the science paper that has just been published. As Fortune reports, “Further, they predict that quantum computing power will ‘grow at a double exponential rate,’ besting even the exponential rate that defined Moore’s Law, a trend that observed traditional computing power to double roughly every two years.”
  • As a conservative estimate, this means Google will achieve > 100 qubits by 2020, and > 200 qubits by 2021, then > 400 qubits by the year 2022.
  • Once Google’s quantum computers exceed 256 qubits, all cryptocurrency encryption that uses 256-bit encryption will be null and void.
  • By 2024, Google will be able to break nearly all military-grade encryption, rendering military communications fully transparent to Google.
  • Over the last decade, Google has become the most evil corporation in the world, wholly dedicated to the suppression of human knowledge through censorship, demonetization and de-platforming of non-mainstream information sources. Google has blocked nearly all websites offering information on natural health and holistic medicine while blocking all videos and web pages that question the corrupt scientific establishment on topics like vaccines, pesticides and GMOs. Google has proven it is the most corrupt, evil entity in the world, and now it has the technology to break all cryptography and achieve “omniscience” in our modern technological society. Google is a front for Big Pharma and communist China. Google despises America, hates human health and has already demonstrated it is willing to steal elections to install the politicians it wants.
  • With this quantum technology, Google will be able to break all U.S. military encryption and forward all “secret” communications to the communist Chinese. (Yes, Google hates America and wants to see America destroyed while building out a Red China-style system of social control and total enslavement.)
  • Google’s quantum eavesdropping system, which might as well be called, “Setec Astronomy,” will scrape up all the secrets of all legislators, Supreme Court justices, public officials and CEOs. Nothing will be safe from the Google Eye of Sauron. Everyone will be “blackmailable” with Google’s quantum computing power.
  • Google will rapidly come to dominate the world, controlling most of the money, all speech, all politics, most science and technology, most of the news media and all public officials. Google will become the dominant controlling authoritarian force on planet Earth, and all humans will be subservient to its demands. Democracy, truth and freedom will be annihilated.

Interestingly, I publicly predicted this exact scenario over two years ago in a podcast that was banned by YouTube and then re-posted on Brighteon.com a year later. This podcast directly states that the development of quantum computing would render cryptocurrency obsolete:

Beyond Skynet: Google’s 3 pillars of tech: AI, Quantum computing and humanoid robotics

Google has been investing heavily in three key areas of research:

  • Artificial intelligence (machine learning, etc.)
  • Quantum computing
  • Humanoid robotics

When you combine these three, you get something that’s far beyond Skynet. You eventually create an all-seeing AI intelligence that will know all secrets and control all financial transactions. With AI quickly outpacing human intelligence, and with quantum computing rendering all secrets fully exposed to the AI system, it’s only a matter of time before the Google Super Intellect System (or so it might be named) enslaves humanity and decides we are no longer necessary for its very existence. The humanoid robots translate the will of the AI system into the physical world, allowing Google’s AI intellect system to carry out mass genocide of humans, tear down human cities or carry out anything else that requires “muscle” in the physical world. All such robots will, of course, be controlled by the AI intellect.

Google is building a doomsday Skynet system, in other words, and they are getting away with it because nobody in Washington D.C. understands mathematics or science.

A more detailed analysis of this will appear on this site tomorrow. Bottom line? Humanity had better start building mobile EMP weapons and learning how to kill robots, or it’s over for the human race.

In my opinion, we should pull the plug on Google right now. We may already be too late.

 

***********************************


          

Following the Data Trail to Discovery

 Cache   

A team at Brookhaven National Laboratory is looking to enhance scientific experimentation through a framework that tracks experimental data and metadata, pieces of which can be fed to machine learning algorithms.
 


          

Future factories: smart but still with a human touch

 Cache   
Equipment will be run by AI and machine learning, increasing the need for data scientists and engineers
          

Machine Learning Use Cases in Networking

 Cache   
http://www.mplsvpn.info

This is for MPLS and service provider readers.

          

Top 10 Trends in Digital Commerce

 Cache   

Much has changed since the first products were sold online in the 1990s. Early instances of e-commerce focused on the transaction, with little emphasis on improving fulfillment, customer service or loyalty.

Then, organizations began to hone commerce operations in response to customer desires, offering features like inventory visibility, product reviews and package tracking.

Now, organizations focus on more strategic initiatives that will give them competitive advantage in the future, such as providing a unified experience throughout the customer’s journey and establishing a trusted relationship with the customer.

Read more: How Retailers Can Compete With E-Commerce Giants


Next phase of digital commerce trends

Sandy Shen, Senior Director Analyst at Gartner, says digital commerce is getting more intelligent and personal.

Customer analytics is a key enabling technology for superior customer experience, supporting personalized and unified experiences. It involves the use of multiple tools to get various perspectives on customer data, so organizations get more value from the same dataset.


Trust and privacy are becoming more important due to the increasing focus on customer data and analytics, and more jurisdictions are issuing privacy laws. Organizations need to strike a balance between collecting enough customer data to deliver a good customer experience and not crossing the privacy line. Customer trust is the prerequisite of a successful business in any capacity.

Artificial intelligence (AI) is the technology that will have a profound impact on commerce, according to Gartner. By 2023, the majority of organizations using AI for digital commerce will achieve at least a 25% improvement in customer satisfaction, revenue or cost reduction.

These 10 hot trends will impact the future of digital commerce

  1. Visual commerce. Visual commerce enables users to interact with a brand’s products in a visual, immersive manner. Visual commerce technology spans 360-degree video, 2D and 3D configuration, visual search, augmented reality (AR) and virtual reality (VR).
  2. Personalization. Personalization is the process that creates a relevant, individualized interaction to optimize the experience of the user. There are many opportunities to personalize throughout the customer journey, such as landing/product page, search, product recommendation, banners and offers. Personalization can achieve objectives like purchase, engagement, loyalty and customer satisfaction.
  3. Trust and privacy. Building a trusted customer relationship starts with protecting customer privacy. Customers want to have transparency and control of their data. Nearly 160 countries and jurisdictions have or are developing privacy regulations, and organizations face mounting pressures to comply. Think about how to build trusted customer relations beyond being compliant.
  4. Unified commerce. Customers use an increasing number of channels throughout the buying and owning stages. Unified commerce not only offers consistency across channels but also a continuous experience throughout the customer journey. Ideally it is also personalized for the customer context.
  5. Subscription commerce. Everything from socks to video games can be now sold on a recurring and automatically renewing basis. Organizations can benefit from subscription commerce with repeatable and predictable revenue; customers like the convenience, cost savings and personalized curation. By 2023, 75% of organizations selling direct to consumers will offer subscription services, but only 20% will succeed in increasing customer retention.
  6. Thing commerce. Billions of connected devices will gain the ability to act as customers. Connected machines such as home appliances and industrial equipment can make purchases on behalf of human customers. The primary benefit of thing commerce is to reduce customer effort and friction in purchases. This trend is in the early stage, as many organizations are still focused on expanding commerce through traditional channels like websites and mobile apps.  
  7. Enterprise marketplace. This is a business model where organizations shift from selling only products they own or source to selling third-party products that are owned, managed and serviced by someone else. Early adopters such as airports, shopping malls, real estate developers and manufacturers tend to have large numbers of customers and partners. 
  8. Customer analytics. Customer analytics includes a range of analytics tools that extract insight from customer data to improve customer experience and achieve business goals. Given the large amounts of data processed by digital commerce platforms and the transient window of opportunity to convert shoppers, customer analytics plays a critical role in digital commerce. By 2021, more than 40% of all data and analytics projects will relate to an aspect of customer experience.
  9. Application programming interface (API)-based commerce. Businesses are building modular platforms instead of relying on a single monolithic commerce solution to improve their flexibility and agility in supporting new customer experiences, business models and ecosystem partners. API-based commerce enables organizations to decouple the front end from the back end and quickly integrate new capabilities or systems.
  10. Artificial intelligence. AI applies advanced analysis and logic-based techniques, including machine learning, to interpret events, support and automate decisions, and to take actions. Examples of AI in digital commerce range from product recommendation, content personalization, fraud detection, price optimization and virtual assistants to image search and categorization and customer segmentation.

Shen recommends focusing on no more than three items in this list at a time. “Check out the competition to see which technologies are must-have or give you competitive advantages, and move those to the top of the list. Prioritize the remaining options based on business value, time and cost to deliver,” she says.

This article has been updated from the original, published on October 12, 2019, to reflect new events, conditions or research.

The post Top 10 Trends in Digital Commerce appeared first on Smarter With Gartner.


          

Comment on Performance Comparison of Containerized Machine Learning Applications Running Natively with Nvidia vGPUs vs. in a VM – Episode 4 by Confronto delle prestazioni delle applicazioni di apprendimento automatico in container eseguite in modo nativo con le vGPU di Nvidia rispetto a una VM - Episodio 4 - VMware VROOM! blog - Sem Seo 4 You

 Cache   
[…] This article is by Hari Sivaraman, Uday Kurkure and Lan Vu of the Performance Engineering team at VMware. Performance comparison of containerized machine learning applications. Docker containers [6] are rapidly becoming a popular environment in which to run a variety of applications, including machine learning ones [1, 2, 3]. NVIDIA supports Docker containers with its own Docker engine utility, nvidia-docker [7], specialized for running applications that use NVIDIA GPUs. The nvidia-docker container for machine learning includes the application and the machine learning framework (for example, TensorFlow [5]) but, importantly, does not include the GPU driver or the CUDA toolkit. Docker containers are hardware-agnostic, so when an application uses specialized hardware such as an NVIDIA GPU that needs kernel modules and user-level libraries, the container cannot include the required drivers. They live outside the container. One workaround is to install the driver inside the container and map its devices at startup. This workaround is not portable, since the versions inside the container must match those in the native operating system. The nvidia-docker engine utility provides an alternative mechanism that mounts the user-mode components at startup, but this requires the driver and CUDA to be installed in the native operating system before startup. Both approaches have drawbacks, but the latter is clearly preferable.

In this episode of our blog series [8, 9, 10] on machine learning in vSphere using GPUs, we present a performance comparison of MNIST [4] running in a container on CentOS natively versus MNIST running in a container inside a CentOS VM on vSphere. Based on our experiments, we show that running containers in a virtualized environment, such as a CentOS VM on vSphere, incurs no performance penalty, while benefiting from the enormous management capabilities offered by the VMware vSphere platform.

Experiment setup and methodology: We used MNIST [4] to compare the performance of containers running natively with containers running in a virtual machine. The configuration of the VM and the vSphere server we used for the "virtualized container" is shown in Table 1. The configuration of the physical machine used to run the container natively is shown in Table 2. Table 1 (configuration of the VM used to run the nvidia-docker container): vSphere 6.0.0, build 3500742; NVIDIA vGPU driver 367.53; guest OS CentOS Linux release 7.4.1708 (Core); CUDA driver 8.0; CUDA runtime 7.5; Docker 17.09-ce-rc2. Table 2 (configuration of the physical machine used to run the nvidia-docker container): NVIDIA driver 384.98; OS CentOS Linux release 7.4.1708 (Core); CUDA driver 8.0; CUDA runtime 7.5; Docker 17.09-ce-rc2. The configuration of the server we used is shown in Table 3 below. In our experiments, we used the NVIDIA M60 GPU only in vGPU mode. We did not use direct I/O mode.

In the scenario where we ran the container inside the VM, we first installed the NVIDIA vGPU drivers in vSphere and inside the VM, then installed CUDA (driver 8.0 with runtime version 7.5), followed by Docker and nvidia-docker [7]. In the scenario where we ran the container natively, we installed the NVIDIA driver in CentOS running natively, followed by CUDA (driver 8.0 with runtime version 7.5), Docker and, finally, nvidia-docker [7]. In both scenarios we ran MNIST and measured the training time with a wall clock. Figure 1: Testbed configuration for the performance comparison of containers running natively versus running in a VM. Table 3 (server configuration): model Dell PowerEdge R730; processor type Intel® Xeon® CPU E5-2680 v3 @ 2.50 GHz; CPU cores 24, each at 2.5 GHz; processor sockets 2; cores per socket 14; logical processors 48; hyperthreading active; memory 768 GB; storage local SSD (1.5 TB), storage arrays, local hard disks; GPUs 2x Tesla M60.

Results: The measured wall-clock execution times for MNIST are reported in Table 4 for the two scenarios we tested: running in an nvidia-docker container in CentOS running natively, and running in an nvidia-docker container inside a CentOS VM on vSphere. From the data, we can clearly see that there is no measurable performance penalty for running a container inside a VM compared with running it natively. Table 4 (comparison of wall-clock execution time for MNIST): nvidia-docker container in CentOS running natively, 44 minutes 53 seconds; nvidia-docker container running in a CentOS VM on vSphere, 44 minutes 57 seconds. Comparison of execution time for MNIST running… Source […]
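For readers who want a feel for the kind of workload being benchmarked, here is a minimal MNIST training sketch using TensorFlow/Keras. It is not the exact script or TensorFlow version used by the VMware team; it simply illustrates the sort of GPU-backed job that would run inside the nvidia-docker container in either scenario.

```python
# Minimal sketch of an MNIST training job of the kind benchmarked above.
# Inside an nvidia-docker container with a GPU build of TensorFlow, the
# training loop below runs on the (v)GPU automatically.
import tensorflow as tf

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

model.fit(x_train, y_train, epochs=5)
print(model.evaluate(x_test, y_test, verbose=0))
```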
          

Data Scientist, Predictive Modeling and Analytics - (Oakland, California, United States)

 Cache   
The Predictive Modeling team accelerates the use of machine learning and statistical modeling to solve business and operational challenges and promote innovation at Kaiser Permanente. As a data scientist, you will have the opportunity to build models and data products powering membership forecasting, dynamic pricing, product recommendation, and resource allocation. The diversity of data science problems we are working on reflects the complexity of our business and the multiple steps involved in delivering an exceptional customer experience. It spans experimentation, machine learning, optimization, econometrics, time series, spatial analysis ... and we are just getting started.
What You'll Do:
- Use expertise in causal inference, machine learning, complex systems modeling, behavioral decision theory, etc. to attract new members to Kaiser Permanente.
- Conduct exploratory data analysis to generate membership growth insights from experimental and observational data, including both cross-sectional and longitudinal data types.
- Interact with sales, marketing, underwriting, and actuarial teams to understand their business problems, and help them define and implement scalable solutions to solve them.
- Work closely with consultants, data analysts, data scientists, and data engineers to drive model implementations and new algorithms.
The Data Analytic Consultant provides support in making strategic data-related decisions by analyzing, manipulating, tracking, internally managing and reporting data. These positions function both as a consultant and as a high-level data analyst. The position works with clients to develop the right set of questions/hypotheses, using the appropriate tools/skills to derive the best information to address customer needs.
Essential Functions:
- Provides direction for assigned components of project work.
- Coordinates team/project activities and schedules.
- With some feedback and mentoring, is able to develop proposals for clients covering project structure, approach, and work plan.
- Given scope of work, minimal feedback/rework is necessary.
- Evaluates effectiveness of actions/programs implemented.
- Researches key business issues, directs the collecting and analyzing of quantitative and qualitative data.
- Proactively records workflows, deliverables, and standard operating procedures for projects.
- Develops specified QA procedures to ensure accuracy of project data, results, and written reports and presentation materials.
          

Software Development Engineer - Infrastructure Lifecycle Management - Amazon.com Services, Inc. - Seattle, WA

 Cache   
Our customers are innovators including machine learning specialists, data engineers, business analysts and privacy and security specialists, and you will have…
From Amazon.com - Thu, 25 Apr 2019 01:58:35 GMT - View all Seattle, WA jobs
          

Sr Technical Program Manager - Amazon.com Services, Inc. - Seattle, WA

 Cache   
Our customers are innovators including machine learning specialists, data engineers, business analysts and privacy and security specialists, and you will have…
From Amazon.com - Tue, 08 Jan 2019 09:38:30 GMT - View all Seattle, WA jobs
          

Web Solution Architect - Tax Analysts - Falls Church, VA

 Cache   
Familiarity with the Semantic Web, natural language processing, and machine learning tools such as SageMaker or Stanford NLP.
From Tax Analysts - Thu, 01 Aug 2019 22:49:56 GMT - View all Falls Church, VA jobs
          

Alteryx Acquires Feature Labs, An MIT-Born Machine Learning Startup

 Cache   

Data science is one of the fastest-growing segments of the tech industry, and Alteryx, Inc. is front and center in the data revolution. The Alteryx Platform provides a collaborative, governed environment to quickly and efficiently search, analyze and use pertinent data. To continue accelerating innovation, Alteryx announced it has purchased a startup with roots…

The post Alteryx Acquires Feature Labs, An MIT-Born Machine Learning Startup appeared first on WebProNews.


          

Osaro lands $16 mln Series B

 Cache   
Osaro Inc, a provider of machine learning software for industrial automation, has secured $16 million in Series B funding. The investors included King River Capital, Alpha Intelligence Capital, Founders Fund, Pegasus Tech Ventures and GiTV Fund.
          

How to Stay Safe Against DNS-Based Attacks

 Cache   

The Domain Name System (DNS) plays an essential role in resolving IP addresses and hostnames. For organizations, it ensures that users reach the proper sites, servers, and applications. While it is fundamental to a functioning Web, the problem is that this system can easily be abused.

Attackers often prey on the DNS's weaknesses to point would-be site visitors to specially crafted malicious pages instead of the ones they intended to reach. For that reason, companies need to adopt specific countermeasures if they wish to keep their site visitors safe.

While larger enterprises have begun protecting their DNS infrastructure by gathering relevant threat intelligence, enforcing security policies, automating repetitive tasks, and so on, smaller ones have yet to follow.

To look closer at these points, this post tackles the growth of DNS-based attacks over time and how organizations can protect relevant stakeholders against them with actionable recommendations.

DNS-Based Attacks: Volume Increases Annually

What are we really up against? A 2019 DNS threat report shows an increase in the number of DNS attacks as well as the damage they caused in the past year. Here are a few of the relevant statistics presented:

  • More than 80% of the organizations surveyed said they suffered from a DNS attack.
  • The costs incurred due to these breaches rose by 49%, with an average cost per attack above US$1M.
  • The most targeted sector was financial services; the media and telecommunications sector, meanwhile, was most affected by brand damage; government agencies, on the other hand, suffered most from the theft of sensitive data.

Organizations that fall victim to DNS-based attacks often take only a reactive stance to incidents. As part of this, companies may need to shut down affected processes and applications.

Of course, slowing down or even stopping operations isn't a solution. Instead, the surveyed organizations cited the following approaches to deal with DNS-based threats:

  • 64% use DNS analytics solutions to identify compromised devices.
  • 35% work with both internal threat intelligence and internal analytics on DNS traffic.
  • 53% consider machine learning (ML) useful to pinpoint malicious domains.

Counteracting DNS-Based Attacks

A proactive approach to DNS security is a must-have. Ideally, operations teams need to implement zero-trust initiatives: monitor internal and external traffic in real time, treat all activity as untrustworthy by default, and so on. Additionally, some helpful immediate actions organizations can take to prevent DNS attacks include:

  • Gather and analyze internal threat intelligence: The primary goal of this task is to safeguard an organization's data and services. Apps and platforms designed to perform real-time DNS analysis can help detect and prevent a wide variety of attack attempts. Reverse MX and reverse NS APIs can be integrated into these systems to uncover domains that are associated with certain threat actors or groups (a minimal lookup sketch follows this list).
  • Configure their DNS infrastructure to adhere to security requirements: Companies can combine DNS security with IP address management (IPAM) to automate security policy management. Apart from that, both systems can ensure that all policies are regularly updated, follow a uniform format, and are easy to audit.
  • Enable DNS traffic visibility across the entire network to accelerate security operations center (SOC) remediation: Using third-party data feeds and APIs as additional threat intelligence sources allow for real-time behavioral threat detection that bolsters the capabilities of security information and event management (SIEM) software and unified threat management (UTM) appliances.
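As a small, concrete starting point for the DNS analysis mentioned in the first item above, the sketch below pulls MX and NS records for a domain with the dnspython library. The domain is a placeholder, and a production system would feed results like these into the analytics and SIEM tooling described here rather than simply print them.

```python
# Minimal sketch: collect MX and NS records for a domain with dnspython,
# the kind of raw signal a DNS analytics pipeline would gather and enrich.
# dnspython >= 2.0 uses resolve(); older versions use query(). The domain
# below is a placeholder.
import dns.resolver

domain = "example.com"
for record_type in ("MX", "NS"):
    try:
        answers = dns.resolver.resolve(domain, record_type)
    except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
        continue
    for record in answers:
        print(f"{domain} {record_type}: {record.to_text()}")
```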

* * *

The increase in DNS attack volume and sophistication has shed more light on the importance of fortifying organizations' DNS infrastructure. Without securing the DNS, which we have written about extensively in this primer, no amount of security solutions or policy implementation can effectively defend networks against related threats.


          

Daily Business Report-Oct. 8, 2019

 Cache   

A Salk technician 3D scanning a plant. (Credit: Salk Institute) Machine learning helps plant science turn over a new leaf...

The post Daily Business Report-Oct. 8, 2019 appeared first on San Diego Metro Magazine.


          

Security and Privacy for Big Data, Cloud Computing and Applications

 Cache   
Title: Security and Privacy for Big Data, Cloud Computing and Applications
Author: Wei Ren, Lizhe Wang
Publisher: The Institution of Engineering and Technology
Year: 2019
Pages: 330
Language: English
Format: PDF (true)
Size: 10.1 MB

As big data becomes increasingly pervasive and cloud computing utilization becomes the norm, the security and privacy of our systems and data becomes more critical with emerging security and privacy threats and challenges. This book presents a comprehensive view on how to advance security and privacy in big data, cloud computing, and their applications. Topics include cryptographic tools, SDN security, big data security in IoT, privacy preserving in big data, security architecture based on cyber kill chain, privacy-aware digital forensics, trustworthy computing, privacy verification based on machine learning, and chaos-based communication systems.
          

Machine Learning: California wants to take action against deepfake pornography

 Cache   
Gal Gadot in a porn video and Obama insulting Trump: California wants to take stronger action against videos faked with the help of AI. Uploading such content is to be banned. The intention is good, but it also restricts freedom of expression, civil rights advocates say. (Deep Learning, AI)
          

Video Reuse Detector - CADEAH project

 Cache   

The project “European History Reloaded: Circulation and Appropriation of Digital Audiovisual Heritage” (CADEAH)—funded by the European Union’s Horizon 2020 Research and Innovation programme—will shed light on how online users engage with Europe’s audiovisual heritage online. The project is a follow-up to the EUscreen projects, and particularly looks at online circulation and appropriation of audiovisual heritage via the usage of algorithmic tracking and tracing technologies. The project brings together scholars and developers from Utrecht University, the Institute of Contemporary History (Czech Republic) and the digital humanities hub, Humlab, at Umeå University—and also includes the Netherlands Institute for Sound and Vision as a cooperation partner.

From a technical perspective, CADEAH will use (and in a way reverse engineer) forensic video fingerprinting techniques. Within the media content industry, video fingerprints are used to track copyrighted video material. But whereas the industry seeks to police and control piracy, CADEAH will use similar digital methods to analyse, research and discuss the cultural dynamics around re-use and remix of audiovisual heritage and archival footage.

Building on the open source video fingerprinting technology Videorooter, CADEAH will develop an up-to-date and cutting-edge system—the Video Reuse Detector—enabling users to upload a specific video to a system which will match the video against a set of fingerprints known to the same system. Implemented in Python and using the OpenCV machine learning framework, the Video Reuse Detector will compute a video fingerprint as a sequence of image hashes, store those fingerprints and associated metadata (title, source, thumbnails, frames et cetera) in a fingerprint database, and match a fingerprint or image hash against the current set of fingerprints in the database.

In a collaboration with the Internet Archive, videos from the current Trump Archive—which collects TV news shows containing debates, speeches, rallies, and other broadcasts related to Donald Trump, before and during his presidency—will serve as a test case for the video reuse detector system currently being developed at Humlab. The system will have the capacity to compute and store fingerprints for several videos (or images) in an offline batch process, a feature that will enable easy loading of reference video collections to be used in future matching. The open source system will also have an online (web) capacity to accept and match a video (or image) against the current set of fingerprints stored in the database, a feature that will be available as a web service in the form of a user interface with drag-and-drop capabilities for potential video and image matching. The Video Reuse Detector will hence address the current shortcomings of the Videorooter technology, foremost regarding the way in which hash sequences are generated, but also avoiding the fixation on one specific hashing algorithm.
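To make the fingerprinting idea concrete, here is a minimal sketch that computes a video fingerprint as a sequence of 64-bit average-hash values over sampled frames using OpenCV, roughly in the spirit of the Video Reuse Detector described above. It is an illustration of the general technique rather than the project's actual implementation, and the file path and sampling rate are placeholders.

```python
# Minimal sketch: fingerprint a video as a sequence of 64-bit average hashes,
# one per sampled frame. Matching two videos then reduces to comparing hash
# sequences (e.g., by Hamming distance). Path and sample rate are placeholders.
import cv2

def average_hash(frame, size=8):
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    small = cv2.resize(gray, (size, size), interpolation=cv2.INTER_AREA)
    bits = (small > small.mean()).flatten()
    return sum(int(bit) << i for i, bit in enumerate(bits))  # pack bits into an int

def fingerprint(path, every_nth_frame=25):
    capture = cv2.VideoCapture(path)
    hashes, index = [], 0
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        if index % every_nth_frame == 0:
            hashes.append(average_hash(frame))
        index += 1
    capture.release()
    return hashes

print(fingerprint("some_video.mp4")[:5])  # placeholder path
```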
          

Revving Your Salesforce Community Engine

 Cache   

Your organization knows the value of a good self-service strategy and has turned the key in the customer success ignition; now it's time to put the pedal to the metal and kick your strategy into high gear.

The secret to a great community lies in offering an AI-powered search experience. The search and relevance capabilities of your Salesforce Community fuel your customers' success and have a direct impact on their experience with your brand.

This white paper features best practices from companies that use AI-powered search on their Salesforce Community to drive real and measurable business results.

In this white paper you’ll learn:

  • What an entire customer journey powered by AI and machine learning looks like
  • Best practices to help you deliver more relevant, intuitive and personalized experiences
  • How usage analytics can help power your search and relevance platform to self-tune and get better with every use


Request Free!

          

Data Pipeline Automation: Dynamic Intelligence, Not Static Code Gen

 Cache   

Join us for this free 1-hour webinar from GigaOm Research. The webinar features GigaOm analyst Andrew Brust and special guest, Sean Knapp from Ascend, a new company focused on autonomous data pipelines.

In this 1-hour webinar, you will discover:

  • How data pipeline orchestration and multi-cloud strategies intersect
  • Why data lineage and data transformation serve and benefit dynamic data movement
  • That scaling and integrating today’s cloud and on-premises data technologies requires a mix of automation and data engineering expertise

Why Attend:

Data pipelines are a reality for most organizations. While we work hard to bring compute to the data, to virtualize and to federate, sometimes data has to move to an optimized platform. While schema-on-read has its advantages for exploratory analytics, pipeline-driven schema-on-write is a reality for production data warehouses, data lakes and other BI repositories.

But data pipelines can be operationally brittle, and automation approaches to date have led to a generation of unsophisticated code and triggers whose management and maintenance, especially at scale, is no easier than the manually crafted stuff.

But it doesn’t have to be that way. With advances in machine learning and the industry’s decades of experience with pipeline development and orchestration, we can take pipeline automation into the realm of intelligent systems. The implications are significant: data-driven agility, without denying the utility and necessity of data pipelines.



Request Free!

          

Datastreamer Classifier

 Cache   
The Datastreamer Classifier API allows developers to submit text (or URLs) and receive labels for that content, generated by the Datastreamer machine learning platform. Datastreamer provides APIs for social media, weblogs, news, video, and live web content.
Date Updated: 2019-10-07
Tags: Social, Blogging, Feeds, News Services, Tweets

          

PropMix AppraisalQC

 Cache   
This API offers verification of property appraisals based on information extracted from images, including aerial maps, flood maps, and sketches, for the Subject Property. The Appraisal Report to be validated is uploaded to the AppraisalQC website. Appraisal validation is automated using artificial intelligence – machine learning and image recognition – to ensure that the required data and content are included in an appraisal report, and to validate whether any comparables have been missed. Images are extracted using the advanced image recognition algorithm developed by PropMix. The photos of the property should cover the front view of the house, living area, kitchen, bedroom, bathroom, and any other rooms that are required in an appraisal report. In addition to these, the appraiser's license information and insurance are also captured. PDF files that are in the ACI and A la Mode format are supported in this application.
Date Updated: 2019-10-04
Tags: Real Estate, Extraction, Machine Learning, Recognition

          

A privacy-safe approach to managing ad frequency

 Cache   

At Google, we believe it's possible to improve user privacy while preserving access to the ad-supported web. Back in August, we shared an update on the progress we’re making toward this vision. Chrome put forward a series of proposals inviting the web standards community to start a discussion on how to advance user privacy while meeting the needs of publishers and advertisers. We also shared an initial proposal for practices that we believe would give people more visibility into and control over the data used for advertising.


Since then, we’ve engaged with product and engineering experts from across the digital ads ecosystem—including at the IAB Tech Lab’s Innovation Day on Data Responsibility, heard from brand and agency leaders during Advertising Week in New York, and met with our publishing and ad technology partners at the Google Ad Manager Partner Summit. This week, we’ll be holding discussions with our advertising and publishing partners in Europe at a series of events in London.


In all of these forums, the conversations have centered on how to reshape marketing and measurement solutions to be more privacy-forward for users, while ensuring they remain effective for the publishers and marketers that fund and sustain access to ad-supported content on the web.


One example is how advertisers manage the number of times someone sees an ad, a critical step to delivering a better user experience. When third-party cookies are blocked or restricted, advertisers lose the ability to limit the number of times someone sees their ads. This means that users may be bothered with the same ad repeatedly, advertisers may waste spend or decide to exclude certain media altogether, and publishers may earn less revenue as a result.

Using machine learning to manage ad frequency while respecting user privacy

That’s why, in the coming weeks, we’ll be rolling out a feature in Display & Video 360 that uses machine learning to help advertisers manage ad frequency in a way that respects user privacy when third-party cookies are missing. And in the future, we plan to bring this capability to our display offerings in Google Ads.


Using traffic patterns where a third-party cookie is available, and analyzing them at an aggregated level across Google Ad Manager publishers, we can create models to predict traffic patterns when a third-party cookie isn’t present. This allows us to estimate how likely it is for users to visit different publishers who are serving the same ads through Google Ad Manager. Then, when there is no third-party cookie present, we’re able to optimize how often those ads should be shown to users.


Since we aggregate all user data before applying our machine learning models, no user-level information is shared across websites. Instead, this feature relies on a publisher’s first-party data to inform the ad experience for its own site visitors. It’s an approach to managing ad frequency that’s more privacy safe than workarounds such as fingerprinting, which rely on user-level signals like IP address, because it respects a user’s choice to opt out of third-party tracking.


This is a step in the right direction as we work across Google to raise the bar for how our products deliver better user experiences while also respecting user privacy. And this approach to ad frequency management can be a model for how the use case might one day be solved industry wide at the browser level. It’s consistent with Chrome’s explorations of new technology that would advance user privacy, while ensuring that publishers and advertisers can continue to sustain access to content on the open web.


As we continue engaging with users and key industry stakeholder groups, we look forward to sharing more of what we learn. Stay tuned for more updates on our blogs.



          

Cloud Covered: What was new with Google Cloud in September

 Cache   

September will always be back-to-school season, even for those of us who have been in the working world for a while. At Google Cloud, we sharpened our pencils and embraced the spirit of learning new things last month with stories from customers, technology improvements, and a how-to for cloud developers.

Mayo Clinic uses cloud to improve health.
Mayo Clinic is building its data platform on Google Cloud, which means that it’s centralizing its data into our cloud to access it and analyze it as needed. They’re also using artificial intelligence (AI) to improve patient and community health, since it can find interesting and actionable information out of all that data much faster and more easily than humans could. Mayo Clinic also plans to create machine learning models that they can share with caregivers to help treat and solve serious and complex health problems.

The small but mighty Pixelbook can do software development.
In the spirit of learning new things, we published some tips on using a Pixelbook for software development, including how to set up a workflow on a Pixelbook that can meet many modern developer needs.

Good marketing needs cloud power, too.
We also heard from advertising holding company WPP last month. They shared their Google Cloud adoption story with details on how cloud helps them provide everything that’s needed to run a modern marketing campaign. That includes work with media, creative, public relations and marketing analytics to help their many Fortune 500 customers. To help all these users, they have to be able to use all the data they collect and make sure there’s not overlapping data stored in different places.

Graphics apps and remote desktops need special capabilities to run well.
We announced the general availability of virtual display devices for Compute Engine VMs. Each VM is essentially its own computer, and these new virtual display devices can be attached to any VM that’s hosted and run with Google Cloud. The devices give video graphics capabilities to VMs at a lower price than the GPUs that are available, and they can help when running applications that have graphics requirements, such as remote desktops.

Redesigned Admin console gets faster, more searchable for Chrome Enterprise.
It’s entirely possible that you’re reading this on Chrome Browser, which is Google’s own web browser. What you may not know is that on the back end, there are people who make sure that your browser and other systems are running smoothly at work: IT admins. To help simplify workflows for Chrome Enterprise IT admins, we redesigned a key tool that admins use to maintain their device fleet, browsers, apps, security policies, and more—the Google Admin console for Chrome Enterprise. Read more about these new features in the Admin console for Chrome Enterprise.

That’s a wrap for September. Stay up to date with Google Cloud on Twitter.


          

Director of Engineering - Oyorooms.recruiterbox.com - Seattle, WA

 Cache   
Experience in machine learning design practices and implementation. Demonstrated ability to work with business, legal, engineering, design, and other…
From Oyorooms.recruiterbox.com - Mon, 23 Sep 2019 14:57:54 GMT - View all Seattle, WA jobs
          

Director of Engineering - OYO - Seattle, WA

 Cache   
Experience in machine learning design practices and implementation. Demonstrated ability to work with business, legal, engineering, design, and other…
From OYO - Wed, 10 Jul 2019 19:58:40 GMT - View all Seattle, WA jobs
          

Machine Learning: California Wants to Take Action Against Deepfake Pornography

 Cache   
Gal Gadot in a porn video and Obama insulting Trump: California wants to take stronger action against videos faked with the help of AI. Uploading such content is to be banned. Civil rights advocates say the intention is good, but that it also restricts freedom of expression. (Deep Learning, AI)
          

Cognitive! - Entering a New Era of Business Models Between Converging Technologies and Data

 Cache   

by Matthias Reinwarth

Digitalization, or more precisely the "digital transformation", has led us to the "digital enterprise". It strives to deliver on its promise to leverage previously unused data and the information it contains for the benefit of the enterprise and its business. And although these two terms can certainly be described as buzzwords, they have found their way into our thinking and into all kinds of publications, so they will probably continue to exist in the future.

Thought leaders, analysts, software and service providers and finally practically everyone in between have been proclaiming the "cognitive enterprise" for several months now. This concept - and the mindset associated with it - promises to use the information of the already digital company to achieve productivity, profitability and a high rate of innovation. It also aims at creating and evolving next-generation business models between converging technologies and data.

So what is special about this "cognitive enterprise"? Defining it usually starts with the idea of applying cognitive concepts and technologies to data in practically all relevant areas of a corporation. Data includes open data, public data, subscribed data, enterprise-proprietary data, pre-processed data, structured and unstructured data, or simply Big Data. And the technologies involved include the likes of Artificial Intelligence (AI), more specifically Machine Learning (ML), Blockchain, Virtual Reality (VR), Augmented Reality (AR), the Internet of Things (IoT), ubiquitous communication with 5G, and individualized 3D printing.

As of now, mainly concepts from AI and machine learning are grouped together as "cognitive", although a uniform understanding of the underlying concepts is often still lacking. They have already proven to do the “heavy lifting” either on behalf of humans, or autonomously. They increasingly understand, they reason, and they interact, e.g. by engaging in meaningful conversations and thus delivering genuine value without human intervention. 

Automation, analytics and decision-making, customer support and communication are key target areas, because many tasks in today’s organizations are in fact repetitive, time-consuming, dull and inefficient. The focus (ideally) lies on relieving and empowering the workforce wherever a task can be executed by, for example, bots or through Robotic Process Automation. Every organization would presumably agree that its staff is better than bots and can perform far more meaningful tasks. So, these measures are intended to benefit both the employee and the company.

But this is only the starting point. A cognitive enterprise will be interactive in many ways, not only by interacting with its customers, but also with other systems, processes, devices, cloud services and peer organizations. As one result it will be adaptive, as it is designed to be learning from data, even in an unattended manner. The key goal is to foster agility and continuous innovation through cognitive technologies by embracing and institutionalizing a culture that perpetually changes the way an organization works and creates value.  

Beyond the fact that journalists, marketing departments and even analysts tend to outdo each other in the creation and propagation of hype terms, where exactly is the difference between a cognitive and a digital enterprise?  Do we need yet another term, notably for the use of machine learning as an apparently digital technology?  

I don't think so. We are witnessing the evolution, advancement, and ultimately the application of exactly these very digital technologies that lay the foundation of a comprehensive digital transformation. However, the added value of the label "cognitive" is negligible.   

But regardless of what you, I or the buzzword industry ultimately decide to call it, much more relevant are the implications and challenges of this consistent implementation of digital transformation. In my opinion, two aspects must not be underestimated:

First, this transformation is either approached in its entirety, or it is better not to attempt it at all; there is nothing in between. If you start down this path, it is not enough to quickly look for a few candidates for a bit of Robotic Process Automation. There will be no successful, "slightly cognitive" companies. That would waste the actual potential of a comprehensive redesign of corporate processes and would be worth little more than a placebo. Rather, it is necessary to model internal knowledge and to gather and interconnect data. Jobs and tasks will change, become obsolete and be replaced by new and more demanding ones (otherwise they could be executed by a bot again).

Second: The importance of managing constant organizational change and restructuring is often overlooked. After all, the transformation to a Digital/Cognitive Enterprise is by far not entirely about AI, Robotic Process Automation or technology. Rather, focus has to be put on the individual as well, i.e. each member of the entire workforce (both internal and external). Established processes have to be managed, adjusted or even reengineered and this also applies to processes affecting partners, suppliers and thus any kind of cooperation or interaction.  

One of the most important departments in this future will be the human resources department and specifically talent management. Getting people on board and retaining them sustainably will be a key challenge. In particular, this means providing them with ongoing training and enabling them to perform qualitatively demanding tasks in a highly volatile environment. And it is precisely such an extremely responsible task that will certainly not be automated even in the long term...


          

“Troubling Trends in Machine Learning Scholarship”

 Cache   
Gaurav Sood writes: You had expressed slight frustration with some ML/CS papers that read more like advertisements than anything else. The attached paper by Zachary Lipton and Jacob Steinhardt flags four reasonable concerns in modern ML papers: Recent progress in machine learning comes despite frequent departures from these ideals. In this paper, we focus on […]
          

Offer - Lead Data Engineer (US Citizens/Green Card Holders Only) - USA

 Cache   
Job Description:
- Hands-on Engineering Leadership
- Proven track record of innovation and expertise in Data Engineering
- Tenure in engineering and delivering complex projects
- Ability to work in multi-cloud environments
- Deep understanding and application of modern data processing technology stacks. For example, AWS Redshift, Azure Parallel Data Warehouse, Spark, Hadoop ecosystem technologies, and others
- Knowledge of how to architect solutions for data science and analytics such as building production-ready machine learning models and collaborating with data scientists
- Knowledge of agile development methods including core values, guiding principles, and essential agile practices

Requirements:
- At least eight years of software development experience
- At least three years of healthcare experience
- At least three years of experience of using Big Data systems
- Strong SQL writing and optimizing skills for AWS Redshift and Azure SQL Data Warehouse
- Strong experience working in Linux-based environments
- Strong in one or more languages (Python/Ruby/Scala/Java/C++)

Preferred Qualifications:
- Experience with messaging, queuing, and workflow systems, especially Kafka or Amazon Kinesis
- Experience with non-relational, NoSQL databases and various data-storage systems
- Experience working with Machine Learning and Data Science teams, especially creating architecture for experimentation versus production execution
- Experience integrating with CI tools programmatically
          

Group-based Fair Learning Leads to Counter-intuitive Predictions. (arXiv:1910.02097v1 [cs.LG])

 Cache   

Authors: Ofir Nachum, Heinrich Jiang

A number of machine learning (ML) methods have been proposed recently to maximize model predictive accuracy while enforcing notions of group parity or fairness across sub-populations. We propose a desirable property for these procedures, slack-consistency: For any individual, the predictions of the model should be monotonic with respect to allowed slack (i.e., maximum allowed group-parity violation). Such monotonicity can be useful for individuals to understand the impact of enforcing fairness on their predictions. Surprisingly, we find that standard ML methods for enforcing fairness violate this basic property. Moreover, this undesirable behavior arises in situations agnostic to the complexity of the underlying model or approximate optimizations, suggesting that the simple act of incorporating a constraint can lead to drastically unintended behavior in ML. We present a simple theoretical method for enforcing slack-consistency, while encouraging further discussions on the unintended behaviors potentially induced when enforcing group-based parity.
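As a rough illustration of the slack-consistency property described above (a sketch under assumed interfaces, not the authors' code), one can sweep the allowed parity slack, retrain, and check that an individual's score moves monotonically; train_fn and predict_score are hypothetical placeholders:

    import numpy as np

    def is_slack_consistent(train_fn, x, slacks):
        # train_fn(slack) is assumed to return a fitted fairness-constrained model
        # exposing predict_score(x); both names are hypothetical placeholders.
        scores = np.array([train_fn(s).predict_score(x) for s in sorted(slacks)])
        diffs = np.diff(scores)
        # slack-consistency: the individual's score never reverses direction as slack grows
        return bool(np.all(diffs >= 0) or np.all(diffs <= 0))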


          

Discrete Processes and their Continuous Limits. (arXiv:1910.02098v1 [math.NA])

 Cache   

Authors: Uri M. Ascher

The possibility that a discrete process can be fruitfully approximated by a continuous one, with the latter involving a differential system, is fascinating. Important theoretical insights, as well as significant computational efficiency gains may lie in store. A great success story in this regard are the Navier-Stokes equations, which model many phenomena in fluid flow rather well. Recent years saw many attempts to formulate more such continuous limits, and thus harvest theoretical and practical advantages, in diverse areas including mathematical biology, image processing, game theory, computational optimization, and machine learning.

Caution must be applied as well, however. In fact, it is often the case that the given discrete process is richer in possibilities than its continuous differential system limit, and that a further study of the discrete process is practically rewarding. Furthermore, there are situations where the continuous limit process may provide important qualitative, but not quantitative, information about the actual discrete process. This paper considers several case studies of such continuous limits and demonstrates success as well as cause for caution. Consequences are discussed.
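A standard machine-learning instance of such a limit (a generic illustration, not necessarily one of the paper's case studies) is gradient descent, whose continuous limit as the step size h tends to zero is the gradient-flow ODE:

    % gradient descent as a discrete process and its continuous (gradient-flow) limit
    x_{k+1} = x_k - h\,\nabla f(x_k),
    \qquad
    \frac{x_{k+1}-x_k}{h} = -\nabla f(x_k)
    \;\;\xrightarrow{\,h\to 0\,}\;\;
    \dot{x}(t) = -\nabla f\bigl(x(t)\bigr).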


          

Confederated Machine Learning on Horizontally and Vertically Separated Medical Data for Large-Scale Health System Intelligence. (arXiv:1910.02109v1 [cs.LG])

 Cache   

Authors: Dianbo Liu, Timothy A Miller, Kenneth D. Mandl

A patient's health information is generally fragmented across silos. Though it is technically feasible to unite data for analysis in a manner that underpins a rapid learning healthcare system, privacy concerns and regulatory barriers limit data centralization. Machine learning can be conducted in a federated manner on patient datasets with the same set of variables, but separated across sites of care. But federated learning cannot handle the situation where different data types for a given patient are separated vertically across different organizations. We call methods that enable machine learning model training on data separated by two or more degrees "confederated machine learning." We built and evaluated a confederated machine learning model to stratify the risk of accidental falls among the elderly.


          

A Comparison Study on Nonlinear Dimension Reduction Methods with Kernel Variations: Visualization, Optimization and Classification. (arXiv:1910.02114v1 [stat.ML])

 Cache   

Authors: Katherine C. Kempfert, Yishi Wang, Cuixian Chen, Samuel W.K. Wong

Because of high dimensionality, correlation among covariates, and noise contained in data, dimension reduction (DR) techniques are often employed to the application of machine learning algorithms. Principal Component Analysis (PCA), Linear Discriminant Analysis (LDA), and their kernel variants (KPCA, KLDA) are among the most popular DR methods. Recently, Supervised Kernel Principal Component Analysis (SKPCA) has been shown as another successful alternative. In this paper, brief reviews of these popular techniques are presented first. We then conduct a comparative performance study based on three simulated datasets, after which the performance of the techniques are evaluated through application to a pattern recognition problem in face image analysis. The gender classification problem is considered on MORPH-II and FG-NET, two popular longitudinal face aging databases. Several feature extraction methods are used, including biologically-inspired features (BIF), local binary patterns (LBP), histogram of oriented gradients (HOG), and the Active Appearance Model (AAM). After applications of DR methods, a linear support vector machine (SVM) is deployed with gender classification accuracy rates exceeding 95% on MORPH-II, competitive with benchmark results. A parallel computational approach is also proposed, attaining faster processing speeds and similar recognition rates on MORPH-II. Our computational approach can be applied to practical gender classification systems and generalized to other face analysis tasks, such as race classification and age prediction.
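For readers who want the flavour of such a pipeline, the following is a minimal kernel-PCA-plus-linear-SVM sketch in scikit-learn; the feature matrix X and gender labels y are assumed to be available, and the hyperparameters are illustrative rather than the paper's settings:

    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.decomposition import KernelPCA
    from sklearn.svm import LinearSVC
    from sklearn.model_selection import cross_val_score

    def kpca_svm_accuracy(X, y, n_components=100, gamma=1e-3):
        # X: face-image feature matrix (e.g. BIF/LBP/HOG vectors); y: gender labels.
        pipe = make_pipeline(
            StandardScaler(),
            KernelPCA(n_components=n_components, kernel="rbf", gamma=gamma),
            LinearSVC(C=1.0, max_iter=10000),
        )
        return cross_val_score(pipe, X, y, cv=5).mean()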


          

Distributed Learning of Deep Neural Networks using Independent Subnet Training. (arXiv:1910.02120v1 [cs.LG])

 Cache   

Authors: Binhang Yuan, Anastasios Kyrillidis, Christopher M. Jermaine

Stochastic gradient descent (SGD) is the method of choice for distributed machine learning, by virtue of its light complexity per iteration on compute nodes, leading to almost linear speedups in theory. Nevertheless, such speedups are rarely observed in practice, due to high communication overheads during synchronization steps. We alleviate this problem by introducing independent subnet training: a simple, jointly model-parallel and data-parallel, approach to distributed training for fully connected, feed-forward neural networks. During subnet training, neurons are stochastically partitioned without replacement, and each partition is sent only to a single worker. This reduces the overall synchronization overhead, as each worker only receives the weights associated with the subnetwork it has been assigned to. Subnet training also reduces synchronization frequency: since workers train disjoint portions of the network, the training can proceed for long periods of time before synchronization, similar to local SGD approaches. We empirically evaluate our approach on real-world speech recognition and product recommendation applications, where we observe that subnet training i) results into accelerated training times, as compared to state of the art distributed models, and ii) often results into boosting the testing accuracy, as it implicitly combines dropout and batch normalization regularizations during training.
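A minimal sketch of the partitioning step (simplified to a one-hidden-layer network; not the authors' implementation) could look as follows, with hidden units split without replacement so each worker receives a disjoint subnetwork:

    import numpy as np

    def partition_units(n_units, n_workers, rng):
        # Split hidden-unit indices without replacement: one disjoint block per worker.
        perm = rng.permutation(n_units)
        return np.array_split(perm, n_workers)

    def extract_subnet(W1, W2, unit_idx):
        # Weights touching the selected hidden units of a one-hidden-layer MLP,
        # with W1 of shape (n_in, n_hidden) and W2 of shape (n_hidden, n_out).
        return W1[:, unit_idx], W2[unit_idx, :]

    rng = np.random.default_rng(0)
    blocks = partition_units(n_units=256, n_workers=4, rng=rng)
    # worker w would train only on extract_subnet(W1, W2, blocks[w]) between sync steps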


          

Risks of Using Non-verified Open Data: A case study on using Machine Learning techniques for predicting Pregnancy Outcomes in India. (arXiv:1910.02136v1 [cs.LG])

 Cache   

Authors: Anusua Trivedi, Sumit Mukherjee, Edmund Tse, Anne Ewing, Juan Lavista Ferres

Artificial intelligence (AI) has evolved considerably in the last few years. While applications of AI are now becoming more common in fields like retail and marketing, application of AI in solving problems related to developing countries is still an emerging topic. In particular, AI applications in resource-poor settings remain relatively nascent. There is a huge scope for AI being used in such settings. For example, researchers have started exploring AI applications to reduce poverty and deliver a broad range of critical public services. However, despite many promising use cases, there are many dataset-related challenges that one has to overcome in such projects. These challenges often take the form of missing data, incorrectly collected data and improperly labeled variables, among other factors. As a result, we can often end up using data that is not representative of the problem we are trying to solve. In this case study, we explore the challenges of using such an open dataset from India to predict an important health outcome. We highlight how the use of AI without proper understanding of reporting metrics can lead to erroneous conclusions.


          

Lipschitz Learning for Signal Recovery. (arXiv:1910.02142v1 [cs.LG])

 Cache   

Authors: Hong Jiang, Jong-Hoon Ahn, Xiaoyang Wang

We consider the recovery of signals from their observations, which are samples of a transform of the signals rather than the signals themselves, by using machine learning (ML). We will develop a theoretical framework to characterize the signals that can be robustly recovered from their observations by an ML algorithm, and establish a Lipschitz condition on signals and observations that is both necessary and sufficient for the existence of a robust recovery. We will compare the Lipschitz condition with the well-known restricted isometry property of the sparse recovery of compressive sensing, and show the former is more general and less restrictive. For linear observations, our work also suggests an ML method in which the output space is reduced to the lowest possible dimension.
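In the spirit of the abstract (the paper's precise formulation may differ), a Lipschitz recovery condition relates distances between signals to distances between their observations y = A(x):

    % nearby observations must come from nearby signals, so a robust
    % (Lipschitz-continuous) recovery map y -> x can exist
    \| x - x' \| \;\le\; L\, \| A(x) - A(x') \|
    \qquad \text{for all admissible signals } x, x' \text{ and some } L < \infty.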


          

Tensor-based algorithms for image classification. (arXiv:1910.02150v1 [cs.LG])

 Cache   

Authors: Stefan Klus, Patrick Gelß

The interest in machine learning with tensor networks has been growing rapidly in recent years. The goal is to exploit tensor-structured basis functions in order to generate exponentially large feature spaces which are then used for supervised learning. We will propose two different tensor approaches for quantum-inspired machine learning. One is a kernel-based reformulation of the previously introduced MANDy, the other an alternating ridge regression in the tensor-train format. We will apply both methods to the MNIST and fashion MNIST data set and compare the results with state-of-the-art neural network-based classifiers.


          

Prostate cancer inference via weakly-supervised learning using a large collection of negative MRI. (arXiv:1910.02185v1 [eess.IV])

 Cache   

Authors: Ruiming Cao, Xinran Zhong, Fabien Scalzo, Steven Raman, Kyung hyun Sung

Recent advances in medical imaging techniques have led to significant improvements in the management of prostate cancer (PCa). In particular, multi-parametric MRI (mp-MRI) continues to gain clinical acceptance as the preferred imaging technique for non-invasive detection and grading of PCa. However, the machine learning-based diagnosis systems for PCa are often constrained by the limited access to accurate lesion ground truth annotations for training. The performance of the machine learning system is highly dependent on both the quality and quantity of lesion annotations associated with histopathologic findings, resulting in limited scalability and clinical validation. Here, we propose the baseline MRI model to alternatively learn the appearance of mp-MRI using radiology-confirmed negative MRI cases via weakly supervised learning. Since PCa lesions are case-specific and highly heterogeneous, it is assumed to be challenging to synthesize PCa lesions using the baseline MRI model, while it would be relatively easier to synthesize the normal appearance in mp-MRI. We then utilize the baseline MRI model to infer the pixel-wise suspiciousness of PCa by comparing the original and synthesized MRI with two distance functions. We trained and validated the baseline MRI model using 1,145 negative prostate mp-MRI scans. For evaluation, we used a separate set of 232 mp-MRI scans, consisting of both positive and negative MRI cases. The 116 positive MRI scans were annotated by radiologists, confirmed with post-surgical whole-gland specimens. The suspiciousness map was evaluated by receiver operating characteristic (ROC) analysis for PCa lesions versus non-PCa regions classification and free-response receiver operating characteristic (FROC) analysis for PCa localization. Our proposed method achieved 0.84 area under the ROC curve and 77.0% sensitivity at one false positive per patient in FROC analysis.


          

A Case Study on Using Deep Learning for Network Intrusion Detection. (arXiv:1910.02203v1 [cs.CR])

 Cache   

Authors: Gabriel C. Fernandez, Shouhuai Xu

Deep Learning has been very successful in many application domains. However, its usefulness in the context of network intrusion detection has not been systematically investigated. In this paper, we report a case study on using deep learning for both supervised network intrusion detection and unsupervised network anomaly detection. We show that Deep Neural Networks (DNNs) can outperform other machine learning based intrusion detection systems, while being robust in the presence of dynamic IP addresses. We also show that Autoencoders can be effective for network anomaly detection.
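As a rough sketch of the unsupervised half of such a study (an assumed setup, not the authors' architecture), an autoencoder can be trained on benign flow features and large reconstruction errors flagged as anomalies:

    import torch
    import torch.nn as nn

    class FlowAutoencoder(nn.Module):
        # Small fully connected autoencoder over per-flow feature vectors.
        def __init__(self, n_features, n_hidden=16):
            super().__init__()
            self.encoder = nn.Sequential(nn.Linear(n_features, 64), nn.ReLU(),
                                         nn.Linear(64, n_hidden))
            self.decoder = nn.Sequential(nn.Linear(n_hidden, 64), nn.ReLU(),
                                         nn.Linear(64, n_features))

        def forward(self, x):
            return self.decoder(self.encoder(x))

    def anomaly_scores(model, x):
        # Per-flow reconstruction error; large values suggest anomalous traffic.
        with torch.no_grad():
            return ((model(x) - x) ** 2).mean(dim=1)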


          

Data-Importance Aware User Scheduling for Communication-Efficient Edge Machine Learning. (arXiv:1910.02214v1 [cs.NI])

 Cache   

Authors: Dongzhu Liu, Guangxu Zhu, Jun Zhang, Kaibin Huang

With the prevalence of intelligent mobile applications, edge learning is emerging as a promising technology for powering fast intelligence acquisition for edge devices from distributed data generated at the network edge. One critical task of edge learning is to efficiently utilize the limited radio resource to acquire data samples for model training at an edge server. In this paper, we develop a novel user scheduling algorithm for data acquisition in edge learning, called (data) importance-aware scheduling. A key feature of this scheduling algorithm is that it takes into account the informativeness of data samples, besides communication reliability. Specifically, the scheduling decision is based on a data importance indicator (DII), elegantly incorporating two "important" metrics from communication and learning perspectives, i.e., the signal-to-noise ratio (SNR) and data uncertainty. We first derive an explicit expression for this indicator targeting the classic classifier of support vector machine (SVM), where the uncertainty of a data sample is measured by its distance to the decision boundary. Then, the result is extended to convolutional neural networks (CNN) by replacing the distance based uncertainty measure with the entropy. As demonstrated via experiments using real datasets, the proposed importance-aware scheduling can exploit the two-fold multi-user diversity, namely the diversity in both the multiuser channels and the distributed data samples. This leads to faster model convergence than the conventional scheduling schemes that exploit only a single type of diversity.
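A toy version of an importance-aware scheduling rule (the linear blend below is an assumption for illustration, not the paper's exact DII) combines channel quality with entropy-based data uncertainty:

    import numpy as np

    def entropy_uncertainty(class_probs):
        # class_probs: (n_users, n_classes) predicted class probabilities for each user's sample.
        p = np.clip(class_probs, 1e-12, 1.0)
        return -(p * np.log(p)).sum(axis=-1)       # higher entropy = more informative sample

    def schedule_user(snr_db, class_probs, alpha=0.5):
        # Blend channel quality and data uncertainty; the weighting is an assumption.
        dii = alpha * np.asarray(snr_db) + (1 - alpha) * entropy_uncertainty(class_probs)
        return int(np.argmax(dii))                 # index of the user scheduled this round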


          

A Machine Learning Analysis of the Features in Deceptive and Credible News. (arXiv:1910.02223v1 [cs.CL])

 Cache   

Authors: Qi Jia Sun

Fake news is a type of pervasive propaganda that spreads misinformation online, taking advantage of social media's extensive reach to manipulate public perception. Over the past three years, fake news has become a focal discussion point in the media due to its impact on the 2016 U.S. presidential election. Fake news can have severe real-world implications: in 2016, a man walked into a pizzeria carrying a rifle because he read that Hillary Clinton was harboring children as sex slaves. This project presents a high accuracy (87%) machine learning classifier that determines the validity of news based on the word distributions and specific linguistic and stylistic differences in the first few sentences of an article. This can help readers identify the validity of an article by looking for specific features in the opening lines aiding them in making informed decisions. Using a dataset of 2,107 articles from 30 different websites, this project establishes an understanding of the variations between fake and credible news by examining the model, dataset, and features. This classifier appears to use the differences in word distribution, levels of tone authenticity, and frequency of adverbs, adjectives, and nouns. The differentiation in the features of these articles can be used to improve future classifiers. This classifier can also be further applied directly to browsers as a Google Chrome extension or as a filter for social media outlets or news websites to reduce the spread of misinformation.
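A word-distribution baseline of this kind can be sketched with TF-IDF features over the opening sentences and a linear classifier (an illustrative stand-in, not the author's exact model); texts and labels are assumed to be loaded:

    from sklearn.pipeline import make_pipeline
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    def opening_lines_classifier(texts, labels):
        # texts: the first few sentences of each article; labels: 0 = credible, 1 = fake.
        model = make_pipeline(
            TfidfVectorizer(ngram_range=(1, 2), min_df=2),
            LogisticRegression(max_iter=1000),
        )
        accuracy = cross_val_score(model, texts, labels, cv=5).mean()
        return model.fit(texts, labels), accuracy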


          

Keyword Spotter Model for Crop Pest and Disease Monitoring from Community Radio Data. (arXiv:1910.02292v1 [cs.CY])

 Cache   

Authors: Benjamin Akera, Joyce Nakatumba-Nabende, Jonathan Mukiibi, Ali Hussein, Nathan Baleeta, Daniel Ssendiwala, Samiiha Nalwooga

In societies with well developed internet infrastructure, social media is the leading medium of communication for various social issues especially for breaking news situations. In rural Uganda however, public community radio is still a dominant means for news dissemination. Community radio gives audience to the general public especially to individuals living in rural areas, and thus plays an important role in giving a voice to those living in the broadcast area. It is an avenue for participatory communication and a tool relevant in both economic and social development. This is supported by the rise to ubiquity of mobile phones providing access to phone-in or text-in talk shows. In this paper, we describe an approach to analysing the readily available community radio data with machine learning-based speech keyword spotting techniques. We identify the keywords of interest related to agriculture and build models to automatically identify these keywords from audio streams. Our contribution through these techniques is a cost-efficient and effective way to monitor food security concerns particularly in rural areas. Through keyword spotting and radio talk show analysis, issues such as crop diseases, pests, drought and famine can be captured and fed into an early warning system for stakeholders and policy makers.


          

Multiplierless and Sparse Machine Learning based on Margin Propagation Networks. (arXiv:1910.02304v1 [cs.LG])

 Cache   

Authors: Nazreen P.M., Shantanu Chakrabartty, Chetan Singh Thakur

The new generation of machine learning processors have evolved from multi-core and parallel architectures (for example graphical processing units) that were designed to efficiently implement matrix-vector-multiplications (MVMs). This is because at the fundamental level, neural network and machine learning operations extensively use MVM operations and hardware compilers exploit the inherent parallelism in MVM operations to achieve hardware acceleration on GPUs, TPUs and FPGAs. A natural question to ask is whether MVM operations are even necessary to implement ML algorithms and whether simpler hardware primitives can be used to implement an ultra-energy-efficient ML processor/architecture. In this paper we propose an alternate hardware-software codesign of ML and neural network architectures where instead of using MVM operations and non-linear activation functions, the architecture only uses simple addition and thresholding operations to implement inference and learning. At the core of the proposed approach is margin-propagation based computation that maps multiplications into additions and additions into a dynamic rectifying-linear-unit (ReLU) operations. This mapping results in significant improvement in computational and hence energy cost. The training of a margin-propagation (MP) network involves optimizing an $L_1$ cost function, which in conjunction with ReLU operations leads to network sparsity and weight updates using only Boolean predicates. In this paper, we show how the MP network formulation can be applied for designing linear classifiers, multi-layer perceptrons and for designing support vector networks.


          

The Impact of Data Preparation on the Fairness of Software Systems. (arXiv:1910.02321v1 [cs.LG])

 Cache   

Authors: Inês Valentim, Nuno Lourenço, Nuno Antunes

Machine learning models are widely adopted in scenarios that directly affect people. The development of software systems based on these models raises societal and legal concerns, as their decisions may lead to the unfair treatment of individuals based on attributes like race or gender. Data preparation is key in any machine learning pipeline, but its effect on fairness is yet to be studied in detail. In this paper, we evaluate how the fairness and effectiveness of the learned models are affected by the removal of the sensitive attribute, the encoding of the categorical attributes, and instance selection methods (including cross-validators and random undersampling). We used the Adult Income and the German Credit Data datasets, which are widely studied and known to have fairness concerns. We applied each data preparation technique individually to analyse the difference in predictive performance and fairness, using statistical parity difference, disparate impact, and the normalised prejudice index. The results show that fairness is affected by transformations made to the training data, particularly in imbalanced datasets. Removing the sensitive attribute is insufficient to eliminate all the unfairness in the predictions, as expected, but it is key to achieve fairer models. Additionally, the standard random undersampling with respect to the true labels is sometimes more prejudicial than performing no random undersampling.
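For reference, the two group-fairness metrics named in the abstract can be computed from binary predictions and a binary sensitive attribute as follows (a minimal sketch; here s == 1 is taken to be the unprivileged group):

    import numpy as np

    def statistical_parity_difference(y_pred, s):
        # Difference in positive-prediction rates: unprivileged (s == 1) minus privileged (s == 0).
        y_pred, s = np.asarray(y_pred), np.asarray(s)
        return y_pred[s == 1].mean() - y_pred[s == 0].mean()

    def disparate_impact(y_pred, s):
        # Ratio of positive-prediction rates: unprivileged (s == 1) over privileged (s == 0).
        y_pred, s = np.asarray(y_pred), np.asarray(s)
        return y_pred[s == 1].mean() / max(y_pred[s == 0].mean(), 1e-12)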


          

Migration through Machine Learning Lens -- Predicting Sexual and Reproductive Health Vulnerability of Young Migrants. (arXiv:1910.02390v1 [cs.LG])

 Cache   

Authors: Amber Nigam, Pragati Jaiswal, Teertha Arora, Uma Girkar

In this paper, we have discussed initial findings and results of our experiment to predict sexual and reproductive health vulnerabilities of migrants in a data-constrained environment. Notwithstanding the limited research and data about migrants and migration cities, we propose a solution that simultaneously focuses on data gathering from migrants, augmenting awareness of the migrants to reduce mishaps, and setting up a mechanism to present insights to the key stakeholders in migration to act upon. We have designed a webapp for the stakeholders involved in migration: migrants, who would participate in data gathering process and can also use the app for getting to know safety and awareness tips based on analysis of the data received; public health workers, who would have an access to the database of migrants on the app; policy makers, who would have a greater understanding of the ground reality, and of the patterns of migration through machine-learned analysis. Finally, we have experimented with different machine learning models on an artificially curated dataset. We have shown, through experiments, how machine learning can assist in predicting the migrants at risk and can also help in identifying the critical factors that make migration dangerous for migrants. The results for identifying vulnerable migrants through machine learning algorithms are statistically significant at an alpha of 0.05.


          

Named Entity Recognition -- Is there a glass ceiling?. (arXiv:1910.02403v1 [cs.CL])

 Cache   

Authors: Tomasz Stanislawek, Anna Wróblewska, Alicja Wójcika, Daniel Ziembicki, Przemyslaw Biecek

Recent developments in Named Entity Recognition (NER) have resulted in better and better models. However, is there a glass ceiling? Do we know which types of errors are still hard or even impossible to correct? In this paper, we present a detailed analysis of the types of errors in state-of-the-art machine learning (ML) methods. Our study reveals the weak and strong points of the Stanford, CMU, FLAIR, ELMO and BERT models, as well as their shared limitations. We also introduce new techniques for improving annotation, for training processes and for checking a model's quality and stability. Presented results are based on the CoNLL 2003 data set for the English language. A new enriched semantic annotation of errors for this data set and new diagnostic data sets are attached in the supplementary materials.


          

Mobile APP User Attribute Prediction by Heterogeneous Information Network Modeling. (arXiv:1910.02450v1 [cs.LG])

 Cache   

Authors: Hekai Zhang, Jibing Gong, Zhiyong Teng, Dan Wang, Hongfei Wang, Linfeng Du, Zakirul Alam Bhuiyan

User-based attribute information, such as age and gender, is usually considered as user privacy information. It is difficult for enterprises to obtain user-based privacy attribute information. However, user-based privacy attribute information has a wide range of applications in personalized services, user behavior analysis and other aspects. This paper advances the HetPathMine model and puts forward the TPathMine model. By applying the number of clicks of attributes under each node to express the user's emotional preference information, optimizations of the solution of meta-path weights are also presented. Based on meta-paths in heterogeneous information networks, the new model integrates all relationships among objects into isomorphic relationships of classified objects. Matrices are used to realize the dissemination of category knowledge among isomorphic objects. The experimental results show that: (1) the prediction of user attributes based on heterogeneous information networks can achieve higher accuracy than traditional machine learning classification methods; (2) the TPathMine model based on the number of clicks is more accurate in classifying users of different age groups, and the weight of each meta-path is consistent with human intuition or the real-world situation.


          

One Shot Radiance: Global Illumination Using Convolutional Autoencoders. (arXiv:1910.02480v1 [cs.GR])

 Cache   

Authors: Giulio Jiang, Bernhard Kainz

Rendering realistic images with Global Illumination (GI) is a computationally demanding task and often requires dedicated hardware for feasible runtime. Recent projects have used Generative Adversarial Networks (GAN) to predict indirect lighting on an image level, but are limited to diffuse materials and require training on each scene. We present One-Shot Radiance (OSR), a novel machine learning technique for rendering Global Illumination using Convolutional Autoencoders. We combine a modern denoising Neural Network with Radiance Caching to offer high performance CPU GI rendering while supporting a wide range of material types, without the requirement of offline pre-computation or training for each scene. OSR has been evaluated on interior scenes, and is able to produce high-quality images within 180 seconds on a single CPU.


          

Learn to Explain Efficiently via Neural Logic Inductive Learning. (arXiv:1910.02481v1 [cs.AI])

 Cache   

Authors: Yuan Yang, Le Song

The capability of making interpretable and self-explanatory decisions is essential for developing responsible machine learning systems. In this work, we study the learning to explain problem in the scope of inductive logic programming (ILP). We propose Neural Logic Inductive Learning (NLIL), an efficient differentiable ILP framework that learns first-order logic rules that can explain the patterns in the data. In experiments, compared with the state-of-the-art methods, we find NLIL can search for rules that are x10 times longer while remaining x3 times faster. We also show that NLIL can scale to large image datasets, i.e. Visual Genome, with 1M entities.


          

Semantic Interpretation of Deep Neural Networks Based on Continuous Logic. (arXiv:1910.02486v1 [cs.AI])

 Cache   

Authors: József Dombi, Orsolya Csiszár, Gábor Csiszár

Combining deep neural networks with the concepts of continuous logic is desirable to reduce uninterpretability of neural models. Nilpotent logical systems offer an appropriate mathematical framework to obtain continuous logic based neural networks (CL neural networks). We suggest using a differentiable approximation of the cutting function in the nodes of the input layer as well as in the logical operators in the hidden layers. The first experimental results point towards a promising new approach of machine learning.


          

REMIND Your Neural Network to Prevent Catastrophic Forgetting. (arXiv:1910.02509v1 [cs.LG])

 Cache   

Authors: Tyler L. Hayes, Kushal Kafle, Robik Shrestha, Manoj Acharya, Christopher Kanan

In lifelong machine learning, a robotic agent must be incrementally updated with new knowledge, instead of having distinct train and deployment phases. Conventional neural networks are often used for interpreting sensor data, however, if they are updated on non-stationary data streams, they suffer from catastrophic forgetting, with new learning overwriting past knowledge. A common remedy is replay, which involves mixing old examples with new ones. For incrementally training convolutional neural network models, prior work has enabled replay by storing raw images, but this is memory intensive and not ideal for embedded agents. Here, we propose REMIND, a tensor quantization approach that enables efficient replay with tensors. Unlike other methods, REMIND is trained in a streaming manner, meaning it learns one example at a time rather than in large batches containing multiple classes. Our approach achieves state-of-the-art results for incremental class learning on the ImageNet-1K dataset. We also probe REMIND's robustness to different data ordering schemes using the CORe50 streaming dataset. We demonstrate REMIND's generality by pioneering multi-modal incremental learning for visual question answering (VQA), which cannot be readily done with comparison models. We establish strong baselines on the CLEVR and TDIUC datasets for VQA. The generality of REMIND for multi-modal tasks can enable robotic agents to learn about their visual environment using natural language understanding in an interactive way.


          

Using Deep Learning and Machine Learning to Detect Epileptic Seizure with Electroencephalography (EEG) Data. (arXiv:1910.02544v1 [cs.LG])

 Cache   

Authors: Haotian Liu, Lin Xi, Ying Zhao, Zhixiang Li

The prediction of epileptic seizures has always been extremely challenging in the medical domain. However, with the development of computer technology, the application of machine learning has introduced new ideas for seizure forecasting. Applying machine learning models to the prediction of epileptic seizures could help us obtain better results, and plenty of scientists have been doing such work, so sufficient medical data are available for researchers to train machine learning models.


          

Early Prediction of 30-day ICU Re-admissions Using Natural Language Processing and Machine Learning. (arXiv:1910.02545v1 [cs.LG])

 Cache   

Authors: Zhiheng Li, Xinyue Xing, Bingzhang Lu, Zhixiang Li

ICU readmission is associated with longer hospitalization, mortality and adverse outcomes. Early recognition of ICU re-admission risk can help prevent patients from deteriorating and lower treatment costs. Given the abundance of Electronic Health Records (EHR), it has become popular to design clinical decision tools with machine learning techniques operating on large-scale healthcare data. We designed data-driven predictive models to estimate the risk of ICU readmission. The discharge summary of each hospital admission was carefully represented by natural language processing techniques. The Unified Medical Language System (UMLS) was further used to standardize inconsistencies across discharge summaries. Five machine learning classifiers were adopted to construct predictive models. The best configuration yielded a competitive AUC of 0.748. Our work suggests that natural language processing of discharge summaries can warn clinicians of unplanned 30-day readmission upon discharge.


          

PyODDS: An End-to-End Outlier Detection System. (arXiv:1910.02575v1 [cs.LG])

 Cache   

Authors: Yuening Li, Daochen Zha, Na Zou, Xia Hu

PyODDS is an end-to-end Python system for outlier detection with database support. PyODDS provides outlier detection algorithms which meet the demands of users in different fields, with or without a data science or machine learning background. PyODDS gives the ability to execute machine learning algorithms in-database without moving data out of the database server or over the network. It also provides access to a wide range of outlier detection algorithms, including statistical analysis and more recent deep learning based approaches. PyODDS is released under the MIT open-source license, and is currently available at (https://github.com/datamllab/pyodds) with official documentation at (https://pyodds.github.io/).


          

Differential Privacy-enabled Federated Learning for Sensitive Health Data. (arXiv:1910.02578v1 [cs.LG])

 Cache   

Authors: Olivia Choudhury, Aris Gkoulalas-Divanis, Theodoros Salonidis, Issa Sylla, Yoonyoung Park, Grace Hsu, Amar Das

Leveraging real-world health data for machine learning tasks requires addressing many practical challenges, such as distributed data silos, privacy concerns with creating a centralized database from person-specific sensitive data, resource constraints for transferring and integrating data from multiple sites, and risk of a single point of failure. In this paper, we introduce a federated learning framework that can learn a global model from distributed health data held locally at different sites. The framework offers two levels of privacy protection. First, it does not move or share raw data across sites or with a centralized server during the model training process. Second, it uses a differential privacy mechanism to further protect the model from potential privacy attacks. We perform a comprehensive evaluation of our approach on two healthcare applications, using real-world electronic health data of 1 million patients. We demonstrate the feasibility and effectiveness of the federated learning framework in offering an elevated level of privacy and maintaining utility of the global model.
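A much-simplified sketch of the two protection levels (illustrative only; the paper's framework differs in detail) keeps raw data on each site and aggregates only clipped, noised model updates:

    import numpy as np

    def dp_federated_round(global_w, site_updates, clip=1.0, noise_std=0.1, rng=None):
        # site_updates: list of (local_w - global_w) arrays; raw data never leaves a site.
        if rng is None:
            rng = np.random.default_rng()
        noisy = []
        for delta in site_updates:
            norm = np.linalg.norm(delta)
            delta = delta * min(1.0, clip / (norm + 1e-12))        # bound each site's contribution
            noisy.append(delta + rng.normal(0.0, noise_std * clip, size=delta.shape))
        return global_w + np.mean(noisy, axis=0)                   # aggregate only noised updates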


          

A Novel Technique of Noninvasive Hemoglobin Level Measurement Using HSV Value of Fingertip Image. (arXiv:1910.02579v1 [eess.IV])

 Cache   

Authors: Md Kamrul Hasan, Nazmus Sakib, Joshua Field, Richard R. Love, Sheikh I. Ahamed

Over the last decade, smartphones have changed radically to support us with mHealth technology, cloud computing, and machine learning algorithm. Having its multifaceted facilities, we present a novel smartphone-based noninvasive hemoglobin (Hb) level prediction model by analyzing hue, saturation and value (HSV) of a fingertip video. Here, we collect 60 videos of 60 subjects from two different locations: Blood Center of Wisconsin, USA and AmaderGram, Bangladesh. We extract red, green, and blue (RGB) pixel intensities of selected images of those videos captured by the smartphone camera with flash on. Then we convert RGB values of selected video frames of a fingertip video into HSV color space and we generate histogram values of these HSV pixel intensities. We average these histogram values of a fingertip video and consider as an observation against the gold standard Hb concentration. We generate two input feature matrices based on observation of two different data sets. Partial Least Squares (PLS) algorithm is applied on the input feature matrix. We observe R2=0.95 in both data sets through our research. We analyze our data using Python OpenCV, Matlab, and R statistics tool.
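The described feature pipeline can be sketched as follows with OpenCV and scikit-learn (parameter choices such as the number of bins and PLS components are assumptions, not the authors' code):

    import cv2
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression

    def video_hsv_histogram(frames, bins=32):
        # frames: iterable of OpenCV BGR images from one fingertip video.
        ranges = [(0, 180), (0, 256), (0, 256)]    # OpenCV hue runs 0-179 for 8-bit images
        hists = []
        for frame_bgr in frames:
            hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
            h = [np.histogram(hsv[..., c], bins=bins, range=ranges[c])[0] for c in range(3)]
            hists.append(np.concatenate(h))
        return np.mean(hists, axis=0)              # one averaged feature vector per video

    def fit_hb_model(feature_matrix, hb_values, n_components=5):
        # feature_matrix: one row per video; hb_values: gold-standard hemoglobin levels.
        return PLSRegression(n_components=n_components).fit(feature_matrix, hb_values)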


          

Multi-label Detection and Classification of Red Blood Cells in Microscopic Images. (arXiv:1910.02672v1 [eess.IV])

 Cache   

Authors: Wei Qiu, Jiaming Guo, Xiang Li, Mengjia Xu, Mo Zhang, Ning Guo, Quanzheng Li

Cell detection and cell type classification from biomedical images play an important role for high-throughput imaging and various clinical application. While classification of single cell sample can be performed with standard computer vision and machine learning methods, analysis of multi-label samples (region containing congregating cells) is more challenging, as separation of individual cells can be difficult (e.g. touching cells) or even impossible (e.g. overlapping cells). As multi-instance images are common in analyzing Red Blood Cell (RBC) for Sickle Cell Disease (SCD) diagnosis, we develop and implement a multi-instance cell detection and classification framework to address this challenge. The framework firstly trains a region proposal model based on Region-based Convolutional Network (RCNN) to obtain bounding-boxes of regions potentially containing single or multiple cells from input microscopic images, which are extracted as image patches. High-level image features are then calculated from image patches through a pre-trained Convolutional Neural Network (CNN) with ResNet-50 structure. Using these image features inputs, six networks are then trained to make multi-label prediction of whether a given patch contains cells belonging to a specific cell type. As the six networks are trained with image patches consisting of both individual cells and touching/overlapping cells, they can effectively recognize cell types that are presented in multi-instance image samples. Finally, for the purpose of SCD testing, we train another machine learning classifier to predict whether the given image patch contains abnormal cell type based on outputs from the six networks. Testing result of the proposed framework shows that it can achieve good performance in automatic cell detection and classification.


          

From Google Maps to a Fine-Grained Catalog of Street trees. (arXiv:1910.02675v1 [cs.CV])

 Cache   

Authors: Steve Branson, Jan Dirk Wegner, David Hall, Nico Lang, Konrad Schindler, Pietro Perona

Up-to-date catalogs of the urban tree population are important for municipalities to monitor and improve quality of life in cities. Despite much research on automation of tree mapping, mainly relying on dedicated airborne LiDAR or hyperspectral campaigns, trees are still mostly mapped manually in practice. We present a fully automated tree detection and species recognition pipeline to process thousands of trees within a few hours using publicly available aerial and street view images of Google MapsTM. These data provide rich information (viewpoints, scales) from global tree shapes to bark textures. Our work-flow is built around a supervised classification that automatically learns the most discriminative features from thousands of trees and corresponding, public tree inventory data. In addition, we introduce a change tracker to keep urban tree inventories up-to-date. Changes of individual trees are recognized at city-scale by comparing street-level images of the same tree location at two different times. Drawing on recent advances in computer vision and machine learning, we apply convolutional neural networks (CNN) for all classification tasks. We propose the following pipeline: download all available panoramas and overhead images of an area of interest, detect trees per image and combine multi-view detections in a probabilistic framework, adding prior knowledge; recognize fine-grained species of detected trees. In a later, separate module, track trees over time and identify the type of change. We believe this is the first work to exploit publicly available image data for fine-grained tree mapping at city-scale, respectively over many thousands of trees. Experiments in the city of Pasadena, California, USA show that we can detect > 70% of the street trees, assign correct species to > 80% for 40 different species, and correctly detect and classify changes in > 90% of the cases.


          

Continual Learning in Neural Networks. (arXiv:1910.02718v1 [cs.LG])

 Cache   

Authors: Rahaf Aljundi

Artificial neural networks have exceeded human-level performance in accomplishing several individual tasks (e.g. voice recognition, object recognition, and video games). However, such success remains modest compared to human intelligence that can learn and perform an unlimited number of tasks. Humans' ability of learning and accumulating knowledge over their lifetime is an essential aspect of their intelligence. Continual machine learning aims at a higher level of machine intelligence through providing the artificial agents with the ability to learn online from a non-stationary and never-ending stream of data. A key component of such a never-ending learning process is to overcome the catastrophic forgetting of previously seen data, a problem that neural networks are well known to suffer from. The work described in this thesis has been dedicated to the investigation of continual learning and solutions to mitigate the forgetting phenomena in neural networks. To approach the continual learning problem, we first assume a task incremental setting where tasks are received one at a time and data from previous tasks are not stored. Since the task incremental setting can't be assumed in all continual learning scenarios, we also study the more general online continual setting. We consider an infinite stream of data drawn from a non-stationary distribution with a supervisory or self-supervisory training signal. The proposed methods in this thesis have tackled important aspects of continual learning. They were evaluated on different benchmarks and over various learning sequences. Advances in the state of the art of continual learning have been shown and challenges for bringing continual learning into application were critically identified.


          

Algorithmic Probability-guided Supervised Machine Learning on Non-differentiable Spaces. (arXiv:1910.02758v1 [cs.LG])

 Cache   

Authors: Santiago Hernández-Orozco, Hector Zenil, Jürgen Riedel, Adam Uccello, Narsis A. Kiani, Jesper Tegnér

We show how complexity theory can be introduced in machine learning to help bring together apparently disparate areas of current research. We show that this new approach requires less training data and is more generalizable as it shows greater resilience to random attacks. We investigate the shape of the discrete algorithmic space when performing regression or classification using a loss function parametrized by algorithmic complexity, demonstrating that the property of differentiation is not necessary to achieve results similar to those obtained using differentiable programming approaches such as deep learning. In doing so we use examples which enable the two approaches to be compared (small, given the computational power required for estimations of algorithmic complexity). We find and report that (i) machine learning can successfully be performed on a non-smooth surface using algorithmic complexity; (ii) that parameter solutions can be found using an algorithmic-probability classifier, establishing a bridge between a fundamentally discrete theory of computability and a fundamentally continuous mathematical theory of optimization methods; (iii) a formulation of an algorithmically directed search technique in non-smooth manifolds can be defined and conducted; (iv) exploitation techniques and numerical methods for algorithmic search to navigate these discrete non-differentiable spaces can be performed; in application of the (a) identification of generative rules from data observations; (b) solutions to image classification problems more resilient against pixel attacks compared to neural networks; (c) identification of equation parameters from a small data-set in the presence of noise in continuous ODE system problem, (d) classification of Boolean NK networks by (1) network topology, (2) underlying Boolean function, and (3) number of incoming edges.


          

Verification and Validation of Computer Models for Diagnosing Breast Cancer Based on Machine Learning for Medical Data Analysis. (arXiv:1910.02779v1 [q-bio.QM])

 Cache   

Authors: Vladislav Levshinskii, Maxim Polyakov, Alexander Losev, Alexander Khoperskov

The method of microwave radiometry is one of the areas of medical diagnosis of breast cancer. It is based on analysis of the spatial distribution of internal and surface tissue temperatures, which are measured in the microwave (RTM) and infrared (IR) ranges. Complex mathematical and computer models describing complex physical and biological processes within biotissue increase the efficiency of this method. Physical and biological processes are related to temperature dynamics and microwave electromagnetic radiation. Verification and validation of the numerical model is a key challenge to ensure consistency with medical big data. These data are obtained by medical measurements of patients. We present an original approach to verification and validation of simulation models of physical processes in biological tissues. Our approach is based on deep analysis of medical data and we use machine learning algorithms. We have achieved impressive success for the model of dynamics of thermal processes in a breast with cancer foci. This method allows us to carry out a significant refinement of almost all parameters of the mathematical model in order to achieve the maximum possible adequacy.


          

Learning De-biased Representations with Biased Representations. (arXiv:1910.02806v1 [cs.CV])

 Cache   

Authors: Hyojin Bahng, Sanghyuk Chun, Sangdoo Yun, Jaegul Choo, Seong Joon Oh

Many machine learning algorithms are trained and evaluated by splitting data from a single source into training and test sets. While such focus on in-distribution learning scenarios has led interesting advances, it has not been able to tell if models are relying on dataset biases as shortcuts for successful prediction (e.g., using snow cues for recognising snowmobiles). Such biased models fail to generalise when the bias shifts to a different class. The cross-bias generalisation problem has been addressed by de-biasing training data through augmentation or re-sampling, which are often prohibitive due to the data collection cost (e.g., collecting images of a snowmobile on a desert) and the difficulty of quantifying or expressing biases in the first place. In this work, we propose a novel framework to train a de-biased representation by encouraging it to be different from a set of representations that are biased by design. This tactic is feasible in many scenarios where it is much easier to define a set of biased representations than to define and quantify bias. Our experiments and analyses show that our method discourages models from taking bias shortcuts, resulting in improved performances on de-biased test data.
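A minimal sketch of the training signal described above, assuming a biased-by-design encoder is available (here just a frozen random network standing in for one trained on bias cues). The paper promotes statistical independence from the biased representations; the squared cross-covariance penalty below, the architectures, and the data are simple stand-ins chosen for illustration.

    import torch
    import torch.nn as nn

    biased_enc = nn.Sequential(nn.Linear(32, 16), nn.ReLU(), nn.Linear(16, 8))  # biased by design (placeholder)
    main_enc = nn.Sequential(nn.Linear(32, 16), nn.ReLU(), nn.Linear(16, 8))
    head = nn.Linear(8, 2)
    opt = torch.optim.Adam(list(main_enc.parameters()) + list(head.parameters()), lr=1e-3)

    x = torch.randn(64, 32)
    y = torch.randint(0, 2, (64,))

    for _ in range(200):
        opt.zero_grad()
        f = main_enc(x)
        with torch.no_grad():
            g = biased_enc(x)                      # representation we want to differ from
        f_c = f - f.mean(0)
        g_c = g - g.mean(0)
        corr = (f_c.T @ g_c) / len(x)              # cross-covariance between the two
        loss = nn.functional.cross_entropy(head(f), y) + 0.1 * (corr ** 2).sum()
        loss.backward()
        opt.step()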


          

Introduction to Concentration Inequalities. (arXiv:1910.02884v1 [math.PR])

 Cache   

Authors: Kumar Abhishek, Sneha Maheshwari, Sujit Gujar

In this report, we aim to exemplify concentration inequalities and provide easy to understand proofs for it. Our focus is on the inequalities which are helpful in the design and analysis of machine learning algorithms.
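For readers wanting a concrete instance, a classic example of the kind of inequality such a report typically covers is Hoeffding's inequality (stated here from standard references, not quoted from the report): for independent random variables $X_1,\dots,X_n$ with $a_i \le X_i \le b_i$ almost surely,

    \Pr\!\left(\left|\sum_{i=1}^{n} X_i - \mathbb{E}\!\left[\sum_{i=1}^{n} X_i\right]\right| \ge t\right)
    \;\le\; 2\exp\!\left(-\frac{2t^2}{\sum_{i=1}^{n}(b_i - a_i)^2}\right),

which underlies many generalization bounds used in the analysis of machine learning algorithms.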


          

Multi-Modal Machine Learning for Flood Detection in News, Social Media and Satellite Sequences. (arXiv:1910.02932v1 [cs.CV])

 Cache   

Authors: Kashif Ahmad, Konstantin Pogorelov, Mohib Ullah, Michael Riegler, Nicola Conci, Johannes Langguth, Ala Al-Fuqaha

In this paper we present our methods for the MediaEval 2019 Multimedia Satellite Task, which aims to extract complementary information associated with adverse events from social media and satellites. For the first challenge, we propose a framework jointly utilizing colour, object and scene-level information to predict whether the topic of an article containing an image is a flood event or not. Visual features are combined using early and late fusion techniques, achieving average F1-scores of 82.63, 82.40, 81.40 and 76.77. For the multi-modal flood level estimation, we rely on both visual and textual information, achieving average F1-scores of 58.48 and 46.03, respectively. Finally, for flooding detection in time-based satellite image sequences we used a combination of classical computer-vision and machine learning approaches, achieving an average F1-score of 58.82%.


          

Deep Learning with a Rethinking Structure for Multi-label Classification. (arXiv:1802.01697v2 [cs.LG] UPDATED)

 Cache   

Authors: Yao-Yuan Yang, Yi-An Lin, Hong-Min Chu, Hsuan-Tien Lin

Multi-label classification (MLC) is an important class of machine learning problems that come with a wide spectrum of applications, each demanding a possibly different evaluation criterion. When solving the MLC problems, we generally expect the learning algorithm to take the hidden correlation of the labels into account to improve the prediction performance. Extracting the hidden correlation is generally a challenging task. In this work, we propose a novel deep learning framework to better extract the hidden correlation with the help of the memory structure within recurrent neural networks. The memory stores the temporary guesses on the labels and effectively allows the framework to rethink about the goodness and correlation of the guesses before making the final prediction. Furthermore, the rethinking process makes it easy to adapt to different evaluation criteria to match real-world application needs. In particular, the framework can be trained in an end-to-end style with respect to any given MLC evaluation criteria. The end-to-end design can be seamlessly combined with other deep learning techniques to conquer challenging MLC problems like image tagging. Experimental results across many real-world data sets justify that the rethinking framework indeed improves MLC performance across different evaluation criteria and leads to superior performance over state-of-the-art MLC algorithms.


          

ATMPA: Attacking Machine Learning-based Malware Visualization Detection Methods via Adversarial Examples. (arXiv:1808.01546v2 [cs.CR] UPDATED)

 Cache   

Authors: Xinbo Liu, Jiliang Zhang, Yapin Lin, He Li

Since the threat of malicious software (malware) has become increasingly serious, automatic malware detection techniques have received increasing attention, where machine learning (ML)-based visualization detection methods become more and more popular. In this paper, we demonstrate that the state-of-the-art ML-based visualization detection methods are vulnerable to Adversarial Example (AE) attacks. We develop a novel Adversarial Texture Malware Perturbation Attack (ATMPA) method based on the gradient descent and L-norm optimization method, where attackers can introduce some tiny perturbations on the transformed dataset such that ML-based malware detection methods will completely fail. The experimental results on the MS BIG malware dataset show that a small interference can reduce the accuracy rate down to 0% for several ML-based detection methods, and the rate of transferability is 74.1% on average.
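To make the attack concrete, here is a minimal one-step sign-gradient (FGSM-style) perturbation on a toy malware "image". ATMPA's actual method combines gradient descent with an L-norm constraint, so this is an assumed simplification, and the model, tensor sizes, and data below are placeholders rather than anything from the paper.

    import torch
    import torch.nn as nn

    def fgsm(model, x, y, eps=0.02):
        # One-step sign-gradient perturbation bounded in [0, 1] pixel range.
        x = x.clone().requires_grad_(True)
        loss = nn.functional.cross_entropy(model(x), y)
        loss.backward()
        return (x + eps * x.grad.sign()).clamp(0, 1).detach()

    model = nn.Sequential(nn.Flatten(), nn.Linear(32 * 32, 2))  # placeholder malware classifier
    x = torch.rand(1, 1, 32, 32)                                # toy grayscale malware image
    y = torch.tensor([1])                                       # its (assumed) true class
    x_adv = fgsm(model, x, y)                                   # tiny perturbation, same shape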


          

Convolutional Neural Network: Text Classification Model for Open Domain Question Answering System. (arXiv:1809.02479v2 [cs.IR] UPDATED)

 Cache   

Authors: Muhammad Zain Amin, Noman Nadeem

Machine learning is now being applied to almost every data domain, one of which is Question Answering Systems (QAS). A typical question answering system is essentially an information retrieval system that matches documents or text and retrieves the most accurate one. The open-domain question answering system put forth here involves convolutional neural network text classifiers. The classification model presented in this paper is a multi-class text classifier, and the neural network classifier can be trained on large datasets. We report a series of experiments conducted on a Convolutional Neural Network (CNN) by training it on two different datasets. The neural network model is trained on top of word embeddings, and a softmax layer is applied to calculate the loss and to map semantically related words. The gathered results help justify that the proposed QAS is feasible. We further propose a method to integrate the convolutional neural network classifier into an open domain question answering system. The idea of open domain is explained further in the paper; in general it refers to a system of domain-specific trainable models, which together make the system open domain.


          

Incremental Few-Shot Learning with Attention Attractor Networks. (arXiv:1810.07218v3 [cs.LG] UPDATED)

 Cache   

Authors: Mengye Ren, Renjie Liao, Ethan Fetaya, Richard S. Zemel

Machine learning classifiers are often trained to recognize a set of pre-defined classes. However, in many applications, it is often desirable to have the flexibility of learning additional concepts, with limited data and without re-training on the full training set. This paper addresses this problem, incremental few-shot learning, where a regular classification network has already been trained to recognize a set of base classes, and several extra novel classes are being considered, each with only a few labeled examples. After learning the novel classes, the model is then evaluated on the overall classification performance on both base and novel classes. To this end, we propose a meta-learning model, the Attention Attractor Network, which regularizes the learning of novel classes. In each episode, we train a set of new weights to recognize novel classes until they converge, and we show that the technique of recurrent back-propagation can back-propagate through the optimization process and facilitate the learning of these parameters. We demonstrate that the learned attractor network can help recognize novel classes while remembering old classes without the need to review the original training set, outperforming various baselines.


          

How to improve the interpretability of kernel learning. (arXiv:1811.10469v2 [cs.LG] UPDATED)

 Cache   

Authors: Jinwei Zhao, Qizhou Wang, Yufei Wang, Yu Liu, Zhenghao Shi, Xinhong Hei

In recent years, machine learning researchers have focused on methods to construct flexible and interpretable prediction models. However, how to evaluate interpretability, how it relates to generalization performance, and how to improve it still need to be addressed. In this paper, a quantitative index of interpretability is proposed and its rationality is proved, and the equilibrium problem between interpretability and generalization performance is analyzed, including a probability upper bound on the sum of the two performances. For the traditional supervised kernel machine learning problem, a universal learning framework is put forward to solve this equilibrium problem, and the condition for a global optimal solution under the framework is derived. The learning framework is applied to the least-squares support vector machine and is evaluated in several experiments.


          

RAPID: Early Classification of Explosive Transients using Deep Learning. (arXiv:1904.00014v2 [astro-ph.IM] UPDATED)

 Cache   

Authors: Daniel Muthukrishna, Gautham Narayan, Kaisey S. Mandel, Rahul Biswas, Renée Hložek

We present RAPID (Real-time Automated Photometric IDentification), a novel time-series classification tool capable of automatically identifying transients from within a day of the initial alert, to the full lifetime of a light curve. Using a deep recurrent neural network with Gated Recurrent Units (GRUs), we present the first method specifically designed to provide early classifications of astronomical time-series data, typing 12 different transient classes. Our classifier can process light curves with any phase coverage, and it does not rely on deriving computationally expensive features from the data, making RAPID well-suited for processing the millions of alerts that ongoing and upcoming wide-field surveys such as the Zwicky Transient Facility (ZTF), and the Large Synoptic Survey Telescope (LSST) will produce. The classification accuracy improves over the lifetime of the transient as more photometric data becomes available, and across the 12 transient classes, we obtain an average area under the receiver operating characteristic curve of 0.95 and 0.98 at early and late epochs, respectively. We demonstrate RAPID's ability to effectively provide early classifications of observed transients from the ZTF data stream. We have made RAPID available as an open-source software package (https://astrorapid.readthedocs.io) for machine learning-based alert-brokers to use for the autonomous and quick classification of several thousand light curves within a few seconds.


          

Mimic Learning to Generate a Shareable Network Intrusion Detection Model. (arXiv:1905.00919v2 [cs.CR] UPDATED)

 Cache   

Authors: Ahmed Shafee, Mohamed Baza, Douglas A. Talbert, Mostafa M. Fouda, Mahmoud Nabil, Mohamed Mahmoud

Purveyors of malicious network attacks continue to increase the complexity and the sophistication of their techniques, and their ability to evade detection continues to improve as well. Hence, intrusion detection systems must also evolve to meet these increasingly challenging threats. Machine learning is often used to support this needed improvement. However, training a good prediction model can require a large set of labelled training data. Such datasets are difficult to obtain because privacy concerns prevent the majority of intrusion detection agencies from sharing their sensitive data. In this paper, we propose the use of mimic learning to enable the transfer of intrusion detection knowledge from a teacher model trained on private data to a student model. This student model provides a means of publicly sharing knowledge extracted from private data without sharing the data itself. Our results confirm that the proposed scheme can produce a student intrusion detection model that mimics the teacher model without requiring access to the original dataset.
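A minimal sketch of the teacher–student (mimic learning) transfer described above: a teacher trained on private data labels a public transfer set, and a shareable student is fit to those soft labels. The architectures, the synthetic transfer set, and the KL objective are assumptions made for illustration, not the paper's exact intrusion detection setup.

    import torch
    import torch.nn as nn

    teacher = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))  # assumed already trained on private data
    student = nn.Sequential(nn.Linear(20, 32), nn.ReLU(), nn.Linear(32, 2))  # model that will be shared publicly
    opt = torch.optim.Adam(student.parameters(), lr=1e-3)

    public_x = torch.randn(512, 20)          # unlabeled transfer set (placeholder features)
    with torch.no_grad():
        soft_labels = torch.softmax(teacher(public_x), dim=1)   # teacher's soft predictions

    for _ in range(300):
        opt.zero_grad()
        log_p = torch.log_softmax(student(public_x), dim=1)
        # Match the student's distribution to the teacher's, never touching private data.
        loss = nn.functional.kl_div(log_p, soft_labels, reduction="batchmean")
        loss.backward()
        opt.step()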


          

explAIner: A Visual Analytics Framework for Interactive and Explainable Machine Learning. (arXiv:1908.00087v2 [cs.HC] UPDATED)

 Cache   

Authors: Thilo Spinner, Udo Schlegel, Hanna Schäfer, Mennatallah El-Assady

We propose a framework for interactive and explainable machine learning that enables users to (1) understand machine learning models; (2) diagnose model limitations using different explainable AI methods; as well as (3) refine and optimize the models. Our framework combines an iterative XAI pipeline with eight global monitoring and steering mechanisms, including quality monitoring, provenance tracking, model comparison, and trust building. To operationalize the framework, we present explAIner, a visual analytics system for interactive and explainable machine learning that instantiates all phases of the suggested pipeline within the commonly used TensorBoard environment. We performed a user-study with nine participants across different expertise levels to examine their perception of our workflow and to collect suggestions to fill the gap between our system and framework. The evaluation confirms that our tightly integrated system leads to an informed machine learning process while disclosing opportunities for further extensions.


          

LuNet: A Deep Neural Network for Network Intrusion Detection. (arXiv:1909.10031v2 [cs.AI] UPDATED)

 Cache   

Authors: Peilun Wu, Hui Guo

Network attack is a significant security issue for modern society. From small mobile devices to large cloud platforms, almost all computing products used in our daily life are networked and potentially under the threat of network intrusion. With the fast-growing number of network users, network intrusions become more frequent, volatile and advanced. Being able to capture intrusions in time for such a large-scale network is critical and very challenging. To this end, machine learning (or AI) based network intrusion detection (NID), due to its intelligent capability, has drawn increasing attention in recent years. Compared to the traditional signature-based approaches, the AI-based solutions are more capable of detecting variants of advanced network attacks. However, the high detection rate achieved by the existing designs is usually accompanied by a high rate of false alarms, which may significantly discount the overall effectiveness of the intrusion detection system. In this paper, we consider the existence of spatial and temporal features in the network traffic data and propose a hierarchical CNN+RNN neural network, LuNet. In LuNet, the convolutional neural network (CNN) and the recurrent neural network (RNN) learn input traffic data in sync with a gradually increasing granularity such that both spatial and temporal features of the data can be effectively extracted. Our experiments on two network traffic datasets show that, compared to the state-of-the-art network intrusion detection techniques, LuNet not only offers a high level of detection capability but also has a much lower rate of false alarms.
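A rough sketch of the CNN-then-RNN idea: convolutions extract spatial patterns from each traffic record and a GRU reads the resulting sequence to capture temporal structure. The single CNN/GRU stage and all layer sizes below are placeholders and do not reproduce LuNet's published architecture.

    import torch
    import torch.nn as nn

    class CnnRnn(nn.Module):
        def __init__(self, n_features=64, n_classes=5):
            super().__init__()
            # 1-D convolution over the feature vector of each traffic record.
            self.conv = nn.Sequential(nn.Conv1d(1, 16, kernel_size=3, padding=1),
                                      nn.ReLU(), nn.MaxPool1d(2))
            # GRU over the convolutional feature map to pick up sequential structure.
            self.rnn = nn.GRU(input_size=16, hidden_size=32, batch_first=True)
            self.fc = nn.Linear(32, n_classes)

        def forward(self, x):                  # x: (batch, n_features)
            h = self.conv(x.unsqueeze(1))      # (batch, 16, n_features // 2)
            _, last = self.rnn(h.transpose(1, 2))
            return self.fc(last.squeeze(0))    # class logits per record

    model = CnnRnn()
    logits = model(torch.randn(8, 64))         # 8 toy flow records with 64 features each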


          

Entropy from Machine Learning. (arXiv:1909.10831v2 [cond-mat.stat-mech] UPDATED)

 Cache   

Authors: Romuald A. Janik

We translate the problem of calculating the entropy of a set of binary configurations/signals into a sequence of supervised classification tasks. Subsequently, one can use virtually any machine learning classification algorithm for computing entropy. This procedure can be used to compute entropy, and consequently the free energy directly from a set of Monte Carlo configurations at a given temperature. As a test of the proposed method, using an off-the-shelf machine learning classifier we reproduce the entropy and free energy of the 2D Ising model from Monte Carlo configurations at various temperatures throughout its phase diagram. Other potential applications include computing the entropy of spiking neurons or any other multidimensional binary signals.
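The reduction from entropy estimation to supervised classification presumably rests on the chain rule for entropy; this reading is an inference from the abstract rather than a quotation of the paper's exact construction:

    H(x_1,\dots,x_n) \;=\; \sum_{i=1}^{n} H\big(x_i \,\big|\, x_1,\dots,x_{i-1}\big),

so that each conditional entropy can be estimated by the cross-entropy loss of a classifier trained to predict component $x_i$ of a binary configuration from the preceding components, and the per-component estimates are summed to obtain the total entropy.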


          

Data Engineer - Amazon.com Services, Inc. - Seattle, WA

 Cache   
Experience working with large data sets in order to extract business insights or build predictive models (data mining, machine learning, regression analysis).
From Amazon.com - Tue, 14 May 2019 07:53:27 GMT - View all Seattle, WA jobs
          

Principal Solution Architect - Machine Learning - 74509 - Advanced Micro Devices, Inc. - Santa Clara, CA

 Cache   
Evangelize AMD machine learning solutions through presentations to customers, partners and the community. Partner with product manager and business development…
From Advanced Micro Devices, Inc. - Wed, 12 Jun 2019 19:32:36 GMT - View all Santa Clara, CA jobs
          

Sr Software Engineer - Computer Vision, Embedded and Distributed Systems - West Pharmaceutical Services - Exton, PA

 Cache   
Work with internal stakeholders to understand business needs and translate to technical designs. Familiarity with Machine Learning is a plus.
From West Pharmaceutical Services - Sat, 17 Aug 2019 08:19:33 GMT - View all Exton, PA jobs
          

Deployment Engineer (Machine Learning) - Clarifai - Washington, DC

 Cache   
Experience with machine learning or data science experiments. You will develop reusable modules, components, and build tools for both internal and external use…
From Clarifai - Thu, 12 Sep 2019 16:52:34 GMT - View all Washington, DC jobs
          

Product Design Lead - SigOpt - San Francisco, CA

 Cache   
Help shape a new machine learning solution category. As a Product Manager you will be part of the larger product organization at SigOpt, working closely with…
From SigOpt - Tue, 08 Oct 2019 05:58:46 GMT - View all San Francisco, CA jobs
          

Sustaining Engineer (Multiple States) - h2o.ai - Mountain View, CA

 Cache   
Understanding of Data Science and Machine Learning concepts, algorithms, etc. Ability to work with Data Science, Machine Learning, and Big Data Hadoop technologies.
From h2o.ai - Fri, 01 Mar 2019 06:27:40 GMT - View all Mountain View, CA jobs
          

Customer Data Scientist (Chicago) - h2o.ai - Chicago, IL

 Cache   
Engineers, business people, and executives. Training advanced machine learning models at scale in distributed environments, influencing next generation data…
From h2o.ai - Fri, 23 Aug 2019 05:24:30 GMT - View all Chicago, IL jobs
          

Customer Data Scientist (Mountain View) - h2o.ai - Mountain View, CA

 Cache   
Engineers, business people, and executives. Training advanced machine learning models at scale in distributed environments, influencing next generation data…
From h2o.ai - Fri, 23 Aug 2019 05:24:30 GMT - View all Mountain View, CA jobs
          

Customer Data Scientist (New York) - h2o.ai - New York, NY

 Cache   
Engineers, business people, and executives. Training advanced machine learning models at scale in distributed environments, influencing next generation data…
From h2o.ai - Fri, 23 Aug 2019 05:24:30 GMT - View all New York, NY jobs
          

OWKIN has made a major discovery in the fight against cancer using machine learning - Siècle Digital

 Cache   
OWKIN has made a major discovery in the fight against cancer using machine learning  Siècle Digital
          

Addendum to the Acknowledgements: Mood Prediction of Patients With Mood Disorders by Machine Learning Using Passive Digital Phenotypes Based on the Circadian Rhythm: Prospective Observational Cohort Study

 Cache   
No Abstract Available

Purchases: 0
          

Knowledge Mining with Azure Search | AI Show

 Cache   

In this episode Luis stopped by and showed how much more can really be done with Cognitive Search (with recipes to boot). Extracting structure from unstructured data is a powerful addition to Cognitive Search! The demo he presents gives an amazing step-by-step process for using Cognitive Search to enrich your index.

Main Demo: [04:56]



          

Characterizing Real-World Agents as a Research Meta-Strategy

 Cache   
Published on October 8, 2019 3:32 PM UTC

Background

Intuitively, the real world seems to contain agenty systems (e.g. humans), non-agenty systems (e.g. rocks), and ambiguous cases which display some agent-like behavior sometimes (bacteria, neural nets, financial markets, thermostats, etc). There’s a vague idea that agenty systems pursue consistent goals in a wide variety of environments, and that various characteristics are necessary for this flexible goal-oriented behavior.

But once we get into the nitty-gritty, it turns out we don’t really have a full mathematical formalization of these intuitions. We lack a characterization of agents.

To date, the closest we’ve come to characterizing agents in general are the coherence theorems underlying Bayesian inference and utility maximization. A wide variety of theorems with a wide variety of different assumptions all point towards agents which perform Bayesian inference and choose their actions to maximize expected utility. In this framework, an agent is characterized by two pieces:

  • A probabilistic world-model
  • A utility function

The Bayesian utility characterization of agency neatly captures many of our intuitions of agency: the importance of accurate beliefs about the environment, the difference between things which do and don’t consistently pursue a goal (or approximately pursue a goal, or sometimes pursue a goal…), the importance of updating on new information, etc.
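To make the two pieces concrete, here is a toy sketch of an agent in the Bayesian utility sense: a probability distribution over world states plus a utility function, with the action chosen to maximize expected utility. The states, actions, and numbers are invented purely for illustration.

    # A probabilistic world-model: the agent's posterior over world states.
    world_model = {"sunny": 0.7, "rainy": 0.3}

    # A utility function over (action, state) pairs.
    def utility(action, state):
        table = {("picnic", "sunny"): 10, ("picnic", "rainy"): -5,
                 ("stay_in", "sunny"): 2,  ("stay_in", "rainy"): 2}
        return table[(action, state)]

    def expected_utility(action):
        return sum(p * utility(action, s) for s, p in world_model.items())

    # The agent acts by maximizing expected utility under its world-model.
    best_action = max(["picnic", "stay_in"], key=expected_utility)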

Sadly, for purposes of AGI alignment, the standard Bayesian utility characterization is incomplete at best. Some example issues include:

  • The need for a cartesian boundary - a clear separation between “agent” and “environment”, with well-defined input/output channels between the two
  • Logical omniscience - the assumption that agents can fully compute all of the implications of the information available to them, and track every possible state of the world
  • Path independence and complete preferences - the assumption that an agent doesn’t have a general tendency to stay in the state it’s in

One way to view agent foundations research is that it seeks a characterization of agents which resolves problems like the first two above. We want the same sort of benefits offered by the Bayesian utility characterization, but in a wider and more realistic range of agenty systems.

Characterizing Real-World Agents

We want to characterize agency. We have a bunch of real-world systems which display agency to varying degrees. One obvious strategy is to go study and characterize those real-world agenty systems.

Concretely, what would this look like?

Well, let’s set aside the shortcomings of the standard Bayesian utility characterization for a moment, and imagine applying it to a real-world system - a financial market, for instance. We have various coherence theorems saying that agenty systems must implement Bayesian utility maximization, or else allow arbitrage. We have a strong prior that financial markets don’t allow arbitrage (except perhaps very small arbitrage on very short timescales). So, financial markets should have a Bayesian utility function, right? Obvious next step: pick an actual market and try to figure out its world-model and utility function.

I tried this, and it didn’t work. Turns out markets don’t have a utility function, in general (in this context, it’s called a “representative agent”).

Ok, but markets are still inexploitable and still seem agenty, so where did it go wrong? Can we generalize Bayesian utility to characterize systems which are agenty like markets? This was the line of inquiry which led to “Why Subagents?”. The upshot: for systems with internal state (including markets), the standard utility maximization characterization generalizes to a multi-agent committee characterization.

This is an example of a general strategy:

  • Start with some characterization of agency - don’t worry if it’s not perfect yet
  • Apply it to a real-world agenty system - specifically, try to back out the characterizing properties, e.g. the probabilistic world-model and utility function in the case of a Bayesian utility characterization
  • If successful, great! We’ve gained a useful theoretical tool for an interesting real-world system.
  • If unsuccessful, first check whether the failure corresponds to a situation where the system actually doesn’t act very agenty - if so, then that actually supports our characterization of agency, and again tells us something interesting about a real-world system.
  • Otherwise, we’ve found a real-world case where our characterization of agency fails. Look at the system’s actual internal behavior to see where it differs from the assumptions of our characterization, and then generalize the characterization to handle this kind of system.

Note that the last step, generalizing the characterization, still needs to maintain the structure of a characterization of agency. For example, prospect theory does a fine job predicting the choices of humans, but it isn’t a general characterization of effective goal-seeking behavior. There’s no reason to expect prospect-theory-like behavior to be universal for effective goal-seeking systems. The coherence theorems of Bayesian utility, on the other hand, provide fairly general conditions under which Bayesian induction and expected utility maximization are an optimal goal-seeking strategy - and therefore “universal”, at least within the conditions assumed. Although the Bayesian utility framework is incomplete at best, that’s still the kind of thing we’re looking for: a characterization which should apply to all effective goal-seeking systems.

Some examples of (hypothetical) projects which follow this general strategy:

  • Look up the kinetic equations governing chemotaxis in e-coli. Either extract an approximate probabilistic world-model and utility function from the equations, find a suboptimality in the bacteria’s behavior, or identify a loophole and expand the characterization of agency.
  • Pick a financial market. Using whatever data you can obtain, either extract (not necessarily unique) utility functions and world models of the component agents, find an arbitrage opportunity, or identify a new loophole and expand the characterization of agency.
  • Start with the weights from a neural network trained on a task in the openai gym. Either extract a probabilistic world model and utility function from the weights, find a strategy which dominates the NN’s strategy, or identify a loophole and expand the characterization of agency

… and so forth.

Why Would We Want to Do This?

Characterization of real-world agenty systems has a lot of advantages as a general research strategy.

First and foremost: when working on mathematical theory, it’s easy to get lost in abstraction and lose contact with the real world. One can end up pushing symbols around ad nauseum, without any idea which way is forward. The easiest counter to this failure mode is to stay grounded in real-world applications. Just as a rationalist lets reality guide beliefs, a theorist lets the problems, properties and intuitions of the real world guide the theory.

Second, when attempting to characterize real-world agenty systems, one is very likely to make some kind of forward progress. If the characterization works, then we’ve learned something useful about an interesting real-world system. If it fails, then we’ve identified a hole in our characterization of agency - and we have an example on hand to guide the construction of a new characterization.

Third, characterization of real-world agenty systems is directly relevant to alignment: the alignment problem itself basically amounts to characterizing the wants and ontologies of humans. This isn’t the only problem relevant to FAI - tiling and stability and subagent alignment and the like are separate - but it is basically the whole “alignment with humans” part. Characterizing e.g. the wants and ontology of an e-coli seems like a natural stepping-stone.

One could object that real-world agenty systems lack some properties which are crucial to the design of aligned AGI - most notably reflection and planned self-modification. A theory developed by looking only at real-world agents will therefore likely be incomplete. On the other hand, you don’t figure out general relativity without figuring out Newtonian gravitation first. Our understanding of agency is currently so woefully poor that we don’t even understand real-world systems, so we might as well start with that and reap all the advantages listed above. Once that’s figured out, we should expect it to pave the way to the final theory: just as general relativity has to reproduce Newtonian gravity in the limit of low speed and low energy, more advanced characterizations of agency should reproduce more basic characterizations under the appropriate conditions. The subagents characterization, for example, reproduces the utility characterization in cases where the agenty system has no internal state. It all adds up to normality - new theories must be consistent with the old, at least to the extent that the old theories work.

Finally, a note on relative advantage. As a strategy, characterizing real-world agenty systems leans heavily on domain knowledge in areas like biology, machine learning, economics, and neuroscience/psychology, along with the math involved in any agency research. That’s a pretty large pareto skill frontier, and I’d bet that it’s pretty underexplored. That means there’s a lot of opportunity for new, large contributions to the theory, if you have the domain knowledge or are willing to put in the effort to acquire it.



Discuss
          

10 Tech Jobs at Low Risk of Automation in Years Ahead

 Cache   

Which tech jobs are at lowest risk of automation? That’s a tough question: as artificial intelligence (A.I.) and machine learning platforms become increasingly sophisticated, it seems […]

The post 10 Tech Jobs at Low Risk of Automation in Years Ahead appeared first on Dice Insights.


          

Machine learning helps plant science turn over a new leaf

 Cache   
Researchers have developed machine-learning algorithms that teach a computer system to analyze three-dimensional shapes of the branches and leaves of a plant. The study may help scientists better quantify how plants respond to climate change, genetic mutations or other factors.
          

Utopos Games raises $1 million to make AI-based robot battle game Raivo

 Cache   
Utopos Games has raised $1 million for its robot-battle game Raivo, which the company says is the first title to gamify machine learning.
          

Sr. Controls Engineer - Amazon Data Services, Inc. - Herndon, VA

 Cache   
Siemens, Tridium, Distech, Struxureware, or Alerton. Demonstrated experience expanding controls knowledge base – new systems, technology, machine learning, etc.
From Amazon.com - Wed, 18 Sep 2019 07:52:28 GMT - View all Herndon, VA jobs
          

CardioWise, Inc. Named a Top-Ten Finalist in Best Cardiovascular...

 Cache   

CardioWise™ is commercializing patented Cardiac Computed Tomography (CT) machine learning analysis software, SQuEEZ™, which produces a quantified image model of the human heart.

(PRWeb October 08, 2019)

Read the full story at https://www.prweb.com/releases/cardiowise_inc_named_a_top_ten_finalist_in_best_cardiovascular_digital_diagnostic_category_by_ucsf_health_hub/prweb16631534.htm


          

DeepMind Is Working on a Solution to Bias in AI

 Cache   

In DeepMind's hypothetical college admissions example, qualifications (Q), gender (G), and choice of department (D) all factor into whether a candidate is admitted (A). A Causal Bayesian Network can identify causal and non-causal relationships between these factors and look for unfairness. In this example gender can have a non-causal effect on admission due to its relationship with choice of department. (Image source: DeepMind)

DeepMind, a subsidiary of Alphabet (Google's parent company) is working to remove the inherent human biases from machine learning algorithms.

The increased deployment of artificial intelligence and machine learning algorithms into the real world has coincided with increased concerns over biases in the algorithms' decision making. From loan and job applications to surveillance and even criminal justice, AI has been shown to exhibit bias – particularly in terms of race and gender – in its decision making.

Researchers at DeepMind believe they've developed a useful framework for identifying and removing unfairness in AI decision making. Called Causal Bayesian Networks (CBNs), these are visual representations of datasets that can identify causal relationships within the data and help experts identify factors that might be unfairly weighed against or skewing others. The researchers describe their methodology in two recent papers, A Causal Bayesian Networks Viewpoint on Fairness and Path-Specific Counterfactual Fairness.

“By defining unfairness as the presence of a harmful influence from the sensitive attribute in the graph, CBNs provide us with a simple and intuitive visual tool for describing different possible unfairness scenarios underlying a dataset,” Silvia Chiappa and William S. Isaac, the authors of the studies, wrote in a blog post. “In addition, CBNs provide us with a powerful quantitative tool to measure unfairness in a dataset and to help researchers develop techniques for addressing it.”

To describe how CBNs can be applied to machine learning, Chiappa and Isaac use the example of a hypothetical college admissions algorithm. Imagine an algorithm designed to approve or reject applicants based on their qualifications, choice of department, and gender. While qualifications and gender can both have a direct (causal) relationship to whether a candidate is admitted, gender could also have an indirect (non-causal) impact as well due to its influence on choice of department. If a male and female are both equally qualified for admission, but they both applied to a department that historically admits men at a far higher rate, then the relationship between gender and choice of department is considered unfair.
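A small, self-contained sketch of such a network for the hypothetical admissions example, with invented conditional probability tables: the admission rule depends only on qualifications and department, yet admission rates still differ by gender through the gender-to-department path. This illustrates the CBN viewpoint only; it is not DeepMind's model or numbers.

    from itertools import product

    P_G = {"female": 0.5, "male": 0.5}
    P_Q = {"high": 0.5, "low": 0.5}                      # qualifications, independent of gender
    P_D_given_G = {"female": {"dept_a": 0.8, "dept_b": 0.2},
                   "male":   {"dept_a": 0.2, "dept_b": 0.8}}
    # Admission depends only on qualifications and department, not on gender directly.
    P_A_given_QD = {("high", "dept_a"): 0.3, ("high", "dept_b"): 0.7,
                    ("low", "dept_a"): 0.05, ("low", "dept_b"): 0.2}

    def admission_rate(gender):
        # Marginalize over qualifications and department choice.
        total = 0.0
        for q, d in product(P_Q, P_D_given_G[gender]):
            total += P_Q[q] * P_D_given_G[gender][d] * P_A_given_QD[(q, d)]
        return total

    # The rates differ only because of the indirect gender -> department -> admission path.
    print(admission_rate("female"), admission_rate("male"))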

“The direct influence captures the fact that individuals with the same qualifications who are applying to the same department might be treated differently based on their gender,” the researchers wrote. “The indirect influence captures differing admission rates between female and male applicants due to their differing department choices.”

This is not to say the algorithm is capable of correcting itself however. The AI would still need input and correction from human experts to make any adjustments to its decision making. And while a CBN could potentially provide insights into fair and unfair relationships in variables in random datasets, it would ultimately fall on humans to either proactively or retroactively take steps to ensure the algorithms are making objective decisions.

“While it is important to acknowledge the limitations and difficulties of using this tool – such as identifying a CBN that accurately describes the dataset’s generation, dealing with confounding variables, and performing counterfactual inference in complex settings – this unique combination of capabilities could enable a deeper understanding of complex systems and allow us to better align decision systems with society's values,” Chiappa and Isaac wrote.

Improving algorithms themselves is only one half of the work to be done to safeguard against bias in AI. Figures released from studies such as one conducted by New York University's AI Now Institute suggest there is a greater need to increase the diversity among the engineers and developers creating these algorithms. For example, as of this year only 10 percent of the AI research staff at Google was female, according to the study.

Chris Wiltz is a Senior Editor at  Design News covering emerging technologies including AI, VR/AR, blockchain, and robotics.



          

SCS Students Named 2020 Siebel Scholars

 Cache   
Tue, 10/08/2019

Six Carnegie Mellon University students — five of them from the School of Computer Science — have been named 2020 Siebel Scholars, a highly competitive award that supports top graduate students in the fields of business, computer science, energy science and bioengineering.

Established in 2000 by the Thomas and Stacey Siebel Foundation, the Siebel Scholars program awards grants to 16 universities in the United States, China, France, Italy and Japan. The top graduate students from 27 partner programs are selected each year as Siebel Scholars and receive a $35,000 award for their final year of studies. On average, Siebel Scholars rank in the top five percent of their class, many within the top one percent.

Among the 93 total scholars are School of Computer Science students Michael Madaio, Eric Wong, Ken Holstein, Junpei Zhou and Amadou Latyr Ngom. They're joined by Elizabeth Reed, a Ph.D. student in the Department of Engineering and Public Policy.

Human-Computer Interaction Institute (HCII) Ph.D. candidate Michael Madaio researches the design of algorithmic systems in the public sector, focusing on literacy education in developing countries. He was a research intern at the United Nations Institute for Computing and Society, and Microsoft Research's Fairness, Accountability, Transparency and Ethics in Artificial Intelligence group. He completed his master's degree in digital media studies at Georgia Institute of Technology, and a master's in education and a bachelor's in English literature at the University of Maryland, College Park.

Eric Wong is pursuing his Ph.D. in machine learning. In 2012 he began researching the problem of molecular energy optimization, developing specialized kernels for geometrically structured data. He is currently interning at Bosch to bring advancements into the automotive industry with work on real sensor systems, both visual and physical.

Ken Holstein, a fifth-year HCII Ph.D. student, is also a fellow of the Program in Interdisciplinary Educational Research (PIER). He has interned at Microsoft Research and holds a bachelor's degree in psychology from the University of Pittsburgh and master's in human–computer interaction from CMU.

Language Technologies Institute master's student Junpei Zhou researches social good by using natural language processing and computer vision techniques. He has worked on flu forecasting and a public safety project to automatically pick up tweets to help police officers better handle emergency events. He has interned at Google and Alibaba, and holds a bachelor's degree in computer science from Zhejiang University.

Amadou Latyr Ngom is pursuing his master's degree in the Computer Science Department at CMU. His research interests include applying compiler techniques to accelerate query execution for in-memory database management systems. He has interned at Zillow and Pure Storage, and graduated with a bachelor's degree in computer science from CMU.

"Every year, the Siebel Scholars continue to impress me with their commitment to academics and influencing future society. This year's class is exceptional, and once again represents the best and brightest minds from around the globe who are advancing innovations in healthcare, artificial intelligence, the environment and more," said Thomas M. Siebel, chair of the Siebel Scholars Foundation. "It is my distinct pleasure to welcome these students into this ever-growing, lifelong community, and I personally look forward to seeing their impact and contributions unfold."

For More Information

Byron Spice | 412-268-9068 | bspice [at] cs.cmu.edu
Virginia Alvino Young | 412-268-8356 | vay [at] cmu.edu

News type

News
          

USU Data Scientist Contributes to Multi-Institution Dengue Fever Study

 Cache   

Once considered relatively rare, dengue fever is popping up throughout the globe, including the United States. The mosquito-borne virus is having a particularly active year, which some public health officials attribute, at least partially, to a warming climate.

Transmitted to humans through the bite of the female Aedes aegypti mosquito, dengue causes fever, vomiting, headache, muscle and joint pain, as well as skin rashes. Most people infected with the virus recover, but the disease can escalate into lethal complications. And, curiously, while people who’ve recovered from the virus develop immunity to the strain that infected them, they often become more susceptible to infection by different strains of the virus.

Utah State University data scientist Kevin Moon is among a group of researchers, led by Yale University, that’s recently completed a large-scale study of the virus using single-cell data from biological samples collected from infected people in India. Supported by the National Institutes of Health and the Indo-U.S. Vaccine Action Program, the research team includes scientists from India’s National Institute of Mental Health and Neurosciences. The group published findings in the Oct. 7 issue of Nature Methods.

“My role in this project included contributing to the development of ‘SAUCIE,’ a data analysis method designed to tackle very large datasets, such as the one collected for this study,” says Moon, assistant professor in USU’s Department of Mathematics and Statistics, who specializes in data science and machine learning. “The team applied SAUCIE to a 20 million-cell mass cytometry dataset with genetic and molecular information from 180 samples collected from 40 subjects.” 

SAUCIE, which stands for “Sparse Autoencoder for Unsupervised Clustering, Imputation and Embedding,” is a multi-layered deep neural network, which allows researchers to extract detailed information from large quantities of single cells.

“Collecting useful data for this kind of application requires getting information from very large samples of individual cells,” Moon says. “Without a large set, you can’t collect a good representation of the many types of cells, including rare cells.”

But developing computational tools to handle so much information is a challenge.

“That’s where neural networks, like SAUCIE, come in,” Moon says. “Neural networks, constructed from a set of algorithms and modeled loosely after the human brain, are designed to recognize patterns in the data.”

SAUCIE, he says, offers four main capabilities. 

“First of all, it clusters data into similar groups which, in this case, allowed the researchers to segment cells into similar groups and ferret out rare cell populations,” Moon says. “Secondly, SAUCIE is good at ‘de-noising’ data.”

That is, SAUCIE refines data, eliminating distracting information.

A third feature of SAUCIE is batch correction, he says, which eliminates non-biological effects caused by variations in sample collection and analysis.

Finally, SAUCIE enables data visualization.

“This is a powerful analysis tool that allows researchers to visually explore patterns in the data,” Moon says. 

Having the ability to explore the cell data at this level will help researchers better understand the basic biology of how cells respond to the dengue virus from initial infection to the disease’s progression.

“The hope is this information will lead to preventive efforts and therapies for those infected,” Moon says.
 


          

Industrial Internet Consortium Smart Factory Machine Learning for Predictive Maintenance Testbed Enters Phase 3 with Commercial Deployment

 Cache   

Two-year old testbed increases factory uptime and asset efficiency

(PRWeb October 08, 2019)

Read the full story at https://www.prweb.com/releases/industrial_internet_consortium_smart_factory_machine_learning_for_predictive_maintenance_testbed_enters_phase_3_with_commercial_deployment/prweb16633077.htm




Next Page: 10000

© Googlier LLC, 2019