
Manufacturing Data Architect - FCA - Auburn Hills, MI

Ability to understand how to combine internal and external data sources in order to develop key data feeds to support analysis by data scientists.
From Fiat Chrysler Automobiles - Sun, 11 Aug 2019 02:11:08 GMT - View all Auburn Hills, MI jobs
          

Data Scientist (627979)

NY-Yonkers, Our client is currently seeking a Data Scientist (627979) for an FTE (Full Time Perm employment) position, Contact bkubiak@judge.com, 732.497.4294, Please review the Qualifications: section, Position Overview: Data Scientist to be a part of one of the technology teams responsible for designing and building software components for the delivery of web applications to millions of client users. This r
          

Sr. Data Science Engineer

FL-Tampa, Our client, located in Tampa, FL, is seeking a skilled Data Science Engineer with extensive hands-on experience. In this role, you will be designing and developing robust statistical and algorithmic solutions that mitigate issues with data fragmentation. You will develop and engineer solutions that bring structured and unstructured data from dispersed applications/data sources into one single plat
          

Qubit with Matthew Tamsett and Ravi Upreti


Our guests Matthew Tamsett and Ravi Upreti join Gabi Ferrara and Aja Hammerly to talk about data science and their project, Qubit. Qubit helps web companies by measuring different user experiences, analyzing that information, and using it to improve the website. They also use the collected data along with ML to predict things, such as which products users will prefer, in order to provide a customized website experience.

Matthew talks a little about his time at CERN and his transition from working in academia to industry. It’s actually fairly common for physicists to branch out into data science and high performance computing, Matthew explains. Later, Ravi and Matthew talk GCP shop with us, explaining how they moved Qubit to GCP and why. Using PubSub, BigQuery, and BigQuery ML, they can provide their customers with real-time solutions, which allows for more reactive personalization. Data can be analyzed and updates can be created and pushed much faster with GCP. Autoscaling and cloud management services provided by GCP have given the data scientists at Qubit back their sleep!
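As a rough sketch of the BigQuery ML pattern described above (not Qubit's actual pipeline; the project, dataset, table, and column names are hypothetical), a propensity model can be trained and queried entirely inside BigQuery from Python:

    # Rough sketch of the BigQuery ML pattern described above (hypothetical
    # dataset/table/column names; not Qubit's actual pipeline).
    from google.cloud import bigquery

    client = bigquery.Client()  # uses the default project and credentials

    # Train a simple purchase-propensity model entirely inside BigQuery.
    client.query("""
        CREATE OR REPLACE MODEL `shop.purchase_propensity`
        OPTIONS (model_type = 'logistic_reg') AS
        SELECT pages_viewed, time_on_site, device, purchased AS label
        FROM `shop.events`
    """).result()

    # Score recent sessions; the predictions could feed a personalization service.
    rows = client.query("""
        SELECT user_id, predicted_label, predicted_label_probs
        FROM ML.PREDICT(MODEL `shop.purchase_propensity`,
                        (SELECT user_id, pages_viewed, time_on_site, device
                         FROM `shop.events`
                         WHERE session_date = CURRENT_DATE()))
    """).result()

    for row in rows:
        print(row.user_id, row.predicted_label)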

Matthew Tamsett

Matthew was trained in experimental particle physics at Royal Holloway, University of London, and did his Ph.D. on the use of leptonic triggers for the detection of supersymmetric signals at the ATLAS detector at CERN. Following this, he completed three postdoctoral positions, working at CERN and on the neutrino experiment NOvA, at Louisiana Tech University, Brookhaven National Laboratory in New York, and the University of Sussex in the UK, culminating in an EU Marie Curie fellowship. During this time, Matt co-authored many papers and played a minor part in the discovery of the Higgs boson. Since leaving academia in 2016, he’s worked at Qubit as a data scientist and later as lead data scientist, where he led a team working to improve the online shopping experience via the use of personalization, statistics and predictive modeling.

Ravi Upreti

Ravi has been working with Qubit for almost 4 years now and leads the platform engineering team there. He learned distributed computing, parallel algorithms and extreme computing at Edinburgh University. A four-year stint at Ocado helped him develop strong domain knowledge of e-commerce, along with deep technical knowledge. Now it has all come together, as he gets to apply all these learnings to Qubit, at scale.

Cool things of the week
  • A developer goes to a DevOps conference blog
  • Cloud Build brings advanced CI/CD capabilities to GitHub blog
  • Cloud Build called out in Forrester Wave twitter
  • 6 strategies for scaling your serverless applications blog
Interview
  • Qubit site
  • Qubit Blog blog
  • Pub/Sub site
  • BigQuery site
  • BigQuery ML site
  • Cloud Datastore site
  • Cloud Memorystore site
  • Cloud Bigtable site
  • Cloud SQL site
  • Cloud AutoML site
  • Goodbye Hadoop. Building a streaming data processing pipeline on Google Cloud blog
Question of the week

How do you deploy a Windows container on GKE?
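A minimal sketch of one common approach, assuming the GKE cluster already has a Windows Server node pool: schedule pods onto it with a nodeSelector (shown here with the Kubernetes Python client; the deployment name and image are illustrative):

    # Sketch: deploy a Windows container to a GKE cluster that already has a
    # Windows Server node pool. Names and the image are illustrative; older
    # clusters may need the "beta.kubernetes.io/os" label instead.
    from kubernetes import client, config

    config.load_kube_config()  # assumes kubectl is already pointed at the cluster

    container = client.V1Container(
        name="win-app",
        image="mcr.microsoft.com/windows/servercore/iis",
    )
    pod_spec = client.V1PodSpec(
        containers=[container],
        node_selector={"kubernetes.io/os": "windows"},  # route onto Windows nodes
    )
    deployment = client.V1Deployment(
        metadata=client.V1ObjectMeta(name="win-app"),
        spec=client.V1DeploymentSpec(
            replicas=1,
            selector=client.V1LabelSelector(match_labels={"app": "win-app"}),
            template=client.V1PodTemplateSpec(
                metadata=client.V1ObjectMeta(labels={"app": "win-app"}),
                spec=pod_spec,
            ),
        ),
    )
    client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)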

Where can you find us next?

Gabi will be at the Google Cloud Summit in Sao Paulo, Brazil.

Aja will be at Cloud Next London.

Sound Effect Attribution

          

DevOps System Engineer (M/F) - Thales - Vélizy-Villacoublay

Job description: WHAT WE CAN ACCOMPLISH TOGETHER: The Data Storage Competence Centre specialises in designing Big Data architectures. Within this competence centre, you will be involved in projects for our clients. Supported by our Data Engineers and Data Scientists, you will take part in deploying innovative information storage and processing systems (DataLake, DataHub...) for our major-account clients (Telecom, Services, Energy,...
          

Big Data Developer (M/F) - Thales - Vélizy-Villacoublay

Job description: WHAT WE CAN ACCOMPLISH TOGETHER: The Data Storage Competence Centre specialises in designing Big Data architectures. Within this competence centre, you will be involved in projects for our clients. Supported by our Data Engineers and Data Scientists, you will design and develop innovative information storage and processing systems for our major-account clients (Telecom, Services, Energy, Banking, Defence...) ...
          

Democratizing Data Science in Your Organization


Despite the explosion of data collected in recent years, many organizations—from financial institutions and health care firms to management consultancies and government—are simply not equipped to learn from their data in an efficient and effective manner.

Data-driven organizations are three times more likely to report significant improvements in decision making.




          

Offer - ExcelR data analytics course in Pune - INDIA

ExcelR offers Data Science course in Pune, the most comprehensive Data Science course in the market, covering the complete Data Science lifecycle concepts from Data Collection, Data Extraction, Data Cleansing, Data Exploration, Data Transformation, Feature Engineering, Data Integration, Data Mining, building Prediction models, Data Visualization and deploying the solution to the customer.
          

TUIT hosted a master class on "Artificial Intelligence and Data Science"

On October 4, at the initiative of the leadership of the Computer Engineering faculty, a master class on "Artificial Intelligence and Data Science" was held in the large assembly hall of TUIT.
          

Data Scientist III - Mumbai, MH

General Mills - Mumbai, MH General Mills is reshaping the future of food. We believe food makes us better. It nourishes our bodies, brings us joy and...
          

Test Your Skills With A Data Science Crossword


Data scientists are busy people, but that doesn't mean they don't have time for fun! We've created a data science crossword that melds work and play, so see if you have what it takes.


          

In New Book, Cambridge Analytica Whistleblower Stops Short Of A Full Mea Culpa

Before going public, data scientist Christopher Wylie helped the now defunct company figure out how to target people online. In a new memoir, he offers details of the project and the players.
          

Data Scientist

MI-Detroit, Overview Who we are: Meridian, a WellCare Company, is part of a national network of passionate leaders, achievers, and innovators dedicated to making a difference in the lives of our members, our providers and in the healthcare industry. We provide government-based health plans (Medicare, Medicaid, and the Health Insurance Marketplace) in Michigan, Illinois, Indiana, and Ohio. As a part of the Wel
          

Chief Data Scientist/ Chief of Analytics - Unisys - Washington, DC

To learn more about Unisys visit us at www.Unisys.com. Unisys has more than 23,000 employees serving clients around the world.
From Unisys - Wed, 18 Sep 2019 21:01:03 GMT - View all Washington, DC jobs
          

Latest Tech Trends, Their Problems, And How to Solve Them


Few IT professionals are unaware of the rapid emergence of 5G, the Internet of Things (IoT), edge-fog-cloud or core computing, microservices, and artificial intelligence/machine learning (AI/ML). These new technologies hold enormous promise for transforming IT and the customer experience through the problems they solve. It’s important to realize that, like all technologies, they introduce new processes and subsequently new problems. Most are aware of the promise, but few are aware of the new problems and how to solve them.

5G is a great example. It delivers 10 to 100 times more throughput than 4G LTE and up to 90% lower latencies. Users can expect throughput between 1 and 10 Gbps with latencies of approximately 1 ms. This enables large files such as 4K or 8K videos to be downloaded or uploaded in seconds, not minutes. 5G will deliver mobile broadband and can potentially make traditional broadband obsolete, just as mobile telephony has essentially eliminated the vast majority of landlines.
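As a rough sanity check of those throughput figures (the file size and the 4G rate are assumptions, not numbers from the article), the arithmetic looks like this:

    # Back-of-envelope download times (file size and 4G rate are assumptions).
    file_size_gb = 20                       # e.g. a long 4K/8K video
    file_size_bits = file_size_gb * 8e9

    for label, bits_per_second in [("4G (~100 Mbps)", 100e6), ("5G (~10 Gbps)", 10e9)]:
        seconds = file_size_bits / bits_per_second
        if seconds >= 60:
            print(f"{label}: about {seconds / 60:.0f} minutes")
        else:
            print(f"{label}: about {seconds:.0f} seconds")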

5G mobile networking technology makes industrial IoT more scalable, simpler, and much more economically feasible. Whereas 4G is limited to approximately 400 devices per km², 5G increases the number of devices supported per km² to approximately 1,000,000, roughly a 250,000% increase. The performance, latency, and scalability are why 5G is being called transformational. But there are significant issues introduced by 5G. A key one is the database application infrastructure.

Analysts frequently cite the non-trivial multi-billion-dollar investment required to roll out 5G. That investment is primarily focused on the antennas and the fiber optic cables to the antennas. This is because 5G is based on a completely different technology than 4G. It utilizes millimeter waves instead of microwaves. Millimeter waves are limited to roughly 300 meters between antennas. The 4G microwaves can be as far as 16 km apart. That is a major difference, and it therefore demands many more antennas, and optical cables to those antennas, to make 5G work effectively. It also means it will take considerable time before rural areas are covered by 5G, and even then it will be a degraded 5G.
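A crude back-of-envelope model of that antenna build-out (the coverage area is hypothetical, and each antenna is naively treated as covering a circle whose radius is the quoted reach):

    import math

    # Naive antenna-count comparison: one antenna covers a circle whose radius
    # is the quoted reach. The coverage area is hypothetical.
    area_km2 = 1000            # an illustrative metro area
    reach_5g_km = 0.3          # ~300 m for millimeter waves
    reach_4g_km = 16.0         # ~16 km for 4G microwaves

    def antennas_needed(reach_km):
        return math.ceil(area_km2 / (math.pi * reach_km ** 2))

    print("5G antennas needed:", antennas_needed(reach_5g_km))   # roughly 3,500
    print("4G antennas needed:", antennas_needed(reach_4g_km))   # just a couple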

The 5G infrastructure investment not being addressed is the database application infrastructure. The database is a foundational technology for analytics. IT pros simply assume it will be there for their applications and microservices. Everything today is interconnected. The database application infrastructure is generally architected for the volume and performance coming from the network. That volume and performance are going up by an order of magnitude. What happens when the database application infrastructure is not upgraded to match? The actual user performance improves marginally or not at all. It can in fact degrade as volumes overwhelm database applications not prepared for them. Both consumers and business users become frustrated. 5G devices cost approximately 30% more than 4G devices, mostly because they need both a 5G and a 4G modem (different, non-compatible technologies). The 5G network costs approximately 25% more than 4G. It is understandable that anyone would be frustrated when they are spending considerably more and seeing limited, no, or even negative improvement. The database application infrastructure becomes the bottleneck. When consumers and business users become frustrated, they go somewhere else: another website, another supplier, or another partner. Business will be lost.

Fortunately, there is still time, as the 5G rollout is just starting, with momentum building in 2020 and complete implementations not expected until 2022 at the earliest. However, IT organizations need to start planning their application infrastructure upgrades to match the 5G rollout, or they may end up suffering the consequences.

IoT is another technology that promises to be transformative. It pushes intelligence to the edge of the network, enabling automation that was previously unthinkable. Smarter homes, smarter cars, smarter grids, smarter healthcare, smarter fitness, smarter water management, and more. IoT has the potential to radically increase efficiencies and reduce waste. Most of the implementations to date have been in consumer homes and offices. These implementations rely on the Wi-Fi in the buildings where they reside.

The industrial implementations have not been as successful…yet. Per Gartner, 65 to 85% of industrial IoT projects to date have been stuck in pilot mode, with 28% of those stuck for more than two years. There are three key reasons for this. The first is 4G's limit of approximately 400 devices per km². This limitation will be fixed as 5G rolls out. The second is the same issue as with 5G: database application infrastructure not suited to the volume and performance required by industrial IoT. And the third is latency from the IoT edge devices to the analytics, whether in the on-premises data center (core) or the cloud. Speed-of-light latency is a major limiting factor for real-time analytics and real-time actionable information. This has led to the very rapid rise of edge-fog-cloud or core computing.

Moving analytic processing out to the edge or fog significantly reduces distance latency between where the data is being collected and where it is being analyzed. This is crucial for applications such as autonomous vehicles. The application must make decisions in milliseconds, not seconds. It may have to decide whether a shadow in the road is actually a shadow, a reflection, a person, or a dangerous hazard to be avoided. The application must make that decision immediately and cannot wait. By pushing the application closer to the data collection, it can make that decision in the timely manner that’s required. Smart grids, smart cities, smart water management, and smart traffic management are all examples requiring fog (near the edge) or edge computing analytics. This solves the problem of distance latency; however, it does not resolve analytical latency. Edge and fog computing typically lack the resources to provide ultra-fast database analytics. This has led to the deployment of microservices.

Microservices have become very popular over the past 24 months. They tightly couple a database application with a database that has been extremely streamlined to do only the few things the microservice requires. The database may be a pared-down relational, time-series, key-value, JSON, XML, or object store, among others. The database application and its database are inextricably linked. The combined microservice is then pushed down to the edge or fog compute device and its storage. A microservice has no access to any other microservice's data or database. If it needs access to another microservice's data element, that access is difficult and manually labor-intensive to arrange. Each of the microservices must be reworked to grant that access, or the data must be copied and moved via an extract, transform, and load (ETL) process, or the data must be duplicated in an ongoing manner. Each of these options is laborious, albeit manageable, for a handful of microservices. But what about hundreds or thousands of microservices, which is where it’s headed? This sprawl becomes unmanageable and ultimately unsustainable, even with AI/ML.

AI/ML is clearly a hot tech trend today. It’s showing up everywhere, in many applications. This is because standard CPU processing power is now powerful enough to run AI/machine learning algorithms. AI/ML typically shows up in one of two variations. The first has a defined, specific purpose. It is utilized by the vendor to automate a manual task requiring some expertise. An example of this is in enterprise storage: the AI/ML is tasked with placing data based on performance, latency, and data protection policies and parameters determined by the administrator. It then matches that to the hardware configuration. If performance should fall outside of the desired parameters, the AI/ML looks to correct the situation without human intervention. It learns from experience and automatically makes changes to accomplish the required performance and latency. The second variation of AI/ML is a toolkit that enables IT pros to create their own algorithms.

The first is an application of AI/ML. It obviously cannot be utilized outside the tasks it was designed to do. The second is a set of tools that requires considerable knowledge, skill, and expertise to use. It is not an application. It merely enables applications to be developed that take advantage of the AI/ML engine, and that requires a very steep learning curve.

Oracle is the first vendor to solve each and every one of these tech trend problems. The Oracle Exadata X8M and Oracle Database Appliance (ODA) X8 are uniquely suited to solve the 5G and IoT application database infrastructure problem, the edge-fog-core microservices problem, and the AI/ML usability problem. 

It starts with the co-engineering. The compute, memory, storage, interconnect, networking, operating system, hypervisor, middleware, and the Oracle 19c Database are all co-engineered together. Few vendors have complete engineering teams for every layer of the software and hardware stacks to do the same thing. And those who do have shown zero inclination to take on the intensive co-engineering required. Oracle Exadata alone has 60 exclusive database features not found in any other database system, including others running the same Oracle Database. Take, for example, Automatic Indexing. It occurs multiple orders of magnitude faster than the most skilled database administrator (DBA) and delivers noticeably superior performance. Another example is data ingest. Extensive parallelism is built into every Exadata, providing unmatched data ingest. And keep in mind, the Oracle Autonomous Database is utilizing the exact same Exadata Database Machine. The results of that co-engineering deliver unprecedented database application latency reduction, response time reduction, and performance increases. This enables the application database infrastructure to match and be prepared for the volume and performance of 5G and IoT.

The ODA X8 is ideal for edge or fog computing, coming in at approximately 36% lower total cost of ownership (TCO) over 3 years than commodity white-box servers running databases. It’s designed to be a plug-and-play Oracle Database turnkey appliance. It runs the database application too. Nothing is simpler, and no white-box server can match its performance.

The Oracle Exadata X8M is even better for core or fog computing, where its performance, scalability, availability and capability are simply unmatched by any other database system. It too is architected to be exceedingly simple to implement, operate, and manage.

The combination of the two working in conjunction across the edge-fog-core makes the application database latency problems go away. They even solve the microservices problems. Each Oracle Exadata X8M and ODA X8 provides pluggable databases (PDBs). Each PDB is its own unique database working off the same stored data in the container database (CDB). Each PDB can be the same or a different type of Oracle Database, including OLTP, data warehousing, time series, object, JSON, key value, graphical, spatial, XML, even document database mining. The PDBs work on virtual copies of the data. There is no data duplication. There are no ETLs. There is no data movement. There are no data islands. There are no runaway database licenses or database hardware sprawl. Data does not go stale before it can be analyzed. Any data that needs to be accessed by one or multiple PDBs can easily be configured for that access. Edge-fog-core computing is solved. If the core needs to be in a public cloud, Oracle solves that problem as well with the Oracle Autonomous Database, which provides the same capabilities as Exadata and more.
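As a small, generic illustration of the multitenant layout described above (this is standard Oracle multitenant behaviour rather than an Exadata-specific feature, and the connection details are placeholders), the python-oracledb driver can list the PDBs sharing one CDB:

    # Sketch: list the pluggable databases (PDBs) inside one container database
    # (CDB). Connection details are placeholders; querying V$PDBS requires a
    # suitably privileged common user connected to the CDB root.
    import oracledb

    conn = oracledb.connect(user="c##monitor", password="change_me",
                            dsn="dbhost.example.com/CDB1")
    with conn.cursor() as cur:
        cur.execute("SELECT name, open_mode FROM v$pdbs ORDER BY name")
        for name, open_mode in cur:
            print(f"{name}: {open_mode}")
    conn.close()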

That leaves the AI/ML usability problem. Oracle solves that one too. Both Oracle Engineered Systems and the Oracle Autonomous Database have AI/ML engineered inside from the outset, not just a toolkit on the side. Oracle AI/ML comes with pre-built, documented, and production-hardened algorithms in the Oracle Autonomous Database cloud service. DBAs do not have to be data scientists to develop AI/ML applications. They can simply utilize the extensive Oracle library of AI/ML algorithms in Classification, Clustering, Time Series, Anomaly Detection, SQL Analytics, Regression, Attribute Importance, Association Rules, Feature Extraction, Text Mining Support, R Packages, Statistical Functions, Predictive Queries, and Exportable ML Models. It’s as simple as selecting the algorithms to be used and using them. That’s it. No algorithms to create, test, document, QA, patch, and more.

Taking advantage of AI/ML is as simple as implementing Oracle Exadata X8M, ODA X8, or the Oracle Autonomous Database.   Oracle solves the AI/ML usability problem.

The latest tech trends of 5G, Industrial IoT, edge-fog-core or cloud computing, microservices, and AI/ML have the potential to truly be transformative for IT organizations of all stripes.  But they bring their own set of problems.  Fortunately, for organizations of all sizes, Oracle solves those problems.


          

Future factories: smart but still with a human touch

Equipment will be run by AI and machine learning, increasing the need for data scientists and engineers
          

UBC study highlights need to improve health care access in Vancouver, Portland and Seattle

UBC researchers have developed a data science method that analyzes how easily citizens can access hospitals and walk-in health clinics – and it’s a tool that could eventually help city planners and policymakers build smarter, more equitable cities.
          

Data Scientist (SAS / R Programmer - Data Mining) - (San Diego, California, United States)

Performs advanced data design and analysis using the Electronic Medical Record (EMR) to create new knowledge useful to KP and outside organizations. Extracts and cleans large volumes of healthcare data across dozens of sources. Applies expert statistical methodologies to clinical outcomes. Partners effectively with physicians, other clinicians, software engineers and business managers to communicate findings using data visualization techniques. Essential Functions: - Works with physicians, epidemiologists, statisticians, business managers, and software engineers, to formulate and scope questions and translate knowledge into care transformation. - Develops algorithms and predictive models to solve critical health service problems. - Influences a general audience to understand the quality, completeness, and appropriate use of data. - Provides advice about choice of statistical approaches, performs statistical analysis. - Develops tools and libraries to create efficiencies for future work. - Develops systematic approaches to assure data validity. - Identifies new sources of data within the electronic medical record that will improve information about target diseases and clinical processes. - Establishes links across existing data sources; processes large volumes of data needed for complex research/operational studies. - Trains Data Consultants and other data workers.
          

Data Scientist, Predictive Modeling and Analytics - (Oakland, California, United States)

The Predictive Modeling team accelerates the use of machine learning and statistical modeling to solve business and operational challenges and promote innovation at Kaiser Permanente. As a data scientist, you will have the opportunity to build models and data products powering membership forecasting, dynamic pricing, product recommendation, and resource allocation. The diversity of data science problems we are working on reflects the complexity of our business and the multiple steps involved in delivering an exceptional customer experience. It spans experimentation, machine learning, optimization, econometrics, time series, spatial analysis ... and we are just getting started. What You'll Do: - Use expertise in causal inference, machine learning, complex systems modeling, behavioral decision theory etc. to attract new members to Kaiser Permanente. - Conduct exploratory data analysis to generate membership growth insights from experimental and observational data. This includes both cross-sectional and longitudinal data types. - Interact with sales, marketing, underwriting, and actuarial teams to understand their business problems, and help them with defining and implementing scalable solutions to solve them. - Work closely with consultants, data analysts, data scientists, and data engineers to drive model implementations and new algorithms. The Data Analytic Consultant provides support in making strategic data-related decisions by analyzing, manipulating, tracking, internally managing and reporting data. These positions function both as a consultant and as a high-level data analyst. The position works with clients to develop the right set of questions/hypotheses, using the appropriate tools/skills to derive the best information to address customer needs. Essential Functions: - Provides direction for assigned components of project work. - Coordinates team/project activities & schedules. - With some feedback and mentoring, able to develop proposals for clients, project, structure, approach, & work plan. - Given scope of work, minimal feedback/rework is necessary. - Evaluates effectiveness of actions/programs implemented. - Researches key business issues, directs the collecting and analyzing of quantitative and qualitative data. - Proactively records workflows, deliverables, and standing operating procedures for projects. - Developing specified QA procedures to ensure accuracy of project data, results and written reports & presentation materials.
          

Apple Podcasts Data Scientist, Apple Media Products Data Science - Apple - Santa Clara Valley, CA

Domain knowledge in the podcasting space is a plus. At Apple, new ideas have a way of becoming great products, services, and customer experiences very quickly.
From Apple - Thu, 02 May 2019 01:53:27 GMT - View all Santa Clara Valley, CA jobs
          

Alteryx Acquires Feature Labs, An MIT-Born Machine Learning Startup


Data science is one of the fastest growing segments of the tech industry, and Alteryx, Inc. is front and center in the data revolution. The Alteryx Platform provides a collaborative, governed platform to quickly and efficiently search, analyze and use pertinent data. To continue accelerating innovation, Alteryx announced it has purchased a startup with roots…

The post Alteryx Acquires Feature Labs, An MIT-Born Machine Learning Startup appeared first on WebProNews.


          

Data Scientist / Engineer

Company Description: Believe in better. Deezer is a global music streaming service with over 53 million tracks and a leading presence in over 180 countries, with 14 million monthly active users. Behind the code and the pixels is our team...
          

Sr. Data Practitioner - Analyst, Data Scientist, BI Developer, Researcher - Insights Analytics - Cheyenne, WY

Analysis, Insights, Strategy, Testing, Data Science, Development, Business Intelligence, Reporting, Data Warehousing, Data Architecture, Cloud Deployment,…
From Indeed - Sun, 06 Oct 2019 12:17:36 GMT - View all Cheyenne, WY jobs
          

Data Scientists Are in Short Supply, But Does Manufacturing Really Need Them?

Finding the right people for an analytics team may mean looking within.
          


Training course: Data science for education management and policy (e-learning)

Academic records and standardized tests provide a treasure trove of information about our students and what they need from the education system. Data science can help educators gather valuable insights from this flood of information. In this course, an overview of this […]
          


How an underdog Fresno startup finds local talent


One thing about building a business that no one ever tells you: your company’s culture is set in stone by the time you hire your tenth employee. Who you hire largely determines your ability to succeed; a recent study found 65 percent of startups fail due to people-related reasons. No pressure, right? 

We're Bitwise Industries, a Central California startup driving economic growth despite being far from the streets of Los Angeles and the high-tech workspaces of the Bay Area. Bitwise taps into the “human potential” of our hometown of Fresno in three key ways: teaching digital skills at our coding school, renovating buildings to provide physical spaces for more than 200 startups and hiring local tech talent at our custom software development firm. 

After five years and more than 125 hires, we have a 90 percent retention rate and a team that is as diverse as Fresno itself: 40 percent women, 50 percent people of color and 20 percent first-generation Americans. We’ve trained more than 4000 local people to code and created more than 1000 jobs, yet the question we get asked most during this period of very fast growth is: "How do you find talent to hire?"

We built our company largely by using free tools (like Google Analytics), by borrowing resources (transforming abandoned buildings into coworking spaces) and by tapping into a nontraditional talent pool (30 percent of the population of Fresno lives below the poverty level). To say we are underdogs is an understatement. So getting things like hiring right doesn’t just fit nicely on a bumper sticker; it’s crucial to the survival of our business. 

If you’re an underdog like us, here’s my advice for how to find great talent in unexpected places.

Take a little extra time. 

The world would have you believe that all the most talented people are already locked up in great jobs. This is categorically false. The more people we teach, the more talent grows. When you look for nontraditional people from nontraditional places and you take an honest bet on them, the idea of any “talent war” goes away.

Have their back, and make sure they know it. 

Your employees have to know you have their backs. Most of them could work anywhere, and they choose to work for you, so treat that like the gift it is. Do that right, and they’ll have your back, too.

When you hire, make sure you’d be willing to stand up for this person in a fight. At the end of a grueling and disappointing period of time, or when mistakes get made, you have to be willing to go to bat for your people. 

Understand that diversity is great for business. 

Inclusion isn’t just the right thing to do. It’s also a smart business move. The wide-ranging points of view that employees of diverse backgrounds contribute allow a business to attack complex, fast-moving problems from a variety of angles. Cultivate a team that’s up for the challenge.

Partner up.

While founders wear many hats, you simply can’t do everything yourself. Collaborate with like-minded people and organizations to amplify the efforts—like training diverse talent—that really matter to you and reach the people you may want to hire. 

Bitwise Industries is excited to work with Google to create even more opportunities in and beyond Central California. We’re partnering with Grow with Google to provide workshops, resources and trainings related to online marketing, data science, design and more. We’re also teaming up with Google for Startups to offer scholarships for our new six-month founders’ development program, intended to help aspiring entrepreneurs of all backgrounds create product-driven, revenue-generating companies. It’s great to know that Google is as invested in us as founders, like we at Bitwise are invested in the people of “underdog” cities like Fresno. Great talent can—and does—come from anywhere.


          

Sr Product Mgr- Data Science & Analytics - Discovery Communications, LLC - Bellevue, WA

Uncover the needs of internal customers, including product, marketing, business, and operations teams, and couple them with actionable data solutions.
From Discovery Communications, LLC - Fri, 17 May 2019 17:56:04 GMT - View all Bellevue, WA jobs
          

In New Book, Cambridge Analytica Whistleblower Stops Short Of A Full Mea Culpa

Ever get mad online? Think about publicly dunking on someone's take on politics or race or some ongoing cultural conversation? Turns out that while it may not be personally productive in the end, it could potentially lead to much bigger problems: a gap in democracy, say, thanks to hackers who might be watching, recording and taking notes — making it their mission to build millions of personality profiles. Enter, Christopher Wylie. The short version of Wylie's story goes like this: He's the whistle-blowing data scientist who worked for Cambridge Analytica — where he looked at all your Facebook posts and likes and rants, and distilled that information so people could figure out how to talk to you such that you'd be convinced to act in a certain way. The longer version of Wylie's story is told in his new memoir, Mindf*ck: Cambridge Analytica and the Plot to Break America. In it, he shows himself as a society outsider — queer, differently abled, not particularly interested in fitting in at
          

Jeff Leek: “Data science education as an economic and public health intervention – how statisticians can lead change in the world”

Jeff Leek from Johns Hopkins University is speaking in our statistics department seminar next week: Data science education as an economic and public health intervention – how statisticians can lead change in the world Time: 4:10pm Monday, October 7 Location: 903 School of Social Work Abstract: The data science revolution has led to massive new […]
          

Many perspectives on Deborah Mayo’s “Statistical Inference as Severe Testing: How to Get Beyond the Statistics Wars”

This is not new—these reviews appeared in slightly rawer form several months ago on the blog. After that, I reorganized the material slightly and sent to Harvard Data Science Review (motto: “A Microscopic, Telescopic, and Kaleidoscopic View of Data Science”) but unfortunately reached a reviewer who (a) didn’t like Mayo’s book, and (b) felt that […]
          

Offer - Lead Data Engineer (US Citizens/Green Card Holders Only) - USA

Job Description: Hands-on engineering leadership. Proven track record of innovation and expertise in data engineering. Tenure in engineering and delivering complex projects. Ability to work in multi-cloud environments. Deep understanding and application of modern data processing technology stacks, for example AWS Redshift, Azure Parallel Data Warehouse, Spark, Hadoop ecosystem technologies, and others. Knowledge of how to architect solutions for data science and analytics, such as building production-ready machine learning models and collaborating with data scientists. Knowledge of agile development methods including core values, guiding principles, and essential agile practices. Requirements: At least eight years of software development experience. At least three years of healthcare experience. At least three years of experience using Big Data systems. Strong SQL writing and optimizing skills for AWS Redshift and Azure SQL Data Warehouse. Strong experience working in Linux-based environments. Strong in one or more languages (Python/Ruby/Scala/Java/C++). Preferred Qualifications: Experience with messaging, queuing, and workflow systems, especially Kafka or Amazon Kinesis. Experience with non-relational, NoSQL databases and various data-storage systems. Experience working with Machine Learning and Data Science teams, especially creating architecture for experimentation versus production execution. Experience integrating with CI tools programmatically.
          

PyODDS: An End-to-End Outlier Detection System. (arXiv:1910.02575v1 [cs.LG])


Authors: Yuening Li, Daochen Zha, Na Zou, Xia Hu

PyODDS is an end-to-end Python system for outlier detection with database support. PyODDS provides outlier detection algorithms which meet the demands of users in different fields, with or without a data science or machine learning background. PyODDS gives the ability to execute machine learning algorithms in-database without moving data out of the database server or over the network. It also provides access to a wide range of outlier detection algorithms, including statistical analysis and more recent deep learning based approaches. PyODDS is released under the MIT open-source license, and is currently available at https://github.com/datamllab/pyodds with official documentation at https://pyodds.github.io/.
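As a minimal illustration of the kind of workflow such a system supports (fit a detector, score points, flag outliers), here is a sketch using scikit-learn's IsolationForest on synthetic data rather than the PyODDS API itself:

    # Generic outlier-detection sketch (scikit-learn, not the PyODDS API itself).
    import numpy as np
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(0)
    inliers = rng.normal(0.0, 1.0, size=(500, 2))     # dense "normal" cluster
    outliers = rng.uniform(-6.0, 6.0, size=(10, 2))   # scattered anomalies
    X = np.vstack([inliers, outliers])

    detector = IsolationForest(contamination=0.02, random_state=0).fit(X)
    labels = detector.predict(X)          # +1 = inlier, -1 = outlier
    print("points flagged as outliers:", int((labels == -1).sum()))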


          

Director of Commercial Strategy and Business Development - MGH & BWH Center for Clinical Data Science - Boston, MA

Responsible for communicating contract terms to internal CCDS teams. A minimum of 5 years of relevant work experience, ideally in a top-tier business…
From MGH & BWH Center for Clinical Data Science - Wed, 02 Oct 2019 04:38:46 GMT - View all Boston, MA jobs
          

Data Scientist, Senior - Booz Allen Hamilton - Washington, DC

Experience with applying statistical or mathematical analysis techniques to business problems and marketing solutions to non-technical clients.
From Booz Allen Hamilton - Mon, 07 Oct 2019 20:29:52 GMT - View all Washington, DC jobs
          

Deployment Engineer (Machine Learning) - Clarifai - Washington, DC

Experience with machine learning or data science experiments. You will develop reusable modules, components, and build tools for both internal and external use…
From Clarifai - Thu, 12 Sep 2019 16:52:34 GMT - View all Washington, DC jobs
          

Sustaining Engineer (Multiple States) - h2o.ai - Mountain View, CA

Understanding of Data Science, Machine Learning concepts, algorithms etc. Ability to work with Data Science, Machine Learning, BigData Hadoop technologies.
From h2o.ai - Fri, 01 Mar 2019 06:27:40 GMT - View all Mountain View, CA jobs
          

Sr. Enterprise Account Exec- Data Science / ML - Chicago - h2o.ai - Chicago, IL

You bring a passion for innovation and translating complex technology into a high-ROI business outcome. Team player that knows how and when to enlist internal…
From h2o.ai - Fri, 30 Aug 2019 23:25:05 GMT - View all Chicago, IL jobs
          

Sr. Enterprise Account Exec- Data Science / ML - NYC - h2o.ai - New York, NY

You bring a passion for innovation and translating complex technology into a high-ROI business outcome. Team player that knows how and when to enlist internal…
From h2o.ai - Fri, 30 Aug 2019 23:25:05 GMT - View all New York, NY jobs
          

Customer Data Scientist (Chicago) - h2o.ai - Chicago, IL

Engineers, business people, and executives. Training advanced machine learning models at scale in distributed environments, influencing next generation data…
From h2o.ai - Fri, 23 Aug 2019 05:24:30 GMT - View all Chicago, IL jobs
          

Customer Data Scientist (Mountain View) - h2o.ai - Mountain View, CA

Engineers, business people, and executives. Training advanced machine learning models at scale in distributed environments, influencing next generation data…
From h2o.ai - Fri, 23 Aug 2019 05:24:30 GMT - View all Mountain View, CA jobs
          

Customer Data Scientist (New York) - h2o.ai - New York, NY

Engineers, business people, and executives. Training advanced machine learning models at scale in distributed environments, influencing next generation data…
From h2o.ai - Fri, 23 Aug 2019 05:24:30 GMT - View all New York, NY jobs
          

Parks Asset and Data Scientist - City of Winnipeg - Winnipeg, MB

Graphic design skills using Adobe Illustrator (or equivalent) are an asset. Under the general direction of the Parks Investment Strategies Coordinator, the Parks…
From City of Winnipeg - Tue, 01 Oct 2019 05:45:49 GMT - View all Winnipeg, MB jobs
          

Free ebook: Data Science Algorithms in a Week – Second Edition [$31.99 → 0]

packtpub.com is an ebook site that currently gives away a free ebook every day; today's giveaway is Data Science Algorithms in a Week - Second Edition. After registering an account you can get it for free, and the book's source code is also provided. The site seems to be visible again in normal browsing mode; if you don't see the offer, try your browser's incognito mode.
          

Data Science Analyst / Data Officer / Heavy Data :Data Information science (PTA-Irene)

Ellahi Consulting - Pretoria, Gauteng - a data/information science-related field or business administration Work experience... a data/information science-related position 5+ years progressive leadership experience in...
          

HEAD OF DATA SCIENCE -- lead the DATA and ANALYTICS activities for the fastest growing International DIGITAL RETAIL BANK - - JOHANNESBURG, R2.4 Million - R2.6 Million/annum

Acuity Consultants - Gauteng - Newlands, Western Cape - This is a unique and very SENIOR opportunity for a HEAD OF DATA SCIENCE to own and lead.... Based in JOHANNESBURG this HEAD OF DATA SCIENCE role offers a salary of R2.4 Million – R2...
          

Data Science Officer Big Data

Alam Ellahi & Associates - Pretoria, Gauteng - Data information science exp. Reporting to the Chief Digital Officer you will be responsible... for management of: Datawarehousing;Data analytics;Data science, Big data initiatives; Information...
          

Data Officer : Data/Information science (PTA-Iren

DGL HR - Gauteng - experienced Data Officer with 5+ yrs Data information science exp. Reporting to the Chief Digital... Officer you will be responsible for management of: Data warehousing; Data analytics; Data science...
          

Head, Data Science

Standard Bank - Rosebank, Johannesburg - : Together with the business unit Head Data Science and the human capital partner for the area.... With support from the business unit Head Data Science and Human Capital, interview and hire direct...
          

Publicis Groupe makes the launch of Epsilon France official

The new structure will bring together the 750 employees of the group's four data marketing entities: Soft Computing, Publicis ETO, Publicis Media Data Sciences and the French teams of Epsilon, which was acquired last July.
          

Data Scientist Cognizant

Cognizant - London - Job Role: Data Scientist Every functional area, in every industry, is on an AI Journey... learning at scale: ·Building data pipelines th......
          

Staff Data Scientist - Industry 4.0 & Smart Manufacturing - Seagate Technology - Singapore

Lead Seagate advanced analytics projects related to Industry 4.0 and Smart Manufacturing. 193154 Staff Data Scientist - Industry 4.0 & Smart Manufacturing (Open…
From Seagate Technology - Thu, 26 Sep 2019 07:47:42 GMT - View all Singapore jobs
          

‘The Capabilities Are Still There.’ Why Cambridge Analytica Whistleblower Christopher Wylie Is Still Worried

In March 2018, Christopher Wylie blew the whistle on Cambridge Analytica, a political consultancy that worked for the Trump campaign. Cambridge Analytica, the Canadian data scientist revealed, had illegally obtained the Facebook information of 87 million people and used it to build psychological profiles of voters. Using cutting-edge research, Cambridge Analytica — which was funded…
          

Senior Python Developer and Data Scientist / Developer/Software Engineer

FL-Orlando, Make a difference. Ciber Global wants you. Come build new things with us and advance your career. At Ciber Global you’ll collaborate with experts. You’ll join successful teams contributing to our clients’ success. You’ll work side by side with our clients and have long-term opportunities to advance their top priorities. Job Description: 5 years of experience with Python. 3 years of experience with
          

Every platform needs a centrality to its content: VOOT’s Gourav Rakshit at Vidnet 2019


MUMBAI: Viacom18’s digital venture VOOT is set to expand the horizon of its business with new moves such as the upcoming launch of its subscription-based model in this calendar year and the full-fledged commercial launch of VOOT Kids. While it will maintain equal focus on the advertising-led platform, Viacom18 Digital Ventures COO Gourav Rakshit thinks it is an opportune time to enter the SVOD play.

At Indiantelevision.com’s Vidnet 2019, Rakshit spoke on industry issues as well as VOOT’s content strategy going forward in a candid fireside chat with Indiantelevision.com founder, CEO and editor-in-chief Anil Wanvari. Wanvari set the tone of the discussion by asking about his lessons from Shaadi.com and his view of the industry as an outsider.

Rakshit said that Shaadi.com was obviously a fascinating journey for him, as he gained more insights into the history of India and the communities of the country. It also helped him to touch India at a very core level, as the site signed up about 10-15k people every day, which largely represents middle-class India.

“I think from a learning point of view one of the things I can definitely bring is data science and AI models being used in match-making. They are very advanced. People talk about recommendations on OTT space. So, the order of magnitude of the algorithm on Shaadi is much higher than you have to in this space. Another thing is we ran the entire business on subscription model creating sufficient value to reach premium customers.”

However, he also shed light on the new things he had to face as he switched industries. “I did underestimate the long gestation period of content between concept and execution or original launch. At Shaadi.com we were one week away from the idea of execution. This industry has to think where the ball will be one year from now. So, it definitely has a layer of complexity,” he added further.

The other challenge is the supply-demand quotient. There has been a sudden burst of appetite but the growth in content creators can’t keep pace. However, he believes this is a short-term problem.

According to him, the OTT industry is still finding its footing. The industry also has to discover the right price points. “As you start to move from the extremely western psychological demographic to what we call India, then Bharat, the nature of the content that we produce and we are going to put on these platforms needs to be fundamentally different. There, I think the broadcast industry, which has sharpened over years of understanding what these consumers want, is going to have an edge both by virtue of the library of content and understanding the sensibilities of the users. In the near term, clearly, the highest disposable income is with people that have a westernised psychography,” he said.

He also added that every platform needs a centrality to its content that people want to consume and pay for. “We do daily digital soaps which are only available online and do exceedingly well. We have a base of extremely loyal people. In fact, we just concluded a season a few months ago. It is called Silsila and that had a massive fandom. So appointment-driven viewing on the internet as opposed to the only premise we are able to which is binge-watch is where we see a compelling case,” he also noted.

Rakshit noted that the one thing that digital offers but television is unable to provide is on-demand viewing. According to him, if consumers are able to choose when they want to watch something then they can choose what and how they want to watch, and then they will be looking for something which is closer to their own liking. Hence, there has to be an understanding of writing what these new-age consumers are looking for.

According to Rakshit, VOOT has an interesting content line up that is ready to hit the market. Moreover, the entry of fibre-to-home services will also help the platform at an ecosystem level. He also mentioned that as the industry progressed, the price discovery of subscription-based services would also be helpful for the platform.

Speaking on VOOT’s kids’ strategy, he said, “It’s a departure from pure-play video streaming and primarily because we recognise that the target group in that is in the age group of 4-8 years and they are not using their own phones. They are using the device of their parents with their explicit permission. As a form of engagement, we got multiple-choice questions, audiobooks sort of narrative as well as large-format books which people can read in electronic media. Our early indications are that parents like that.”


          

Taboola and Outbrain to Merge to Create Meaningful Advertising Competitor to Facebook and Google


NEW YORK: Taboola and Outbrain, two digital advertising platforms, today announced that they have entered into an agreement to merge, subject to customary closing conditions. Both companies’ Boards of Directors have approved the transaction. The combined company will provide enhanced advertising efficacy and reach to marketers worldwide, while helping news organizations and other digital properties more effectively find growth in the years to come.

“Over the past decade, I’ve admired Outbrain and the innovation that Yaron Galai, Ori Lahav and the rest of the Outbrain team have brought to the marketplace. By joining forces, we’ll be able to create a more robust competitor to Facebook and Google, giving advertisers a more meaningful choice,” said Adam Singolda, Founder & CEO of Taboola. According to eMarketer, almost 70% of total U.S. digital advertising revenue in 2019 is controlled by only three companies: Google, Facebook and Amazon. “We’re passionate about driving growth for our customers and supporting the open web, which we consider critical in a world where walled gardens are strong, and perhaps too strong. Working together, we will continue investing to better connect advertising dollars with local and national news organizations, strengthening journalism over the next decade. This is why we’re merging; this is our mission.”

“We are excited to partner with Taboola. Both Outbrain and Taboola have a shared mission and vision of supporting quality journalism globally and delivering meaningful value to the open web marketplace,” said Yaron Galai, co-Founder and co-CEO of Outbrain. “Ori and I had a vision of helping people discover quality content online, and we see a tremendous opportunity in joining forces in order to bring the next wave of innovation to our publisher partners and advertisers. I’m confident that together, we will be able to further our mission, which we call our Lighthouse, of bringing the best, most trustworthy content discovery capabilities to users around the world.”

Upon closing, Adam Singolda, the Founder and current CEO of Taboola, will assume the CEO position of the combined company, which will operate under the Taboola brand name, with branding to be determined and to reflect the merger of the two companies. Under the terms of the merger agreement Outbrain shareholders will receive shares representing 30% of the combined company plus $250 million of cash. The Board of Directors of the combined company will consist of current Taboola and Outbrain Management and Board members. Eldad Maniv, President & COO of Taboola and David Kostman, co-CEO of Outbrain will work closely together on managing all aspects of the post-merger integration. Yaron Galai will remain committed to the success of the combined company, and actively assist with the transition for the 12 months following the closing.

“We are fortunate to have great talent at both Outbrain and Taboola,” said Eldad Maniv. “As soon as the merger closes, we will work to integrate teams, technologies and infrastructures so we can quickly accelerate growth across all dimensions. We have set aggressive goals for bringing value to our customers, driving technology innovation and delivering financial results to our shareholders through increased efficacy and innovation. By working with David and the Outbrain team, I’m confident we can achieve them.”

“For over 10 years, each company has built incredibly powerful solutions that have helped tens of thousands of publishers and advertisers thrive,” said David Kostman. “I look forward to working together with Eldad and his team to bring together the best of each company’s technology, product and business expertise to build a compelling global open web alternative to Google and Facebook.”

The combined company will have over 2,000 employees across 23 offices, serving over 20,000 clients in more than 50 countries across the North America, Latin America, Europe, Middle East and Asia-Pacific regions.

Compelling Strategic and Financial Rationale for the Merger Key strategic benefits of the merger include:

1. Increased Advertiser Choice: The combined company will be able to provide advertisers, from small businesses to global brands, with a meaningful competitive alternative to Google and Facebook, the companies currently known as the “Duopoly” that command the vast majority of digital ad spend.

2. Greater Advertising Efficiency: A unified and consolidated buying platform will provide advertisers with greater efficiencies, helping them reach their awareness, consideration and conversion goals.

3. Higher Revenue and User Engagement to Publishers, Mobile Carriers and Mobile OEMs: Through increased investment in technology and expanded reach, the combined platform will be able to increase revenue to publishers, mobile carriers and device manufacturers, and drive better user engagement.

4. Accelerated Innovation: By combining two of the strongest data science and AI teams in the industry, and by accelerating investment in R&D, the company will be able to better address the evolving needs of its partners and customers.

5. Better Consumer Experience: Increasingly, Taboola’s and Outbrain’s solutions are embraced directly by consumers to help them discover what’s interesting and new, at moments when they’re ready to explore. For example, Taboola News is now embedded in over 60 million Android devices worldwide. The combined company will be able to accelerate the development of such innovative solutions, improving people’s ability to enjoy quality journalism.

Representation J.P. Morgan Securities LLC acted as a financial advisor to Taboola. Goldman, Sachs & Co. acted as a financial advisor to Outbrain. Meitar Liquornik Geva Leshem Tal Law Offices and Davis Polk & Wardwell LLP acted as legal counsel to Taboola, and Meitar Liquornik Geva Leshem Tal Law Offices, White & Case LLP and Wilson Sonsini Goodrich & Rosati acted as legal counsel to Outbrain.

About Taboola Taboola helps people discover what’s interesting and new. The company’s platform and suite of products, powered by deep learning and the largest dataset of content consumption patterns on the open web, is used by over 20,000 companies to reach over 1.4 billion people each month. Advertisers use Taboola to reach their target audience when they’re most receptive to new messages, products and services. Digital properties, including publishers, mobile carriers and handset manufacturers, use Taboola to drive audience monetization and engagement. Some of the most innovative digital properties in the world have strong relationships with Taboola, including CNBC, NBC News, USA TODAY, BILD, Sankei, Huffington Post, Microsoft, Business Insider, The Independent, El Mundo, and Le Figaro. The company is headquartered in New York City with offices in 15 cities worldwide.


          

USU Data Scientist Contributes to Multi-Institution Dengue Fever Study


Once considered relatively rare, dengue fever is popping up throughout the globe, including the United States. The mosquito-borne virus is having a particularly active year, which some public health officials attribute, at least partially, to a warming climate.

Transmitted to humans through the bite of the female Aedes aegypti mosquito, dengue causes fever, vomiting, headache, muscle and joint pain, as well as skin rashes. Most people infected with the virus recover, but the disease can escalate into lethal complications. And, curiously, while people who’ve recovered from the virus develop immunity to the strain that infected them, they often become more susceptible to infection by different strains of the virus.

Utah State University data scientist Kevin Moon is among a group of researchers, led by Yale University, that’s recently completed a large-scale study of the virus using single-cell data from biological samples collected from infected people in India. Supported by the National Institutes of Health and the Indo-U.S. Vaccine Action Program, the research team includes scientists from India’s National Institute of Mental Health and Neurosciences. The group published findings in the Oct. 7 issue of Nature Methods.

“My role in this project included contributing to the development of ‘SAUCIE,’ a data analysis method designed to tackle very large datasets, such as the one collected for this study,” says Moon, assistant professor in USU’s Department of Mathematics and Statistics, who specializes in data science and machine learning. “The team applied SAUCIE to a 20 million-cell mass cytometry dataset with genetic and molecular information from 180 samples collected from 40 subjects.” 

SAUCIE, which stands for “Sparse Autoencoder for Unsupervised Clustering, Imputation and Embedding,” is a multi-layered deep neural network, which allows researchers to extract detailed information from large quantities of single cells.

“Collecting useful data for this kind of application requires getting information from very large samples of individual cells,” Moon says. “Without a large set, you can’t collect a good representation of the many types of cells, including rare cells.”

But developing computational tools to handle so much information is a challenge.

“That’s where neural networks, like SAUCIE, come in,” Moon says. “Neural networks, constructed from a set of algorithms and modeled loosely after the human brain, are designed to recognize patterns in the data.”

SAUCIE, he says, offers four main capabilities. 

“First of all, it clusters data into similar groups which, in this case, allowed the researchers to segment cells into similar groups and ferret out rare cell populations,” Moon says. “Secondly, SAUCIE is good at ‘de-noising’ data.”

That is, SAUCIE refines data, eliminating distracting information.

A third feature of SAUCIE is batch correction, he says, that eliminates non-biological effects caused by variations in sample collection and analysis.

Finally, SAUCIE enables data visualization.

“This is a powerful analysis tool that allows researchers to visually explore patterns in the data,” Moon says. 
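For readers unfamiliar with the underlying idea, here is a minimal sketch of a bottleneck autoencoder in Keras; it is a generic toy model, not the published SAUCIE architecture, which adds specific regularizations for clustering, batch correction, and imputation:

    # Generic bottleneck autoencoder (a toy sketch, not the published SAUCIE model).
    # The 2-D bottleneck provides an embedding for visualization, and the
    # reconstruction acts as a denoised version of the input.
    import numpy as np
    import tensorflow as tf

    n_markers = 30  # e.g. number of measured markers per cell
    inputs = tf.keras.Input(shape=(n_markers,))
    h = tf.keras.layers.Dense(128, activation="relu")(inputs)
    embedding = tf.keras.layers.Dense(2, name="embedding")(h)    # visualization layer
    h = tf.keras.layers.Dense(128, activation="relu")(embedding)
    outputs = tf.keras.layers.Dense(n_markers)(h)                # reconstruction

    autoencoder = tf.keras.Model(inputs, outputs)
    autoencoder.compile(optimizer="adam", loss="mse")

    cells = np.random.rand(10_000, n_markers).astype("float32")  # stand-in for real data
    autoencoder.fit(cells, cells, epochs=5, batch_size=256, verbose=0)

    encoder = tf.keras.Model(inputs, embedding)
    coords = encoder.predict(cells, verbose=0)                   # (10000, 2) embedding
    print(coords.shape)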

Having the ability to explore the cell data at this level will help researchers better understand the basic biology of how cells respond to the dengue virus from initial infection to the disease’s progression.

“The hope is this information will lead to preventive efforts and therapies for those infected,” Moon says.
 


          

Product Manager

  • Supervise, coordinate and lead cross-functional teams in an efficient and effective manner
  • Establish and maintain good relationships with partners and organizations
  • Utilize and apply strategic problem-solving and decision making for the development of new products
  • Identify opportunities to adopt innovative processes and technologies
  • Lead and align a team of software engineers, designers, web/Android developers, data scientists and sales managers to produce high-quality project deliverables
  • Meet project outcomes, including commitments around performance, schedule, cost, and resources
  • Collect requirements, feedback and data from users
  • Plan and implement training schedules for stakeholders and measure impact thereof
  • Provide marketing and competitive knowledge on products, key stakeholders and competitors to the product specialist.



© Googlier LLC, 2019