
Data Scientist - Big Data - belairdirect - Montréal, QC
Fluency in applied analytical techniques including regression analysis, clustering, decision trees, neural networks, SVM (support vector machines),...
From belairdirect - Thu, 13 Sep 2018 00:51:55 GMT - View all Montréal, QC jobs
Data Scientist - Big Data - Intact - Montréal, QC
Fluency in applied analytical techniques including regression analysis, clustering, decision trees, neural networks, SVM (support vector machines),...
From Intact - Wed, 12 Sep 2018 22:51:36 GMT - View all Montréal, QC jobs
Senior Data Scientist - Canadian National Railway - Montréal, QC
IVADO, Vector Institute, Scale.AI) to design and implement applied AI/data science models that solve real world problems. Senior Data Scientist....
From Canadian National Railway - Tue, 11 Sep 2018 07:15:16 GMT - View all Montréal, QC jobs
Mesosphere ups automation for ‘on-demand data science’
5 questions CEOs are asking about AI
Read Jill Dyche detail the five questions the CEO of any company is bound to ask before deploying AI in their organisation on CIO: Though AI has proved its competency, many CEOs are still sanguine about its usage in their company. Recently in a risk management meeting, I watched a data scientist explain to a group […]
Data Science Engineer - Intern - Seagate Technology - Singapore
We are looking for a motivated intern to join the R&D Media Application department for your internship from Jan to Jun 2019! About the Role Understand...
From Seagate Technology - Mon, 22 Oct 2018 01:47:51 GMT - View all Singapore jobs
Beyond Univariate, Single-Sample Data with MCHT

(This article was first published on R Curtis Miller's Personal Website, and kindly contributed to R-bloggers)

Introduction

I’ve spent the past few weeks writing about MCHT, my new package for Monte Carlo and bootstrap hypothesis testing. After discussing how to use MCHT safely, I discussed how to use it for maximized Monte Carlo (MMC) testing, then bootstrap testing. One may think I’ve said all I want to say about the package, but in truth, I’ve only barely passed the halfway point!

Today I’m demonstrating how general MCHT is, allowing one to use it for multiple samples and on non-univariate data. I’ll be doing so with two examples: a permutation test and the $F$ test for significance of a regression model.

Permutation Test

The idea of the permutation test dates back to Fisher (see [1]) and it forms the basis of computational testing for difference in mean. Let's suppose that we have two samples with respective means $\mu_X$ and $\mu_Y$. Suppose we wish to test

$H_0: \mu_X = \mu_Y$

against

$H_A: \mu_X > \mu_Y$

using samples $x_1, \ldots, x_n$ and $y_1, \ldots, y_m$, respectively.

If the null hypothesis is true and we also make the stronger assumption that the two samples were drawn from distributions that could differ only in their means, then the labelling of the two samples is artificial, and if it were removed the two samples would be indistinguishable. Relabelling the data and artificially calling one sample the $x$ sample and the other the $y$ sample would produce statistics highly similar to the one we actually observed. This observation suggests the following procedure:

1. Generate $N$ new datasets by randomly assigning labels to the combined sample of $x_1, \ldots, x_n$ and $y_1, \ldots, y_m$.
2. Compute $N$ copies of the test statistic, one on each of the new samples; suppose that the test statistic used is the difference in means, $\bar{x} - \bar{y}$.
3. Compute the test statistic on the actual sample and compare it to the simulated statistics. If the actual statistic is relatively large compared to the simulated statistics, then reject the null hypothesis in favor of the alternative; otherwise, don't reject.

In practice, step 3 is done by computing a $p$-value representing the proportion of simulated statistics larger than the one actually computed.
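To make the procedure concrete before bringing in MCHT, here is a bare-bones sketch of steps 1 through 3 in base R (my own illustration, not from the original post; the sample sizes and seed mirror the example used later, and the object names are mine):

set.seed(123)
x <- rnorm(5, 2, 1)     # first sample
y <- rnorm(10, 0, 1)    # second sample

obs_stat <- mean(x) - mean(y)   # observed difference in means
combined <- c(x, y)
N <- 1000

# Steps 1 and 2: randomly re-assign labels N times and recompute the statistic
perm_stats <- replicate(N, {
  shuffled <- sample(combined)
  mean(shuffled[1:5]) - mean(shuffled[6:15])
})

# Step 3: the p-value is the proportion of simulated statistics at least as large
mean(perm_stats >= obs_stat)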

Permutation Tests Using MCHT

The permutation test is effectively a bootstrap test, so it is supported by MCHT, though one may wonder how that's the case when the parameters test_stat, stat_gen, and rand_gen only accept one parameter, x, representing the dataset (as opposed to, say, t.test(), which has an x and an optional y parameter). But MCHTest() makes very few assumptions about what object x actually is; if your object is either a vector or tabular, then the MCHTest object should not have a problem with it (it's even possible a loosely structured list would be fine, but I have not tested this; tabular formats should cover most use cases).

In this case, putting our data in long-form format makes doing a permutation test fairly simple. One column will contain the group an observation belongs to while the other contains observation values. The test_stat function will split the data according to group, compute group-wise means, and finally compute the test statistic. rand_gen generates new datasets by permuting the labels in the data frame. stat_gen merely serves as the glue between the two.

The result is the following test.

library(MCHT)
library(doParallel)

registerDoParallel(detectCores())

ts <- function(x) {
  grp_means <- aggregate(value ~ group, data = x, FUN = mean)
  grp_means$value[1] - grp_means$value[2]
}

rg <- function(x) {
  x$group <- sample(x$group)
  x
}

sg <- function(x) {
  test_stat(x)
}

permute.test <- MCHTest(ts, sg, rg, seed = 123, N = 1000,
                        localize_functions = TRUE)

df <- data.frame("value" = c(rnorm(5, 2, 1), rnorm(10, 0, 1)),
                 "group" = rep(c("x", "y"), times = c(5, 10)))

permute.test(df)

##
##  Monte Carlo Test
##
## data:  df
## S = 1.3985, p-value = 0.036

Linear Regression F Test

Suppose for each observation $i$ in our dataset there is an outcome of interest, $y_i$, and there are $K$ variables $x_{i1}, \ldots, x_{iK}$ that could together help predict the value of $y_i$ if they are known. Consider then the following linear regression model (with $i = 1, \ldots, n$):

$y_i = \beta_0 + \beta_1 x_{i1} + \ldots + \beta_K x_{iK} + \epsilon_i$

The first question someone should ask when considering a regression model is whether it's worth anything at all. An alternative approach to predicting $y_i$ is simply to predict its mean value. That is, the model

$y_i = \beta_0 + \epsilon_i$

is much simpler and should be preferred to the more complicated model listed above if it's just as good at explaining the behavior of $y_i$ for all $i$. Notice the second model is simply the first model with all the coefficients $\beta_1, \ldots, \beta_K$ identically equal to zero.

The $F$-test (described in more detail here) can help us decide between these two competing models. Under the null hypothesis, the second model is the true model:

$H_0: \beta_1 = \ldots = \beta_K = 0$

The alternative says that at least one of the regressors is helpful in predicting $y_i$:

$H_A: \beta_j \neq 0$ for at least one $j$

We can use the $F$ statistic to decide between the two models:

$F = \frac{(SSE_2 - SSE_1)/K}{SSE_1/(n - K - 1)}$

$SSE_1$ and $SSE_2$ are the residual sums of squares of models 1 and 2, respectively.

This test is called the $F$-test because usually the F-distribution is used to compute $p$-values (as this is the distribution the $F$ statistic should follow when certain conditions hold, at least asymptotically if not exactly). What then would a bootstrap-based procedure look like?

If the null hypothesis is true then the best model for the data is this:

$y_i = \bar{y} + \hat{\epsilon}_i$

$\bar{y}$ is the sample mean of $y$ and $\hat{\epsilon}_i$ is the residual. This suggests the following procedure (a bare-bones sketch in base R follows below):

1. Shuffle over all rows of the input dataset, with replacement, to generate new datasets.
2. Compute $F$ statistics for each of the generated datasets.
3. Compare the $F$ statistic of the actual dataset to the generated datasets' statistics.
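Here is that bare-bones sketch (my own illustration, not from the original post); it builds the same setosa subset of iris that the next section constructs, and resamples the response with replacement to approximate the null distribution of the $F$ statistic:

set.seed(123)
setosa_df <- subset(iris, Species == "setosa")   # same subset built in the next section

# observed F statistic of the regression
f_obs <- summary(lm(Sepal.Width ~ Sepal.Length, data = setosa_df))$fstatistic[[1]]

N <- 1000
f_boot <- replicate(N, {
  boot <- setosa_df
  # resample the response with replacement, mimicking the null (mean-only) model
  boot$Sepal.Width <- sample(boot$Sepal.Width, replace = TRUE)
  summary(lm(Sepal.Width ~ Sepal.Length, data = boot))$fstatistic[[1]]
})

mean(f_boot >= f_obs)   # bootstrap p-value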

F Test Using MCHT

Let's perform the $F$ test on a subset of the iris dataset. We will see if there is a relationship between the sepal length and sepal width among iris setosa flowers. Below is an initial split and visualization:

library(dplyr)

setosa <- iris %>%
  filter(Species == "setosa") %>%
  select(Sepal.Length, Sepal.Width)

plot(Sepal.Width ~ Sepal.Length, data = setosa)
[Scatter plot of Sepal.Width against Sepal.Length for the setosa subset]

There is an obvious relationship between the variables. Thus we should expect the test to reject the null hypothesis. That is what we would conclude if we were to run the conventional $F$ test:

res <- lm(Sepal.Width ~ Sepal.Length, data = setosa)
summary(res)

##
## Call:
## lm(formula = Sepal.Width ~ Sepal.Length, data = setosa)
##
## Residuals:
##      Min       1Q   Median       3Q      Max
## -0.72394 -0.18273 -0.00306  0.15738  0.51709
##
## Coefficients:
##              Estimate Std. Error t value Pr(>|t|)
## (Intercept)   -0.5694     0.5217  -1.091    0.281
## Sepal.Length   0.7985     0.1040   7.681 6.71e-10 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 0.2565 on 48 degrees of freedom
## Multiple R-squared:  0.5514, Adjusted R-squared:  0.542
## F-statistic: 58.99 on 1 and 48 DF,  p-value: 6.71e-10
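As a quick sanity check (my own arithmetic, not part of the original post), the reported F-statistic agrees with the $R^2$ in the summary via the standard identity for a regression with $K$ regressors and $n$ observations:

$F = \frac{R^2 / K}{(1 - R^2)/(n - K - 1)} = \frac{0.5514 / 1}{0.4486 / 48} \approx 58.99$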

Let’s now implement the procedure I described with MCHTest() .

ts <- function(x) {
  res <- lm(Sepal.Width ~ Sepal.Length, data = x)
  summary(res)$fstatistic[[1]]  # Only way I know to automatically compute the statistic
}

# rand_gen's function can use both x and n, and n will be the number of rows of
# the dataset
rg <- function(x, n) {
  x$Sepal.Width <- sample(x$Sepal.Width, replace = TRUE, size = n)
  x
}

b.f.test.1 <- MCHTest(ts, ts, rg, seed = 123, N = 1000)

b.f.test.1(setosa)

##
##  Monte Carlo Test
##
## data:  setosa
## S = 58.994, p-value < 2.2e-16

Excellent! It reached the correct conclusion.

Conclusion

One may naturally ask whether we can write functions a bit more general than what I've shown here, at least in the regression context. For example, one may want parameters specifying a formula so that the regression model isn't hard-coded into the test. In short, the answer is yes; MCHTest objects try to pass as many parameters to the input functions as they can.

Here is the revised example that works for basically any formula:

ts <- function(x, formula) {
  res <- lm(formula = formula, data = x)
  summary(res)$fstatistic[[1]]
}

rg <- function(x, n, formula) {
  dep_var <- all.vars(formula)[1]  # Get the name of the dependent variable
  x[[dep_var]] <- sample(x[[dep_var]], replace = TRUE, size = n)
  x
}

b.f.test.2 <- MCHTest(ts, ts, rg, seed = 123, N = 1000)

b.f.test.2(setosa, formula = Sepal.Width ~ Sepal.Length)

##
##  Monte Carlo Test
##
## data:  setosa
## S = 58.994, p-value < 2.2e-16

This shows that you can have a lot of control over how MCHTest objects handle their inputs, giving you considerable flexibility.

Next post: time series and MCHT

References

[1] R. A. Fisher, The Design of Experiments (1935)

Packt Publishing published a book for me entitled Hands-On Data Analysis with NumPy and Pandas , a book based on my video course Unpacking NumPy and Pandas . This book covers the basics of setting up a python environment for data analysis with Anaconda, using Jupyter notebooks, and using NumPy and pandas. If you are starting out using Python for data analysis or know someone who is, please consider buying my book or at least spreading the word about it. You can buy the book directly or purchase a subscription to Mapt and read it there.

If you like my blog and would like to support it, spread the word (if not get a copy yourself)!



The 5 Best Websites to Learn Python Programming

Over the past decade, the python programming language has exploded in popularity for all types of coding. From web developers to video game designers, from data scientists to in-house tool creators, many have fallen in love with Python. Why? Because Python is easy to learn, easy to use, and very powerful.

Want to learn Python programming? Here are some of the best resources and ways to learn Python online, many of which are entirely free. For optimal results, we recommend that you utilize ALL of these websites, as they each have their own pros and cons.

1. How to Think Like a Computer Scientist

One of the best Python tutorials on the web, the How to Think Like a Computer Scientist interactive web tutorial is great because it not only teaches you how to use the Python programming language, but also how to think like a programmer. If this is the first time you’ve ever touched code, then this site will be an invaluable resource for you.

Keep in mind, however, that learning how to think like a computer scientist will require a complete shift in your mental paradigm. Grasping this shift may be easy for some and difficult for others, but as long as you persevere, it will eventually click. And once you’ve learned how to think like a computer scientist, you’ll be able to learn programming languages other than Python with ease!

2. The Official Python Tutorial

What better place to learn Python than on the official Python website? The creators of the language itself have devised a large and helpful guide that walks you through the language basics.

The best part of this web tutorial is that it moves slowly, drilling specific concepts into your head from multiple angles to make sure you truly understand them before moving on. The website’s formatting is simple and pleasing to the eye, which just makes the whole experience that much easier.

If you have some background in programming, the official Python tutorial may be too slow and boring for you―but if you’re a brand newbie, you’ll likely find it to be an indispensable resource on your journey.

3. A Byte of Python

The A Byte of Python web tutorial series is awesome for those who want to learn Python and have a bit of previous experience with programming. The very first part of the tutorial walks you through the steps necessary to set up a Python interpreter on your computer, which can be a troublesome process for first timers.

There is one drawback to this website: it does try to dive in a bit too quickly. As someone with Python experience under my belt, I can see how newbies might be intimidated by how quickly the author moves through the language.

But if you can keep up, then A Byte of Python is a fantastic resource. If you can't? Try some of the other Python tutorial websites in this list first, and once you have a better grasp of the language, come back and try this one again.

4. LearnPython

Unlike the previously listed Python tutorial sites, LearnPython is great because the website itself has a built-in Python interpreter. This means you can play around with Python coding right on the website, eliminating the need for you to muck around and install a Python interpreter on your system first.

Of course, you’ll need to install an interpreter eventually if you plan on getting serious with the language, but LearnPython actually lets you try Python before investing too much time setting up a language that you might end up not using.

LearnPython’s tutorial incorporates the interpreter, which allows you to play around with code in real-time, making changes and experimenting as you learn. The programming exercises at the end of each lesson are helpful, too.

5. Learn X in Y Minutes: Python 3

Let's say you have plenty of programming experience and you already know how to think like a programmer, but Python is new to you and you just want to get to grips with the actual syntax of the language. In that case, Learn X in Y Minutes is the best website for you.

True to its name, this site lays out all of the syntactic nuances of Python in code format so that you can learn all of the important bits of Python’s syntax in under 15 minutes. It’s succinct enough to suffice as a reference―bookmark the page and come back to it whenever you forget a certain aspect of Python.

In fact, Learn X in Y Minutes is my favorite resource for learning any programming language’s syntax.

Bonus Resource: CodeWars

CodeWars isn’t so much a tutorial as it is a gamified way to test your programming knowledge . It consists of hundreds of different coding puzzles (called “katas”), which force you to take what you’ve learned from the aforementioned Python websites and apply them to real-life problems.

The katas on CodeWars are categorized by difficulty, and they do have an instructive quality to them, so you’ll definitely learn as you go through each puzzle. As you complete katas, you’ll “level up” and gain access to harder katas. But the best part? You can compare your solutions with solutions submitted by others, which will significantly accelerate your learning.

Though it has a relatively shallow learning curve, Python is a powerful language that can be utilized in multiple applications. Its popularity has grown consistently over the years, and there’s no indication that the language will disappear any time soon.

Still have questions? Check out our answers to the most frequently asked questions about Python programming.


How to build a web app using Python’s Flask and Google App Engine

How to build a web app using Python’s Flask and Google App Engine

If you want to build web apps in a very short amount of time using python, then Flask is a fantastic option.

Flask is a small and powerful web framework (also known as “ microframework ”). It is also very easy to learn and simple to code. Based on my personal experience, it was easy to start as a beginner.

Before this project, my knowledge of Python was mostly limited to Data Science. Yet, I was able to build this app and create this tutorial in just a few hours.

In this tutorial, I'll show you how to build a simple weather app with some dynamic content using an API. This tutorial is a great starting point for beginners. You will learn to build dynamic content from APIs and to deploy it on Google Cloud.

The end product can be viewed here .



To create a weather app, we will need to request an API key from Open Weather Map . The free version allows up to 60 calls per minute, which is more than enough for this app. The Open Weather Map conditions icons are not very pretty. We will replace them with some of the 200+ weather icons from Erik Flowers instead.



This tutorial will also cover: (1) basic CSS design, (2) basic HTML with Jinja, and (3) deploying a Flask app on Google Cloud.

The steps we’ll take are listed below:

Step 0: Installing Flask (this tutorial doesn’t cover Python and PIP installation)
Step 1: Building the App structure
Step 2: Creating the Main App code with the API request
Step 3: Creating the 2 pages for the App (Main and Result) with Jinja, HTML, and CSS
Step 4: Deploying and testing on your local laptop
Step 5: Deploying on Google Cloud

Step 0 ― Installing Flask and the libraries we will use in a virtual environment.

We’ll build this project using a virtual environment. But why do we need one?

With virtual environments, you create a local environment specific to each project. You can choose the libraries you want to use without impacting your laptop environment. As you code more projects on your laptop, each project will need different libraries. With a different virtual environment for each project, you won't have conflicts between your system and your projects, or between projects.

Run Command Prompt (cmd.exe) with administrator privileges. Not using admin privileges will prevent you from using pip.
(Optional) Install virtualenv and virtualenvwrapper-win with PIP. If you already have these system libraries, please jump to the next step.

#Optional
pip install virtualenvwrapper-win
pip install virtualenv
Create your folder with the name “WeatherApp” and make a virtual environment with the name “venv” (it can take a bit of time).

#Mandatory
mkdir WeatherApp
cd WeatherApp
virtualenv venv
Activate your virtual environment with “call” on Windows (same as “source” on Linux). This step changes your environment from the system to the project-local environment.

call venv\Scripts\activate.bat
Create a requirements.txt file that includes Flask and the other libraries we will need in your WeatherApp folder, then save the file. The requirements file is also a great tool to keep track of the libraries you are using in your project.

Flask==0.12.3
click==6.7
gunicorn==19.7.1
itsdangerous==0.24
Jinja2==2.9.6
MarkupSafe==1.0
pytz==2017.2
requests==2.13.0
Werkzeug==0.12.1
Install the requirements and their dependencies. You are now ready to build your WeatherApp. This is the final step to create your local environment.

pip install -r requirements.txt
Step 1 ― Building the App structure

You have taken care of the local environment. You can now focus on developing your application. This step is to make sure the proper folder and file structure is in place. The next step will take care of the backend code.

Create two Python files (main.py, weather.py) and two folders (static with a subfolder img, templates).
Step 2 ― Creating the Main App code with the API request (Backend)

With the structure set up, you can start coding the backend of your application. Flask’s “Hello world” example only uses one Python file. This tutorial uses two files to get you comfortable with importing functions to your main app.

The main.py is the server that routes the user to the homepage and to the result page. The weather.py file defines a function that calls the API to retrieve the weather data for the selected city. That function populates the result page.

Edit main.py with the following code and save:

#!/usr/bin/env python
from pprint import pprint as pp
from flask import Flask, flash, redirect, render_template, request, url_for
from weather import query_api

app = Flask(__name__)


@app.route('/')
def index():
    return render_template(
        'weather.html',
        data=[{'name': 'Toronto'}, {'name': 'Montreal'}, {'name': 'Calgary'},
              {'name': 'Ottawa'}, {'name': 'Edmonton'}, {'name': 'Mississauga'},
              {'name': 'Winnipeg'}, {'name': 'Vancouver'}, {'name': 'Brampton'},
              {'name': 'Quebec'}])


@app.route("/result", methods=['GET', 'POST'])
def result():
    data = []
    error = None
    select = request.form.get('comp_select')
    resp = query_api(select)
    pp(resp)
    if resp:
        data.append(resp)
    if len(data) != 2:
        error = 'Bad Response from Weather API'
    return render_template(
        'result.html',
        data=data,
        error=error)


if __name__ == '__main__':
    app.run(debug=True)

Request a free API key on Open Weather Map.
Edit weather.py with the following code (updating the API_KEY) and save:

from datetime import datetime
import os
import pytz
import requests
import math

API_KEY = 'XXXXXXXXXXXXXXXXXXXXXXXXXXX'
API_URL = ('http://api.openweathermap.org/data/2.5/weather?q={}&mode=json&units=metric&appid={}')


def query_api(city):
    try:
        print(API_URL.format(city, API_KEY))
        data = requests.get(API_URL.format(city, API_KEY)).json()
    except Exception as exc:
        print(exc)
        data = None
    return data

Step 3 ― Creating pages with Jinja, HTML, and CSS (Frontend)

This step is about creating what the user will see.

The HTML pages weather and result are the ones the backend main.py routes to, and they give the app its visual structure. The CSS file will bring the final touch. There is no JavaScript in this tutorial (the front end is pure HTML and CSS).

It was my first time using the Jinja2 template library to populate the HTML file. It surprised me how easy it was to bring dynamic images or use functions (e.g. rounding weather). Definitely a fantastic template engine.

Create the first HTML file in the templates folder (weather.html):

<!doctype html>
<link rel="stylesheet" type="text/css" href="{{ url_for('static', filename='style.css') }}">
<div class="center-on-page">
  <h1>Weather in a City</h1>
  <form class="form-inline" method="POST" action="{{ url_for('result') }}">
    <div class="select">
      <select name="comp_select" class="selectpicker form-control">
        {% for o in data %}
        <option value="{{ o.name }}">{{ o.name }}</option>
        {% endfor %}
      </select>
    </div>
    <button type="submit" class="btn">Go</button>
  </form>

Create the second HTML file in the templates folder (result.html):

<!doctype html>
<link rel="stylesheet" type="text/css" href="{{ url_for('static', filename='style.css') }}">
<div class="center-on-page">
  {% for d in data %}
  {% set my_string = "static/img/" + d['weather'][0]['icon']+ ".svg" %}
  <h1>
    <img src="{{ my_string }}" class="svg" fill="white" height="100" vertical-align="middle" width="100">
  </h1>
  <h1>Weather</h1>
  <h1>{{ d['name'] }}, {{ d['sys']['country'] }}</h1>
  <h1>{{ d['main']['temp']|round|int}} °C</h1>
  {% endfor %}
Add a CSS file in the static folder (style.css):

body { color: #161616; font-family: 'Roboto', sans-serif; text-align: center; background-color: currentColor; }
.center-on-page { position: absolute; top: 50%; left: 50%; transform: translate(-50%,-50%); }
h1 { text-align: center; color: #FFFFFF; }
img { vertical-align: middle; }
/* Reset Select */
select { -webkit-appearance: none; -moz-appearance: none; -ms-appearance: none; appearance: none; outline: 0; box-shadow: none; border: 0 !important; background: #2c3e50; background-image: none; }
/* Custom Select */
.select { position: relative; display: block; width: 20em; height: 3em; line-height: 3; background: #2c3e50; overflow: hidden; border-radius: .25em; }
select { width: 100%; height: 100%; margin: 0; padding: 0 0 0 .5em; color: #fff; cursor: pointer; }
select::-ms-expand { display: none; }
/* Arrow */
.select::after { content: '\25BC'; position: absolute; top: 0; right: 0; bottom: 0; padding: 0 1em; background: #34495e; pointer-events: none; }
/* Transition */
.select:hover::after { color: #f39c12; }
.select::after { -webkit-transition: .25s all ease; -o-transition: .25s all ease; transition: .25s all ease; }
button { -webkit-appearance: none; -moz-appearance: none; -ms-appearance: none; appearance: none; outline: 0; box-shadow: none; border: 0 !important; background: #2c3e50; background-image: none; width: 100%; height: 40px; margin: 0; margin-top: 20px; color: #fff; cursor: pointer; border-radius: .25em; }
.button:hover { color: #f39c12; }

Download the images in the img subfolder in static.

Link to the images on GitHub:


Step 4 ― Deploying and testing locally

At this stage, you have set up the environment, the structure, the backend, and the frontend. The only thing left is to launch your app and to enjoy it on your localhost.

Just launch main.py with Python:

python main.py

Go to the localhost link shown in cmd with your web browser (Chrome, Mozilla, etc.). You should see your new weather app live on your local laptop :)
Step 5 ― Deploying on Google Cloud

This last step is for sharing your app with the world. It’s important to note that there are plenty of providers for web apps built using Flask. Google Cloud is just one of many. This article does not cover some of the others like AWS, Azure, Heroku…

If the community is interested, I can provide the steps of the other cloud providers in another article and some comparison (pricing, limitations, etc.).

To deploy your app on Google Cloud you will need to 1) Install the SDK, 2) Create a new project, 3) Create 3 local files, 4) Deploy and test online.

Install the SDK following Google’s instructions.
Connect to your Google Cloud account (use a $300 coupon if you haven’t already).
Create a new project and save the project ID (wait a bit until the new project is provisioned).
Create an app.yaml file in your main folder with the following code:

runtime: python27
api_version: 1
threadsafe: true

handlers:
- url: /static
  static_dir: static
- url: /.*
  script: main.app

libraries:
- name: ssl
  version: latest

Create an appengine_config.py file in your main folder with the following code:

from google.appengine.ext import vendor
T-SQL Tuesday – Non-SQL Server Technologies


So, this month’s T-SQL Tuesday topic is to think about a non-SQL Server technology that we want to learn.

For me, I’m going to pick machine learning.

As a DBA, I’ve always looked at machine learning as a thing for the BI guys. I’m a DBA, after all; why do I care about that?

Well, my attitude has changed somewhat recently. This little change all started when I listened to Alex Whittles’ keynote talk at Data Relay. He presented a demo where a computer program used Python (I’m already a huge fan of Python in SQL Server, as you may know) and SciPy (a machine learning, data sciencey type module) to play and learn a game. Alex demonstrated how, over time, his robot was able to increase its score through machine learning algorithms.

WOW. Adrian and I looked at each other as a little light bulb came on over our heads. For the rest of the conference I attended a number of sessions that I wouldn’t normally attend, stuff for the BI guys. A great session from Terry McCann and an interesting one from Simon Whiteley really got the creative juices flowing. Could the DBA use this technology to model things like performance trends, predict capacity, and answer that question that we’re always asked: “have we got room on the SQL Server for just one more DB?”

So where do I go from here? My first port of call is going to be getting my head around Python; I’ve got a background in C programming, so that shouldn’t be too difficult. Once I’m happy with that, it’ll be a case of hitting the blogs, courses, books and anything else that I can get my hands on to help understand the strange mysteries that are Machine Learning.

Where can I go with this? As DBAs, we’ve got a ton of data available to us in DMVs, Query Store, etc. Wouldn’t it be great if we could hook a little robot into all that and start building up models of how our servers behave? Keep an eye out for the inevitable blog posts that are going to come out of it.

 


Data Science and Machine-Learning Expert - Blue J Legal - Toronto, ON
Data Science and Machine-Learning Expert Job type: Full-time We are looking for an expert in data science to help us extract value from legal case data. We...
From Blue J Legal - Tue, 06 Nov 2018 22:14:43 GMT - View all Toronto, ON jobs
Sr. Data Scientist - Refine GINgroup Inc. - Toronto, ON
The Data Scientist is an agency (OMD, PHD, Touché, Hearts & Science) and client-facing role. This role will be an impactful individual contributor with the main...
From Indeed - Tue, 06 Nov 2018 22:40:01 GMT - View all Toronto, ON jobs
Remote Data Scientist
A staffing firm needs applicants for an opening for a Remote Data Scientist.

Core Responsibilities of this position include:
Building state of the art, scalable, and self-learning systems
Training and tuning a variety of machine learning models
Performing data and error analysis

Must meet the following requirements for consideration:
5 years of experience with complex, self-directed data analysis
1 year of practical work experience using cloud services such as AWS
Experience in at least one of Statistical Modeling, Machine Learning, Predictive Analytics, Data Visualizations
Hands-on with experiments and execution
Experience executing document clustering
Experience in analyzing English language text-based data sets
Ted Cruz is still using a blacklisted Cambridge Analytica app developer

In his re-election campaign’s final hours, Senator Ted Cruz (R-TX) is still deploying a smartphone app created by a software team at the heart of the Cambridge Analytica controversy.

The app, Cruz Crew, was developed by AggregateIQ, a small Canadian data firm that was for years the lead developer used by the infamous data analytics consultancy that made headlines last spring for harvesting user data on millions of unsuspecting Facebook users while working for the Trump campaign. Since that firm’s demise, AggregateIQ has become one focus of an international investigation into alleged data misdeeds during the 2016 Brexit campaign, and is the first company to be targeted by regulators under Europe’s new data privacy law.

The Cruz Crew app’s login screen. The app’s Facebook login was finally removed in June. [Image: Google Play]
Both Cruz Crew as well as an app for Cruz’s presidential campaign in 2016 share an interconnected history of developers and clients linked to Cambridge Analytica, its British affiliate SCL Elections, and architects of the Republican Party’s recent digital efforts. Part of a group of apps presented as walled-garden social networks for political supporters, the software helps campaigns collect voter data and microtarget messages.

In April, Facebook announced it had suspended AggregateIQ over its possible improper access to the data of millions of Facebook users. But over a dozen apps made by AggregateIQ remained connected to Facebook’s platform until May and June, when Facebook belatedly took action against them.

A Facebook spokesperson told Fast Company that it was still investigating AIQ’s possible misuse of data, amid an ongoing investigation by Canadian prosecutors. The Cruz campaign did not respond to requests for comment.

David Carroll, a professor at Parsons School of Design at the New School in New York, who has brought a legal challenge against SCL and Cambridge Analytica for release of his voter data profile, said Cruz’s continued relationships with AggregateIQ highlighted problems with the use of data by a growing ecosystem of partisan election apps and databases. The risks are particularly high, he said, when the vendors are combining data from multiple sources and processing Americans’ data overseas.

“Despite the Cambridge Analytica fiasco, it seems that the Republican data machine is still a shadowy network that includes international operators, tangled up with vendors under intense scrutiny for unlawful conduct in multiple jurisdictions,” he said. “I don’t understand why Republicans don’t insist on working with domestic tech vendors and technologists who are U.S. citizens.”

The Cruz-Cambridge Analytica connection

During the 2016 race, a U.S.-based software firm named Political Social Media, but better known as uCampaign, was credited as developer and publisher for the official “Ted Cruz 2016” presidential primary app. At the time, the app achieved modest notoriety as a somewhat novel data collection tool– appearing alongside Cambridge Analytica under headlines like, “Cruz App Data Collection Helps Campaign Read Minds of Voters”–with the app colloquially referred to in the press as “Cruz Crew.”

As in 2016, the 2018 Cruz re-election campaign relies on constant polling and voter modeling to understand and target mainstream conservatives in Texas. Cruz and his Democratic challenger Beto O’Rourke, who has repeatedly brought up Cambridge Analytica during the campaign and has refused to use big data analytics, have both heavily invested in social media. The media blitz hasn’t been cheap: According to data from the Center for Responsive Politics, the candidates in the 2018 Texas Senate race have set the all-time record for most money spent in any U.S. Senate election.

As part of its digital push, the Cruz campaign rolled out a new app, officially named “Cruz Crew,” which awards points to users for tweeting pro-Cruz messages, volunteering, and taking part in other campaign activities. On the app’s pages in the Google and Apple stores, AggregateIQ is not mentioned, but its name is visible as the developer in the app URL and in internal code. The app’s publisher is listed as the political marketing agency WPA Intelligence, or WPAi.

Chris Wilson, WPAi’s founder and chief executive, is a veteran GOP pollster who previously worked for George W. Bush and Karl Rove. WPAi’s past campaign successes include a trio of high profile Tea Party-cum-Freedom Caucus sympathizer senators: Cruz, Mike Lee (R-UT), and Ron Johnson (R-WI). By far, however, Cruz has been WPA’s biggest political client in the U.S. Between his bids for senator and president, Cruz campaign committees have paid out over $4.3 million to Chris Wilson’s firm since 2011.

As the director of research, analytics and digital strategy for Cruz’s 2016 presidential campaign, Wilson oversaw a large data team that included Rebekah Mercer and Steve Bannon’s Cambridge Analytica. Rebekah’s father, Robert Mercer, footed the $5.8 million bill for Cambridge Analytica by doubling that amount in donations.

Wilson and the Cruz team have repeatedly said that Cambridge Analytica represented to the campaign that all of the data it had was legally obtained. They also claimed that Cambridge did not deliver the results expected of them, neither through their much-discussed psychographics work nor through an important piece of software called Ripon.

In schematics, Ripon was drawn up as an all-in-one campaign solution to manage voter data collection, ad targeting, and street canvassing. According to files retrieved by computer security analyst Chris Vickery, Ripon was intended to tap into something called “the Database of Truth.” Documents revealed that the Truth project “integrates, obtains, and normalizes data from disparate sources,” beginning with the Republican National Committee’s Data Trust database, combined “with state voter files, consumer data, third-party data providers, historical WPA survey, and projects and customer data.”

Despite being a deliverable promised by Cambridge Analytica, the work on Ripon was outsourced to AggregateIQ. More recently, WPAi hired the firm to develop and manage the software for Cruz Crew, along with its two other currently available apps: one for Texas Governor Greg Abbott’s re-election campaign, and one for Osnova, a Ukrainian political party dedicated to the long-shot presidential aspirations of its oligarch founder, Serhiy Taruta.

In the 2018 race, WPAi and the Cruz campaign have said Cruz’s effort isn’t using new Cambridge Analytica-style “psychographic” modeling, but it is using social media data for specific targeting, and relying on previous campaign data. “We use social data to ID voter groups in our core universes,” WPA’s Chris Wilson previously told Fast Company. “A lot of those are 2016 voters who we know are persuaded by specific messages.”

Cruz Crew and TedCruz.org currently share a privacy policy that has barely changed since late 2015, when Cambridge Analytica and uCampaign were Cruz vendors. In both cases, the policy states that the campaign may “access, collect, and store personal information about other people that is available to us through your contact list,” match the info to data from other sources, and “keep track of your device’s geographic location.”




Beyond the existing campaign app, however, AIQ’s current involvement in the Cruz campaign’s data management and software development is unknown. A report by the New York Times last month found that when users shared their friends’ contact information with the Cruz app, that data was still being sent to AggregateIQ domains.

Wilson told the Times that his company, not AggregateIQ, received and controlled app users’ information. Representatives for AggregateIQ did not immediately respond to a request for comment, and WPAi did not respond to questions about the data firm.

Intelligence quotient

AIQ, founded in 2013 in Victoria, British Columbia, is currently under investigation in the U.K. and its homebase of Canada for electoral impropriety during the Brexit Leave campaign. The company’s name has come up repeatedly in parliamentary testimony for its alleged campaign finance and data protection misdeeds in connection with the parent company of Cambridge Analytica.

“Concerns have been raised about the closeness of the two organizations including suggestions that AIQ, [SCL Elections, and Cambridge Analytica] were, in effect, one and the same entity,” stated a recent report by the U.K.’s Information Commissioner’s Office.

In testimony to a U.K. parliamentary committee, former Cambridge Analytica executive Brittany Kaiser said that AggregateIQ was the exclusive digital and data engineering partner of SCL, the British parent affiliate of Analytica.

“They would build our software, such as a platform that we designed for Senator Ted Cruz’s campaign,” she said. “That was meant to collect data for canvassing individuals who would go door-to-door collecting and hygiening data of individuals in those households. We also had no internal digital capacity at the time, so we did not actually undertake any of our digital campaigns. That was done exclusively through AggregateIQ.”

AIQ founders Zack Massingham and Jeff Silvester had been brought into the fold a year prior by their friend Christopher Wylie, then an SCL employee, who blew the whistle on the firm’s practices earlier this year. According to Wylie, the founders registered their company in their hometown of Victoria as a result of an SCL contract, which subsequently led to political work in the Caribbean.

After the two firms first made contact in August 2013, while SCL was performing its first American political work in the Virginia gubernatorial race, AIQ designed solutions for deployment in campaigns under SCL’s supervision in Trinidad and Tobago. Part of the intent, according to records obtained by the Globe and Mail, was to harvest the internet histories of up to 1.3 million civilians in order to more accurately model their psychographics for message targeting.

In December 2013, an SCL employee proposed requesting the data from the country’s internet provider by posing as academic researchers, while seeking to tie internet addresses to billing addresses, without naming customers. In response, AIQ CEO Massingham replied by email that he could use every bit of data they could get. “If the billing addresses are obfuscated, we’ll have a difficult time relating things back to a real person or household,” he wrote. It remains unknown if that data was obtained.




The primary work AIQ performed was to design software that could be used to motivate volunteers, canvassers, and voters. This software concept was repeated for multiple clients, including Petronas, an oil company that sought to influence voters in Malaysia.

Campaign software developed by AIQ was used by Cambridge Analytica in U.S. elections and for clients like the oil giant Petronas. [Image: SCL]

AggregateIQ’s work across the pond

During the U.K.’s Brexit campaign in 2016, Vote Leave hired AIQ to place online ads, with AIQ paying for all 1,034 Facebook ads run by the campaign. AIQ’s services were also retained to develop and administer a piece of software that Vote Leave executives, including chief technology officer and former SCL employee Thomas Borwick, later credited with a large portion of the campaign’s success.

Vote Leave campaign director Dominic Cummings wrote an extensive blog post about the project, called the Voter Intention Collection System (VICS).

“One of our central ideas was that the campaign had to do things in the field of data that have never been done before,” Cummings wrote. “This included a) integrating data from social media, online advertising, websites, apps, canvassing, direct mail, polls, online fundraising, activist feedback . . . and b) having experts in physics and machine learning do proper data science in the way only they can, i.e. far beyond the normal skills applied in political campaigns.”

As the voter-facing front end for the Leave campaign data team, uCampaign was brought in and paid by AIQ to deliver the smartphone apps that helped to gather users’ cell numbers, email addresses, phone book contacts, and Facebook IDs for integration, exactly as it had done during the previous months for the Cruz 2016 campaign. Just as in that case, the app collected voter information for use in AIQ tools.

“We could only do this properly if we had proper canvassing software,” Cummings wrote. “We built it partly in-house and partly using an external engineer who we sat in our office for months.”

AIQ’s Zach Massingham repeatedly flew to the U.K. as his company was paid hundreds of thousands of pounds for its Vote Leave work in 2016, after a series of transactions between several campaigns that Canadian officials have questioned as “money laundering” and British authorities are investigating as criminal offenses. Nonetheless, after the referendum, Cummings released an open-source version of VICS code on GitHub for future micro-targeters to use.

In early 2018, one of Vote Leave and SCL vet Thomas Borwick’s handful of data firms, Kanto, was hired to do canvassing and social media work during the Irish abortion referendum. Anti-abortion activist groups also contracted uCampaign to build two separate apps, which alarmed campaign finance and privacy watchdogs and led to a ban on internet advertising.

As with uCampaign, which has also made apps for the likes of Donald Trump and the NRA, AIQ’s smartphone apps were designed to gather information via Facebook Login, a tool offered by Facebook to streamline user registration across the internet. Though Facebook tightened some restrictions this year as a direct response to the Analytica flare-up, Login has allowed third-party developers to gain access to a wide range of Facebook account information about registered users.

As part of its investigation into Cambridge Analytica and its affiliates, on April 7, Facebook said that it had suspended AIQ, effectively ending its ability to deploy Facebook Login. However, security researcher Chris Vickery discovered that AIQ’s access to the Facebook platform was still active as of May 17. Additionally, he found, AIQ had already collected info on nearly 800,000 Facebook account IDs in a database, with many matched to addresses and phone numbers. Facebook removed more AIQ apps two weeks later, but it was not until June 19 that the Facebook Login feature was removed from the apps for Cruz, Osnova, and Abbott.

In written testimony to Parliament, AIQ chief technology officer Jeff Silvester, who visited British prime minister Theresa May’s office with Massingham in the weeks after the Brexit vote, explained the history of the relationship between SCL and AIQ, which began in late 2013.

After building a “customer relationship management (CRM) tool” for SCL in Trinidad and Tobago, AIQ created “an entirely new CRM tool” for the 2014 U.S. midterm elections. “SCL called the tool Ripon,” Silvester wrote. AIQ was then required to transfer all software rights to SCL before working “with SCL on similar software development, online advertising, and website development” in support of Cambridge Analytica’s work for the Ted Cruz 2016 campaign.

A referral from “an acquaintance who was working with Vote Leave” led to AIQ being hired by Vote Leave in April 2016, the day before the campaign was designated as the official Leave organization.

[Photo: Stock Catalog]
This past May, after questioning the legality of AggregateIQ founder Zach Massingham’s work on British soil while developing VICS, parliamentary committee chair Damian Collins asked Silvester about AIQ’s recent work for WPAi.

Silvester explained, “They sell their software that we create for them to whomever they like, and we just simply support that work.”

In March, WPAi CEO Chris Wilson told Gizmodo that he had almost no knowledge of the controversy surrounding AIQ, despite their work for the Cruz 2016 campaign. “I would never work with a firm that I felt had done something illegal or even unethical,” he said. The firm’s work for WPA was the result of a competitive bidding process, he said, and AIQ “offered us the best capabilities for the best price.”

Leaving the nest

In February 2017, a story on the Politico Pro website announced Archie, WPA Intelligence’s new piece of software for 2018 campaigns. The software goes by a nickname used by Texas Governor Greg Abbott’s political team, referring to Archimedes, the Greek mathematician who said, “Give me a lever and I can move the world.”

A diagram describing Archimedes, WPAi’s new campaign software [Image: WPAi]
“The program allows campaigns to work across all formats and vendors to collect data in one place,” the article said, and campaign staffers “will be able to use the app to generate models, target audiences, cut lists, and produce data visualization tools to make strategic decisions.”

From that description, Archie sounded very much like AIQ’s Ripon and VICS all-in-one campaign solutions. AIQ’s smartphone app for WPAi client Greg Abbott first appeared on Google Play and Apple’s iOS Store three months later, in May 2017.

Archie’s predictive modeling of Texan voters “yielded approximately 4.5 million individual targets for turnout efforts,” according to WPAi. That helped the Abbott campaign win the 2018 Reed award for Best Use of Data Analytics/Machine Learning in Field Program. In attendance at the March ceremony were representatives from Cambridge Analytica, which was nominated for Best Use of Online Targeting for Judicial Campaign.

Three weeks after the Reed awards, Christopher Wylie’s whistleblower account in the Observer was splashed across the world’s front pages. By the following month, SCL and Analytica were claiming bankruptcy, and AIQ’s cofounders were appearing at Canadian Parliament and dealing with its suspension from Facebook as developers.

In June, a week before AIQ’s WPA apps finally removed Facebook Login, Silvester appeared before Canadian Parliament for a second time, where he was admonished by Vice Chair Nathaniel Erskine-Smith, who remarked, “Frankly, the information you have provided is inadequate.” After being threatened with a contempt charge for excusing himself from sworn testimony with a one-line doctor’s note, Massingham later spoke with the committee via audio-only link from his lawyer’s office.

In July, AggregateIQ was served with the U.K.’s first-ever enforcement notice under the EU’s new General Data Protection Regulation, known as GDPR. The U.K.’s Information Commissioner’s Office subjected AIQ to millions in fines if it did not “cease processing any personal data of U.K. or EU citizens obtained from U.K. political organizations or otherwise for the purposes of data analytics, political campaigning, or any other advertising purposes.”

After AIQ appealed the order, it was merely mandated to “erase any personal data of individuals in the U.K.,” though it was found to have “processed personal data in a way that the data subjects were not aware of, for purposes which they would not have expected, and without a lawful basis for that processing.”

As Ted Cruz wraps up his campaign, he continues to outsource part of his voter data harvesting to a foreign firm that has been blacklisted by Facebook and British and European regulators. The total data amassed through apps like Cruz Crew and projects like Ripon and Archimedes remains unknown, but they raise concerns that Cruz acknowledged when he launched his presidential campaign at Liberty University in March 2015. “Instead of a government that seizes your emails and your cell phones,” he said, “imagine a federal government that protected the privacy rights of every American.”


Jesse Witt (@witjest) is an independent researcher, writer, and filmmaker.

With additional reporting by Alex Pasternack.


SSIS Best Online Training (KOLKATA)
SQL School is one of the best training institutes for Microsoft SQL Server Developer Training, SQL DBA Training, MSBI Training, Power BI Training, Azure Training, Data Science Training, Python Training, Hadoop Training, Tableau Training, Machine Learning Training, Oracle PL SQL Training. We have been providing Classroom Training, Live-Online Training, On Demand Video Training and Corporate trainings. All our training sessions are COMPLETELY PRACTICAL. SSIS COURSE DETAILS - FOR ONLINE TRAINING: SQL ...
Data Scientist - Ritchie Bros. - Burnaby, BC
A workout facility, featuring advanced gym equipment, bike room, shower and changing facilities, and nutrition and fitness programs....
From Indeed - Tue, 06 Nov 2018 20:00:59 GMT - View all Burnaby, BC jobs
Data Scientist - Deloitte - Springfield, VA
Demonstrated knowledge of machine learning techniques and algorithms. We believe that business has the power to inspire and transform....
From Deloitte - Fri, 10 Aug 2018 06:29:44 GMT - View all Springfield, VA jobs
Associate, Machine Learning AI Consultant, Financial Services - KPMG - Dallas, TX
Broad, versatile knowledge of analytics and data science landscape, combined with strong business consulting acumen, enabling the identification, design and...
From KPMG LLP - Fri, 07 Sep 2018 02:02:14 GMT - View all Dallas, TX jobs
Principal Data Scientist - Clockwork Solutions - Austin, TX
Support Clockwork’s Business Development efforts. Evaluates simulation analysis output to reveal key insights about unstructured, chaotic, real-world systems....
From Clockwork Solutions - Mon, 27 Aug 2018 10:03:12 GMT - View all Austin, TX jobs
Lead Data Scientist - Clockwork Solutions - Austin, TX
Support Clockwork’s Business Development efforts. Evaluates simulation analysis output to reveal key insights about unstructured, chaotic, real-world systems....
From Clockwork Solutions - Mon, 27 Aug 2018 10:03:12 GMT - View all Austin, TX jobs
Associate, Machine Learning AI Consultant, Financial Services - KPMG - New York, NY
Broad, versatile knowledge of analytics and data science landscape, combined with strong business consulting acumen, enabling the identification, design and...
From KPMG LLP - Fri, 14 Sep 2018 08:38:34 GMT - View all New York, NY jobs
Data Scientist: Medical VoC and Text Analytics Manager - GlaxoSmithKline - Research Triangle Park, NC
Strong business acumen; 2+ years of unstructured data analysis/text analytics/natural language processing and/or machine learning application for critical...
From GlaxoSmithKline - Fri, 19 Oct 2018 23:19:12 GMT - View all Research Triangle Park, NC jobs
Data Scientist Lead - Mosaic North America - Jacksonville, FL
Data Scientist Lead - (DataSciLe2090718) Description Overview: The Data Scientist will be a part of a team chartered to merge and mine large amounts of...
From Mosaic North America - Fri, 07 Sep 2018 18:33:08 GMT - View all Jacksonville, FL jobs
Data Scientist - Mosaic North America - Jacksonville, FL
Data Scientist - (Data Scientist090618) Description Overview: The Data Scientist will be a part of a team chartered to merge and mine large amounts of...
From Mosaic North America - Fri, 07 Sep 2018 00:32:46 GMT - View all Jacksonville, FL jobs
          Scientist, Data Lead - Mosaic North America - Jacksonville, FL      Cache   Translate Page      
Overview The Data Scientist will be a part of a team chartered to merge and mine large amounts of retail execution, sales and other relevant data to develop...
From Mosaic North America - Wed, 22 Aug 2018 14:26:26 GMT - View all Jacksonville, FL jobs
          Software Engineer - Data Science      Cache   Translate Page      
CA-Santa Clara, Software Engineer - Data Science What you will do As a full-stack software engineer in Johnson Controls Data Enabled Business, you will work with teams throughout Building Technology & Solutions business, connecting our products and services to bring insights, efficiency, reliability, and value to our partners and customers. You will work closely with experts in networking, security, and data anal
          Data Scientist I      Cache   Translate Page      
CA-Santa Clara, What you will do You will be responsible for working within Johnson Controls’ Data Enabled Business to identify opportunities for new growth and efficiency based on data analysis. The data scientist role will work closely with platform engineering and product management, and Data Enabled Offerings and solution engineering to integrate results into operational platforms, including modern data proce
          Senior Data Scientist - CenturyLink - Chicago, IL      Cache   Translate Page      
The Data Scientist is responsible for developing tools to collect, clean, analyze and manage the data used by strategic areas of the business. Employ...
From CenturyLink - Fri, 26 Oct 2018 06:12:22 GMT - View all Chicago, IL jobs
          Technical Product Manager - Data Science - GoDaddy - Kirkland, WA      Cache   Translate Page      
Deliver the infrastructure to provide the business insights our marketing team needs. The small business market contains over a hundred million businesses...
From GoDaddy - Wed, 10 Oct 2018 21:04:32 GMT - View all Kirkland, WA jobs
          2019 Internship - Bellevue, WA- Data Science - Expedia - Bellevue, WA      Cache   Translate Page      
June 17 – September 6. As a Data Scientist Intern within Expedia Group, you will work with a dynamic teams of product managers and engineers across multiple...
From Expedia - Fri, 31 Aug 2018 21:36:06 GMT - View all Bellevue, WA jobs
          Business Relations Manager (Office Hours / East / Up to S$3,500) - Personnel Recruit LLP - East Singapore      Cache   Translate Page      
Post Facebook and Google Ads. We are a dedicated team of traders, data scientists and software engineers working to revolutionize Robo Investing for the retail... $2,500 - $3,500 a month
From Indeed - Mon, 05 Nov 2018 05:35:45 GMT - View all East Singapore jobs
          Java Application Developer - ADP - Alpharetta, GA      Cache   Translate Page      
Engineer Analyst Architect Data Scientist Application Developer Design Implementation Chief Principal Enterprise Specialist Infrastructure Research Development...
From Automatic Data Processing - Fri, 26 Oct 2018 18:01:41 GMT - View all Alpharetta, GA jobs
          Principal Application Developer - ADP - Alpharetta, GA      Cache   Translate Page      
Engineer Analyst Architect Data Scientist Application Developer Design Implementation Chief Principal Enterprise Specialist Infrastructure Research Development...
From Automatic Data Processing - Thu, 25 Oct 2018 08:26:41 GMT - View all Alpharetta, GA jobs
          Sr. Director - Product Mgmt - My ADP - ADP - Alpharetta, GA      Cache   Translate Page      
Engineer Analyst Architect Data Scientist Application Developer Design Implementation Chief Principal Enterprise Specialist Infrastructure Research Development...
From Automatic Data Processing - Tue, 18 Sep 2018 06:36:47 GMT - View all Alpharetta, GA jobs
          Sr. Director - Product Mgmt-NAS Shared Services and Integrations - ADP - Alpharetta, GA      Cache   Translate Page      
Engineer Analyst Architect Data Scientist Application Developer Design Implementation Chief Principal Enterprise Specialist Infrastructure Research Development...
From Automatic Data Processing - Tue, 18 Sep 2018 06:35:48 GMT - View all Alpharetta, GA jobs
          Assistant Professor in Biomedical Data Science and Informatics - Clemson University - Barre, VT      Cache   Translate Page      
Clemson University is ranked 24th among public national universities by U.S. In Fall 2018, Clemson has over 18,600 undergraduate and 4,800 graduate students....
From Clemson University - Fri, 02 Nov 2018 14:09:49 GMT - View all Barre, VT jobs
          Data scientist - Diverse Lynx - Yellow Lake, WI      Cache   Translate Page      
Data Science, Digital:. As a Data Scientist, you will help create solutions enabling automation, artificial intelligence, advanced analytics, and machine...
From Diverse Lynx - Tue, 06 Nov 2018 04:27:46 GMT - View all Yellow Lake, WI jobs
          Data Science Team Lead - Resource Technology Partners - Boston, MA      Cache   Translate Page      
Work with internal research teams and help build out internal DNA sequencing pipeline. Indigo is looking to disrupt farming through data....
From ReSource Technology Partners - Mon, 15 Oct 2018 06:56:10 GMT - View all Boston, MA jobs
          Introducing pydbgen: A random dataframe/database table generator      Cache   Translate Page      
When you start learning data science, often your biggest worry is not the algorithms or techniques but getting access to raw data. While there are many high-quality, real-life datasets available on th ... - Source: opensource.com
          Lecturer,Asst - Chief Data Science Officer - University of Wyoming - Laramie, WY      Cache   Translate Page      
Located in a high mountain valley near the Colorado border, Laramie offers both outstanding recreational opportunities and close proximity to Colorado’s Front...
From University of Wyoming - Thu, 11 Oct 2018 15:48:32 GMT - View all Laramie, WY jobs
          [TobagoJack] Per imperatives => solutions Danger = opportunity The sort of equations some fi...      Cache   Translate Page      
Per imperatives => solutions
Danger = opportunity
The sort of equations some find challenging to take on board or even to understand, due to arms and legs mentality and beans counting

... let us see if brave-new-world protocol goes right, and if so, exports ala free-trades

Am wondering if Africans should say “no” to bot-doctors as part of turning head away from hospitals and railroads :0)

Am also wondering whether the bots would be practicing wholistic eastern medicine or specific-toxic western healing, and

What happens when bots prescribe in accordance w/ traditional Nepalese healing but skip the licensing fee?

Also, would such medicine improve the UK healthcare system?

brinknews.com

China’s Doctor Shortage Can Be Solved by AI
Andy Ho, November 6, 2018

A surgeon performs an operation at a clinic in the southwest Chinese city of Chongqing. AI might be able to solve China's doctor shortage problem.

Photo: Peter Parks/AFP/Getty Images

If there is one country that has invested heavily in health care reform over the last few years, it is China. But as its population grows older, with already 300 million people suffering from chronic diseases, it seems almost impossible to keep up with the soaring demand for health care. According to the latest data from the Organisation for Economic Co-operation and Development, China has 1.8 practicing doctors per 1,000 citizens, compared to 2.6 for the U.S. and 4.3 for Sweden. Can artificial intelligence relieve China’s overworked doctors of some of their burdens?

China’s Ailing Health Care System

The hard-working medical professionals who keep China’s ailing health care system running could certainly use a helping hand. Overcrowding is the order of the day in the country’s urban hospitals, with a typical outpatient department in Beijing seeing about 10,000 people every day. The problem is exacerbated by the scarcity of medical facilities in rural areas, which causes people to flock to hospitals in nearby cities.

As the Future Health Index 2018 by Philips shows, the relatively low number of skilled health care professionals in relation to the size of the population is one of the main reasons why access to care in China lags behind most of the other fifteen countries surveyed.

Demographic projections give further reason for concern. The demand for care will only continue to grow as China is aging more rapidly than almost any country in the world. The United Nations estimates that by 2040, the country’s population over 65 will reach about 303 million, which is almost equal to the current total population of the U.S.

However, there is also reason for optimism.

In its commitment to offer accessible and affordable care for all, the Chinese government is spearheading the development of health care technologies. And perhaps the most promising is AI.



The Rise of AI

AI can help make sense of large amounts of data, fueled by computing power that has risen dramatically over the last few years. That’s why China offers particularly fertile ground for AI development: With its 1.4 billion population, the country sits on massive troves of data.

Recognizing the country’s AI potential, the government has set out an ambitious plan to turn China into the world’s leading AI innovation center. Health care is one of the industries that are set to benefit from multibillion-dollar investments in startups, academic research, and moonshot projects. This is not merely a vision, but a reality already in the making. According to Yiou Intelligence, a Beijing-based consultancy firm, some 131 Chinese companies are currently working on applying AI in health care.



A Smart Personal Assistant for Physicians

Speeding up the screening of medical images is just one of the ways in which AI could relieve China’s overburdened health care system.

As one Chinese radiologist said in an interview with The New York Times: “We have to deal with a vast amount of medical images every day. So we welcome technology if it can relieve the pressure while boosting efficiency and accuracy.”

We should take these needs to heart and focus on developing intelligent applications that ease the workload for physicians while improving outcomes for patients. Crucially, the goal should not be to replace physicians, but to augment their impact in their daily work, strengthening their role in the delivery of efficient and high-quality care.

For some, AI conjures up images of autonomous robots replacing human workers. But I believe that in health care, AI is best thought of as a smart personal assistant for physicians that adapts to their needs and ways of working—“adaptive intelligence,” as we call it at Philips. Viewed through that lens, AI will make health care more—not less—human.

Today, AI is already helping physicians with the analysis of medical images. As AI becomes increasingly sophisticated and is integrated with medical knowledge, it could support ever more precise diagnoses and personalized treatment plans. But in the short term, arguably the greatest gains are to be made in solving operational bottlenecks in hospitals—for example, by helping physicians get a quick overview of all clinically relevant information on a patient.

Patient data are usually stored in many disparate systems and formats. At Zhongshan Hospital in Shanghai, it can take a physician up to 20 days to manually extract all relevant information from 200 unstructured medical reports into one structured format.

By combining AI methods like natural language processing and machine learning with clinical knowledge, it is possible to collate all clinically relevant information in one dashboard. Physicians could spend less time capturing information from unstructured reports and less time sitting in front of a screen to get a complete picture of the patient.
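To make the idea concrete, here is a deliberately simple sketch of pulling a few structured fields out of free-text report snippets. It is a hypothetical illustration only: real systems rely on trained NLP models and clinical ontologies, and the field names and regular expressions below are invented.

```python
# Toy extraction of structured fields from unstructured report text.
# The regular expressions and field names are illustrative, not clinical-grade.

import re

PATTERNS = {
    "age":            re.compile(r"(\d{1,3})-year-old"),
    "blood_pressure": re.compile(r"\b(\d{2,3}/\d{2,3})\s*mmHg", re.IGNORECASE),
    "hba1c_percent":  re.compile(r"HbA1c[^\d]*(\d{1,2}(?:\.\d)?)\s*%", re.IGNORECASE),
}

def extract_fields(report_text):
    """Return a dict of whichever fields were found in the free text."""
    found = {}
    for field, pattern in PATTERNS.items():
        match = pattern.search(report_text)
        if match:
            found[field] = match.group(1)
    return found

if __name__ == "__main__":
    note = ("58-year-old patient with hypertension, BP 150/95 mmHg at admission; "
            "HbA1c 7.2 % on last check.")
    print(extract_fields(note))
    # {'age': '58', 'blood_pressure': '150/95', 'hba1c_percent': '7.2'}
```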

Improving Care Close to People’s Homes

AI could also enable patients with chronic conditions to become more informed about their health and to stay connected with professional caregivers.

According to the Future Health Index 2018, adoption of telehealth in China is currently much lower than the 16-country average, but the Chinese population is open to the use of technologies that can supplement the care they currently receive.

For example, home health monitoring technology powered by AI could help the frail and elderly stay connected with professional caregivers to ensure they receive timely care when needed. People with diabetes or hypertension could benefit from similar technology that allows them to track their condition via clinically validated sensors and devices.

Such initiatives would fit perfectly with the Chinese government’s ambition to improve care at the grassroots level to counter congestion in city hospitals. More widespread adoption of AI technologies should go hand in hand with investments in primary care facilities and Internet connectivity in rural areas—making health care more equally accessible and affordable and allowing people to enjoy a better quality of life close to their communities.

Looking further ahead, AI could also become pivotal in addressing lifestyle-related diseases such as obesity—a major health concern that affects about one in eight people in China. Imagine people with high risk of obesity getting bespoke lifestyle tips via their smartphone. On a population level, data analyses could inform public interventions targeted at specific age groups or geographic areas. As the Chinese government has outlined in its plans for a “Healthy China 2030,” the focus of the health care system will increasingly shift from treatment to prevention.

A Call for Collaboration

How to accelerate this journey toward more efficient, accessible and preventative care?

First, building a more robust data ecosystem should be top priority. The quality of AI is only as good as the quality of the data fed into it. China’s health care system would benefit from shared data standards, interoperability of systems, and improved data exchange protected by top-notch security measures. The establishment of three national digital databases with health information by 2020 is an important step in this direction.

Second, data-driven approaches such as AI will only have the desired impact when combined with proven medical expertise. AI is only part of any solution; it is never a solution by itself. A deep understanding of the clinical context is indispensable. Any form of AI-assisted care must be centered on the physician and the patient, taking their needs as a starting point and building on the wealth of human knowledge that is already available.

Third, AI-enabled tools must be rigorously tested against the highest regulatory standards. In health care, where lives are at stake, we need to deploy new technologies wisely and carefully. Only with proper clinical validation can we ensure responsible, safe and effective use of AI. Physicians as well as patients also require education on a tool’s strengths and limitations.

Fourth, collaboration between academia, startups, and established companies is of paramount importance. The challenges in China’s health care system are simply too big for any player to address it alone. In this light, it is encouraging that the Chinese government has recently founded a collaborative platform to promote the exchange of ideas and kick-start new projects in intelligent medicine.

Finally, to ensure we are creating a future-ready health care system in China, we must address the shortage of talent at the intersection of medicine and data science. We should nurture and invest in developing people who combine medical know-how with a firm understanding of AI and other technologies. Ultimately, the sustainability of China’s health care system may lie in their hands.

This piece first appeared on the World Economic Forum Agenda.

          Full Stack Health-sector Cloud Developer - Data Sciences - Montréal, QC      Cache   Translate Page      
With top graphic designers, copywriters, advertisers, and data scientists working together, we provide our clients with data driven solutions that help propel...
From Indeed - Tue, 02 Oct 2018 18:35:55 GMT - View all Montréal, QC jobs
          Global Jeep and Ram Business Analyst      Cache   Translate Page      
MI-Auburn Hills, The Global Jeep and Ram Business Analyst interfaces with FCA brands and media partners to provide ongoing assessments of demand and media spend performance. This role will partner with internal data scientists and external specialists to develop new tools and models to better predict and assess demand and conversion performance, and identify demand gaps to optimal levels. Additionally, developing
          Are You Ready for the Factory of the Future?      Cache   Translate Page      

There’s a lot of data tied up in medical device manufacturing processes—but it’s been challenging to capture and process this information from historically independent and disjointed machines and processes, say experts. In addition, skilled operators with a deep understanding of such processes have often served as the keepers of the knowledge.

Connecting those processes and machines so that they can communicate with each other and then collecting and analyzing the data from those processes could yield several benefits. Such connected, highly automated environments—often called smart factories, smart or advanced manufacturing, or Industry 4.0—offer opportunities for process improvements, improved quality, reduced costs, faster fulfillment, and more, said experts at The Medtech Conference by AdvaMed. Speakers explored how medical device manufacturing could be transformed through smart manufacturing in the panel discussion, “International Perspectives on Industry 4.0 and the Medtech Factory of the Future.” Moderated by Seamus Carroll, vice president, medical technologies division, IDA Ireland, the panel included Vivian Farrell, CEO of Modular Automation; Dan Grant, CEO of MTP Connect, based in Australia; Colm Hynes, director of engineering, science, and technology, joint reconstruction, DePuy Synthes, Johnson & Johnson; and Toby Sainsbury, technologist, industrial technologies group, IDA Ireland.

“Once you begin to connect devices, you gain the ability to predict what will happen,” Hynes told the audience. “You can continuously learn as time goes on. You can find patterns and see how the process behaves. You can reduce the cost of quality, and asset utilization goes up.”

It is possible to “connect all the different elements from order entry to component delivery,” added Hynes.

Smart factories “leverage the latest automation technologies to optimize the manufacturing process," Farrell later told MD+DI. "This often results in industrial robots interacting with people to get the task done."

Modular Automation has worked closely with J&J to implement such an approach. “It is very much a partnership—our team working hand in hand with Colm’s team. We’ve shared information and learned from each other, all raising the bar to identify better solutions,” said Farrell. “We allow the flow of data from machines to a central location to be analyzed and identify trends. The goal is to make medical devices faster and safer and more aligned with patient needs, and we all manage risks as we go forward.”

“There’s more productivity in how you produce products and get to patients,” added Hynes. “We build in batches, and [Industry 4.0] allows us to be a single-batch production and reduce inventory.” Such an approach allows manufacturers to produce personalized, patient-specific products, he added. “Speed, agility, etc. are achieved by adoption of information to create improved landscape in manufacturing.”

There’s a “need to arrive at solutions quickly,” added Farrell. She offered the example of high-speed contact lens manufacturing. “To double the output with half the footprint, you can’t achieve that with old thinking. You need to identify new ways to work. [It involves] machines and people working together to deliver products quicker and safer, while reducing waste and improving overall quality.”

 

Above: A mechanical design department manager leads a team meeting to discuss a concept for a bespoke automation solution. Image courtesy of Modular Automation.

 

Sainsbury told the audience that companies can “derisk in research and development innovation centers in a collaborative environment.” IDA Ireland works to support such exploration while remaining technology agnostic, he said.

Grant said that Australia recognized the importance of Industry 4.0 in 2016 and set up a task force. “We recognized the need to ensure appropriate standards,” he said. “We developed test labs at universities and research labs with links to industry.”

Companies need to evolve, too. “We as an organization have been fortunate enough to create an operations technology group to develop and test and learn. It is important to have the processes and structure to be able to do so,” said Hynes. “Otherwise, it is difficult in a regulated environment. You need the appropriate control system and software that can be validated and managed.”

Companies are also facing the challenge of adding new technologies in a regulated environment, such as augmented or virtual reality, said Farrell. She advises adopting technologies in “bite-sized chunks in a way that is proven safe and mitigates the risk.”

Another key challenge of this evolving manufacturing environment is “developing the workforce of the future,” said Farrell, namely the “makeup of teams and skills needed for the future.”

Sainsbury said the educational system in Ireland is responding. “Universities are offering programs in AI, cybersecurity, autonomous vehicles, and cloud computing,” he said. “As a state we have created funding, fellowships, feasibility studies, and full R&D programs.”

He added that he’s seeing “convergence of biopharma and medical with software, IT, and cybersecurity. Skills around data analytics, but data is one thing. We need security to protect the flow in and out of the data.” Cybersecurity is important for the industry as medtech becomes more mobile and consumerized, he added.

When asked what skills are being sought, Hynes said that “requirements have moved from data scientists” and more toward skills that center around getting the data and “transforming it for decision making.” He said that companies are looking for “curious, determined, adventurous people. . . . We need to allow people to test and learn and fail fast. The industry needs to be conservative from a production and manufacturing perspective but also be adventurous in a separate entity to be able to source an idea, develop a proof of concept, test it, and dump it or keep it. Such a ‘safe environment’ generates huge value. Creating that structure is the biggest step forward.”

Grant added that “STEM is shifting to STEAM so we have the arts in there, for commercialization skills and entrepreneurial skills.”

Seamus asked the panel whether robotics would have an effect on jobs.

“There is a big concern about job loss, but there will be job creation,” Grant said. He offered the example of autonomous dump trucks moving ore in mines. “Dump truck drivers lost jobs, but jobs were created in a call center.”

Grant acknowledged that “less skilled workers will struggle to retrain and redeploy themselves. But the jobs of the future don’t even exist today. Universities need to be more thoughtful in creating programs that support the jobs of the future.”

Carroll summed up the discussion this way: “Industry 4.0 is happening rapidly and is not just a fad—it is a real event. There are tremendous implications for companies and for the skills that people will need to have. In countries like Ireland, for example, there is a lot of practical and financial support, and they’re not alone—there are world-class companies to help them execute."


          Scientifique de données -Analytique d'affaires -Data Scientist - Aviva - Montréal, QC      Cache   Translate Page      
What you will be called upon to do: Join a team of passionate actuaries, data scientists and data engineers responsible for putting the...
From Aviva - Thu, 25 Oct 2018 17:53:51 GMT - View all Montréal, QC jobs
          Directeur actuariat, Équipe de science des données -Manager Data Scientist - Aviva - Montréal, QC      Cache   Translate Page      
An English version will follow. You will join a team of passionate actuaries, data scientists and data engineers responsible for putting the...
From Aviva - Tue, 16 Oct 2018 17:53:50 GMT - View all Montréal, QC jobs
          Protecting What Matters: Defining Data Guardrails and Behavioral Analytics      Cache   Translate Page      

Posted under: General

This is the second post in our series on Protecting What Matters: Introducing Data Guardrails and Behavioral Analytics. Our first post, Introducing Data Guardrails and Behavioral Analytics: Understand the Mission, introduced the concepts and outlined the major categories of insider risk. This post defines the concepts.

Data security has long been the most challenging domain of information security, despite being the centerpiece of our entire practice. We only call it “data security” because “information security” was already taken. Data security must not impede use of the data itself. By contrast it’s easy to protect archival data (encrypt it and lock the keys up in a safe). But protecting unstructured data in active use by our organizations? Not so easy. That’s why we started this research by focusing on insider risks, including external attackers leveraging insider access. Recognizing someone performing an authorized action, but with malicious intent, is a nuance lost on most security tools.

How Data Guardrails and Data Behavioral Analytics are Different

Both data guardrails and data behavioral analytics strive to improve data security by combining content knowledge (classification) with context and usage. Data guardrails leverage this knowledge in deterministic models and processes to minimize the friction of security while still improving defenses. For example, if a user attempts to make a file in a sensitive repository public, a guardrail could require them to record a justification and then send a notification to Security to approve the request. Guardrails are rule sets that keep users “within the lines” of authorized activity, based on what they are doing.

Data behavioral analytics extends the analysis to include current and historical activity, and uses tools such as artificial intelligence/machine learning and social graphs to identify unusual patterns which bypass other data security controls. Analytics reduces these gaps by looking not only at content and simple context (as DLP might), but also adding in history of how that data, and data like it, has been used within the current context. A simple example is a user accessing an unusual volume of data in a short period, which could indicate malicious intent or a compromised account. A more complicated situation would identify sensitive intellectual property on an accounting team device, even though they do not need to collaborate with the engineering team. This higher order decision making requires an understanding of data usage and connections within your environment.

Central to these concepts is the reality of distributed data actively used widely by many employees. Security can’t effectively lock everything down with strict rules covering every use case without fundamentally breaking business processes. But with integrated views of data and its intersection with users, we can build data guardrails and informed data behavioral analytical models, to identify and reduce misuse without negatively impacting legitimate activity. Data guardrails enforce predictable rules aligned with authorized business processes, while data behavioral analytics look for edge cases and less predictable anomalies.

How Data Guardrails and Data Behavioral Analytics Work

The easiest way to understand the difference between data guardrails and data behavioral analytics is that guardrails rely on pre-built deterministic rules (which can be as simple as “if this then that”), while analytics rely on AI, machine learning, and other heuristic technologies which look at patterns and deviations.

To be effective both rely on the following foundational capabilities:

  • A centralized view of data. Both approaches assume a broad understanding of data and usage – without a central view you can’t build the rules or models.
  • Access to data context. Context includes multiple characteristics including location, size, data type (if available), tags, who has access, who created the data, and all available metadata.
  • Access to user context, including privileges (entitlements), groups, roles, business unit, etc.
  • The ability to monitor activity and enforce rules. Guardrails, by nature, are preventative controls which require enforcement capabilities. Data behavioral analytics can be used only for detection, but are far more effective at preventing data loss if they can block actions.

The two technologies then work differently while reinforcing each other:

  • Data guardrails are sets of rules which look for specific deviations from policy, then take action to restore compliance. To expand our earlier example:
    • A user shares a file located in cloud storage publicly. Let’s assume the user has the proper privileges to make files public. The file is in a cloud service so we also assume centralized monitoring/visibility, as well as the capability to enforce rules on that file.
    • The file is located in an engineering team’s repository (directory) for new plans and projects. Even without tagging, this location alone indicates a potentially sensitive file.
    • The system sees the request to make the file public, but because of the context (location or tag), it prompts the user to enter a justification to allow the action, which gets logged for the security team to review. Alternatively, the guardrail could require approval from a manager before allowing the action.

Guardrails are not blockers because the user can still share the file. Prompting for user justification both prevents mistakes and loops in security review for accountability, allowing the business to move fast while minimizing risk. You could also look for large file movements based on pre-determined thresholds. A guardrail would only kick in if the policy thresholds are violated, and then use enforcement actions aligned with business processes (such as approvals and notifications) rather than simply blocking activity and calling in the security goons.
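As a rough illustration of the guardrail idea described above, here is a minimal sketch in Python. The event format, the SENSITIVE_PREFIXES list, and notify_security() are all hypothetical stand-ins, not part of any specific product.

```python
# Minimal guardrail sketch: "making a file public in a sensitive location
# requires a justification and notifies security." Event fields, the
# SENSITIVE_PREFIXES list, and notify_security() are hypothetical.

SENSITIVE_PREFIXES = ("/engineering/new-projects/", "/finance/")

def notify_security(event, justification):
    # Placeholder for an alerting integration (email, ticket, SIEM, etc.).
    print(f"[guardrail] {event['user']} shared {event['path']}: {justification}")

def evaluate_share_request(event, get_justification):
    """Return True if the share may proceed, False if it should be held."""
    if event["action"] != "make_public":
        return True
    if not event["path"].startswith(SENSITIVE_PREFIXES):
        return True  # outside the guardrail, allow as usual
    justification = get_justification()  # prompt the user, don't block outright
    if not justification:
        return False  # no justification given, hold the request
    notify_security(event, justification)
    return True

if __name__ == "__main__":
    event = {"user": "alice", "action": "make_public",
             "path": "/engineering/new-projects/q3-roadmap.docx"}
    allowed = evaluate_share_request(event, lambda: "Sharing with external design partner")
    print("allowed:", allowed)
```

The point of the sketch is the shape of the control: the action can still proceed, but only after the user supplies a justification and security is looped in.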

  • Data behavioral analytics use historical information and activity (typically with training sets of known-good and known-bad activity), which produce artificial intelligence models to identify anomalies. We don’t want to be too narrow in our description, because there are a wide variety of approaches to building models.
    • Historical activity, ongoing monitoring, and ongoing modeling are all essential – no matter the mathematical details.
    • By definition we focus on the behavior of data as the core of these models, rather than user activity; this represents a subtle but critical distinction from User Behavioral Analytics (UBA). UBA tracks activity on a per-user basis. Data behavioral analytics (the acronym DBA is already taken, so we’ll skip making up a new TLA), instead looks at activity at the source of the data. How has that data been used? By which user populations? What types of activity happen using the data? When? We don’t ignore user activity, but we track usage of data.
      • For example, we could ask, “Has a file of this type ever been made public by a user in this group?” UBA would ask “Has this particular user ever made a file public?” Focusing on the data offers the potential to catch a broader range of data usage anomalies.
    • At risk of stating the obvious, the better the data, the better the model. As with most security-related data science, don’t assume more data inevitably produces better models. It’s about the quality of the data. For example, social graphs of communication patterns among users could be a valuable feed to detect situations like files moving between teams who do not usually collaborate. That’s worth a look, even if you wouldn’t want to block the activity outright.

Data guardrails handle known risks, and are especially effective at reducing user error and identifying account abuse resulting from tricking authorized users into unauthorized actions. Guardrails may even help reduce account takeovers, because attackers cannot misuse data if their actions violate a guardrail. Data behavioral analytics then supplements guardrails for unpredictable situations and those where a bad actor tries to circumvent guardrails, including malicious misuse and account takeovers.
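A minimal sketch of the history-based check described above, assuming per-repository daily access counts are already being collected (the counts and the 3-sigma threshold are purely illustrative):

```python
# Toy data-behavioral check: flag a repository whose access volume today is
# far outside its own history. The counts and the 3-sigma threshold are
# illustrative only.

from statistics import mean, stdev

def is_anomalous(history, today, threshold=3.0):
    """Flag if today's count deviates from the historical mean by more than
    `threshold` standard deviations."""
    if len(history) < 2:
        return False  # not enough history to model
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return today != mu
    return abs(today - mu) / sigma > threshold

if __name__ == "__main__":
    # Daily file-access counts for one sensitive repository over two weeks.
    history = [42, 38, 51, 45, 40, 39, 47, 44, 43, 50, 41, 46, 48, 39]
    print(is_anomalous(history, 44))    # False - in line with history
    print(is_anomalous(history, 400))   # True - likely exfiltration or misuse
```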

Now you have a better understanding of the requirements and capabilities of data guardrails and data behavioral analytics. Our next post will focus on some quick wins to justify including these capabilities in your data security strategy.

- Rich (0) Comments Subscribe to our daily email digest
          11-28-2018 Decentralized Signal Processing and Distributed Control for Collaborative Autonomous Sensor Networks       Cache   Translate Page      
Speaker: Ryan Alan Goldhahn & Priyadip Ray, Lawrence Livermore National Laboratory
Talk Title: Decentralized Signal Processing and Distributed Control for Collaborative Autonomous Sensor Networks
Series: Center for Cyber-Physical Systems and Internet of Things
Abstract: Collaborative autonomous sensor networks have recently been used in many applications including inspection, law enforcement, search and rescue, and national security. They offer scalable, low-cost solutions which are robust to the loss of multiple sensors in hostile or dangerous environments. While often comprised of less capable sensors, the performance of a large network can approach the performance of far more capable and expensive platforms if nodes are effectively coordinating their sensing actions and data processing. This talk will summarize work to date at LLNL on distributed signal processing and decentralized optimization algorithms for collaborative autonomous sensor networks, focusing on ADMM-based solutions for detection/estimation problems and sequential and/or greedy optimization solutions which maximize submodular functions such as mutual information.
Biography: Ryan Goldhahn holds a Ph.D. in electrical engineering from Duke University with a focus in statistical and model-based signal processing. Ryan joined the NATO Centre for Maritime Research and Experimentation (CMRE) as a researcher in 2010 and later as the project lead for an effort to use multiple unmanned underwater vehicles (UUVs) to detect and track submarines using multi-static active sonar. In this work he developed collaborative autonomous behaviors to optimally reposition UUVs to improve tracking performance without human intervention. He led several experiments at sea with submarines from multiple NATO nations. At LLNL Ryan has continued to work and lead projects in collaborative autonomy and model-based and statistical signal processing in various applications. He has specifically focused on decentralized detection/estimation/tracking and optimization algorithms for autonomous sensor networks. Priyadip Ray received a Ph.D. degree in electrical engineering from Syracuse University in 2009. His Ph.D. dissertation received the Syracuse University All-University Doctoral Prize. Prior to joining LLNL, Dr. Ray was an assistant professor at the Indian Institute of Technology (IIT), Kharagpur, India, where he supervised a research group of approximately 10 scholars in the areas of statistical signal processing, wireless communications, optimization, machine learning and Bayesian non-parametrics. Prior to this he was a research scientist with the Department of Electrical and Computer Engineering at Duke University. Dr. Ray has published close to 40 research articles in various highly-rated journals and conference proceedings and is also a reviewer for leading journals in the areas of statistical signal processing, wireless communications and data science. At LLNL, Dr. Ray has been the PI/Co-I on multiple LDRDs as well as a DARPA-funded research effort in the areas of machine learning for healthcare and collaborative autonomy.
Host: Paul Bogdan
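The abstract's reference to greedy maximization of submodular objectives (such as mutual information) is easiest to see with a toy example. The sketch below is not code from the talk; it shows the standard greedy selection rule on an invented coverage objective, which stands in for mutual information.

```python
# Illustrative sketch (not from the talk): the classic greedy algorithm for
# selecting k sensors to maximize a monotone submodular objective, here a
# simple coverage function. Names and data are hypothetical.

def coverage(selected, coverage_sets):
    """Submodular objective: number of grid cells covered by the chosen sensors."""
    covered = set()
    for s in selected:
        covered |= coverage_sets[s]
    return len(covered)

def greedy_select(coverage_sets, k):
    """Greedily add the sensor with the largest marginal gain at each step.
    For monotone submodular objectives this achieves a (1 - 1/e) guarantee."""
    selected = []
    candidates = set(coverage_sets)
    for _ in range(min(k, len(candidates))):
        best, best_gain = None, -1
        base = coverage(selected, coverage_sets)
        for c in candidates:
            gain = coverage(selected + [c], coverage_sets) - base
            if gain > best_gain:
                best, best_gain = c, gain
        selected.append(best)
        candidates.remove(best)
    return selected

if __name__ == "__main__":
    # Toy example: each sensor "sees" a set of grid cells.
    sensors = {
        "s1": {1, 2, 3, 4},
        "s2": {3, 4, 5},
        "s3": {6, 7},
        "s4": {1, 6, 7, 8},
    }
    print(greedy_select(sensors, 2))  # e.g. ['s1', 's4']
```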
          IT Business Systems Analyst – Finance & Accounting - IG Design Group – Americas - Atlanta, GA      Cache   Translate Page      
BS degree in Business, Computer Science / Data Science / Information Technology or related field, advanced degree desired....
From IG Design Group – Americas - Sat, 22 Sep 2018 00:08:56 GMT - View all Atlanta, GA jobs
          IT Business Systems Analyst – Sales & Order Management - IG Design Group – Americas - Atlanta, GA      Cache   Translate Page      
BS degree in Business, Computer Science / Data Science / Information Technology or related field, advanced degree desired....
From IG Design Group – Americas - Sat, 22 Sep 2018 00:08:56 GMT - View all Atlanta, GA jobs
          (Junior) Consultant (m/w) Big Data / Data Engineering / Data Science      Cache   Translate Page      
saracus is one of the leading independent consulting firms for Big Data / Data Engineering …
          Data Scientist      Cache   Translate Page      
As an engineering service provider and technology consultant, AKKA supports its customers in …
          Data Scientist      Cache   Translate Page      
As part of our international expansion, you will support the implementation projects of our …
          Data Scientist      Cache   Translate Page      
In the role of Data Scientist, you will collaborate with our Data Engineers on advanced statistical …
          Data Scientist (Big Data) (f/m/x)      Cache   Translate Page      
StepStone is one of the largest German digital enterprises with around 2,900 employees worldwide. …
          Data Scientist / Data Analyst (m/w/d)      Cache   Translate Page      
With three differently positioned brands and sales channels - RheinLand Versicherungen, Rhion …
          (Erfahrener) Data Analyst/ Data Scientist (m/w) für Big Data und Data Analytics am Standort Saarbrücken      Cache   Translate Page      
Daimler AG is one of the most successful automotive companies in the world. With the …
          Data Scientist (m/w/d) / Data Mining      Cache   Translate Page      
Bonjour at vertbaudet! Founded in France in 1963, vertbaudet has developed into the …
          Data Analyst / Data Scientist w/m      Cache   Translate Page      
#data analyst / data scientist w/m The HFG Group is the specialist for long-term debt collection in …
          Data Analyst / Data Scientist (m/w)      Cache   Translate Page      
For over 40 years, the Bertrandt Group has been providing development solutions for the international …
          Praktikant/in - Bereich Data Science      Cache   Translate Page      
Rethinking insurance together: digital and simple. That is your chance as: Intern – …
          Data Scientist Industrie      Cache   Translate Page      
Economical, precise, safe and energy-efficient: drive and control technology from Bosch …
          Data Scientist (m/w)      Cache   Translate Page      
You think in data and make decisions on the basis of mathematical forecasts? Become a …
          Data Scientist (m/w)      Cache   Translate Page      
Blue Yonder GmbH -- Data Scientist (m/w) Data Scientist (m/w) in Hamburg or Karlsruhe Blue Yonder …
          Mapping the economy in real time is almost ‘within our grasp’       Cache   Translate Page      
 


Andy Haldane, BoE chief economist, says economists should embrace data flood
It should be said here that Andy Haldane also knows about Transfinancial Economics, and the notion of "mapping the economy in real time..." is clearly indicated in the following article. Though Big Data is part of the TFE paradigm, it is only part of the whole picture for a modern, futuristic economics... See https://wiki.p2pfoundation.net/Transfinancial_Economics





The goal of mapping economic activity in real time, just as we do for weather or traffic, is “closer than ever to being within our grasp”, according to Andy Haldane, the Bank of England’s chief economist. In recent years, “data has become the new oil . . . and data companies have become the new oil giants”, Mr Haldane told an audience at King’s Business School in a speech delivered earlier this month and released on Monday. But economics and finance have been “rather reticent about fully embracing this oil-rush”, partly because economists have tended to prefer a deductive approach that puts theory ahead of measurement. This needs to change, he said, because relying too much on either theory or real-world data in isolation can lead to serious mistakes in policymaking — as was seen when the global financial crisis exposed the “empirical fragility” of macroeconomic models.


Parts of the private sector and academia have been far swifter to exploit the vast troves of ever-accumulating data now available — 90 per cent of which has been created in the last two years alone. Massachusetts Institute of Technology’s “Billion Prices Project”, name-checked in Mr Haldane’s speech, now collects enough data from online retailers for its commercial arm to provide daily inflation updates for 22 economies. The Alan Turing Institute — the UK’s new national institute for data science — runs a programme, with funding from HSBC, which aims to use new data to measure economic activity faster and more precisely than was previously possible.

National statisticians are taking tentative steps in the same direction. The UK’s Office for National Statistics — which has faced heavy criticism over the quality of its data in recent years — is experimenting with “web-scraping” to collect price quotes for food and groceries, for example, and making use of VAT data from small businesses to improve its output-based estimates of gross domestic product. In both cases, the increased sample size and granularity could bring considerable benefits on top of existing surveys, Mr Haldane said.

The BoE itself is trying to make better use of financial data — for example, by using administrative data on owner-occupied mortgages to better understand pricing decisions in the UK housing market. Mr Haldane sees scope to go further with the new data coming on stream on payment, credit and banking flows. “Almost all economic activity leaves a financial footprint,” he said. “In time, it is possible that these sorts of data could help to create a real-time map of financial and activity flows across the economy, in much the same way as is already done for flows of traffic or information or weather. Once mapped, there would then be scope to model and, through policy, modify these flows.”

New data sources and techniques could also help policymakers think about human decision-making — which rarely conforms with the rational process assumed in many economic models. Data on music downloads from Spotify, used as an indicator of sentiment, has recently been shown to do at least as well as a standard consumer confidence survey in tracking consumer spending. “Why stop at music?” Mr Haldane asked. He saw potential to create a gaming environment “to explore behaviour in a virtual economy where players can spend or save, and one could test their reactions to monetary and regulatory policy intervention”.
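As a hedged illustration of what the web-scraped price quotes mentioned above can feed into, here is a toy daily price index, a Jevons-style geometric mean of price relatives. The items and prices are invented, and this is not the ONS or MIT methodology in detail.

```python
# Toy Jevons-style price index from scraped quotes: geometric mean of
# today's price relative to a base day, per item. All figures are invented.

from math import exp, log

def jevons_index(base_prices, today_prices):
    """Index = geometric mean of today/base price relatives, x 100."""
    items = base_prices.keys() & today_prices.keys()
    if not items:
        raise ValueError("no overlapping items between the two days")
    log_relatives = [log(today_prices[i] / base_prices[i]) for i in items]
    return 100 * exp(sum(log_relatives) / len(log_relatives))

if __name__ == "__main__":
    base = {"bread": 1.10, "milk": 0.95, "eggs": 2.40}
    today = {"bread": 1.15, "milk": 0.95, "eggs": 2.52}
    print(round(jevons_index(base, today), 2))  # a little above 100
```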




          Insight: News Network On Election Day / County Elections / BBC In California / Fighting Wildfire With Data      Cache   Translate Page      
CapRadio reporters check in as Californians head to the polls. We talk with county election officials about Election Day. We hear from a BBC team in California for the election. A professor tells us how data science can help predict wildfires.
          Data Scientist / Prognostic Health Monitoring Specialist - Abbott Laboratories - Lake Forest, IL      Cache   Translate Page      
Operational and business decisions with high risk or consequences to the business. Communication with internal and external customers....
From Abbott Laboratories - Thu, 25 Oct 2018 11:08:26 GMT - View all Lake Forest, IL jobs
          Data Scientist - Zillow Group - Seattle, WA      Cache   Translate Page      
Dive into Zillow's internal and third party data (think Hive, Presto, SQL Server, Redshift, Python, Mode Analytics, Tableau, R) to uncover real estate trends...
From Zillow Group - Thu, 01 Nov 2018 11:21:23 GMT - View all Seattle, WA jobs
          Data Scientist (Agent Pricing) - Zillow Group - Seattle, WA      Cache   Translate Page      
Dive into Zillow's internal and third-party data (think Hive, Presto, SQL Server, Python, Mode Analytics, Tableau, R) to make strategic recommendations....
From Zillow Group - Thu, 01 Nov 2018 11:21:14 GMT - View all Seattle, WA jobs
          Data Scientist - Vertical Living - Zillow Group - Seattle, WA      Cache   Translate Page      
Dive into Zillow's internal and third party data (think Hive, Presto, SQL Server, Redshift, Python, Mode Analytics, Tableau, R) to make strategic...
From Zillow Group - Thu, 01 Nov 2018 11:21:13 GMT - View all Seattle, WA jobs
          Global Academy Jobs: Lecturer in Mathematics, Statistics or Data Science      Cache   Translate Page      
Global Academy Jobs: School of Mathematics and Physics  The University of Queensland is recognised nationally and internationally for its research programs across a Australia
          Building a rock star Data Science team      Cache   Translate Page      
It's been a year since we kicked off the IBM Data Science Elite Team. This group of expert consultants in the field of data science and machine learning ...
          WhizAI Joins NVIDIA Inception Program      Cache   Translate Page      
... joined the NVIDIA Inception program, which is designed to nurture startups revolutionizing industries with advancements in AI and data sciences.
          Top Data Science Influencers To Follow on Twitter      Cache   Translate Page      
Ronald van Loon: Director Adversitement•Helping Data-Driven Companies Generating Success•Top10 Big Data, Data Science, IoT, AI Influencer.
          New Data Analytics Major Prepares Students for Jobs in High Demand      Cache   Translate Page      
Data scientists, data engineers and business analysts are in high demand in the job market, but applicants often lack the range of skills critical for ...
          Zindi, MIIA to host data science competitions      Cache   Translate Page      


Celina Lee, CEO of Zindi. Local data science competition platform Zindi has partnered with the Machi...


          "I'll Take 'DataOps' for 500, Alex"      Cache   Translate Page      

By Dennis D. McDonald

On behalf of my client Dimensional Concepts LLC (DCL) I attended the recent webinar sponsored by Data Science Central titled How To Structure A Modern Data Ops Team. Michele Goetz from Forrester talked about what that company is finding in its research about modern corporate data governance practices. Will Davis of Trifecta discussed that topic from the perspective of a tool vendor that helps clients implement state of the art data management and analytics platforms.

My client’s interest is straightforward: in the business of supporting government programs for both civilian and defense agencies, DCL is researching potential sub and prime contractors for program support contracts that include data management and analytics. Influencing this is my own special interest in how you manage such programs when transitioning from traditional data architectures to distributed and cloud-based approaches involving large or diverse data volumes.

The webinar presentations expanded on what I have learned in my own research and consulting related to data governance and data program management, for example:

  • There’s always more to data governance than selecting the right tools.

  • Traditional roles and organizational structures must change to address opportunities generated by rapidly evolving data management and analytics capabilities.

  • People tend to resist changes to how they have worked in the past.

  • How the transition is managed can make or break an organization’s ability to make the most of its data.

I have already addressed some of these issues—especially the last—in my own work (for example, see An Introduction to Data Program Management (DPM)). Still, the webinar’s presentations were enlightening since they addressed the practical definition of different roles and responsibilities, for example, what a “data analyst” does, what a “data scientist” does, and -- arguably most important -- how roles and responsibilities must include sharing information as well as how data contribute to the organization’s bottom line.

My only (albeit minor) disappointment with the webinar -- admittedly a lot was packed into an hour -- was the lack of discussion of how management approaches must be adapted to the modern organization’s data environment. Still, the intelligent discussion of roles and responsibilities is extremely valuable, as is the emphasis on sharing, collaboration, and targeted use cases to drive development.

In my own work I have found that implementing an appropriate data governance infrastructure depends on several factors including:

  1. How sophisticated the organization already is regarding management and use of data. For example, when is self service appropriate versus when is an internal “service bureau” operation needed?

  2. How disciplined is the organization regarding how it manages projects and processes that cross departmental and functional lines? Does it have a centralized PMO type operation to support both administrative and operations management, or are project managers "on their own"?

  3. How prepared is the organization to recognize the necessary alignment between front end ETL, quality control, and data transformation practices with downstream value delivery to those in the organization’s executive suite?

This last item will not be a surprise to anyone with experience with organizational change efforts. The number one requirement for a successful project, for example, has always been and will continue to be the maintaining of management support through the ability to contribute demonstrably to the organization’s bottom line. So it is with transitioning to a more mature “data ops” approach.

Building and maintaining an understanding of how data governance -- starting with gathering and processing "clean" data -- aligns with delivering value can be a major challenge. It requires not only technology but management sophistication as well.

Based on what I heard in the Data Science Central webinar we’re making great progress, and we still have a long way to go.

Copyright © 2018 by Dennis D. McDonald

Below: Links to more “Data Governance” articles


          Senior Scientist, Data Science (1 of 2) - Johnson & Johnson Family of Companies - Spring House, PA      Cache   Translate Page      
Consideration will be given to Raritan, NJ; Janssen Research &amp; Development LLC, a Johnson &amp; Johnson company, is recruiting for a Senior Scientist, Data Science....
From Johnson & Johnson Family of Companies - Mon, 05 Nov 2018 23:05:25 GMT - View all Spring House, PA jobs
          Scientist, Data Science (1 of 2) - Johnson & Johnson Family of Companies - Spring House, PA      Cache   Translate Page      
Consideration will be given to Raritan, NJ; Janssen Research &amp; Development LLC, a Johnson &amp; Johnson company, is recruiting for a Scientist, Data Science....
From Johnson & Johnson Family of Companies - Mon, 05 Nov 2018 23:05:25 GMT - View all Spring House, PA jobs
          Consultant, Business Analytics & Data Science - Lincoln Financial - Boston, MA      Cache   Translate Page      
Phoenix, AZ (Arizona). Knowledge and experience on applying statistical and machine learning techniques on real business data....
From Lincoln Financial Group - Fri, 02 Nov 2018 02:54:18 GMT - View all Boston, MA jobs
          Sr. Consultant, Business Analytics & Data Science - Lincoln Financial - Boston, MA      Cache   Translate Page      
Phoenix, AZ (Arizona). Implements and maintains predictive and statistical models to identify business opportunities and solve complex business problems....
From Lincoln Financial Group - Tue, 16 Oct 2018 20:54:14 GMT - View all Boston, MA jobs
          Data Scientist - State Farm - Bloomington, IL      Cache   Translate Page      
Bloomington, IL, Atlanta, GA, Dallas, TX, and Phoenix, AZ. Collaborates with business subject matter experts to select relevant sources of information....
From State Farm - Fri, 21 Sep 2018 22:31:56 GMT - View all Bloomington, IL jobs
          Survey: Retail clinic visits drive in-store purchases      Cache   Translate Page      

A new survey from data science firm Civis Analytics takes a look at the impact of retail clinic patients on retail sales and patient satisfaction.

Read More

          Data Scientist - Aimia - Toronto, ON      Cache   Translate Page      
Expert knowledge and past experience of statistics and mathematics applied to business. Ideally, the individual in this role is a technical expert with hands-on...
From Aimia - Thu, 13 Sep 2018 20:52:33 GMT - View all Toronto, ON jobs
          Amazon reportedly making Long Island City one of two HQ2 locations      Cache   Translate Page      

The most high-profile headquarters search in modern times appears to be coming to an end that few expected.

A year after Amazon teased municipalities across the country with the chance to land the company's second headquarters—one that would be on equal footing with its Seattle home base—the e-commerce Goliath has reportedly come up with a new iteration. The company is now splitting its second headquarters into two locations of equal size, according to The Wall Street Journal and The New York Times.

One half may be based in Long Island City, Queens.

"Under the new plan, Amazon would split the workforce with 25,000 employees in each city," the Journal reported, citing an unnamed source. Crystal City, a neighborhood in Arlington, Va., is the other place being considered, according to the Journal and the Times.

Mayor Bill de Blasio, speaking to NY1 today, said that as far as he knew, Amazon had not "made a final decision."

Amazon did not respond to requests for comment.

Originally, Amazon said it would invest $5 billion and create 50,000 jobs in its so-called HQ2 over two decades. Driving the change of plans were concerns about meeting the demand for tech talent and the strain such rapid growth would put on the host city's housing market.

Along with contributing to Seattle's status as the fastest-growing big city in the country in recent years, Amazon has become a divisive presence in its hometown, blamed for record-setting increases in home prices and a pervasive homelessness problem. The company was never going to have that sort of outsize impact on New York, which has more than 10 times the population of Seattle, but the company's impact would be even more diluted now.

Splitting the headquarters will "shrink" Amazon's "effects on affordability," Zillow Research wrote yesterday.

In April the website had forecast a minuscule decrease in average rents—which are already in decline—should the city be chosen.

There also have been concerns that Amazon would suck up all of New York's top tech talent, which is in short supply. But while Amazon would have competitive advantages thanks to its deep pockets and reputation as an innovator, it could wind up increasing the number of engineers, data scientists and other specialists.

"It's a name people will be attracted to work for, and the more talent you can bring into the region, the better," said Eric Hippeau, a partner at Lerer Hippeau Ventures, a Manhattan venture capital firm specializing in early-stage startups. "On balance it would improve the pool of talent we have."

It also could mean that other businesses would want to be located here.

"The Amazon jobs are just the tip of the iceberg," said Jeffrey Schulman, a marketing professor at the University of Washington business school who has studied the company's effects on Seattle. "As a leading innovator across a range of industries, Amazon will draw other companies that want to poach its well-trained talent and who want to tap into the resources of the company."

If there are words of warning, they are mainly about tax breaks, especially for a company that has a market cap north of $800 billion and annual revenue that's expected to top $230 billion this year. The Empire State Development Corp., which has been handling negotiations with Amazon, has not disclosed the state's incentives package, though the Times reported it could be in the hundreds of millions of dollars. Additional benefits could come from the city.

Some experts say it's hard for a location to come out on top.

"For the city, it should be just a straightforward thing: We will get more benefit than what we are giving away," said Alain Bertaud, a senior research scholar at the NYU Marron Institute of Urban Management and author of Order Without Design: How Markets Shape Cities. "But we are dealing with a lot of data, which are not very clear. [Government] might be tempted to make a deal that will be to the detriment of the maintenance of the city."

For now, though, some New Yorkers believe that if Amazon does settle half of its second headquarters here, it will support industry trends that have been underway since the Bloomberg administration, when the city began to see its future in technology.

"It gives us a new source of growth," said Mitchell Moss, the Henry Hart Rice professor of urban policy and planning at NYU. "It reflects the continuing relative decline of finance and the growth of information-based industries."


          Data Scientist - Big Data - belairdirect - Montréal, QC      Cache   Translate Page      
Fluency in applied analytical techniques including regression analysis, clustering, decision trees, neural networks, SVM (support vector machines),...
From belairdirect - Thu, 13 Sep 2018 00:51:55 GMT - View all Montréal, QC jobs
          SNR. PYTHON DEVELOPER- DEVELOP YOUR MACHINE LEARNING AND DATA ANALYTICS SKILLS      Cache   Translate Page      
Acuity Consultants - Paarl, Western Cape - This is an excellent opportunity for a SNR. PYTHON DEVELOPER to develop their machine learning and data analytics skills. Based in the... has pioneered the InsureTech space in South Africa, by capitalizing on data science and machine learning technology to create the country's first award...
          PYTHON DEVELOPER- DEVELOP YOUR MACHINE LEARNING AND DATA ANALYTICS SKILLS      Cache   Translate Page      
Acuity Consultants - Paarl, Western Cape - This is an excellent opportunity for a Python developer to develop their machine learning and data analytics skills. Based in the NORTHERN... has pioneered the InsureTech space in South Africa, by capitalizing on data science and machine learning technology to create the country's first award...
          Tableau Developer      Cache   Translate Page      
MN-Minneapolis, a) Strong Tableau developer b) Able to work with the client directly, gather requirements, understand them, and do the development - good communication skills c) Teradata knowledge preferable, or strong in another relational DB d) Power BI; knowing data science is good to have
          Data Science Online and classroom training with Placements Assistance      Cache   Translate Page      
'Enroll for Data Science course training in Hyderabad with real-time experts. Career3s provides corporate-level training with experienced instructors. We provide Data Science classroom and online training with real-time experts and 100% placement assistance.'
          Data Scientist - Deloitte - Springfield, VA      Cache   Translate Page      
Demonstrated knowledge of machine learning techniques and algorithms. We believe that business has the power to inspire and transform....
From Deloitte - Fri, 10 Aug 2018 06:29:44 GMT - View all Springfield, VA jobs
          Associate, Machine Learning AI Consultant, Financial Services - KPMG - Dallas, TX      Cache   Translate Page      
Broad, versatile knowledge of analytics and data science landscape, combined with strong business consulting acumen, enabling the identification, design and...
From KPMG LLP - Fri, 07 Sep 2018 02:02:14 GMT - View all Dallas, TX jobs
          Principal Data Scientist - Clockwork Solutions - Austin, TX      Cache   Translate Page      
Support Clockwork’s Business Development efforts. Evaluates simulation analysis output to reveal key insights about unstructured, chaotic, real-world systems....
From Clockwork Solutions - Mon, 27 Aug 2018 10:03:12 GMT - View all Austin, TX jobs
          Lead Data Scientist - Clockwork Solutions - Austin, TX      Cache   Translate Page      
Support Clockwork’s Business Development efforts. Evaluates simulation analysis output to reveal key insights about unstructured, chaotic, real-world systems....
From Clockwork Solutions - Mon, 27 Aug 2018 10:03:12 GMT - View all Austin, TX jobs
          Associate, Machine Learning AI Consultant, Financial Services - KPMG - New York, NY      Cache   Translate Page      
Broad, versatile knowledge of analytics and data science landscape, combined with strong business consulting acumen, enabling the identification, design and...
From KPMG LLP - Fri, 14 Sep 2018 08:38:34 GMT - View all New York, NY jobs
          Data Scientist: Medical VoC and Text Analytics Manager - GlaxoSmithKline - Research Triangle Park, NC      Cache   Translate Page      
Strong business acumen; 2+ years of unstructured data analysis/text analytics/natural language processing and/or machine learning application for critical...
From GlaxoSmithKline - Fri, 19 Oct 2018 23:19:12 GMT - View all Research Triangle Park, NC jobs
          To build trust in data science, work together      Cache   Translate Page      
Collaboration is key to building trust in algorithms and big data, according to a new paper by Cornell researchers.
          Protecting What Matters: Defining Data Guardrails and Behavioral Analytics      Cache   Translate Page      

Posted under: General

This is the second post in our series on Protecting What Matters: Introducing Data Guardrails and Behavioral Analytics. Our first post, Introducing Data Guardrails and Behavioral Analytics: Understand the Mission, introduced the concepts and outlined the major categories of insider risk. This post defines the concepts.

Data security has long been the most challenging domain of information security, despite being the centerpiece of our entire practice. We only call it “data security” because “information security” was already taken. Data security must not impede use of the data itself. By contrast it’s easy to protect archival data (encrypt it and lock the keys up in a safe). But protecting unstructured data in active use by our organizations? Not so easy. That’s why we started this research by focusing on insider risks, including external attackers leveraging insider access. Recognizing someone performing an authorized action, but with malicious intent, is a nuance lost on most security tools.

How Data Guardrails and Data Behavioral Analytics are Different

Both data guardrails and data behavioral analytics strive to improve data security by combining content knowledge (classification) with context and usage. Data guardrails leverage this knowledge in deterministic models and processes to minimize the friction of security while still improving defenses. For example, if a user attempts to make a file in a sensitive repository public, a guardrail could require them to record a justification and then send a notification to Security to approve the request. Guardrails are rule sets that keep users “within the lines” of authorized activity, based on what they are doing.

Data behavioral analytics extends the analysis to include current and historical activity, and uses tools such as artificial intelligence/machine learning and social graphs to identify unusual patterns which bypass other data security controls. Analytics reduces these gaps by looking not only at content and simple context (as DLP might), but also adding in history of how that data, and data like it, has been used within the current context. A simple example is a user accessing an unusual volume of data in a short period, which could indicate malicious intent or a compromised account. A more complicated situation would identify sensitive intellectual property on an accounting team device, even though they do not need to collaborate with the engineering team. This higher order decision making requires an understanding of data usage and connections within your environment.

Central to these concepts is the reality of distributed data actively used widely by many employees. Security can’t effectively lock everything down with strict rules covering every use case without fundamentally breaking business processes. But with integrated views of data and its intersection with users, we can build data guardrails and informed data behavioral analytical models, to identify and reduce misuse without negatively impacting legitimate activity. Data guardrails enforce predictable rules aligned with authorized business processes, while data behavioral analytics look for edge cases and less predictable anomalies.

How Data Guardrails and Data Behavioral Analytics Work

The easiest way to understand the difference between data guardrails and data behavioral analytics is that guardrails rely on pre-built deterministic rules (which can be as simple as “if this then that”), while analytics rely on AI, machine learning, and other heuristic technologies which look at patterns and deviations.

To be effective both rely on the following foundational capabilities:

  • A centralized view of data. Both approaches assume a broad understanding of data and usage – without a central view you can’t build the rules or models.
  • Access to data context. Context includes multiple characteristics including location, size, data type (if available), tags, who has access, who created the data, and all available metadata.
  • Access to user context, including privileges (entitlements), groups, roles, business unit, etc.
  • The ability to monitor activity and enforce rules. Guardrails, by nature, are preventative controls which require enforcement capabilities. Data behavioral analytics can be used only for detection, but are far more effective at preventing data loss if they can block actions.

The two technologies then work differently while reinforcing each other:

  • Data guardrails are sets of rules which look for specific deviations from policy, then take action to restore compliance. To expand our earlier example:
    • A user shares a file located in cloud storage publicly. Let’s assume the user has the proper privileges to make files public. The file is in a cloud service so we also assume centralized monitoring/visibility, as well as the capability to enforce rules on that file.
    • The file is located in an engineering team’s repository (directory) for new plans and projects. Even without tagging, this location alone indicates a potentially sensitive file.
    • The system sees the request to make the file public, but because of the context (location or tag), it prompts the user to enter a justification to allow the action, which gets logged for the security team to review. Alternatively, the guardrail could require approval from a manager before allowing the action.

Guardrails are not blockers because the user can still share the file. Prompting for user justification both prevents mistakes and loops in security review for accountability, allowing the business to move fast while minimizing risk. You could also look for large file movements based on pre-determined thresholds. A guardrail would only kick in if the policy thresholds are violated, and then use enforcement actions aligned with business processes (such as approvals and notifications) rather than simply blocking activity and calling in the security goons.
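
To make the "if this then that" character of a guardrail concrete, here is a minimal sketch in R. It is our own illustration rather than code from the post, and the event fields, tag names, and action labels are assumptions:

# Toy guardrail: decide what happens when a user tries to make a file public.
# The field names (action, location_tag) and the tag values are hypothetical.
evaluate_guardrail <- function(event) {
  sensitive_tags <- c("engineering/new-plans", "legal/contracts")
  if (event$action == "make_public" && event$location_tag %in% sensitive_tags) {
    # Don't block outright: require a justification and notify the security team.
    return(list(decision = "require_justification", notify = "security-team"))
  }
  list(decision = "allow", notify = NA)
}

# An authorized user sharing a file from a sensitive engineering repository:
event <- list(user = "jdoe", action = "make_public",
              location_tag = "engineering/new-plans")
evaluate_guardrail(event)  # decision = "require_justification", notify = "security-team"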

  • Data behavioral analytics use historical information and activity (typically with training sets of known-good and known-bad activity), which produce artificial intelligence models to identify anomalies. We don’t want to be too narrow in our description, because there are a wide variety of approaches to building models.
    • Historical activity, ongoing monitoring, and ongoing modeling are all essential – no matter the mathematical details.
    • By definition we focus on the behavior of data as the core of these models, rather than user activity; this represents a subtle but critical distinction from User Behavioral Analytics (UBA). UBA tracks activity on a per-user basis. Data behavioral analytics (the acronym DBA is already taken, so we’ll skip making up a new TLA), instead looks at activity at the source of the data. How has that data been used? By which user populations? What types of activity happen using the data? When? We don’t ignore user activity, but we track usage of data.
      • For example we could ask, “Has a file of this type ever been made public by a user in this group?” UBA would ask “Has this particular user ever made a file public?” Focusing on the data offers the potential to catch a broader range of data usage anomalies.
    • At the risk of stating the obvious, the better the data, the better the model. As with most security-related data science, don’t assume more data inevitably produces better models. It’s about the quality of the data. For example, social graphs of communication patterns among users could be a valuable feed to detect situations like files moving between teams who do not usually collaborate. That’s worth a look, even if you wouldn’t want to block the activity outright. (A toy sketch of a volume-based data anomaly check follows this list.)
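
The following R sketch illustrates the volume-based case in miniature: it builds a baseline from a data source's historical daily read volumes and flags days that sit far above it. The data layout and the three-standard-deviation threshold are illustrative assumptions, not a recommended model:

# Toy data-behavioral check: flag unusual daily read volume for one data source.
# The access_log layout (columns: day, bytes_read) is a hypothetical example.
set.seed(42)
access_log <- data.frame(
  day        = seq.Date(as.Date("2018-10-01"), by = "day", length.out = 30),
  bytes_read = c(rnorm(29, mean = 2e6, sd = 2e5), 2.5e7)  # last day is an injected spike
)

baseline_mean <- mean(access_log$bytes_read[1:29])
baseline_sd   <- sd(access_log$bytes_read[1:29])

# Anything more than 3 standard deviations above the historical mean is "unusual".
access_log$flagged <- access_log$bytes_read > baseline_mean + 3 * baseline_sd
access_log[access_log$flagged, ]  # rows flagged as unusual; the injected spike will be among them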

Data guardrails handle known risks, and are especially effective at reducing user error and identifying account abuse resulting from tricking authorized users into unauthorized actions. Guardrails may even help reduce account takeovers, because attackers cannot misuse data if their actions violate a guardrail. Data behavioral analytics then supplements guardrails for unpredictable situations and those where a bad actor tries to circumvent guardrails, including malicious misuse and account takeovers.

Now you have a better understanding of the requirements and capabilities of data guardrails and data behavioral analytics. Our next post will focus on some quick wins to justify including these capabilities in your data security strategy.

- Rich
          Simple Data entry - Upwork      Cache   Translate Page      
There are approx 7 worksheets that need formatting. About 300 names on each. Each sheet takes 20 minutes to format. Possible ongoing work for the right person. Files must be sent back in .csv format.

Budget: $15
Posted On: November 07, 2018 06:39 UTC
ID: 214651067
Category: Data Science & Analytics > Machine Learning
Skills: Data Entry
Country: Australia
click to apply
          Sr. Dev Smart Feat API (ISO) - Verisk Analytics - Jersey City, NJ      Cache   Translate Page      
Create a positive, lasting impact on the business; Provide technical consultation to internal teams and coach Data Scientists in following good software...
From Verisk Analytics - Thu, 25 Oct 2018 22:46:15 GMT - View all Jersey City, NJ jobs
          Sr Data Scientist Engineer (HCE) - Honeywell - Atlanta, GA      Cache   Translate Page      
50 Machine Learning. Develop relationships with business team members by being proactive, displaying a thorough understanding of the business processes and by...
From Honeywell - Thu, 20 Sep 2018 02:59:11 GMT - View all Atlanta, GA jobs
          Data Science Team Lead - Resource Technology Partners - Boston, MA      Cache   Translate Page      
Work with internal research teams and help build out internal DNA sequencing pipeline. Indigo is looking to disrupt farming through data....
From ReSource Technology Partners - Mon, 15 Oct 2018 06:56:10 GMT - View all Boston, MA jobs
          Joint DMS/NLM Initiative on Generalizable Data Science Methods for Biomedical Research Webinar      Cache   Translate Page      

Nov 19 2018 1:00PM to
Nov 19 2018 3:00PM
Alexandria

The Division of Mathematical Sciences (DMS) in the Directorate for Mathematical and Physical Sciences (MPS) at the National Science Foundation (NSF) and the National Library of Medicine (NLM) at the National Institutes of Health (NIH) plan to support the development of innovative and transformative mathematical and statistical approaches to address important data-driven biomedical and health challenges. The rationale for this interagency collaboration is that significant advances may be ...
More at https://www.nsf.gov/events/event_summ.jsp?cntn_id=297151&WT.mc_id=USNSF_13&WT.mc_ev=click


This is an NSF Events item.

          Assistant Professor in Biomedical Data Science and Informatics - Clemson University - Barre, VT      Cache   Translate Page      
Clemson University is ranked 24th among public national universities by U.S. In Fall 2018, Clemson has over 18,600 undergraduate and 4,800 graduate students....
From Clemson University - Fri, 02 Nov 2018 14:09:49 GMT - View all Barre, VT jobs
          Wolters Kluwer Tax & Accounting CEO Karen Abramson Participates in Blockchain Challenge Broken Bots 'N' Blocks Open to US College Students Studying Accounting, Finance or Data Science      Cache   Translate Page      

Wolters Kluwer Tax & Accounting CEO, Karen Abramson, joined industry thought leaders and blockchain experts as a judge in the Broken Bots 'N' Blocks challenge designed to educate, explore and empower students and professionals on the fundamentals and potential of blockchain. Participation in the Broken Bots 'N' Blocks challenge, organized by TrueUp, included 186 students from 34 US colleges and 62 mentors from 32 Firms. The challenge required students to execute a transaction in blockc...

Read the full story at https://www.webwire.com/ViewPressRel.asp?aId=230994


          UH Data Science Institute receives $10 million boost from Hewlett Packard Enterprise      Cache   Translate Page      

Collaboration includes research opportunities for faculty and students

The University of Houston announced a new collaboration with Hewlett Packard Enterprise (HPE) on Friday, including a $10 million gift from HPE to the University. The gift from HPE will benefit the University's Data Science Institute and include funding for a scholarship endowment, as well as both funding and equipment to enhance data science research activities. “At HPE, we have a robust presence in Houston and a...

Read the full story at https://www.webwire.com/ViewPressRel.asp?aId=230928


          Microsoft Azure Machine Learning and Project Brainwave – Intel Chip Chat – Episode 610      Cache   Translate Page      
In this Intel Chip Chat audio podcast with Allyson Klein: In this interview from Microsoft Ignite, Dr. Ted Way, Senior Program Manager for Microsoft, stops by to talk about Microsoft Azure Machine Learning, an end-to-end, enterprise grade data science platform. Microsoft takes a holistic approach to machine learning and artificial intelligence, by developing and deploying [...]
          Data Scientist      Cache   Translate Page      
TX-Irving, Hands-on experience applying data mining techniques, doing statistical analysis, building machine learning algorithms, and building high quality prediction systems integrated with products and processes. Should have experience in some of the following: “automate scoring using machine learning techniques”, “build recommendation systems”, “improve and extend the features used by our existing classifier”, “develop int
          Director- Data Management - f5 Networks - Seattle, WA      Cache   Translate Page      
You will report directly to our Senior VP, IT and CIO and manage a team of Architects, Data Scientists/Analysts, and Developers focused on delivering value to...
From F5 Networks - Tue, 09 Oct 2018 21:00:58 GMT - View all Seattle, WA jobs
          Data Science & BI Manager      Cache   Translate Page      
Seven Data Science - Gloucestershire - The Analytics, Data Science and BI Manager supports the ongoing development of our pricing strategy. This role is focussed on developing... and managing a small but experienced team including statisticians, analysts and data scientists who provide market leading technical optimisation...
          Data Analyst Python SQL Mathematics      Cache   Translate Page      
Data Team - West London - Data Analyst London to £70k Data Analyst / Reporting Engineer (Python SQL). Are you a skilled Data Analyst with Python programming skills... offices in a vibrant area of London? Collaborating with Data Scientists you will design, maintain and manage the evolutio......
          Data scientist - Diverse Lynx - Yellow Lake, WI      Cache   Translate Page      
Data Science, Digital:. As a Data Scientist, you will help create solutions enabling automation, artificial intelligence, advanced analytics, and machine...
From Diverse Lynx - Tue, 06 Nov 2018 04:27:46 GMT - View all Yellow Lake, WI jobs
          vScaler integrates RAPIDS for accelerated data science toolchains      Cache   Translate Page      
vScaler has incorporated NVIDIA’s new RAPIDS open source software into its cloud platform for on-premise, hybrid, and multi-cloud environments. Deployable via its own Docker container in the vScaler...
       

          Vice President, Data Science - Machine Learning - Wunderman - Dallas, TX      Cache   Translate Page      
Goldman Sachs, Microsoft, Citibank, Coca-Cola, Ford, Pfizer, Adidas, United Airlines and leading regional brands are among our clients....
From Wunderman - Sat, 25 Aug 2018 05:00:40 GMT - View all Dallas, TX jobs
          data science training in Madhapur      Cache   Translate Page      
Sterling IT is one of the best institutes for Data Science training in Madhapur. Data science is a multidisciplinary blend of data inference, algorithm development, and technology to solve analytically complex problems. Data science has become one of the most promising and in-demand career paths for skilled professionals. Lots of the raw information that streams in is stored in enterprise data warehouses, and there is a lot to learn by mining it. We can build advanced capabilities... $100
          Data Scientist - Zillow Group - Seattle, WA      Cache   Translate Page      
Dive into Zillow's internal and third party data (think Hive, Presto, SQL Server, Redshift, Python, Mode Analytics, Tableau, R) to uncover real estate trends...
From Zillow Group - Thu, 01 Nov 2018 11:21:23 GMT - View all Seattle, WA jobs
          Data Scientist (Agent Pricing) - Zillow Group - Seattle, WA      Cache   Translate Page      
Dive into Zillow's internal and third-party data (think Hive, Presto, SQL Server, Python, Mode Analytics, Tableau, R) to make strategic recommendations....
From Zillow Group - Thu, 01 Nov 2018 11:21:14 GMT - View all Seattle, WA jobs
          Data Scientist - Vertical Living - Zillow Group - Seattle, WA      Cache   Translate Page      
Dive into Zillow's internal and third party data (think Hive, Presto, SQL Server, Redshift, Python, Mode Analytics, Tableau, R) to make strategic...
From Zillow Group - Thu, 01 Nov 2018 11:21:13 GMT - View all Seattle, WA jobs
          2018 Pinnacle Awards: Intel’s Melvin Greer Takes Home Artificial Intelligence Executive of the Year      Cache   Translate Page      
Melvin Greer, the chief data scientist at Intel Corp. who is helping chart the future of artificial intelligence in his role at the company, received an inaugural WashingtonExec Pinnacle Award as AI Executive of the Year. Greer has drawn accolades for his work building Intel’s data science platform using AI and machine learning. [...]
          Getting Started with Data Analysis on the Microsoft Platform - Examining Data      Cache   Translate Page      

By: Nai Biao Zhou || Related Tips: More > SQL Server 2017

Problem

We live in a world of data. Data are the facts and figures that are collected to make a decision [1]. Companies who use Microsoft technologies usually store their data in SQL Server databases. To extract business value from these data, we usually apply statistical techniques. Statistics involves collecting, classifying, summarizing, organizing, analyzing, and interpreting data [2]. R, which has become the worldwide language for statistics [3], can bridge the gap between statistics and business intelligence development. Furthermore, the Microsoft platform enables us to work with SQL Server databases and R together [4]. With such powerful statistical tools, how can we get started using statistical methods to extract meaningful information from voluminous amounts of data?

Solution

We are going to use data from the AdventureWorks sample database "AdventureWorks2017.bak" [5]. We should always start by asking research questions when we analyze data. Here are the research questions to be addressed in this study:

  • Did an employee have a different sales performance in 2013 than in 2012?
  • Was an employee's sales performance affected by seasonal factors?
  • Were the postal codes of customer addresses in the database valid?

We will use the "R Tools for Visual Studio sample projects" [6] as a starting point. While investigating employee sales performance, we will go through the procedure to create and publish a stored procedure using R Tools for Visual Studio (RTVS), and use line graphs to interpret the data. Then, we will compute the mean and median of all employee sales to measure "central tendency", an indicator of the typical middle value of all employees' performance [1]. In the end, we will use a regular expression to test the Canadian postal codes in the database.

The solution was tested with SQL Server Management Studio V17.4 and Microsoft Visual Studio Community 2017 on Windows 10 Home 10.0 <X64>. The DBMS is Microsoft SQL Server 2017 Enterprise Edition (64-bit).
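
Before walking through the tooling, here is a small R sketch of the two kinds of checks the research questions call for: a central-tendency summary of sales figures and a regular-expression test of Canadian postal codes. The sample values and the exact pattern are illustrative assumptions, not code taken from the tip:

# Central tendency of a vector of sales figures (illustrative numbers).
sales <- c(45000, 52000, 38000, 61000, 47000)
mean(sales)     # arithmetic mean
median(sales)   # middle value, less sensitive to outliers

# One commonly used pattern for Canadian postal codes: letter-digit-letter,
# optional space, digit-letter-digit, excluding the letters D, F, I, O, Q and U.
postal_pattern <- "^[ABCEGHJ-NPRSTVXY][0-9][ABCEGHJ-NPRSTV-Z] ?[0-9][ABCEGHJ-NPRSTV-Z][0-9]$"
postal_codes <- c("K1A 0B1", "V6B4Y8", "12345", "Z9Z 9Z9")
grepl(postal_pattern, postal_codes, ignore.case = TRUE)  # TRUE TRUE FALSE FALSE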

A First Look at RTVS

Line graphs are commonly used to present changes in data over a period [1]. We are going to look at employee monthly sales and reveal the changes in the employee's performance through a line graph. In the meantime, we will create a stored procedure by using RTVS.

1 - Work on the first R project

1.1 Download the sample project and open the solution file "Examples.sln"

To use RTVS effectively, we use the "Data Science Setting" in Visual Studio. Before switching on this setting, I recommend saving the current window layout, so that we can revert to it after we have finished the data analysis. Figure 1 shows the menu item we can use to save the window layout.


Figure 1 - Save current window layout

To change the setting for using RTVS, click on the menu item shown in Figure 2.


Figure 2 - Switch to the data science settings

Figure 3 shows the window layout for using RTVS. It is noteworthy that we should verify that the correct version of R is used when we have multiple versions installed. We can find the version number in the "Workspaces" panel or in the bottom right corner of the IDE. For the best experience, we should follow the instructions in [6] to run the R code in the "1-Getting_Started_with_R.R" file line by line.

Figure 3 - IDE window layout for data analysis

1.2 Add a database connection to the project

Click on the menu item "Add Database Connection…", as shown in Figure 4.


Figure 4 - Add database connection to the project

Configure the connection properties, as shown in Figure 5.


Figure 5 - Configure the connection properties

Click on the "OK" button. An R script file "Settings.R" is automatically added to the project, as shown in Figure 6. To access the connection string, we should run the codes immediately, consequently save the connection string in the "setting" variable.


Figure 6 - Application settings file

2 - Create a new stored procedure

2.1 Add a new stored procedure

Right-click on the "A first look at R" folder and select "Add > New Item" from the context menu. A pop-up window shows up as illustrated in Figure 7. Select "SQL Stored Procedure with R" template and name the procedure as "sp_employee_sales_monthly".


Figure 7 - Create a new stored procedure

In the "Solution Explore" panel shown in Figure 8, we find three files have been created. This allows us to work on R scripts and SQL scripts, separately.


Figure 8 - R Tools for Visual Studio creates three files for one stored procedure

2.2 Write a SQL query to retrieve data from the database

In this tip, we adopt the SQL queries used in the SQL Server Reporting Services Product Samples [7]. Open the file "sp_employee_sales_monthly.Query.sql" and replace the content with the following SQL query:

-- Place SQL query retrieving data for the R stored procedure here
-- Employee sales
DECLARE @EmployeeID int
SET @EmployeeID = 283
SELECT P.FirstName + SPACE(1) + P.LastName AS Employee,
DATEPART(Year, SOH.OrderDate) AS [Year],
DATEPART(Month, SOH.OrderDate) AS MonthNumber,
DATENAME(Month, SOH.OrderDate) AS [Month],
SUM(DET.LineTotal) AS Sales
FROM [Sales].[SalesPerson] SP
INNER JOIN [Sales].[SalesOrderHeader] SOH ON SP.[BusinessEntityID] = SOH.[SalesPersonID]
INNER JOIN Sales.SalesOrderDetail DET ON SOH.SalesOrderID = DET.SalesOrderID
INNER JOIN [Person].[Person] P ON P.[BusinessEntityID] = SP.[BusinessEntityID]
WHERE SOH.SalesPersonID = @EmployeeID
and DATEPART(Year, SOH.OrderDate) in (2012, 2013)
GROUP BY P.FirstName + SPACE(1) + P.LastName,
SOH.SalesPersonID,
DATEPART(Year, SOH.OrderDate), DATEPART(Month, SOH.OrderDate),
DATENAME(Month, SOH.OrderDate)

Click on the arrow button, as shown in Figure 9, to run the query. If we run the query for the first time, a pop-up window may show up and ask us to establish a database connection.


Figure 9 - Run the SQL query in Visual Studio

2.3 Write R script to plot a multiple line graph for an employee

Open the file "sp_employee_sales_monthly.R". The RTVS included some testing codes in the file, as shown in the Figure 10. These testing codes provide us a method to load data from a SQL Server database.


Figure 10 - Initial R codes in the new file Unc
          AFS Names Ian McCulloh Chief Data Scientist      Cache   Translate Page      
Accenture Federal Services has named Ian McCulloh, a scientist at Johns Hopkins’ Applied Physics Laboratory, to serve as the company’s chief data scientist. In the new role, McCulloh will focus on advancing innovative analytics solutions for Accenture’s federal customers, the company said. At the Johns Hopkins lab, McCulloh established a 60-member applied science group and [...]
          Data Steward      Cache   Translate Page      
MA-Cambridge, Data Steward and project management at Global Biotech/Pharma Cambridge, MA Contract: 6 months to start Data Steward and project management activities within the Data Science Institute Responsibilities: Responsible for the management, integrity and maintenance of Biotech/Pharma Customer Master System and alignment with data warehouse. Support ongoing data quality initiatives for the organization. C
          Research Data Scientist - RiverPoint - Seattle, WA      Cache   Translate Page      
6 month (extendable) contract position: Job Description: * The successful candidate will work directly on evaluating the weather data from our new provider...
From RiverPoint - Sat, 20 Oct 2018 06:40:46 GMT - View all Seattle, WA jobs
          Python Developer/Data Scientist - RiverPoint - Houston, TX      Cache   Translate Page      
We are looking for individuals to fill the role of Data Scientist on our model development team. This team builds the machine learning algorithms that...
From RiverPoint - Sat, 03 Nov 2018 06:30:32 GMT - View all Houston, TX jobs
          Associate Clinical Project Management Director (Home or Office-based)      Cache   Translate Page      
MA-Cambridge, Job DescriptionJoin us on our exciting journey! IQVIA™ is The Human Data Science Company™, focused on using data and science to help healthcare clients find better solutions for their patients. Formed through the merger of IMS Health and Quintiles, IQVIA offers a broad range of solutions that harness advances in healthcare information, technology, analytics and human ingenuity to drive healthcare
          Faculty Member, Computer Science (Databases and Data Science) - University of Saskatchewan - Saskatoon, SK      Cache   Translate Page      
Applicants for this tenure track-position should have a PhD in Computer Science. The Department of Computer Science in the College of Arts and Science at the...
From University of Saskatchewan - Fri, 27 Jul 2018 00:18:50 GMT - View all Saskatoon, SK jobs
          Data scientist      Cache   Translate Page      
Looking for someone to contribute to an ongoing project as a data analyst. You are expected to have experience with machine learning and deep learning modules using Python. A minimum of 4 hours a day needs to be spent on the project... (Budget: ₹100 - ₹400 INR, Jobs: Data Mining, Machine Learning, Python, Software Architecture, Statistics)
          Data viz challenge: Recreating FiveThirtyEight’s ‘Deadest Names’ graphic with ggplot2      Cache   Translate Page      
I’ve recently begun reading through the book Modern Data Science with R, by Benjamin S. Baumer, Daniel T. Kaplan, and Nicholas J. Horton. It’s quite clear and informative. One of the things I especially appreciate about it is that I’m not finding the math to be too cumbersome. That is, even for someone like me, […] The post Data viz challenge: Recreating FiveThirtyEight’s ‘Deadest Names’ graphic with ggplot2 appeared first on my (mis)adventures in R programming.
          Harnessing the Data Revolution (HDR): Data Science Corps (DSC)      Cache   Translate Page      

Available Formats:
HTML: https://www.nsf.gov/pubs/2019/nsf19518/nsf19518.htm?WT.mc_id=USNSF_169&WT.mc_ev=click
PDF: https://www.nsf.gov/pubs/2019/nsf19518/nsf19518.pdf?WT.mc_id=USNSF_169&WT.mc_ev=click

Document Number: nsf19518


This is an NSF Program Announcements and Information - Environmental Research item.

          (USA-VA-Herndon) Full Stack Scala Engineer: JavaScript | Responsive Web Apps      Cache   Translate Page      
Full Stack Scala Engineer: JavaScript | Responsive Web Apps Full Stack Scala Engineer: JavaScript | Responsive Web Apps - Skills Required - JavaScript, Scala, Responsive Web Apps, Math, Modeling, JVM, Python, SPARK, Angular, Liftweb If you're an experienced Full Stack Scala Engineer, please read on! We apply artificial intelligence to solve complex, real-world problems at scale. Our Human+AI operating system, blends capabilities ranging from data handling, analytics, and reporting to advanced algorithms, simulations, and machine learning, enabling decisions that are just-in-time, just-in-place, and just-in-context. If this type of environment sounds exciting, please read on! **Top Reasons to Work with Us** - Benefits start on day 1 - Free onsite gym - Unlimited snacks and drinks - Located 1 mile from Wiehle-Reston East Station on the Silver line **What You Will Be Doing** RESPONSIBILITIES: - Design and develop code, predominantly in Scala, making extensive use of current tools such as Liftweb and Scala.js. - Developing state-of-the-art analytics tools supporting diverse tasks ranging from ad hoc analysis to production-grade pipelines and workflows for customer applications - Contributing to key user interactions and interfaces for tools across our modular SaaS platform - Developing tools to improve the ease of use of algorithms and data science tools - Working collaboratively to ensure consistent and performant approaches for the entire user experience and analytic code developed inside the system - Interacting directly with client project team members and operational staff to support live customer deployments **What You Need for this Position** QUALIFICATIONS: - Bachelor's Degree - Expert knowledge of Scala - Experience on full-stack software development teams - Expert knowledge of Javascript, HTML and CSS - Experience with responsive web applications - Experience with tools including Scala.js, Grunt, Bower, Liftweb - Advanced mathematical modeling skills - Experience with Akka, Akka HTTP, and Spark **What's In It for You** - Competitive Salary - Incentive Stock Options - Medical, Dental & Vision Coverage - 401(K) Plan - Flexible “Personal Time Off (PTO) Plan - 10+ Paid Holiday Days Per Year So, if you're an experienced Full Stack Scala Engineer, please apply today! Applicants must be authorized to work in the U.S. **CyberCoders, Inc is proud to be an Equal Opportunity Employer** All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, disability, protected veteran status, or any other characteristic protected by law. **Your Right to Work** – In compliance with federal law, all persons hired will be required to verify identity and eligibility to work in the United States and to complete the required employment eligibility verification document form upon hire. *Full Stack Scala Engineer: JavaScript | Responsive Web Apps* *VA-Herndon* *WT1-1492870*
          (USA-NJ-Princeton) Data Analyst      Cache   Translate Page      
Data Analyst Data Analyst - Skills Required - Data Analysis, Python, SQL, C/C++, Statistical Software (R/Python/SAS/SPSS/SQL), Python/R, Data Analyst, Excel, Matlab, SPSS If you are a Data Scientist with experience, please read on! Located in Princeton, NJ, our leaders have been in the industry for over 30 years. We have created a platform that assists companies both large and small by analyzing consumer behavior and improving marketing tactics. **What You Will Be Doing** -Extracting and analyzing data and creating reports -Searching our large database to create customer prospecting models -Statistical modeling and regression analysis **What You Need for this Position** - Master's Degree in Statistics, Marketing Analytics, or related field STRONGLY preferred, Bachelor's Degree required - 3+ years practical experience with statistical analysis, and/or marketing/business analytics - Python - SQL - C/C+- Excel - Matlab - SPSS **What's In It for You** - Competitive salary ($75K-$110K DOE) - Excellent benefits package, 401k, PTO, and a FSA - Located near public transit - Opportunity to grow within the team So, if you are a Data Scientist with experience, please apply today! Applicants must be authorized to work in the U.S. **CyberCoders, Inc is proud to be an Equal Opportunity Employer** All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, disability, protected veteran status, or any other characteristic protected by law. **Your Right to Work** – In compliance with federal law, all persons hired will be required to verify identity and eligibility to work in the United States and to complete the required employment eligibility verification document form upon hire. *Data Analyst* *NJ-Princeton* *TK3-1492884*
          Senior Data Scientist - CenturyLink - New Century, KS      Cache   Translate Page      
*Open to any major US City. Candidates must be eligible to work within the US without sponsorship* The Data Scientist is responsible for developing tools to...
From CenturyLink - Thu, 26 Jul 2018 16:08:50 GMT - View all New Century, KS jobs
          Senior Data Scientist - CenturyLink - Chicago, IL      Cache   Translate Page      
The Data Scientist is responsible for developing tools to collect, clean, analyze and manage the data used by strategic areas of the business. Employ...
From CenturyLink - Fri, 26 Oct 2018 06:12:22 GMT - View all Chicago, IL jobs
          CONSULENTE DATA SCIENTIST SENIOR/JUNIOR - Prisma S.r.l. - Junior, WV      Cache   Translate Page      
Prisma Srl has been operating in the Information Technology sector since 1984. Through continuous monitoring of emerging technologies and careful development...
From Prisma S.r.l. - Thu, 27 Sep 2018 07:51:39 GMT - View all Junior, WV jobs
          (USA-CA-Sunnyvale) ECL Data Scientist - Remote      Cache   Translate Page      
ECL Data Scientist - Remote ECL Data Scientist - Remote - Skills Required - ECL, Lexis Nexis, Apache, Pig If you are an ECL Data Scientist with experience, please read on! We are a cutting edge technology company dominating the IoT (Internet of Things) market where we solve real world problems in real-time for our clients. We connect entire ecosystems creating digital enterprises. Due to growth and demand for our product and services, we are in need of hiring a Data Scientist who has hands-on experience with ECL and a product focused mind set. If you are interested in joining a leading technology company that cares about its employees and their environment, then apply immediately. **Top Reasons to Work with Us** Competitive Salary 100% Paid Medical Benefits Bonus Generous Equity **What You Will Be Doing** In this role, you will bring your passion for technology and apply your skills to our platform. You will be a dynamic hands-on leader playing a key role in additions to our data science team. You will be responsible for both research and technical aspects of projects reporting directly to the CEO. **What You Need for this Position** More Than 5 Years of experience and knowledge of: - ECL - Lexis Nexis - Apache - Pig **What's In It for You** - $$200k-$300k (DOE) - Vacation/PTO - Medical - Dental - Vision - Relocation - Bonus - 401k So, if you are an ECL Data Scientist with experience, please apply today! Applicants must be authorized to work in the U.S. **CyberCoders, Inc is proud to be an Equal Opportunity Employer** All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, disability, protected veteran status, or any other characteristic protected by law. **Your Right to Work** – In compliance with federal law, all persons hired will be required to verify identity and eligibility to work in the United States and to complete the required employment eligibility verification document form upon hire. *ECL Data Scientist - Remote* *CA-Sunnyvale* *PZ1-1493037*
          (USA-CA-Redwood City) Senior Product Manager      Cache   Translate Page      
Senior Product Manager Senior Product Manager - Skills Required - Product Management, Analytical Skills, Enterprise Software Products, Demand Response, Distributed Energy Resources, energy markets, CRM, IoT Platforms, Agile/SCRUM We build software applications that enable a smarter Energy Internet. The company's suite of Energy Internet applications allows utilities, electricity retailers, renewable energy project developers and energy service providers to deliver cheap, clean and reliable energy by managing networked distributed energy resources (DERs) in real time and at scale. The world's leading energy companies, including E.ON, Bonneville Power Administration, Florida Power & Light, Southern California Edison, Eneco, Portland General Electric, CPS Energy, New Hampshire Electric Cooperative, NextEra Energy and CLEAResult, are using our software to improve their operations, integrate renewables and drive deeper engagement with their customers. **What You Will Be Doing** - As a Chief Product owner for key components of our Energy Internet application, you will be responsible for the overall business and technical success of the product. - Work with a cross functional team of engineering, data science, sales, and senior executives to prioritize key features/functionality to deliver a winning commercial product offering. Act as an product owner and be closely involved in design, documentation and testing of features. - Product champion when working with customers/prospects and key partners. **What You Need for this Position** - Demonstrated experience bringing successful enterprise software products to market. - Passionate about working with complex technology products. - Self-starter, results-driven, and effective leader working with a cross-functional team. - At least 5 years of product management or equivalent experience. - Detail oriented with strong organizational and analytical skills. - Excellent written & verbal communication skills with strong intuition for communication strategy among different stakeholders. - BS/BA in Computer Science or other technical field of study; MBA strongly preferred. - Experience working in the utility or energy industry - Knowledge of Demand Response, Distributed Energy Resources and Energy Markets - Experience with Reporting/BI tools, CRM or IoT Platforms Experience with Agile/Scrum methodologies - Startup experience highly desired **What's In It for You** - Collaborative, close-knit environment - Working with a very intelligent and fun group of people on solving BIG problems for a GIGANTIC industry - Competitive salary with equity options - Medical, Dental, Vision insurance (PPO, HMO options) - 401(k) and Flexible Spending Accounts So, if you are a Senior Product Manager with experience, please apply today! Applicants must be authorized to work in the U.S. **CyberCoders, Inc is proud to be an Equal Opportunity Employer** All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, disability, protected veteran status, or any other characteristic protected by law. **Your Right to Work** – In compliance with federal law, all persons hired will be required to verify identity and eligibility to work in the United States and to complete the required employment eligibility verification document form upon hire. *Senior Product Manager* *CA-Redwood City* *MM21-1492790*
          [Translation] Data Science in Visual Studio Code using Neuron      Cache   Translate Page      
Today we have a short post about Neuron, an extension for Visual Studio Code that is a real killer feature for data scientists. It lets you combine Python, any machine learning library, and Jupyter Notebooks. More details under the cut!

Read more →
          Microsoft Azure Machine Learning and Project Brainwave – Intel Chip Chat – Episode 610      Cache   Translate Page      
In this Intel Chip Chat audio podcast with Allyson Klein: In this interview from Microsoft Ignite, Dr. Ted Way, Senior Program Manager for Microsoft, stops by to talk about Microsoft Azure Machine Learning, an end-to-end, enterprise grade data science platform. Microsoft takes a holistic approach to machine learning and artificial intelligence, by developing and deploying [...]
          (USA-MA-Waltham) Senior Java Developer      Cache   Translate Page      
Senior Java Developer Senior Java Developer - Skills Required - Java, NoSQL, JMS, Hibernate, Spring, JUnit, Hadoop, Cassandra, Solr, Jenkins Do you like tough problems? If so, we have an opportunity that will allow you to handle millions of customer requests per day all while making sense of a ton of data. Our current need is for a Senior Java Developer who possesses strong communication skills to join our growing team located near Newton, MA. Ideal candidates will be motivated to move into a leadership role once they have established themselves as a key player within our organization and we offer a ton of room for growth both technically and professionally. **Top Reasons to Work with Us** - Competitive Base Salary (150 - 170K) - Competitive Bonus Structure - Flexible Work Hours - 401K Plan - Extremely Competitive PTO Policy - Extreme Growth Opportunities and a fast track to leadership **What You Will Be Doing** -Be a major player in a company that's a pioneer in semantic technology -Work with cool technologies like Hadoop, Solr and Cassandra -Work with enormous data sets. Our database has over 10 billion records extracted from the Web -Learn data mining and machine learning techniques, such as Bayesian classifiers -Want to learn about what data science means in the real world? -Solve interesting and challenging problems alongside a great team of engineers -Develop new skills as you push your knowledge - and our technology - to new levels -Work for a profitable, growing company that works with an impressive Fortune 500 client list -Work on helping build/maintain/clean our platform of businessperson and company data, processing millions of records per day and billions of records overall. **What You Need for this Position** -Must have a strong knowledge of Java -Preferred experience with: Java EE, JMS, Hibernate, Spring, Junit -Eager to learn new technologies such as Hadoop, Cassandra, Solr, Jenkins, etc. -Interested in technologies/techniques like data mining, machine learning, clustering/tag clouds, etc. -Experience with NoSQL data stores is a plus -Experience with big data and data analysis or data science is a plus -Minimum 5-8 years experience in software development -A mindset that research should lead to actionable results -Excited to tell us why you want to work here and what kinds of challenges you're looking for -Can talk intelligently and passionately about the interesting challenges your projects presented -A sense of humor and perspective So, if you are a Lead Java Developer with experience, please apply today! Applicants must be authorized to work in the U.S. **CyberCoders, Inc is proud to be an Equal Opportunity Employer** All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, disability, protected veteran status, or any other characteristic protected by law. **Your Right to Work** – In compliance with federal law, all persons hired will be required to verify identity and eligibility to work in the United States and to complete the required employment eligibility verification document form upon hire. *Senior Java Developer* *MA-Waltham* *JW1-1492923*
          (USA-CA-Oakland) Principal Python Engineer      Cache   Translate Page      
Principal Python Engineer (APIs, Distributed Systems) Principal Python Engineer (APIs, Distributed Systems) - Skills Required - Python, gRPC, Tornado, C, RabbitMQ, AWS, SQL, Kubernetes, Docker If you are a Principal Python Engineer needed immediately with experience, please read on! **What You Will Be Doing** As an integral member of the backend team, you'll participate in architecture sessions and provide valuable input, while also learning from the senior members of the team. You'll be expected to design, implement, and maintain APIs that are well-tested, well-documented, and maintainable. You like coding clean, and take deadlines seriously. You'll also have plenty of mentorship opportunities, and will be expected to constantly learn and push your own technical boundaries. **What You Need for this Position** -Experience building and maintaining APIs. -Solid knowledge of Python from a backend/object-oriented perspective (not just data science or scripting) -Experience with SQL databases. -Awareness of concepts related to distributed systems (e.g. message queues, asynchronous tasks, pub-sub systems). -Proficient in at least one compiled and one interpreted language. Nice to have: -Experience in C, AWS, Kubernetes -Experience with cryptography or blockchain Our key technologies: -Backend: Python (gRPC, Tornado), C, postgresql, RabbitMQ -Infrastructure: AWS, Kubernetes, Docker -OS: Linux **What's In It for You** -The opportunity to join a well-funded, cutting-edge financial technology company at a very early stage -Competitive salary and equity -Competitive medical benefits -401k -Flexible working policies: we work twice a week from home -Smart coworkers who are world class experts in the field of cryptography So, if you are a Principal Python Engineer needed immediately with experience, please apply today! Applicants must be authorized to work in the U.S. **CyberCoders, Inc is proud to be an Equal Opportunity Employer** All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, disability, protected veteran status, or any other characteristic protected by law. **Your Right to Work** – In compliance with federal law, all persons hired will be required to verify identity and eligibility to work in the United States and to complete the required employment eligibility verification document form upon hire. *Principal Python Engineer* *CA-Oakland* *JG2-1493050*
          (USA-WA-Bellevue) Machine Learning Scientist - NLP, Recommender/Ranking Systems      Cache   Translate Page      
Machine Learning Scientist - NLP, Recommender/Ranking Systems Machine Learning Scientist - NLP, Recommender/Ranking Systems - Skills Required - Machine Learning, NLP, Recommender Systems, Python, Deep Learning Theory, Hadoop, SPARK, Building Data Pipelines If you are a Machine Learning Scientist with experience, please read on! One of the largest and most well-known travel agencies is looking for a Machine Learning Scientist. We are an online travel agency that enables users to access a wide range of services. We books airline tickets, hotel reservations, car rentals, cruises, vacation packages, and various attractions and services via the world wide web and telephone travel agents. Our team helps power many of the features on our website. We design and build models that help our customers find what they want and where they want to go. As a member of our group, your contributions will affect millions of customers and will have a direct impact on our business results. You will have opportunities to collaborate with other talented data scientists and move the business forward using novel approaches and rich sources of data. If you want to resolve real-world problems using state-of-the-art machine learning and deep learning approaches, in a stimulating and data-rich environment, lets talk. **What You Will Be Doing** You will provide technical leadership and oversight, and mentor junior machine learning scientists You will understand business opportunities, identify key challenges, and deliver working solutions You will collaborate with business partners, program management, and engineering team partners You will communicate effectively with technical peers and senior leadership **What You Need for this Position** At Least 3 Years of experience and knowledge of: - PhD (MS considered) in computer science or equivalent quantitative fields with 3+ years of industry or academic experience - Expertise in NLP or recommender systems (strongly preferred) - Deep understanding of classic machine learning and deep learning theory, and extensive hands-on experience putting it into practice - Excellent command of Python and related machine learning/deep learning tools and frameworks - Strong algorithmic design skills - Experience working in a distributed, cloud-based computing environment (e.g., Hadoop or Spark) - Experience building data pipelines and working with live data (cleaning, visualization, and modeling) **What's In It for You** - Vacation/PTO - Medical - Dental - Vision - Bonus - 401k So, if you are a Machine Learning Scientist with experience, please apply today! Applicants must be authorized to work in the U.S. **CyberCoders, Inc is proud to be an Equal Opportunity Employer** All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, disability, protected veteran status, or any other characteristic protected by law. **Your Right to Work** – In compliance with federal law, all persons hired will be required to verify identity and eligibility to work in the United States and to complete the required employment eligibility verification document form upon hire. *Machine Learning Scientist - NLP, Recommender/Ranking Systems* *WA-Bellevue* *GK2-1493004*
          Neo4j: Storing inferred relationships with APOC triggers      Cache   Translate Page      

Before we get to that, let’s first understand what we mean when we say inferred relationship. We’ll create a small graph containing Person, Meetup, and Topic nodes with the following query:

MERGE (mark:Person {name: "Mark"})
MERGE (neo4jMeetup:Meetup {name: "Neo4j London Meetup"})
MERGE (bigDataMeetup:Meetup {name: "Big Data Meetup"})
MERGE (dataScienceMeetup:Meetup {name: "Data Science Meetup"})
MERGE (dataScience:Topic {name: "Data Science"})
MERGE (databases:Topic {name: "Databases"})
MERGE (neo4jMeetup)-[:HAS_TOPIC]->(dataScience)
MERGE (neo4jMeetup)-[:HAS_TOPIC]->(databases)
MERGE (bigDataMeetup)-[:HAS_TOPIC]->(dataScience)
MERGE (bigDataMeetup)-[:HAS_TOPIC]->(databases)
MERGE (dataScienceMeetup)-[:HAS_TOPIC]->(dataScience)
MERGE (dataScienceMeetup)-[:HAS_TOPIC]->(databases)
MERGE (mark)-[:MEMBER_OF]->(neo4jMeetup)
MERGE (mark)-[:MEMBER_OF]->(bigDataMeetup)

This is what the graph looks like in the Neo4j browser:


[Image: the example graph in the Neo4j browser]

          Dynamic Rule Based Decision Trees in Neo4j Part 4      Cache   Translate Page      


So far I’ve only shown you how to traverse a decision tree in Neo4j. The assumption being that you would either create the rules yourself from expert knowledge or via an external algorithm. Today we’re going to add an algorithm to build a decision tree (well, a decision stream) right into Neo4j. We will simply pass in the training data and let it build the tree for us. If you are reading this part without reading parts one, two, and three, you should, because this builds on what we learned along the way.

A decision tree is built with nodes that look at a value and go left if that value is less than or equal to some threshold, or go right if the value is greater. The nodes can only go left or right and can only go down one level at a time. Decision trees are a great starting point for machine learning models, but they suffer from a few problems: overfitting, instability and inaccuracy. These problems are overcome by combining a few hundred to several thousand decision trees together into a Random Forest. A random forest decreases the variance of the results without increasing the bias, which makes for a better model, but we have a very hard time looking at a random forest and understanding what it is really doing.
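
To make the left-or-right mechanics concrete, here is a minimal Java sketch of a classic binary decision-tree node (purely illustrative; the class and field names here are made up and this is not the model this post builds):

// Illustration only: go left when the value is <= the threshold, right otherwise.
import java.util.Map;

class BinaryTreeNodeSketch {
    String parameter;                  // e.g. "Age"
    double threshold;                  // split point learned from the training data
    BinaryTreeNodeSketch left, right;  // children; null on leaves
    String answer;                     // only set on leaf nodes, e.g. "default" / "no default"

    String classify(Map<String, Double> features) {
        if (answer != null) {
            return answer;             // reached a leaf, return its answer
        }
        double value = features.get(parameter);
        return value <= threshold ? left.classify(features) : right.classify(features);
    }
}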

A decision stream allows nodes to follow a path based on multiple options and may go down more than 1 level. You can read the paper explaining what it is all about, but for our purposes, we are interested in knowing that a single decision stream can be as effective as a random forest, but a whole lot easier to understand. The authors of the paper were also gracious enough to code their algorithm for us to try out and that’s what we’ll do.

We are going to build a stored procedure that takes training data, answer data, and a significance threshold (which determines when to merge or split our nodes) and uses the resulting model to build a tree in Neo4j. Our training data is just a CSV file where the first row has a header and the following rows have numbers. If we had string data like "colors" where the options were "red, blue, yellow, etc" we would have to convert these to a number mapping 1 for red, 2 for blue, etc. For this project we are going to be reusing data from an old Kaggle competition that looked at the likelihood of someone defaulting on their loans.

RevolvingUtilizationOfUnsecuredLines,Age,Late30to59Days,DebtRatio,MonthlyIncome,OpenCreditLinesAndLoans,Late90Days,RealEstateLoans,Late60to89Days,Dependents
0.7661,45,2,0.803,9120,13,0,6,0,2
0.9572,40,0,0.1219,2600,4,0,0,0,1
0.6582,38,1,0.08511,3042,2,1,0,0,0
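
As an aside, the string-to-number mapping mentioned above is simple to do yourself; a hypothetical helper could look like the sketch below (the loan dataset used in this post is already entirely numeric, so it is not needed here):

// Hypothetical example: assign 1, 2, 3, ... to category values the first time they are seen,
// so "red" -> 1.0, "blue" -> 2.0, and "red" maps back to 1.0 on every later call.
import java.util.HashMap;
import java.util.Map;

class CategoryEncoderSketch {
    private final Map<String, Double> codes = new HashMap<>();

    double encode(String value) {
        return codes.computeIfAbsent(value, v -> (double) (codes.size() + 1));
    }
}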

Our answer data is extremely simple: it’s just a single column of 1s and 0s for defaulted and did not default.

Instead of diving into the stored procedure, I’m going to show you how to use it first. Follow the README, build the procedure and add it to Neo4j. We call it by giving it a few parameters: the name of the tree, the file where the training data lives, the file where the answers live, and a threshold for merging and splitting rule nodes. In our case we give it 0.02, which seemed like a good general value according to the paper:

CALL com.maxdemarzi.decision_tree.create('credit',
'/Users/maxdemarzi/Documents/Projects/branches/training.csv',
'/Users/maxdemarzi/Documents/Projects/branches/answers.csv', 0.02)

It takes 2-3 minutes to train this dataset of about 100k records and once it’s done we can see the results:


[Image: the generated decision tree in the Neo4j browser]

The tree node is in blue, the rules are green, our parameters are purple and our answers are in red. Notice that two of the parameter nodes "Monthly Income" and "Debt Ratio" are not connected to any rules. This tells us that these two values are not helpful in predicting the outcome, which kinda makes sense since these two parameters are used to qualify someone for a loan before they even get one. The first Rule node along the tree is the number of times someone is "Late 60 to 89 Days" paying their bills. Four different relationships emanate from there. Notice at the end when the Rule nodes connect to the Answer nodes they do so for both "IS_TRUE" and "IS_FALSE" relationships. I’ll explain this in a moment. First let’s try traversing the decision tree by passing in some values. This is the same procedure from Part 3:

CALL com.maxdemarzi.decision_tree.traverse('credit',
{RevolvingUtilizationOfUnsecuredLines:'0.9572', Age:'40', Late30to59Days:'20',
DebtRatio:'0.1219', MonthlyIncome:'2600',OpenCreditLinesAndLoans:'4', Late90Days:'0',
RealEstateLoans:'0', Late60to89Days:'0', Dependents:'1'});
[Image: traversal path through the decision tree]

We get a path that ends in a Rule node checking "Age", connected by the IS_TRUE relationship to both Answer nodes. The weights of those relationships, however, are different. We can see that the user is 66% likely to NOT default vs. 33% likely to default. So not only do we get a classifier, but we also get a confidence score.

If we omit the “Age” parameter in our query:

CALL com.maxdemarzi.decision_tree.traverse('credit',
{RevolvingUtilizationOfUnsecuredLines:'0.9572', Late30to59Days:'20',
DebtRatio:'0.1219', MonthlyIncome:'2600',OpenCreditLinesAndLoans:'4', Late90Days:'0',
RealEstateLoans:'0', Late60to89Days:'0', Dependents:'1'});
[Image: partial traversal path ending at the Age parameter]

We get a partial path ending in the "Age" parameter as a way of asking for it, so we can ask the user for their age, re-run the procedure, and get a final answer.

The nice thing about this is that we can see and understand how the answer was derived. We can decide to alter the tree in any way, create many of them each from different training data, introduce parameters not in the original training set, whatever we want dynamically and still get results in real time.

I’m not going to explain the stored procedure line by line, but I do want to highlight a few things. You can see the whole thing at this repository. The first thing is our stored procedure signature. Notice we are writing data to Neo4j, so we need to use the Mode.WRITE option:

@Procedure(name = "com.maxdemarzi.decision_tree.create", mode = Mode.WRITE)
@Description("CALL com.maxdemarzi.decision_tree.create(tree, data, answers, threshold) - create tree")
public Stream<StringResult> create(@Name("tree") String tree, @Name("data") String data,
@Name("answers") String answers, @Name("threshold") Double threshold ) {

For all the different answers we are going to first create "Answer" nodes. In our case we only have 2 possibilities, so we will create two nodes.

for (Double value : answerSet) {
Node answerNode = db.createNode(Labels.Answer);
answerNode.setProperty("id", value);
answerMap.put(value, answerNode);
}

We want to create "Parameter" nodes for all the column headers in our training data. We will save these in a "nodes" map and connect them to our Rules later.

HashMap<String, Node> nodes = new HashMap<>();
String[] headers = trainingData.next();
for(int i = 0; i < headers.length; i++) {
Node parameter = db.findNode(Labels.Parameter, "name", headers[i]);
if (parameter == null) {
parameter = db.createNode(Labels.Parameter);
parameter.setProperty("name", headers[i]);
parameter.setProperty("type", "double");
parameter.setProperty("prompt", "What is " + headers[i] + "?");
}
nodes.put(headers[i], parameter);
}

We will combine our answer and training data into a double array, which we then use to create a DoubleMatrix.

double[][] array = new double[answerList.size()][1 + headers.length];
for (int r = 0; r < answerList.size(); r++) {
array[r][0] = answerList.get(r);
String[] columns = trainingData.next();
for (int c = 0; c < columns.length; c++) {
array[r][1 + c] = Double.parseDouble(columns[c]);
}
}
DoubleMatrix fullData = new DoubleMatrix(array);
fullData = fullData.transpose();

The Decision Stream code was implemented in Clojure, but I don’t know Clojure, so instead of trying to translate it into Java, I decided to just call it from our stored procedure. So we import Clojure core, get an interface to the training method for the model, and then invoke it:

/* Import clojure core. */
final IFn require = Clojure.var("clojure.core", "require");
require.invoke(Clojure.read("DecisionStream"));
/* Invoke Clojure trainDStream function. */
final IFn trainFunc = Clojure.var("DecisionStream", "trainDStream");
HashMap dStreamM = new HashMap<>((PersistentArrayMap) trainFunc.invoke(X, rowIndices, threshold));

The trained model is returned as a nested hashmap with 4 values: the parameter, a threshold, and two nested hashmaps for the left and right branches. From this we build our tree, combining leaf nodes whenever possible.

Node treeNode = db.createNode(Labels.Tree);
treeNode.setProperty("id", tree);
deepLinkMap(db, answerMap, nodes, headers, treeNode, RelationshipTypes.HAS, dStreamM, true);
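
To give a rough feel for what that call does, here is a heavily simplified, hypothetical sketch of such a recursive builder. The map keys ("parameter", "threshold", "left", "right", "answer"), the Rule label, and the stand-in enums are assumptions for illustration only; the real deepLinkMap in the repository works off the actual Clojure result and also has to merge multi-option rule nodes and attach the weighted answer relationships:

import java.util.Map;
import org.neo4j.graphdb.GraphDatabaseService;
import org.neo4j.graphdb.Label;
import org.neo4j.graphdb.Node;
import org.neo4j.graphdb.RelationshipType;

// Minimal stand-ins for the enums the real stored procedure uses.
enum SketchLabels implements Label { Tree, Rule, Parameter, Answer }
enum SketchRelTypes implements RelationshipType { HAS, IS_TRUE, IS_FALSE }

class TreeBuilderSketch {
    // Recursively turn a nested {parameter, threshold, left, right | answer} map into Rule nodes.
    @SuppressWarnings("unchecked")
    void linkRule(GraphDatabaseService db, Node parent, RelationshipType relType,
                  Map<String, Object> model, Map<Double, Node> answers) {
        if (model.containsKey("answer")) {
            // leaf: connect the previous rule straight to the matching Answer node
            parent.createRelationshipTo(answers.get(model.get("answer")), relType);
            return;
        }
        String parameter = (String) model.get("parameter");
        Node rule = db.createNode(SketchLabels.Rule);
        rule.setProperty("expression", parameter + " > " + model.get("threshold"));
        parent.createRelationshipTo(rule, relType);
        // the greater-than branch follows IS_TRUE, the less-than-or-equal branch follows IS_FALSE
        linkRule(db, rule, SketchRelTypes.IS_TRUE, (Map<String, Object>) model.get("right"), answers);
        linkRule(db, rule, SketchRelTypes.IS_FALSE, (Map<String, Object>) model.get("left"), answers);
    }
}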

The deepLinkMap method is used recursively for each side of the rule node, until we reach a Leaf node. One thing that was a bit of a pain was merging multi-option rule nodes into a single rule node, since the training map result doesn’t do this for us. The "merged" Rule nodes have a "script" property that ends up looking kinda like this:

if (Late60to89Days > 11.0) { return "IS_TRUE";}
if (Late60to89Days <= 11.0 && Late60to89Days > 3.0) { return "OPTION_1";}
if (Late60to89Days <= 3.0 && Late60to89Days > 0.0) { return "OPTION_2";}
if (Late60to89Days <= 3.0 && Late60to89Days <= 0.0) { return "OPTION_3";}
return "NONE";

Theoretically the “NONE” relationship type should never be returned, but the script needed a way to guarantee it ended and I didn’t want to mess with nested if statements.

Unmerged nodes have just two options “IS_TRUE” and “IS_FALSE” as well as a simple “expression” property that looks like the one below. The relationship type returned depends on the answer to the evaluation of that expression.

Late90Days > 0.0
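
Part 3 covers the traversal side in detail; purely as an illustration of the idea, an expression property like the one above could be evaluated against the parameters the caller passes in using the JDK's scripting API, roughly like this (a sketch for illustration, not necessarily what the actual stored procedure does):

import javax.script.ScriptEngine;
import javax.script.ScriptEngineManager;
import javax.script.ScriptException;

class ExpressionEvalSketch {
    public static void main(String[] args) throws ScriptException {
        // Nashorn ships with Java 8; any JSR-223 engine that can compare numbers would do.
        ScriptEngine engine = new ScriptEngineManager().getEngineByName("JavaScript");

        // bind the fact the caller passed in, e.g. Late90Days:'0'
        engine.put("Late90Days", Double.parseDouble("0"));

        // evaluate the rule node's "expression" property
        Boolean isTrue = (Boolean) engine.eval("Late90Days > 0.0");

        // choose the outgoing relationship type based on the result
        System.out.println(isTrue ? "IS_TRUE" : "IS_FALSE"); // prints IS_FALSE here
    }
}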

As always the code is hosted on github, so feel free to try it out and send me a pull request if you find any bugs or come up with enhancements. The one big caveat here is that I’m not a data scientist nor did I stay at a Holiday Inn Express last night, so please consult a professional before using.


          Data Scientist - Akamai - Santa Clara, CA      Cache   Translate Page      
These include Kona Site Defender, Bot Management, Akamai Prolexic, Web Application Firewall and Site shield. If you have a deep passion for data and security,...
From Akamai - Tue, 06 Nov 2018 00:35:42 GMT - View all Santa Clara, CA jobs
          Advanced Excel Dashboard with VBA      Cache   Translate Page      
Advanced Excel Dashboard with VBA
.MP4 | Video: 1280x720, 30 fps(r) | Audio: AAC, 44100 Hz, 2ch | 2.53 GB
Duration: 3 hours | Genre: eLearning | Language: English


Advanced Excel Formulas, Data Science, Data Analysis, Excel Macros, Visual Basic (VBA), Excel 2016, Complex Excel Functions.
          Digital Analytics and Data Science      Cache   Translate Page      
Pioneer Recruitment - Johannesburg, Gauteng - , science or statistics is required · Ability to apply findings to marketing campaigns is a must · Must have high attention to detail...
          Data Scientist - Zillow Group - Seattle, WA      Cache   Translate Page      
Dive into Zillow's internal and third party data (think Hive, Presto, SQL Server, Redshift, Python, Mode Analytics, Tableau, R) to uncover real estate trends...
From Zillow Group - Thu, 01 Nov 2018 11:21:23 GMT - View all Seattle, WA jobs
          Data Scientist (Agent Pricing) - Zillow Group - Seattle, WA      Cache   Translate Page      
Dive into Zillow's internal and third-party data (think Hive, Presto, SQL Server, Python, Mode Analytics, Tableau, R) to make strategic recommendations....
From Zillow Group - Thu, 01 Nov 2018 11:21:14 GMT - View all Seattle, WA jobs
          Data Scientist - Vertical Living - Zillow Group - Seattle, WA      Cache   Translate Page      
Dive into Zillow's internal and third party data (think Hive, Presto, SQL Server, Redshift, Python, Mode Analytics, Tableau, R) to make strategic...
From Zillow Group - Thu, 01 Nov 2018 11:21:13 GMT - View all Seattle, WA jobs
          Data Architect/Data Science      Cache   Translate Page      
CA-SAN JOSE, Role: Data Architect/Data Science. Location: San Jose, California. Duration: 6+ Months. Expert programming skills in Python, R. Experience in writing code for various Machine learning algorithms for classification, clustering, forecasting, regression, Neural networks and Deep Learning. Hands-on experience with modern enterprise data architectures and data toolsets (ex: data warehouse, data marts, dat
          Sr. Dev Smart Feat API (ISO) - Verisk Analytics - Jersey City, NJ      Cache   Translate Page      
Create a positive, lasting impact on the business; Provide technical consultation to internal teams and coach Data Scientists in following good software...
From Verisk Analytics - Thu, 25 Oct 2018 22:46:15 GMT - View all Jersey City, NJ jobs
          Sr Data Scientist Engineer (HCE) - Honeywell - Atlanta, GA      Cache   Translate Page      
50 Machine Learning. Develop relationships with business team members by being proactive, displaying a thorough understanding of the business processes and by...
From Honeywell - Thu, 20 Sep 2018 02:59:11 GMT - View all Atlanta, GA jobs
          Data Science Intern      Cache   Translate Page      
You will work in teams with experienced Accenture and client teams with a variety of educational backgrounds, with guidance and support from one or more Accenture (more)
          Director- Data Management - f5 Networks - Seattle, WA      Cache   Translate Page      
You will report directly to our Senior VP, IT and CIO and manage a team of Architects, Data Scientists/Analysts, and Developers focused on delivering value to...
From F5 Networks - Tue, 09 Oct 2018 21:00:58 GMT - View all Seattle, WA jobs
          Data Science Manager - Micron - Boise, ID      Cache   Translate Page      
Create server based visualization applications that use machine learning and predictive analytic to bring new insights and solution to the business....
From Micron - Wed, 05 Sep 2018 11:18:49 GMT - View all Boise, ID jobs
          Intern - Data Scientist (NAND) - Micron - Boise, ID      Cache   Translate Page      
Machine learning and other advanced analytical methods. To ensure our software meets Micron's internal standards....
From Micron - Wed, 29 Aug 2018 20:54:50 GMT - View all Boise, ID jobs
          Intern - Data Scientist (DRAM) - Micron - Boise, ID      Cache   Translate Page      
Machine learning and other advanced analytical methods. To ensure our software meets Micron's internal standards....
From Micron - Mon, 20 Aug 2018 20:48:37 GMT - View all Boise, ID jobs
          Aptly Hires Intel Data Scientist & Leading Multifamily Expert in Artificial Intelligence      Cache   Translate Page      
Aptly continues to grow team to power intelligent resident conversations with artificial intelligence (AI). SAN FRANCISCO, CA, November 07, 2018 /24-7PressRelease/ — Aptly, Multifamily’s first AI-powered communication platform, has announced the addition of Michael Lin as Head of Data Science. Lin, a Multifamily industry veteran, is a recognized technology leader with extensive experience in artificial […]
          [looking for staff] [Birmingham] Tech Recruiter - new recruitment startup      Cache   Translate Page      

@RichardGale wrote:

We’re a new recruitment startup, founded by three of London’s most highly regarded internal tech recruiters. We’ve each had a lot of success in both internal recruitment and recruitment agencies. Having seen both sides of the fence (internal and agency) we intimately know what hiring managers look for. Our technical knowledge is greater than almost all recruiters on the market (certainly >99th percentile) having scaled up teams building cool technologies using NLP, Bayesian Inference, Probabilistic Programming / Probabilistic Graphical Models; we’ve helped build the world’s largest medical Knowledge Graph; all the way to low latency, high throughput data platforms of non-trivial scale and complexity… hiring the top 5% of engineering and data science talent in London. We’re also rigorously honest. To a fault.

We’re already working with technology and product companies to help them scale their engineering and technical teams. We work with some of London’s hottest startups and scaleups; we are anticipating that our recruiters won’t have to do business development (in 2019 at least).

We’re in stealth mode until 2019, but we’re now in a position to hire our first recruiters. We’re setting up offices in London and Birmingham. Birmingham will be led by Richard Gale (ex-Schibsted, Quantcast, Skimlinks, Babylon Health): https://www.linkedin.com/in/richgalerecruiter/

We’re looking for exceptional candidates, regardless of level - you could be a Principal Recruiter with 10 years of experience; or, you could be a junior recruiter with amazing potential, with less than a year of experience.

Our bar is unapologetically high. In addition to great recruitment fundamentals, we’ll be indexing highly for:


Raw Intellectual Horsepower:
Recruitment is fundamentally a puzzle; with lots of moving pieces which tessellate in complex ways. To say that you don’t need high levels of general cognitive ability to be a great recruiter is demonstrably false. We need excellent problem solvers, who can weigh up complex competing requirements quickly and accurately.

True Intellectual Curiosity in Technology:
How can you possibly be a great recruiter in technology, if tech doesn’t fire your intellectual curiosity? Recruiters who aren’t temperamentally orientated towards technology don’t go out of their way to learn about new techs. We want geeks who get excited by tech megatrends, gadgets, apps, games (or similar).

Rigorous Honesty:
Recruiters have earned a reputation for playing fast and loose with the truth. Perhaps that’s overstated; but nonetheless, if we’re going to fix recruitment we need to be rigorously honest recruiters. In reality, this means having the strength of character to tell the truth as completely and articulately as you can manage. Always. Medium to long term, this is the only way to truly succeed in recruitment; though it might cost you fees in the short term.


We’re looking for a lot; and we won’t find many who reach this bar. But, we will not lower the bar for short term gain. If you happen to be successful in our process, you can rest assured that your mentor / peer group will have gone through the same rigorous process; you’ll get to work with a lot of other smart, honest people. We’ll be able to teach you recruitment best practices like no other agency - as we’ve all worked in (and led) internal recruitment teams in highly selective environments. This is somewhat unique in the industry.

As the bar is so high, we’ll be rewarding our recruiters with the best comp structures. Additionally, we won’t be tracking pointless KPIs (we don’t care how many minutes you’re on the phone), offer flexible working / WFH (within reason!); all we care about is excellent delivery and you delighting your clients. We are a strictly equal opportunities employer; we only discriminate based on competence, potential, and strength of character.

If you’re passionate about tech recruitment, and believe that there’s a better way of doing things, please email me (Richard Gale) your CV / LinkedIn URL - richard@techuiter.io (or feel free to call me on 07949 064 852 to learn more and discuss).

Posts: 1

Participants: 1

Read full topic


          Data Scientist - Wade & Wendy - New York, NY      Cache   Translate Page      
Our team is backed by Slack, ffVC, Randstad and other great VCs, as we bring AI and machine learning to the recruiting/HR space - all in order to make the...
From Wade & Wendy - Thu, 04 Oct 2018 06:17:29 GMT - View all New York, NY jobs
          NLP Data Scientist - Wade & Wendy - New York, NY      Cache   Translate Page      
Our team is backed by Slack, ffVC, Randstad and other great VCs, as we bring AI and machine learning to the recruiting/HR space - all in order to make the...
From Wade & Wendy - Thu, 27 Sep 2018 14:36:48 GMT - View all New York, NY jobs
          Data Science & Analytics Intern (Paid) - Summer 2019 - BOEING - Bellevue, WA      Cache   Translate Page      
Aircraft powered by hydrogen and biofuels. Bellevue,Washington,United States UANVNA....
From Boeing - Wed, 29 Aug 2018 07:13:13 GMT - View all Bellevue, WA jobs
          Data Elixir - Issue 207      Cache   Translate Page      

In the News

Harvard Converts Millions of Legal Documents into Open Data

Inspired by the Google Books Project, the new Caselaw Access Project from the Library Innovation Lab at Harvard puts the entire corpus of published U.S. case law online for anyone to access for free. The project involved scanning and digitizing 100,000 pages per day over two years. This is a big deal that will enable new analytical insights, research, and applications.

govtech.com

⚽ How data analysis helps football clubs make better signings

They said it could never be done. The game was too fluid, too chaotic. The players’ movements too difficult to track reliably. But, decades after sports like baseball first embraced statistics, football - known as soccer in the U.S. - is starting to play the data game.

ft.com

Sponsored Link

10 Guidelines for A/B Testing

Online experimentation, or A/B testing, is the gold standard for measuring the effectiveness of changes to a website. But while A/B testing can appear simple, there are many issues that can complicate an analysis. In this presentation, Emily Robinson, data scientist at DataCamp, will cover 10 best practices that will help you avoid common pitfalls.

gotowebinar.com

Tools and Techniques

Importance of Skepticism in Data Science

Great discussion about one of the most important aspects of analyzing data - being skeptical of the results. Includes lots of useful examples.

github.io

Scaling Machine Learning at Uber with Michelangelo

In 2015, machine learning was not widely used at Uber. Just three years later, Uber has advanced capabilities and infrastructure, and hundreds of production machine learning use-cases. This post describes the wide variety of ways that Uber uses machine learning and how they've managed to scale their systems so quickly and effectively.

uber.com

Why Jupyter is data scientists’ computational notebook of choice

Last week's article in Nature about Jupyter Notebooks sparked a great discussion in Hacker News. There's a lot here, including tips, debate, and links to further resources.

ycombinator.com

Grokking Deep Learning

Andrew Trask's new book, Grokking Deep Learning aims to be the easiest introduction possible to deep learning. Each section teaches how to build a neural component from scratch in NumPy. This repo contains the code examples for each lesson.

github.com

Bringing machine learning research to product commercialization

Rasmus Rothe, Founder at Merantix, explores the differences between academia and industry when applying deep learning to real-world problems. This article goes into detail about differences regarding workflow, expectations, performance, model design and data requirements.

medium.com

Find A Data Science Job Through Vettery

Vettery specializes in tech roles and is completely free for job seekers. Interested? Submit your profile, and if accepted onto the platform, you can receive interview requests directly from top companies growing their data science teams.

// sponsored

vettery.com

Resources

Data Science With R Workflow Cheatsheet

Nice map of R cheatsheets that's organized around common workflows. It's like a cheatsheet for cheatsheets.

business-science.io

Python Learning Resources

Rachel Thomas from fast.ai asked for Python learning recommendations on Twitter and the resulting thread was amazing. These two recommendations, in particular, stand out:

twitter.com

Career

My Weaknesses as a Data Scientist

Identifying your weaknesses is one of the most important things you can do to become effective in your career. In this post, William Koehrsen explores his particular weaknesses as a data scientist and the steps he's taking to overcome them. The approach he models here may be uncomfortable for some but it's a super effective strategy.

towardsdatascience.com

Jobs & Careers

Hiring?

Post on Data Elixir's Job Board to reach a wide audience of data professionals.

dataelixir.com

Recent Listings:

More data science jobs >>

About

Data Elixir is curated and maintained by @lonriesberg. For additional finds from around the web, follow Data Elixir on Twitter, Facebook, or Google Plus.


This RSS feed is published on https://dataelixir.com/. You can also subscribe via email.


          Bioinformatics Analyst/Genomic Data Scientist - Frederick National Laboratory - Fort Detrick, MD      Cache   Translate Page      
Bachelor’s degree in biomedical science/bioinformatics/math/computer science related field from an accredited college or university according to the Council for...
From Frederick National Laboratory - Mon, 05 Nov 2018 14:23:39 GMT - View all Fort Detrick, MD jobs
          NLP Data Scientist - PARC, a Xerox company - Palo Alto, CA      Cache   Translate Page      
PARC, a Xerox company, is in the Business of Breakthroughs®. We create new business options, accelerate time to market, augment internal capabilities, and...
From PARC, a Xerox company - Sat, 29 Sep 2018 08:33:49 GMT - View all Palo Alto, CA jobs
          Assistant Professor in Biomedical Data Science and Informatics - Clemson University - Barre, VT      Cache   Translate Page      
Clemson University is ranked 24th among public national universities by U.S. In Fall 2018, Clemson has over 18,600 undergraduate and 4,800 graduate students....
From Clemson University - Fri, 02 Nov 2018 14:09:49 GMT - View all Barre, VT jobs
          Data Scientist - Big Data - belairdirect - Montréal, QC      Cache   Translate Page      
Fluency in applied analytical techniques including regression analysis, clustering, decision trees, neural networks, SVM (support vector machines),...
From belairdirect - Thu, 13 Sep 2018 00:51:55 GMT - View all Montréal, QC jobs
          Why Data Science is Booming in Israel      Cache   Translate Page      
Israel's tech sector is one of the hottest in the world, due primarily to the country's startup culture, which has transformed Israel into a competitive tech hub with some of the world's most promising startups calling it home....
          vScaler Cloud Adopts RAPIDS Open Source Software for Accelerated Data Science      Cache   Translate Page      

vScaler has incorporated NVIDIA’s new RAPIDS open source software into its cloud platform for on-premise, hybrid, and multi-cloud environments. Deployable via its own Docker container in the vScaler Cloud management portal, the RAPIDS suite of software libraries gives users the freedom to execute end-to-end data science and analytics pipelines entirely on GPUs. "The new RAPIDS library offers Python interfaces which will leverage the NVIDIA CUDA platform for acceleration across one or multiple GPUs. RAPIDS also focuses on common data preparation tasks for analytics and data science. This includes a familiar DataFrame API that integrates with a variety of machine learning algorithms for end-to-end pipeline accelerations without paying typical serialization costs. RAPIDS also includes support for multi-node, multi-GPU deployments, enabling vastly accelerated processing and training on much larger dataset sizes."

The post vScaler Cloud Adopts RAPIDS Open Source Software for Accelerated Data Science appeared first on insideHPC.


          Financial Data Science Analyst - Apple - Austin, TX      Cache   Translate Page      
Our team applies data science and machine learning to drive strategic impact across multiple lines of business at Apple....
From Apple - Wed, 07 Nov 2018 01:45:42 GMT - View all Austin, TX jobs
          Sr Data Scientist - NVIDIA - Santa Clara, CA      Cache   Translate Page      
3+ years’ experience in solving problems using machine learning algorithms and techniques (clustering, classification, outlier analysis, etc.)....
From NVIDIA - Tue, 30 Oct 2018 01:54:49 GMT - View all Santa Clara, CA jobs
          Ian Dunlop joins ContactEngine as Chief Product Officer      Cache   Translate Page      
ContactEngine are pleased to announce the appointment of Ian Dunlop as Chief Product Officer. Dunlop joins ContactEngine to lead the company's product management, engineering, data science, AI and Dev...
       

          Data Scientist - Business Analytics - Aviva - Montréal, QC      Cache   Translate Page      
What you will be doing: Join a team of passionate actuaries, data scientists and data engineers, responsible for putting the...
From Aviva - Thu, 25 Oct 2018 17:53:51 GMT - View all Montréal, QC jobs
          Actuarial Director, Data Science Team - Manager Data Scientist - Aviva - Montréal, QC      Cache   Translate Page      
An English version will follow. You will join a team of passionate actuaries, data scientists and data engineers, responsible for putting...
From Aviva - Tue, 16 Oct 2018 17:53:50 GMT - View all Montréal, QC jobs
          Real Time Live Online Training On SSRS @ SQL School - Delhi, India      Cache   Translate Page      
SQL School is one of the best training institutes for Microsoft SQL Server Developer Training, SQL DBA Training, MSBI Training, Power BI Training, Azure Training, Data Science Training, Python Training, Hadoop Training, Tableau Training, Machine Learni...
          Data Scientist - Deloitte - Springfield, VA      Cache   Translate Page      
Demonstrated knowledge of machine learning techniques and algorithms. We believe that business has the power to inspire and transform....
From Deloitte - Fri, 10 Aug 2018 06:29:44 GMT - View all Springfield, VA jobs
          Associate, Machine Learning AI Consultant, Financial Services - KPMG - Dallas, TX      Cache   Translate Page      
Broad, versatile knowledge of analytics and data science landscape, combined with strong business consulting acumen, enabling the identification, design and...
From KPMG LLP - Fri, 07 Sep 2018 02:02:14 GMT - View all Dallas, TX jobs
          Principal Data Scientist - Clockwork Solutions - Austin, TX      Cache   Translate Page      
Support Clockwork’s Business Development efforts. Evaluates simulation analysis output to reveal key insights about unstructured, chaotic, real-world systems....
From Clockwork Solutions - Mon, 27 Aug 2018 10:03:12 GMT - View all Austin, TX jobs
          Lead Data Scientist - Clockwork Solutions - Austin, TX      Cache   Translate Page      
Support Clockwork’s Business Development efforts. Evaluates simulation analysis output to reveal key insights about unstructured, chaotic, real-world systems....
From Clockwork Solutions - Mon, 27 Aug 2018 10:03:12 GMT - View all Austin, TX jobs
          Associate, Machine Learning AI Consultant, Financial Services - KPMG - New York, NY      Cache   Translate Page      
Broad, versatile knowledge of analytics and data science landscape, combined with strong business consulting acumen, enabling the identification, design and...
From KPMG LLP - Fri, 14 Sep 2018 08:38:34 GMT - View all New York, NY jobs
          Data Scientist: Medical VoC and Text Analytics Manager - GlaxoSmithKline - Research Triangle Park, NC      Cache   Translate Page      
Strong business acumen; 2+ years of unstructured data analysis/text analytics/natural language processing and/or machine learning application for critical...
From GlaxoSmithKline - Fri, 19 Oct 2018 23:19:12 GMT - View all Research Triangle Park, NC jobs
          Data Scientist in Advisory      Cache   Translate Page      

KPMG Professional Services and KPMG Advisory Services are the KPMG member firm in Nigeria. The partners and people have been operating in Nigeria since 1978, providing multidisciplinary professional services to both local and international organisations within the Nigerian business community.

As one of the leading providers of professional services, KPMG knows that the success and growth of the firm also depends on the success and growth of the Nigerian economy. Hence, it champ

          Data Scientist / Prognostic Health Monitoring Specialist - Abbott Laboratories - Lake Forest, IL      Cache   Translate Page      
Operational and business decisions with high risk or consequences to the business. Communication with internal and external customers....
From Abbott Laboratories - Thu, 25 Oct 2018 11:08:26 GMT - View all Lake Forest, IL jobs
          Business Relations Manager (Office Hours / East / Up to S$3,500) - Personnel Recruit LLP - East Singapore      Cache   Translate Page      
Post Facebook and Google Ads. We are a dedicated team of traders, data scientists and software engineers working to revolutionize Robo Investing for the retail... $2,500 - $3,500 a month
From Indeed - Mon, 05 Nov 2018 05:35:45 GMT - View all East Singapore jobs
          (USA-GA-DULUTH) Associate Director, Business Intelligence      Cache   Translate Page      
Boehringer Ingelheim is an equal opportunity global employer who takes pride in maintaining a diverse and inclusive culture. We embrace diversity of perspectives and strive for an inclusive environment which benefits our employees, patients and communities. **Description:** The Associate Director of Business Intelligence will be responsible for driving strategic direction and oversight of the Business Intelligence team within the Commercial Excellence Division. This position is critical to ensure that BI utilizes a wide array of reliable data analysis techniques to deliver data-driven Sales and Marketing intelligence. Specific areas of focus include delivering and optimizing solutions for data transparency, aligning data visualization and analytics capabilities with business goals, transforming data into knowledgeable information, and delivering actionable visual insights, to the right people, at the right time, to enable the business to make more of the right decisions. This role is responsible for leading a team of 5-8 Business Intelligence Analysts who will consult on data and insight needs back into the US AH Commercial Operations business. This role will lead the development of innovative and next generation insight services through use of dashboards, ad-hoc visualizations, and other emerging technologies. As an employee of Boehringer Ingelheim, you will actively contribute to the discovery, development and delivery of our products to our patients and customers. Our global presence provides opportunity for all employees to collaborate internationally, offering visibility and opportunity to directly contribute to the companies' success. We realize that our strength and competitive advantage lie with our people. We support our employees in a number of ways to foster a healthy working environment, meaningful work, diversity and inclusion, mobility, networking and work-life balance. 
Our competitive compensation and benefit programs reflect Boehringer Ingelheim's high regard for our employees **Duties & Responsibilities:** + **Direct a team of business analysts including recruiting, staffing, evaluation, and training.** + **Direct cross functional strategic initiatives related to data quality and integrity, lead and execute data investigations, and recommend process improvements across governed processes.** + **Provide strategic direction that will lead to improved operating efficiencies and enhanced user experience.** + **Facilitate regularly scheduled staff meetings to communicate vision and collaborate.** + **Utilize internal resources including budget for assigned team, consultants, and third party vendors for the execution of objectives.** + **Proactively identify and resolve issues.** + **Recognize team and individuals for achievements.** + **Build relationships with stakeholders.** + **Influence stakeholders and business partners to support vision and solutions for team.** + **Present and facilitate meetings adapting content to the different audiences.** + **Establish open, transparent communication on a one-on-one basis.** + **Manage different stakeholders and influence behaviors related to the use of the information by communicating success, concerns, risks, and status updates on related work.** + **Collaborate with peers to share learnings, core processes, systems, and best practices.** + **Cultivate teamwork by facilitating cross department communication.** + **Communicate issues or outages as outlined in the Data Governance program utilizing a data steward alert.** + **Resolve disputes and conflict by having challenging conversations while maintaining composure.** + **Influence individuals and teams to agreement on potentially divisive topics.** + **Understand and empathize with disparate groups to achieve alignment on the best course of action forward for the business.** + **Manage the development, standardization, automation, and delivery of Key Commercial Operations Performance Indicators.** + **Direct analysts in gathering/refreshing requirements from Commercial Operations and Commercial Excellence.** + **Guide team in development and delivery of KPI delivery platforms to include static slides and spreadsheets as well as interactive, live dashboards using an array of tools to include Cognos, Tableau, Qlikview, etc.** + **Lead the team in the effort to consolidate, standardize, validate, automate, and maintain the standard suite of business critical reports.** + **Define best data and reporting stabilization methodologies and influence/execute on best practices for data storage, recall, cleansing, and merging for use in KPI reporting.** + **Requires knowledge of cloud and/or server-based data storage platforms like Amazon Web Services, Hadoop distributions, etc.** + **Direct the team in development and productionalization of interactive dashboards. This includes providing structural and content guidelines in accordance with industry best practices.** + **Generate and maintain requirements for IT to stabilize and optimize data cubes within the data warehouse environment.** + **Manage Quality Assurance process to regularly ensure accurate report delivery.** + **Partner with business to understand objectives and the types of analyses that are necessary to support those objectives.** + **Allocate appropriate team resources for critical Commercial Operations data analyses (e.g. 
market opportunity and customer targeting).** + **Regularly consult Commercial Operations on data, reporting, and insights to understand and interpret their needs for Commercial Excellence.** + **Prepare and present concise, actionable insights based on these interactions.** + **Regularly consult IT partners to maintain alignment on tool platforms.** + **Act as a conduit to translate Commercial Operations requirements for reporting to robust, sustainable IT platforms with optimized levels of data integrity.** + **Regularly consult and collaborate with all verticals of Commercial Excellence, provide insights to other teams as needed and gather requirements from those teams to inform analyses.** + **Manage projects to merge internal data with 3rd party data to inform Commercial Operations on market conditions and market opportunities.** + **Drive and challenge business units on their assumptions of how they will successfully execute their strategy.** + **Facilitate the communication and translation of business user requirements and solutions between the customer community (internal and external customers) and internal IT.** + **Assist stakeholders to diagnose problems and understand business needs.** + **Actively support all aspects of the BI Data Governance Program.** + **Identify causes of poor data quality management, implement solutions and communicate findings to employees, management, and stakeholders.** + **Ensure processes adhere to data management policy.** + **Collaborate with Business Process owners and Data Stewards to develop methods for synchronizing data entering company systems.** + **Responsible for ensuring issues and/or outages are communicated to the user community.** + **Lead post mortem meetings and report out on data impacts and outages across organization in a manner which instills confidence in issue understanding, impact and next steps along with prevention/elimination recommendations.** + **Understand inter-relationship of systems and processes which could be corrupt data impacting events and recommend course of action.** + **Model the behaviors of championing data as an asset.** + **Specific projects this role will lead include but are not limited to: US AH Tableau Server Roll-Out, Data Merging and Consolidation Definitions and Plan, US AH Onboarding in to Global/Enterprise Data Lake, and Consultant Functions to Data Governance.** + **Direct projects or sub teams as needed to execute objectives.** + **Create and implement a project contract or charter or plan to include but are not limited to: executive summary, objectives, scope, timeline, milestones, deliverables, organization, impacts, benefits, cost, assumptions, risks, and leadership approvals.** + **Utilize agile project methodology to include kick off, requirements, alpha, beta, final review, and user acceptance testing.** + **Define and articulate the business benefit of respective project.** + **Lead the required change management and communication across functional teams for any respective assigned project.** + **Engage field leadership to gain buy in to support the respective project.** + **Performs all Company business in accordance with all regulations (e.g. EEO, FDA, OSHA, PDMA, EPA, PhRMA, etc.) 
and Company policies and procedures.** + **Report all violations immediately to management.** + **Document observations and provide information to management as necessary.** + **Demonstrates high ethical and professional standards with all business contacts and BI employees.** **Requirements:** + **Master's Degree in Business Intelligence, Data Visualization, Data Science, or equivalent field from an accredited institution is required.** + **Minimum of two (2) to four (4) years’ experience in a related role.** + **Demonstrated experience in systems and processes, creating and executing user experience (training, communication and continual improvement) within sales force or related systems, project management, process improvement, preferably within the Pharmaceutical Industry.** + **Demonstrated understanding and ability to apply principles, concepts, practices, and standards including knowledge and use of Animal Health or Pharma data and working knowledge of industry practices.** + **Demonstrated ability to clearly and concisely communicate ideas, facts, and technical information to senior management, as well as other internal customers both verbally and written.** + **Demonstrated excellent communication and presentation skills and ability to work with other disciplines.** + **Ability to train user groups and key stakeholders.** + **Demonstrated ability to identify and analyze problems, evaluate alternatives, and implement effective solutions.** + **Demonstrated ability to effectively manage multiple priorities and coordinate efforts with colleagues from several functional areas.** + **Ability to work independently with a high degree of accuracy and attention to detail in the fast paced environment.** + **Basic knowledge of market research analytical frameworks for conducting and analyzing primary market research.** + **Familiarity with modern visualization frameworks, such as Gephi, Processing, R and/or D3.js.** + **Proficiency in visualization and analytic tools including Tableau, Qlik, and Cognos.** + **Sharp analytical abilities and proven design skills.** + **A strong understanding of typography and how it can affect visualizations as well as layout, space and an inherent feel for motion.** + **Models willingness to learn and stay up-to-date.** + **Effective analytical and problem solving skills.** + **History of successful performance.** + **Must achieve results in a highly matrixed organization.** + **Requires working with IT on system improvements and execution.** + **Responsible for several large-scale projects and/or programs.** + **Ability to manage budget and resources.** + **Ability to travel (may include overnight travel).** **Eligibility Requirements:** + **Must be legally authorized to work in the United States without restriction** + **Must be willing to take a drug test and post-offer physical (if required)** + **Must be 18 years of age or older** **Our Culture:** Boehringer Ingelheim is one of the world’s top 20 pharmaceutical companies and operates globally with approximately 50,000 employees. Since our founding in 1885, the company has remained family-owned and today we are committed to creating value through innovation in three business areas including human pharmaceuticals, animal health and biopharmaceutical contract manufacturing. Since we are privately held, we have the ability to take an innovative, long-term view. 
Our focus is on scientific discoveries and the introduction of truly novel medicines that improve lives and provide valuable services and support to patients and their families. Employees are challenged to take initiative and achieve outstanding results. Ultimately, our culture and drive allows us to maintain one of the highest levels of excellence in our industry. We are also deeply committed to our communities and our employees create and engage in programs that strengthen the neighbourhoods where we live and work. Boehringer Ingelheim, including Boehringer Ingelheim Pharmaceuticals, Inc., Boehringer Ingelheim USA, Boehringer Ingelheim Animal Health USA, Inc., Merial Barceloneta, LLC and Boehringer Ingelheim Fremont, Inc. is an equal opportunity and affirmative action employer committed to a culturally diverse workforce. All qualified applicants will receive consideration for employment without regard to race; color; creed; religion; national origin; age; ancestry; nationality; marital, domestic partnership or civil union status; sex, gender identity or expression; affectional or sexual orientation; disability; veteran or military status, including protected veteran status; domestic violence victim status; atypical cellular or blood trait; genetic information (including the refusal to submit to genetic testing) or any other characteristic protected by law. Boehringer Ingelheim is firmly committed to ensuring a safe, healthy, productive and efficient work environment for our employees, partners and customers. As part of that commitment, Boehringer Ingelheim conducts pre-employment verifications and drug screenings. **Organization:** _US-Vetmedica_ **Title:** _Associate Director, Business Intelligence_ **Location:** _Americas-US-GA-Duluth_ **Requisition ID:** _1813542_
          Designing machine learning — Research Imagining      Cache   Translate Page      
The relationship between user experience (UX) designers and machine learning (ML) data scientists has emerged as a site for research since 2017. Central to recent findings is the limited ability of UX designers to conceive of new ways to use ML (Yang et al. 2018). This is due to a number of factors. Firstly, human […] […]
          Symfony developer      Cache   Translate Page      
Our client is a successful fintech startup that is moving from startup to scale-up. A small team works on several applications. You will join a team with another PHP developer and a data scientist, and you will work on making the current backend application robust and future-proof...
          The Bridge Limited: Data Scientist / Machine Learning Engineer      Cache   Translate Page      
The Bridge Limited: Data Scientist / Machine Learning Engineer Fantastic opportunity to join a leading UK client on a long term contract. Our client requires a Data Scientist to join them and to be able to hit the ground running. The skills required for this role: Machine Learning… Hatfield
          Data Science Manager - Micron - Boise, ID      Cache   Translate Page      
Create server-based visualization applications that use machine learning and predictive analytics to bring new insights and solutions to the business....
From Micron - Wed, 05 Sep 2018 11:18:49 GMT - View all Boise, ID jobs
          Intern - Data Scientist (NAND) - Micron - Boise, ID      Cache   Translate Page      
Machine learning and other advanced analytical methods. To ensure our software meets Micron's internal standards....
From Micron - Wed, 29 Aug 2018 20:54:50 GMT - View all Boise, ID jobs
          Intern - Data Scientist (DRAM) - Micron - Boise, ID      Cache   Translate Page      
Machine learning and other advanced analytical methods. To ensure our software meets Micron's internal standards....
From Micron - Mon, 20 Aug 2018 20:48:37 GMT - View all Boise, ID jobs
          Data Scientist - Wade & Wendy - New York, NY      Cache   Translate Page      
Our team is backed by Slack, ffVC, Randstad and other great VCs, as we bring AI and machine learning to the recruiting/HR space - all in order to make the...
From Wade & Wendy - Thu, 04 Oct 2018 06:17:29 GMT - View all New York, NY jobs
          NLP Data Scientist - Wade & Wendy - New York, NY      Cache   Translate Page      
Our team is backed by Slack, ffVC, Randstad and other great VCs, as we bring AI and machine learning to the recruiting/HR space - all in order to make the...
From Wade & Wendy - Thu, 27 Sep 2018 14:36:48 GMT - View all New York, NY jobs
          Senior Data Scientist - CenturyLink - New Century, KS      Cache   Translate Page      
*Open to any major US City. Candidates must be eligible to work within the US without sponsorship* The Data Scientist is responsible for developing tools to...
From CenturyLink - Thu, 26 Jul 2018 16:08:50 GMT - View all New Century, KS jobs
          Senior Data Scientist - CenturyLink - Chicago, IL      Cache   Translate Page      
The Data Scientist is responsible for developing tools to collect, clean, analyze and manage the data used by strategic areas of the business. Employ...
From CenturyLink - Fri, 26 Oct 2018 06:12:22 GMT - View all Chicago, IL jobs
          NLP Data Scientist - PARC, a Xerox company - Palo Alto, CA      Cache   Translate Page      
PARC, a Xerox company, is in the Business of Breakthroughs®. We create new business options, accelerate time to market, augment internal capabilities, and...
From PARC, a Xerox company - Sat, 29 Sep 2018 08:33:49 GMT - View all Palo Alto, CA jobs
          LinuxToday: Introducing pydbgen: A random dataframe/database table generator      Cache   Translate Page      

Simple tool generates large database files with multiple tables to practice SQL commands for data science.
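For readers who have not seen the tool, here is a rough, hedged sketch of the workflow such a generator automates. It deliberately does not use pydbgen's own entry points (see the project README for those); instead it uses Faker, which pydbgen builds on, plus the standard-library sqlite3 module to produce a throwaway table to practice queries against.

```python
# Hypothetical sketch (not pydbgen's API): fabricate fake rows with Faker and
# load them into SQLite so there is a realistic table to practice SQL against.
import sqlite3
from faker import Faker

fake = Faker()

# 1,000 fake people: name, city, email, date string
rows = [(fake.name(), fake.city(), fake.email(), fake.date()) for _ in range(1000)]

conn = sqlite3.connect("practice.db")
conn.execute(
    "CREATE TABLE IF NOT EXISTS people (name TEXT, city TEXT, email TEXT, signup_date TEXT)"
)
conn.executemany("INSERT INTO people VALUES (?, ?, ?, ?)", rows)
conn.commit()

# Practice query: the five most common cities in the generated table
for city, n in conn.execute(
    "SELECT city, COUNT(*) AS n FROM people GROUP BY city ORDER BY n DESC LIMIT 5"
):
    print(city, n)

conn.close()
```

pydbgen wraps this kind of boilerplate behind a couple of calls and can emit multiple related tables at once, which is what makes it convenient for SQL practice.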


          О_Самом_Интересном: Distance Online Learning in Internet Professions      Cache   Translate Page      

Information taken from http://orgbiznet.ru/


The Online University of Internet Professions invites you to distance learning in the most current and in-demand areas of online earning. The online courses are taught by experts and specialists from Mail, Google and Alfa-Bank.

 


Today it surprises no one that many of our friends and acquaintances earn quite decent money via the Internet. What is more, work on the Internet is in most cases better paid than earning offline. But as you have already guessed, it is not as easy and simple as all those bright banners luring us into online business make it sound.

And most of those who tried to conquer the financial Olympus online with great enthusiasm and strong motivation gave up on the idea after a short time, either because their efforts produced no results at all, or because the earnings were so small that their interest in the field faded instantly.


There is too much information, all of it contradictory and quick to lose its relevance. And without practice all this knowledge is simply meaningless, because the whole point lies in the details and small things that you notice only in the course of work, experiments, trial and error, which will cost you a fair amount of money or an incredible amount of time. And real specialists who are ready to help you sort out all the nuances of online earning are not so easy to find!


The University of Internet Professions offers online training in areas such as Programming, Marketing, Design, Data Science, as well as Business and Management.

Thanks to professional training, you will be able to master the entire body of knowledge you need and complete the online courses in the shortest possible time, all in a clear and accessible interactive learning format, with the best specialists in your chosen field and profession.

The most popular of these are: web developer, contextual advertising specialist, game designer, SEO and SMO specialist, commercial editor of online projects, Python developer, UX designer, and so on.


There are both paid and free courses! Upon completing some of the distance-learning courses, you will be offered the opportunity of employment at one of the partner companies of the University of Internet Professions.

 

About the distance-learning courses in internet professions

 


          [Translation] Data Science in Visual Studio Code using Neuron      Cache   Translate Page      
Today we have a short story about Neuron, an extension for Visual Studio Code that is a real killer feature for data scientists. It lets you combine Python, any machine learning library and Jupyter Notebooks. More details under the cut!

Read more →
          Java Application Developer - ADP - Alpharetta, GA      Cache   Translate Page      
Engineer Analyst Architect Data Scientist Application Developer Design Implementation Chief Principal Enterprise Specialist Infrastructure Research Development...
From Automatic Data Processing - Fri, 26 Oct 2018 18:01:41 GMT - View all Alpharetta, GA jobs
          Principal Application Developer - ADP - Alpharetta, GA      Cache   Translate Page      
Engineer Analyst Architect Data Scientist Application Developer Design Implementation Chief Principal Enterprise Specialist Infrastructure Research Development...
From Automatic Data Processing - Thu, 25 Oct 2018 08:26:41 GMT - View all Alpharetta, GA jobs
          Sr. Director - Product Mgmt - My ADP - ADP - Alpharetta, GA      Cache   Translate Page      
Engineer Analyst Architect Data Scientist Application Developer Design Implementation Chief Principal Enterprise Specialist Infrastructure Research Development...
From Automatic Data Processing - Tue, 18 Sep 2018 06:36:47 GMT - View all Alpharetta, GA jobs
          Sr. Director - Product Mgmt-NAS Shared Services and Integrations - ADP - Alpharetta, GA      Cache   Translate Page      
Engineer Analyst Architect Data Scientist Application Developer Design Implementation Chief Principal Enterprise Specialist Infrastructure Research Development...
From Automatic Data Processing - Tue, 18 Sep 2018 06:35:48 GMT - View all Alpharetta, GA jobs
          Data Scientist - Zillow Group - Seattle, WA      Cache   Translate Page      
Dive into Zillow's internal and third party data (think Hive, Presto, SQL Server, Redshift, Python, Mode Analytics, Tableau, R) to uncover real estate trends...
From Zillow Group - Thu, 01 Nov 2018 11:21:23 GMT - View all Seattle, WA jobs
          Data Scientist (Agent Pricing) - Zillow Group - Seattle, WA      Cache   Translate Page      
Dive into Zillow's internal and third-party data (think Hive, Presto, SQL Server, Python, Mode Analytics, Tableau, R) to make strategic recommendations....
From Zillow Group - Thu, 01 Nov 2018 11:21:14 GMT - View all Seattle, WA jobs
          Data Scientist / Prognostic Health Monitoring Specialist - Abbott Laboratories - Lake Forest, IL      Cache   Translate Page      
Operational and business decisions with high risk or consequences to the business. Communication with internal and external customers....
From Abbott Laboratories - Thu, 25 Oct 2018 11:08:26 GMT - View all Lake Forest, IL jobs
          Data Engineer - G2 Recruitment Solutions - Antwerp      Cache   Translate Page      
I am currently looking for a Data Engineer for a freelance opportunity in the Antwerp area. This is initially a 12 month contract with a possible extension thereafter.

Your main tasks will include:
* Gathering data in an analytical environment
* Working closely with the data scientists in the team to ensure that valuable models are suitable for frequent use

Key requirements:
* A degree in IT, Engineering or equal by experience
* 3 years of experience
* Experience with reading and writing SQL
* Some...
          Technoslavia 2.5: Open Source Topography      Cache   Translate Page      

Open source has always been a part of the data science landscape (as we explored here in 2015 and here in 2017). And while every tool in Technoslavia has some free offerings, we’re taking this chance to highlight the dominating force of open source in the ecosystem.


          (USA-VA-Arlington) Operations Research Analyst      Cache   Translate Page      
The Civil and San Diego (CSD) Operations is responsible for a broad range analytical, engineering, independent assessments, strategic planning, and technology research, development, testing and evaluation (RDT&E;) programs in support of new SPA markets. Requirements * Bachelors’ Degree in Mathematics, Operations Research, Statistics, Computer Science, Engineering, quantitative social sciences, or a related discipline. * Some related experience (including academic projects), preferably in national security related programs. * Experience in any of the following programming languages: Python, R, MATLAB, Java, C++, VBA, SQL. * Familiar with visualization software such as Qlik or Tableau. * Experience with ArcGIS or other Geospatial Information systems. * Familiar with common open source and commercial data science and operations research tools and software. * Proven ability to prioritize and execute workload to meet requirements and deadlines. * Experience developing compelling written and visual communications that are clear, concise, and suited to the audience. * Proficient with using Microsoft Office Suite (e.g., Excel, Access, Word, PowerPoint). * Possess an active SECRET clearance with the ability to obtain TS. **Desired ** * Master’s Degree. * Experience providing analytical support to the Department of Homeland Security. * 2+ years’ experience performing quantitative analysis. * Experience within cybersecurity and/or critical infrastructure domains. * Active TS clearance. * DHS Suitability. Job Description: The Junior Operations Research Analyst will provide advanced analytical support to the Department of Homeland Security (DHS) and apply rational, systematic, science-based techniques, and critical thinking to help inform and improve the Department’s decision-making abilities for preparation, response and recovery to manmade and natural disasters and impacts on the Nation’s critical infrastructure. Provide interdisciplinary analytics expertise and skills to support and conduct descriptive, predictive, and prescriptive projects. Participate in a wide range of optimization and related modeling activities, including, but not limited to, model implementation and operations, model use documentation, and optimization and operations research capability. Responsibilities ** * Assist in identifying, prioritizing, and executing programs to help demonstrate and expand analytical capabilities * Develop and use operations research techniques such as mathematical optimization, decision analysis, and statistical analysis. * Implement data exploration techniques, analytical methods, operations research, and modeling and simulation tools to support program execution * Develop plans of action and timetables in order to meet established deadlines * Assist in the development of briefings, memos, and Contract Deliverables * Assist with data integration requirements development Candidates must be able to work independently and as part of a team. Successful candidates will be subject to a security investigation and must meet eligibility requirements for access to classified information. U.S. citizenship required. Systems Planning and Analysis, Inc. 
Attention: Human Resources 2001 North Beauregard Street Alexandria, VA 22311-1739 FAX: 703-399-7350 *SPA is an Equal Opportunity Employer (Minorities/Females/Disabled/Veterans)* In addition to professionally rewarding challenges, SPA employees also enjoy an excellent benefits package that includes insurance, leave, parking, and more, plus a generous qualified 401(k) plan. *Req Code:* CD18-292 *Job Type:* Operations Research Analyst *Location:* Arlington, VA *Clearance Level:* Secret
          (USA-VA-Alexandria) Operations Research and Risk Analyst      Cache   Translate Page      
The CSD Group provides advanced analytics and decision support to civilian government clients including DOE, DHS, NASA, HHS, and others. Focus area includes Operations Research, Risk Analysis, Data Science, and Modeling and Simulation. Requirements * Bachelors’ Degree in Mathematics, Operations Research, Statistics, Computer Science, Engineering, quantitative social sciences, or arelated discipline. * Advanced knowledge of Microsoft Excel. * Experience in any of the following programming languages: Python, R, VBA. * Familiarity with decision trees and/or fault tree analysis. * Experience with Tableau or similar visualization tool. * Proven ability to prioritize and execute workload to meet requirements and deadlines. * Experience developing compelling written and visual communications that are clear, concise, and suited to the audience * Possess an active SECRET clearance with the ability to obtain TS **Desired** * Master’s Degree. * Experience providing analytic support to the Department of Energy. * Basic knowledge of radiation, radiation detection, and nuclear material. * Previous experience with national security risk analysis. * 3+ years’ experience performing quantitative analysis. * Active TS clearance. * DOE Q clearance. Job Description: Construct and execute planning scenarios in DOE’s existing Nuclear Smuggling Risk Model. Conduct focus groups to gather model input values. Design experiments and modeling campaigns to fully evaluate a defined decision space. Verify model performance using best modeling practices for troubleshooting and error recognition and reconciliation. Visualize and summarize model results for technical and non-technical audiences. Implement operations research methodologies and modeling tools to support program execution. Develop plans of action and timetables in order to meet established deadlines. Develop briefings, memos, and Contract Deliverables. Candidates must be able to work independently and as part of a team. Successful candidates will be subject to a security investigation and must meet eligibility requirements for access to classified information. U.S. citizenship required. Systems Planning and Analysis, Inc. Attention: Human Resources 2001 North Beauregard Street Alexandria, VA 22311-1739 FAX: 703-399-7350 *SPA is an Equal Opportunity Employer (Minorities/Females/Disabled/Veterans)* In addition to professionally rewarding challenges, SPA employees also enjoy an excellent benefits package that includes insurance, leave, parking, and more, plus a generous qualified 401(k) plan. *Req Code:* CD18-281 *Job Type:* Operations Research Analyst *Location:* Alexandria, VA *Clearance Level:* Secret
          (USA-VA-Alexandria) Data Science/ Modeling and Simulation Engineer      Cache   Translate Page      
Undersea Strategic Systems Analysis Group (USSAG) provides multiple organizations within the Department of Defense with timely and objective assessments and recommendations that integrate technical, operational, programmatic, policy and business analysis. We focus on key clients in the undersea community including Navy’s Strategic Systems Programs (SSP), and NAVSEA program managers for the COLUMBIA Class SSBNs and in service submarines. We work to provide integrated solutions based on information and communications throughout the chain of command to ensure clear and consistent analysis and recommendations that are aligned to strategic and leadership goals while still being executable by the working level technical communities. Requirements * Bachelor's degree or higher from an accredited college/university in Engineering, Physical Science, Computer Science, Operations Research, Mathematics, or other STEM field. * 6+ years of experience. * Experience in any of the following programming languages: Python, R, MATLAB, Java, C++. * Knowledge/course work of physics, aerodynamics, control theory, or applied mathematics. * Proficient in using Microsoft Office, including Word, Excel, and PowerPoint. * Proven ability to prioritize and execute workload to meet requirements and deadlines. * Experience developing written and visual communications that are clear, concise, and suited to the audience. * Must be comfortable working in a fast paced environment conducting analysis with uncertainties or incomplete data. * Must be capable of working both individually and as part of a team. * Must be capable of breaking down complex issues into simplified problem statements and laying out objective approaches to identify and compare the merits of competing solutions. * TOP SECRET clearance. **Preferred Qualifications** * Master’s Degree in a field related to modeling and simulation. * Experience with six degree of freedom missile simulations, Monte-Carlo techniques, and configuration management is desirable * Experience with DOD missile system acquisition programs. * Preference for experience working with PyCharm, git, BitBucket, Jira, and Confluence. Job Description: Support the customer in a fast paced environment with simulation-driven assessments and products. Participate on multidisciplinary study teams and perform the following activities: study planning; simulation development, evaluation, verification, validation, and execution; data extraction and analysis. Develop scientific simulations and data processing tools and work with the multidisciplinary team to create integrated analysis reports and recommendations to clients. Candidates must be able to work independently and as part of a team. Successful candidates will be subject to a security investigation and must meet eligibility requirements for access to classified information. U.S. citizenship required. Systems Planning and Analysis, Inc. Attention: Human Resources 2001 North Beauregard Street Alexandria, VA 22311-1739 FAX: 703-399-7350 *SPA is an Equal Opportunity Employer (Minorities/Females/Disabled/Veterans)* In addition to professionally rewarding challenges, SPA employees also enjoy an excellent benefits package that includes insurance, leave, parking, and more, plus a generous qualified 401(k) plan. *Req Code:* UWSD18-432 *Job Type:* Data Scientist *Location:* Alexandria, VA *Clearance Level:* Top Secret
          (USA-VA-Alexandria) Senior Data Scientist/ Principal Analyst      Cache   Translate Page      
Undersea Strategic Systems Analysis Group (USSAG) provides multiple organizations within the Department of Defense with timely and objective assessments and recommendations that integrate technical, operational, programmatic, policy and business analysis. We focus on key clients in the undersea community including Navy’s Strategic Systems Programs (SSP), and NAVSEA program managers for the COLUMBIA Class SSBNs and in service submarines. We work to provide integrated solutions based on information and communications throughout the chain of command to ensure clear and consistent analysis and recommendations that are aligned to strategic and leadership goals while still being executable by the working level technical communities. Requirements * Bachelor Degree in Operations Research, Engineering, Data Science. * 10-20 years of experience. * 10+ years in defense or national security related programs. * Must have demonstrated experience leading teams of analysts and leading projects. * Experience using an optimization solver, such as CPLEX, Gurobi, or LP_Solve. * Experience in any of the following programming languages: Python, R, MATLAB, Java, C++, et al. * Proficient in using Microsoft Office, including Word, Excel, and PowerPoint. * Proven ability to prioritize and execute workload to meet requirements and deadlines. * Experience developing written and visual communications that are clear, concise, and suited to the audience. * Must be comfortable working in a fast paced environment conducting analysis with uncertainties or incomplete data. * Must be capable of breaking down complex issues into simplified problem statements and laying out objective approaches to identify and compare the merits of competing solutions. * Top Secret Clearance. **Preferred Qualifications** * Master’s Degree in a field related to modeling and simulation. * Experience with DOD acquisition programs. * Experience in strategic planning and risk assessment/management. * Preference for experience working with PyCharm, git, BitBucket, Jira, and Confluence. * Demonstrated experience in any of the following technologies: Numpy, SciPy, matplotlib, scikit-learn, PyMC, Bokeh, plot.ly, CPLEX, Tableau, Qlik, Palisade Decision Tools (e.g. @Risk), SAS’s JMP. * Final SECRET or TOP SECRET security clearance is a significant factor in selection of candidates. Job Description: Lead a team of analysts and participate in a wide range of analysis tasks including: descriptive, predictive, and prescriptive analytical projects; optimization and related modeling activities; model development and formulation; model implementation and operations; model and model use documentation, team collaboration, and analysis integration and reporting. Candidates must be able to work independently and as part of a team. Successful candidates will be subject to a security investigation and must meet eligibility requirements for access to classified information. U.S. citizenship required. Systems Planning and Analysis, Inc. Attention: Human Resources 2001 North Beauregard Street Alexandria, VA 22311-1739 FAX: 703-399-7350 *SPA is an Equal Opportunity Employer (Minorities/Females/Disabled/Veterans)* In addition to professionally rewarding challenges, SPA employees also enjoy an excellent benefits package that includes insurance, leave, parking, and more, plus a generous qualified 401(k) plan. *Req Code:* UWSD18-430 *Job Type:* Data Scientist *Location:* Alexandria, VA *Clearance Level:* Top Secret
          Defining Big Data Analytics Benchmarks for Next Generation Supercomputers. (arXiv:1811.02287v1 [cs.PF])      Cache   Translate Page      

Authors: Drew Schmidt, Junqi Yin, Michael Matheson, Bronson Messer, Mallikarjun Shankar

The design and construction of high performance computing (HPC) systems relies on exhaustive performance analysis and benchmarking. Traditionally this activity has been geared exclusively towards simulation scientists, who, unsurprisingly, have been the primary customers of HPC for decades. However, there is a large and growing volume of data science work that requires these large scale resources, and as such the calls for inclusion and investments in data for HPC have been increasing. So when designing a next generation HPC platform, it is necessary to have HPC-amenable big data analytics benchmarks. In this paper, we propose a set of big data analytics benchmarks and sample codes designed for testing the capabilities of current and next generation supercomputers.


          High Dimensional Clustering with $r$-nets. (arXiv:1811.02288v1 [cs.CG])      Cache   Translate Page      

Authors: Georgia Avarikioti, Alain Ryser, Yuyi Wang, Roger Wattenhofer

Clustering, a fundamental task in data science and machine learning, groups a set of objects in such a way that objects in the same cluster are closer to each other than to those in other clusters. In this paper, we consider a well-known structure, so-called $r$-nets, which rigorously captures the properties of clustering. We devise algorithms that improve the run-time of approximating $r$-nets in high-dimensional spaces with $\ell_1$ and $\ell_2$ metrics from $\tilde{O}(dn^{2-\Theta(\sqrt{\epsilon})})$ to $\tilde{O}(dn + n^{2-\alpha})$, where $\alpha = \Omega({\epsilon^{1/3}}/{\log(1/\epsilon)})$. These algorithms are also used to improve a framework that provides approximate solutions to other high dimensional distance problems. Using this framework, several important related problems can also be solved efficiently, e.g., $(1+\epsilon)$-approximate $k$th-nearest neighbor distance, $(4+\epsilon)$-approximate Min-Max clustering, $(4+\epsilon)$-approximate $k$-center clustering. In addition, we build an algorithm that $(1+\epsilon)$-approximates greedy permutations in time $\tilde{O}((dn + n^{2-\alpha}) \cdot \log{\Phi})$ where $\Phi$ is the spread of the input. This algorithm is used to $(2+\epsilon)$-approximate $k$-center with the same time complexity.
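As a point of reference for the objects named above, the classical greedy farthest-first traversal (Gonzalez) produces a greedy permutation prefix and a 2-approximate k-center solution using exact distances. The sketch below is that brute-force baseline, not the paper's r-net-based method, which is what achieves the sub-quadratic running times.

```python
# Textbook farthest-first traversal (Gonzalez' greedy heuristic), shown only to
# illustrate the objects named in the abstract: the visit order is a greedy
# permutation prefix and the final covering radius is a 2-approximation for
# k-center. This exact-distance version costs O(n*k*d) (O(n^2*d) for a full
# permutation) and is NOT the paper's sub-quadratic r-net-based algorithm.
import numpy as np

def greedy_k_center(points: np.ndarray, k: int):
    """Return k center indices and the covering radius."""
    centers = [0]                                        # arbitrary first center
    dist = np.linalg.norm(points - points[0], axis=1)    # distance to nearest chosen center
    for _ in range(1, k):
        nxt = int(np.argmax(dist))                       # farthest point from current centers
        centers.append(nxt)
        dist = np.minimum(dist, np.linalg.norm(points - points[nxt], axis=1))
    return centers, float(dist.max())

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(2000, 16))                      # toy high-dimensional data
    print(greedy_k_center(X, k=10))
```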


          Sr Data Scientist - NVIDIA - Santa Clara, CA      Cache   Translate Page      
3+ years’ experience in solving problems using machine learning algorithms and techniques (clustering, classification, outlier analysis, etc.)....
From NVIDIA - Tue, 30 Oct 2018 01:54:49 GMT - View all Santa Clara, CA jobs
          Artificial Intelligence / Machine Learning Experts - Fujitsu Technology Solutions - Anderlecht      Cache   Translate Page      
Looking for Artificial Intelligence / Machine Learning Experts - Data Scientists, Consultants, Architects, Developers and Business Analysts. Contribute some (Artificial) Intelligence? - Join Fujitsu's digital business team! Artificial Intelligence is the New Black, our customers are feeling the breeze of change, and have requested our Intelligence to come to their assistance. We are looking to recruit several experienced professionals and young talents to be the experts and help our customers to...
          Mobile Data Science: Towards Understanding Data-Driven Intelligent Mobile Applications. (arXiv:1811.02491v1 [cs.CY])      Cache   Translate Page      

Authors: Iqbal H. Sarker

Due to the popularity of smart mobile phones and context-aware technology, various contextual data relevant to users' diverse activities with mobile phones is available around us. This enables the study on mobile phone data and context-awareness in computing, for the purpose of building data-driven intelligent mobile applications, not only on a single device but also in a distributed environment for the benefit of end users. Based on the availability of mobile phone data, and the usefulness of data-driven applications, in this paper, we discuss about mobile data science that involves in collecting the mobile phone data from various sources and building data-driven models using machine learning techniques, in order to make dynamic decisions intelligently in various day-to-day situations of the users. For this, we first discuss the fundamental concepts and the potentiality of mobile data science to build intelligent applications. We also highlight the key elements and explain various key modules involving in the process of mobile data science. This article is the first in the field to draw a big picture, and thinking about mobile data science, and it's potentiality in developing various data-driven intelligent mobile applications. We believe this study will help both the researchers and application developers for building smart data-driven mobile applications, to assist the end mobile phone users in their daily activities.


          (USA-CA-San Jose) Sr Data Architect – Big Data      Cache   Translate Page      
**Danaher Company Description** Danaher is a global science & technology innovator committed to helping our customers solve complex challenges and improve quality of life worldwide. Our world class brands are leaders in some of the most demanding and attractive industries, including life sciences, medical diagnostics, dental, environmental and applied solutions. Our globally diverse team of 67,000 associates is united by a common culture and operating system, the Danaher Business System, which serves as our competitive advantage. We generated $18.3B in revenue last year. We are ranked #162 on the Fortune 500 and our stock has outperformed the S&P 500 by more than 1,200% over 20 years. At Danaher, you can build a career in a way no other company can duplicate. Our brands allow us to offer dynamic careers across multiple industries. We’re innovative, fast-paced, results-oriented, and we win. We need talented people to keep winning. Here you’ll learn how DBS is used to shape strategy, focus execution, align our people, and create value for customers and shareholders. Come join our winning team. **Description** *Danaher Digital* Danaher Digital is our digital innovation and acceleration center where we’re bringing together the leading strategic product and business leaders, technologists and data scientists for the common purpose of accelerating development and commercialization of disruptive and transformative digital solutions into the marketplace. We accelerate Danaher’s digital innovation journey by partnering with Danaher operating companies (OpCos) to monetize and commercialize the potential of emerging digital trends. Located in Silicon Valley, the heart of global innovation, Danaher Digital is ideally situated to capitalize on the digital mega trends transforming our world, including Internet-of Things (IoT), Data, AI, cloud, mobile, Augmented Reality (AR), Blockchain and other Digital frontiers. *Senior Data Architect* You will report to the Senior Director of Data & Analytics and will be responsible for leading the vision, design, development and deployment of large-scale data fabrics and data platforms for Danaher’s IoT and Analytics Machine Learning solutions. The right candidate will provide strategic and technical leadership in using best-of-breed big data technologies with the objective of bringing to market diverse solutions in IoT and Analytics for health sciences, medical diagnostics, industrial and other markets. This person will use his/her Agile experience to work collaboratively with other Product Managers/Owners in geographically distributed teams. 
*Responsibilities*: * Lead analysis, architecture, design and development of large-scale data fabrics and data platforms Advanced Analytics and IoT solutions based on best-of-breed and contemporary big-data technologies * Provide strategic leadership in evaluation, selection and/or architecture of modern data stacks supporting diverse solutions including Advanced analytics for health sciences, medical diagnostics, industrial and other markets * Provide technical leadership and delivery in all phases of a solution design from a data perspective: discovery, planning, implementation and data operations * Manage the full life-cycle of data - ingestion, aggregation, storage, access and security - for IoT and advanced analytics solutions * Work collaboratively with Product Management and Product Owners from other business units and/or customers to translate business requirements in to technical requirements (Epics and stories) in Agile process to drive data architecture * Own and drive contemporary data architecture and technology vision/road map while keeping abreast of the technology advances and architectural best practices **Qualification** *Required Skills & Experience: * * Bachelor’s degree in Computer Science or related field of study * Experience with security configurations of the Data Stack * 7 years’ hands-on leader in designing, building and running successful full stack Big Data, Data Fabric and/or Platforms and IoT/Analytics solutions in production environments * Must have experience in major big data solutions like Hadoop, HBase, Sqoop, Hive, Spark etc. * Architected, developed and deployed data solutions on one or more of: AWS, Azure or Google IaaS/PaaS services for data, analytics and visualization * Demonstrated experience of full IoT and Advanced Analytics data technology stack, including data capture, ingestion, storage, analytics and visualization * Has worked with customers as a trusted advisor in Data architecture and management. * Working experience with ETL tools, storage infrastructure, streaming and batch data, data quality tools, data modeling tools, data integration and data visualization tools * Must have experience in leading projects in Agile development methodologies * Provide mentorship and thought leadership for immediate and external(customer) teams in best practices of Data platform; Lead conversations around extracting value out of Data Platform * Travel up to 40% required **Danaher Corporation Overview** Danaher is a global science & technology innovator committed to helping our customers solve complex challenges and improve quality of life worldwide. Our world class brands are leaders in some of the most demanding and attractive industries, including life sciences, medical diagnostics, dental, environmental and applied solutions. Our globally diverse team of 67,000 associates is united by a common culture and operating system, the Danaher Business System, which serves as our competitive advantage. We generated $18.3B in revenue last year. We are ranked #162 on the Fortune 500 and our stock has outperformed the S&P 500 by more than 1,200% over 20 years. At Danaher, you can build a career in a way no other company can duplicate. Our brands allow us to offer dynamic careers across multiple industries. We’re innovative, fast-paced, results-oriented, and we win. We need talented people to keep winning. Here you’ll learn how DBS is used to shape strategy, focus execution, align our people, and create value for customers and shareholders. 
Come join our winning team. **Organization:** Corporate **Job Function:** Information Technology **Primary Location:** North America-North America-United States-CA-San Jose **Schedule:** Full-time **Req ID:** COR001259
          Agency Measurement Partner - 11 Months      Cache   Translate Page      
Our global tech client is seeking a highly quantitative measurement professional, with marketing analytics experience to drive the organisation measurement strategy with agencies in the DACH. We're looking for people with strong statistical and critical thinking skills to successfully influence how agencies and the wider industry conduct and use measurement. The Marketing Science measurement team is charged with driving good measurement, demonstrating the ROI/value of the platform, developing best practices and informing product development. The Agency Measurement Partner position will work with internal and external stakeholders to advise specific agencies on measurement strategy and work with them on an ongoing basis to adopt better measurement as a way to improve business performance. You will be responsible for working with agency teams in driving relevant data integrations, designing tests and research to help understand and improve the effectiveness of their advertising across digital platforms and across media. We are looking for someone with deep digital knowledge and the ability to work in a complex environment with many stakeholders including product, engineers, creative and marketing. Conclusions from this work will identify how Facebook can best partner with the agency to drive business impact. The ideal candidate will be passionate about online advertising, intellectually curious, a fast learner and able to move fast while keeping focused on high impact projects. They should demonstrate a strong understanding of the media landscape and ability to apply quantitative techniques to understand consumer behaviour and advertising effectiveness through innovative analytics, methodologies and products. A working understanding of advertising technology will be essential. Our clients platform offers marketers unprecedented opportunities to reach and engage consumers. The Marketing Science Agency Partnership team is responsible for understanding how the organisation can best partner with agencies to measure and increase the impact of paid media. In service of this goal, the team conducts research on the organisations advertising platform, designs custom research to improve the ability to measure value on and off of the platform, and helps agencies integrate the platforms data sets to augment the quality of their planning, creative, and measurement. We seek an expert in data analysis & inferential statistics to help direct our measurement work with agencies, ensure robustness in the solutions we put in place, and manage our agency partner's research agendas. Responsibilities: Responsible for working with the wider team to support the strategy for agencies in EMEA. Assess the validity and rigor of new data sources and approaches in their use toward applied research audience insights Proactive creation of custom advertising research driven to show value, insights, or missed opportunities Oversee data-focused initiatives already in place and being developed, and drive developments in the quality of these initiatives Act as an internal agency advocate with other teams specifically R&D. Conduct in-depth and custom ad effectiveness studies to understand the relative impact of different marketing strategies across digital platforms and across media. (MOST IMPORTANT) Communicate complex research results to general audience. 
Provide feedback to and collaborate with Product, R&D, and Partnerships to identify opportunities for new features, products and partnerships and drive engagement around measurement innovation, including products alphas and beta. Educate agencies on research capabilities. Skills/Education: 3+yrs experience in digital media, specifically dealing with data, analytics and measurement. Bachelor's degree or equivalent required, a degree in statistics, economics, behavioral or social science or a related quantitative degree preferred. A solid understanding of the advertising industry (especially online) and measurement methods/technologies most commonly used. Strong interpersonal skills with demonstrated ability to communicate technical content to general audience. Able to conduct bespoke analysis and manipulate data sets to understand patterns and provide client insights. (MOST IMPORTANT) Excellent communication skills with ability to work cross-functionally and influence clients. Previous experience working with large data sets, statistical software such as R, SPSS, SAS, STATA and hive and /or SQL. The ideal candidate will have a strong background in data science, AdTech, research methods and statistics.
          Data Science 7 :      Cache   Translate Page      
  According to a McKinsey study, the United States alone will need two hundred thousand data science engineers by the end of this year. Data science is the job that will captivate the technology world for the next ten years, says Google's chief economist Hal Varian. The field known as data science is already … Continue reading
          Data Scientist - ADNM International - Laval, QC      Cache   Translate Page      
The right candidate will have the passion to discover hidden solutions in large data sets and work with stakeholders to improve business results....
From ADNM International - Tue, 24 Jul 2018 18:32:44 GMT - View all Laval, QC jobs
          Dana-Farber Cancer Institute: Research Fellow      Cache   Translate Page      
Yearly Salary: Dana-Farber Cancer Institute: Dana-Farber Cancer Institute, Boston, MA seeks a Postdoctoral Computational Biologist and Data Scientist Research Fellow. Boston, Massachusetts
          IC Resources Ltd: Senior Data Scientist      Cache   Translate Page      
€65000.00 - €90000.00 per annum: IC Resources Ltd: This Senior Data Scientist vacancy is based in Berlin, Germany with my client who develops software platforms for the business side of the advertising Berlin
          Web and LinkedIn research - Upwork      Cache   Translate Page      
Hi, I have a list of 20 names. I need you to find them on LinkedIn and copy the following info:

Name of each group of interest;
The number of members in each such group;
A link to each such group

Posted On: November 07, 2018 15:25 UTC
ID: 214654459
Category: Data Science & Analytics > Data Mining & Management
Skills: Data Entry, Data Mining, Data Scraping, Google Search, Internet Research, Microsoft Excel, Virtual Assistant, Web Scraping
Country: Ukraine
click to apply
          The Koch Brothers Are Watching You -- And New Documents Reveal Just How Much They Know      Cache   Translate Page      
Billionaire brothers have built personality profiles of most Americans, and use them to push right-wing propaganda

New documents uncovered by the Center for Media and Democracy show that the billionaire Koch brothers have developed detailed personality profiles on 89 percent of the U.S. population, and are using those profiles to launch an unprecedented private propaganda offensive to advance Republican candidates in the 2018 midterms.

The documents also show that the Kochs have developed persuasion models — like their "Heroin Model" and "Heroin Treatment Model" — that target voters with tailored messaging on select issues, and partner with cable and satellite TV providers to play those tailored messages during “regular” television broadcasts.

Over the last decade, big data and microtargeting have revolutionized political communications. And the Kochs, who are collectively worth $120 billion, now stand at the forefront of that revolution — investing billions in data aggregation, machine learning, software engineering and Artificial Intelligence optimization.

In modern elections, incorporating AI into voter file maintenance has become a prerequisite to producing reliable data. The Kochs’ political data firm, i360 states that it has “been practicing AI for years. Our team of data scientists uses components of Machine learning, Deep Learning and Predictive Analytics, every day as they build and refine our predictive models.”

Thanks to that investment (and the Supreme Court’s campaign finance rulings that opened the floodgates for super PACs), the Koch network is better positioned than either the Democratic Party or the GOP to reach voters with their individually tailored communications.

That is a dangerous development, with potentially dramatic consequences for our democracy.

The Kochs and i360

The Kochs formally entered the data space nine years ago, developing the “Themis Trust” program for the 2010 midterms — an uncommonly impactful election cycle where Republican operatives executed their REDMAP program and algorithmically gerrymandered congressional maps across the country in their favor.

In 2011, the Kochs folded Themis into a data competitor it acquired, i360 LLC, which was founded by Michael Palmer, the former chief technology officer of Sen. John McCain’s 2008 presidential campaign. Palmer still leads the organization.

Back then, as journalists Kenneth Vogel and Mike Allen documented, the Kochs’ long-term funding commitments to i360 allowed the organization to think bigger than their political competitors.

“Right now, we’re talking about and building things that you won’t see in 2016, because it’s not going to be ready until 2018,” Michael Palmer said in the wake of the 2014 midterm cycle.

Those programs are now operational. And according to a successful GOP campaign manager, i360 is the “best in the business” at providing Republicans with voter data.

i360’s client list reflects that data superiority. The country’s most notorious and effective political spenders, like the National Rifle Association, use the platform to identify and influence voters, as do Republican party committees, and U.S. House and Senate campaigns.

(A full list of i360’s clients is available here. Some clients, like the Republican Party of Wisconsin, have multiple sub-campaigns they run. It is also important to note that many Koch political groups, like Americans for Prosperity and the Libre Initiative, signed data sharing agreements with i360 in 2016 that are most likely still in effect.)

i360 sweetens the deal to its clients by offering its services at below-market rates. And once clients are locked into the i360 platform, they have access to the company’s voter file — the beating heart of modern political campaigns.

Conservatives agree that the Kochs are subsidizing i360. The losses they sustain by undercharging clients, however, are a pittance compared to the down-stream public policy returns and political power the Kochs receive from operating what amounts to a shadow political party in the United States — one that vigilantly guards the fossil fuel subsidies, deregulatory schemes, and regressive tax structures that enable Koch Industries to bring in $115 billion annually in private revenue.

Inside the i360 Voter File

i360’s voter file identifies “more than 199 million active voters and 290 million U.S. consumers,” and provides its users with up to 1,800 unique data points on each identified individual.

As a result, i360 and the Kochs know your vitals, ethnicity, religion, occupation, hobbies, shopping habits, political leanings, financial assets, marital status and much more.

They know if you enjoy fishing — and if you do, whether you prefer salt or fresh water. They know if you have bladder control difficulty, get migraines or have osteoporosis. They know which advertising mediums (radio, TV, internet, email) are the most effective. For you.

i360 has the following attribute tags, among hundreds of others, ranked 1-10, or subdivided otherwise in their voter file.

Here’s an example of an i360 attribute tag and code name, using a 1-10 value scale:

But i360 attribute codes are not limited to that 1-10 scale. Their knowledge of your financial standing is granular, from how much equity you have in your home to your net wealth and expendable income.

They know where you live, what your mortgage status is and even how many bathrooms are in your house.

i360 has also created a set of 70 “clustercodes” to humanize its data for campaign operatives. These categories range from “Faded Blue Collars” to “Meandering Millennials,” and have flamboyant descriptions that correspond with their attribute headings.

Here are some examples:


Koch Persuasion Models

Additionally, i360 has developed a series of persuasion models for its voter file. These models are often regionally sensitive — since voters have regional concerns — and are being used in federal elections and down-ballot races to assist Republicans across the country.

In 2016, i360 created a set of regional models while working with Sen. Rob Portman’s 2016 re-election campaign in Ohio. Portman started out the race polling nine points behind his Democratic opponent, Gov. Ted Strickland, but ultimately won with 58 percent of the vote.

The company developed a model that could predict whether a voter supported Portman or Strickland with 89 percent accuracy, and others that predicted voter policy preferences. Well aware of the 2016 landscape, i360 also made a Trump/Clinton model, an Anti-Hillary model, and a Ticket Splitter model.

Much of i360’s success in the race, however, was linked to understanding (after conducting extensive polling) that a “key local issue facing Ohio was the opioid epidemic.” In response, the company created a “heroin model” and a “heroin treatment model” that were particularly effective at convincing voters to support Portman.

When describing how they employed their “heroin model,” i360 was clear that Portman’s “position” on the crisis depended on the voter, emphasizing health care solution communications for some, and criminal justice solution communications for others.

Here is i360 on the subject:

…the issue of opioid abuse was particularly complex in that it was relatively unknown whether it was considered a healthcare issue or a criminal justice issue. The answer to this would dictate the most effective messaging. In addition, this was a particularly personal issue affecting some voters and not others.

By leveraging two predictive models — the Heroin model identifying those constituents most likely to have been affected by the issue of opioid abuse and the Heroin Treatment model determining whether those individuals were more likely to view the issue as one of healthcare or of criminal justice — the campaign was able to effectively craft their messaging about Senator Portman’s extensive work in the Senate to be tailored to each individual according to their disposition on the topic.

This manipulation of the opioid crisis for political gain has a perverse irony given the Kochs’ long-running work to provide corporate interests, including health care and pharmaceutical interests, with undue political power and influence over public policy decisions. The Kochs have gifted over a million dollars to ALEC, for example, an organization that counts Purdue Pharma — the unconscionable manufacturer of OxyContin — as a member.

The company also stated it joined Portman’s campaign 21 months before the election, and that, “Together, i360 and the campaign strategized a plan to execute one of the most custom-targeted, integrated campaigns to date with a focus on getting the right message to the right voter wherever that might be.”

This is notable because during the 2016 election, i360 also ran $11.7 million worth of “independent” expenditures for the National Rifle Association Political Victory Fund, Freedom Partners Action Fund, and Americans for Prosperity in Portman’s race.

These outside spenders, two of which are Koch-funded groups, and Portman’s campaign all used i360 to coordinate their digital marketing, phone banks and television ad buys, in the same market, in the same election.

Additionally, i360 supplied Portman’s campaign with other issue-based models on gun control, gay marriage and abortion that the company continues to supply to its clients in 2018.

Here are some examples of i360’s issue-based models:

The list goes on, but the structure stays the same. The Kochs are tailoring their advertising to you, because they know nearly everything about you.


          NLP Data Scientist - PARC, a Xerox company - Palo Alto, CA      Cache   Translate Page      
PARC, a Xerox company, is in the Business of Breakthroughs®. We create new business options, accelerate time to market, augment internal capabilities, and...
From PARC, a Xerox company - Sat, 29 Sep 2018 08:33:49 GMT - View all Palo Alto, CA jobs
          Financial Data Science Analyst - Apple - Austin, TX      Cache   Translate Page      
Our team applies data science and machine learning to drive strategic impact across multiple lines of business at Apple....
From Apple - Wed, 07 Nov 2018 01:45:42 GMT - View all Austin, TX jobs
          SENIOR/JUNIOR DATA SCIENTIST CONSULTANT - Prisma S.r.l. - Junior, WV      Cache   Translate Page      
Prisma Srl has operated in the Information Technology sector since 1984. Through continuous monitoring of emerging technologies and careful development...
From Prisma S.r.l. - Thu, 27 Sep 2018 07:51:39 GMT - View all Junior, WV jobs
          Opportunity Lost: Data Silos Continue to inhibit your Business      Cache   Translate Page      
According to some estimates, data scientists spend as much as 80% of their time getting data in a format that can be used. As a practicing data scientist, I’d say that is a fairly accurate estimate in many organizations. In the more sophisticated organizations that have implemented proper data integration and management systems, the amount […]
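
To make that 80% figure concrete, here is a minimal, hypothetical Python/pandas sketch of the wrangling that siloed data typically forces before any analysis can begin; the file names, columns and join keys are invented for illustration and are not from the article.

    import pandas as pd

    # Hypothetical extracts pulled from three separate silos (CRM, billing, web logs).
    crm = pd.read_csv("crm_export.csv", parse_dates=["signup_date"])
    billing = pd.read_csv("billing_export.csv")
    web = pd.read_csv("web_events.csv", parse_dates=["event_time"])

    # Reconcile inconsistent keys and units before any modeling can start.
    crm["customer_id"] = crm["customer_id"].str.strip().str.upper()
    billing["customer_id"] = billing["cust_no"].astype(str).str.upper()
    billing["revenue_usd"] = billing["revenue_cents"] / 100.0

    # Collapse event-level web data to one row per customer.
    web_agg = (web.groupby("customer_id")
                  .agg(visits=("event_time", "count"),
                       last_seen=("event_time", "max"))
                  .reset_index())

    # Only now can a single analytical view be assembled.
    customer_view = (crm
                     .merge(billing[["customer_id", "revenue_usd"]], on="customer_id", how="left")
                     .merge(web_agg, on="customer_id", how="left"))

With a proper integration layer in place, most of this reconciliation happens once, upstream, instead of in every analyst's notebook.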
          Data Scientist - Business Analytics - Aviva - Montréal, QC      Cache   Translate Page      
What you will be called upon to do: Join a team of passionate actuaries, data scientists and data engineers responsible for putting the...
From Aviva - Thu, 25 Oct 2018 17:53:51 GMT - View all Montréal, QC jobs
          Actuarial Director, Data Science Team - Manager Data Scientist - Aviva - Montréal, QC      Cache   Translate Page      
An English version will follow. You will join a team of passionate actuaries, data scientists and data engineers responsible for putting...
From Aviva - Tue, 16 Oct 2018 17:53:50 GMT - View all Montréal, QC jobs
          [Tutorial] Download Udemy iOS 12 & Swift - The Complete iOS App Development Bootcamp - iOS 12 & Swift App Development Course      Cache   Translate Page      

Download Udemy iOS 12 & Swift - The Complete iOS App Development Bootcamp - iOS 12 & Swift App Development Course

Over the years, iOS has undergone extensive changes, driven by tight competition with Android and rising user expectations. Although the first thing that draws attention about these changes is the new look, the most significant changes have happened under the hood of iOS. When the first version of iOS was introduced, it supported only a single phone, the iPhone 2G. Now the operating system supports a whole range of iPhones and iPads, each with different sizes, resolutions and capabilities. The first notable feature of iOS 12 is its improved speed. Apple users have always been fond of ...


http://p30download.com/83288

Download link: http://p30download.com/fa/entry/83288


          Data Scientist - Aimia - Toronto, ON      Cache   Translate Page      
Expert knowledge and past experience of statistics and mathematics applied to business. Ideally, the individual in this role is a technical expert with hands-on...
From Aimia - Thu, 13 Sep 2018 20:52:33 GMT - View all Toronto, ON jobs
          ICO Data Analyst - Upwork      Cache   Translate Page      
Looking for somebody who has experience with crypto currencies, especially ICO data. Need data about running and closed ICOs including the names and addresses of issuers and wallet addresses where crypto currency was/is received.


Posted On: November 07, 2018 16:40 UTC
Category: Data Science & Analytics > Machine Learning
Skills: Bitcoin
Country: Germany
          The Fake Religion Of Black Politics: WHY AREN'T 'BLACK PROGRESSIVE CHRISTIANS" More OFFENDED When They Are Grouped As POLITICALLY INDISTINGUISHABLE From SECULAR ATHEISTS? (BOTH OF WHOM Are Fighting The WHITE RIGHT WING)      Cache   Translate Page      
On this blog I have noted my observation over the years: 

 "The Fake Religion Of Black Politics" (Black Progressive Fundamentalist Political Operations USING 'The Cover Of 'Civil Rights' ) AND The "Black Political Church - Of The Fake Exe-Jesus Of Social Justice" (the sect of the 'Christian Church' whose self proclaimed job is to HEAL 'AMERICAN CONTAINERIZED BLACKS' of the RAVAGES OF WHITE SUPREMACY and partake of the gospel of SOCIAL JUSTICE - which just always happens to be indistinguishable from PROGRESSIVE FUNDAMENTALIST PUBLIC POLICY).........

.......... are welcomed by WHITE SECULAR PROGRESSIVE ATHEISTIC forces.

 NEVER ONCE on "White Secular Progressive Programs" ("Democracy Now", "Bill Moyers and Company", "Radio Times", "MSNBC" ) have I EVER seen the WHITE PROGRESSIVE SHOW HOST get into an argument with their Black Progressive Fundamentalist guest over the CONTINUED CONDITION OF BLACK PEOPLE - and the EXTRACTIVE OPERATIONS that they together have executed - with the White Show host making the case that "CHRISTIANITY IS FRAUDULENT and 'BLACK CHRISTIANITY' is contained within this scope of condemnation" American Containerized Black Institutions Are NO MATCH For "American Politics And Ideological Encampment"

The BLACK CHRISTIAN Is Programmed To 'Break Bread' With WHITE ATHEISTS Who Called THEIR GOD 'Fraudulent' In Order To Fight Against The WHITE RIGHT WING CHRISTIAN EVANGELICALS Who (ALSO) Use Their RELIGION To Defend Their RACIAL Interests  [RACE AND POLITICS Is BOTH Of Their GODs  - Though The Atheists Don't Attack The Black Religion As Fraudulent Because It Produces Progressive Vote Harvesting]

If you didn't notice by now:  The Data Scientists Now Have "American Containerized Black Behavior" Down To A Predictable Science.   NONE Have Displayed The Temerity To Admit That MOST Of The "Black Activism And Offendedness" IS FRAUDULENT

I watch more "YouTube" videos on my new Internet connected 4K TV than I do cable channels. 
On a near-daily basis I treat myself to a "We Wuz First In Africa" Black historian who tells of the AFRICAN ORIGINS of EVERY SINGLE 'biblical story' and how the WHITE EUROPEANS STOLE IT.  Never once have I heard such a 'Black Historian', upon making his case about the African origins of religion, COMMAND THE MODERN DAY 'AMERICAN CONTAINERIZED BLACKS' TO REACH BACK TO AFRICA OF TODAY, with their 'Christianity In Hand', and PROVIDE CHRISTIAN STRUCTURAL ASSISTANCE FOR 'THE LEAST OF THESE', FROM WHICH THEIR RELIGION CAME. 

They would rather talk about DEAD ANCIENT AFRICANS, than LIVING AFRICANS OUT OF THEIR DOMAIN OF 'THEIR FIGHT WITH THE WHITE MAN IN AMERICA"

Of course the reason for this modality is lost on most people.   The goal is to co-opt most of the prevailing sacred American institutions that the masses put their faith in.

IN TRUTH 'Religion' is a NECESSARY FORCE for SOCIAL CONTROL, with the need for a "GOD WHO REMAINS ABOVE THE FRAY" as the icon of JUSTICE - that no man can escape.

After watching the Secular Atheist "Scientists" (i.e., Lawrence Krauss) go immediately after "Christianity" and "His White Right Wing Political Enemies" I have come to the conclusion that SINCE RELIGION IS PRIMARILY ABOUT "MAN" AND HIS "RELATION WITH GOD" AND "SALVATION" - the fact that the Bible may be inaccurate about "ASTROPHYSICS" is irrelevant:

Neither Krauss nor "Bill Nye" nor "DeGrasse Tyson" will EVER SET FOOT on a planet outside of EARTH'S ORBIT.   

BUT their ADVANCED SCIENTIFIC RESEARCH EQUIPMENT that allows them to see the stars is mere SCRAP METAL to a society that is allowed to revert to SAVAGERY. 


RACISM CHASING IS THEIR GOD: THE BLACK CHRISTIAN WOULD RATHER BREAK BREAD WITH 'ATHEISTS' (OF ANY RACE) - NO HINT OF AN EFFORT TOWARD CHANGING THEM INTO BELIEVERS, INSTEAD THEY WAGE A CONSTANT STRUGGLE AGAINST 'WHITE RIGHT WING CHRISTIANS' - HOPING TO DEFEAT THEM POLITICALLY - NOT ONE MENTION OF THEIR COMMON 'BIBLICAL BELIEFS' , BUT INSTEAD THE BIBLE IS HELD UP TO SHOW THAT THEIR 'RACISM IS UNGODLY'



Black "Christian" Protesters In Formerly Lily White 'Fayetteville Georgia' Hold Signs Saying "REAL CHRISTIANS WOULD VOTE AGAINST THE 'RACISM IN THE WHITE HOUSE' 

(I will post the pictures later)

I am convinced that IRONY flies over most people's head.

Soon after the "Racial Flipping" of Clayton County Georgia transpired AND the activists aligned the RACE of the ELECTED OFFICIALS with the POPULATION DEMOGRAPHICS - Clayton County suffered a MASS BLACK EXODUS (years after the WHITE FOLKS fled to Henry County, Fayette County and Coweta County).

As two Black Democratic Political Factions played a scorched earth strategy at the expense of the county's interests THIS BUMPER STICKER was created  - with the hopes that GOD would touch the hearts and minds of the new leadership.

FAST FORWARD about 15 years.

The once lily-White Fayette County / City of Fayetteville held the designation that "Blacks had more wealth than the White folks" - meaning 'well-to-do Blacks' - though small in number - had higher average Wealth than the WHITES - who spanned the economic spectrum.

A recent magazine article listed Fayette County GA as one of the counties with THE MOST HEALTHY BLACK PEOPLE (In addition to Cobb, Henry and Columbia).


SO WHY ARE THE "BLACK CHRISTIANS" IN FAYETTE PROTESTING AGAINST 'RACISM IN THE WHITE HOUSE' - Rather Than Being THANKFUL They Don't Live In Northwest Atlanta?

Why Did These "BLACK PROGRESSIVE CHRISTIANS" Wait Until ELECTION DAY SATURDAY to TAKE TO THE STREETS and "Witness To THEIR GOD"?

Do the BLACK CHURCHES in Clayton County ask the BLACK PEOPLE who predominate their county to VOTE AGAINST RACISM?   OR do they suggest that "Bishop Stacey Abrams" is going to bring forth the healing hand of:  FREE SOCIAL JUSTICE HEALTH CARE.

Get your heads around this one, it is important:

  • BLACK FLIGHT PROGRESSIVES exited CLAYTON/ DEKALB / CITY OF ATLANTA after they FOUGHT OFF THE WHITE FOLKS, assumed PROGRESSIVE POWER over all of the ELECTED POSITIONS, but then found out that their "BLACK AMERICAN DREAM" was more likely to be attained by MOVING TO WHERE THEIR WHITE RIGHT WING ENEMIES had vacated to (Gwinnett, Cobb, Henry, Fayette, Coweta).
  • THEY USED THEIR GOD to PRAY FOR the NEW PROGRESSIVE BLACK LEADERSHIP - with the hopes that they would STOP THE INFIGHTING
  • BUT now that they are on the verge of RACIALLY TIPPING a former LILY WHITE BASTION (which they moved to BECAUSE of the INSTITUTIONS through which their BLACK STANDARD OF LIVING could be maintained) THEY PROTEST AGAINST "RACISM IN THE WHITE HOUSE" and for STATE "SOCIAL JUSTICE HEALTH CARE" - but they do NOT voice the hope that WHATEVER has PLAGUED CLAYTON COUNTY does not create similar disruption in FAYETTE AND HENRY COUNTIES
THE FAKE EXE-JESUS OF SOCIAL JUSTICE DOES NOT NEED TO DELIVER UPON THE PROMISES OF "SALVATION ON EARTH TO BLACK PEOPLE" - HE ONLY NEEDS TO DELIVER THE "BLACK VOTE TO THE PROGRESSIVE NATIONALISTS" - BY EXPLOITING THE WEAKENED INSTITUTIONS THAT ALLOW THESE HIJINKS TO EXPLOIT THEIR BLACK VALUABLES



          R language project      Cache   Translate Page      
Hi guys, I am looking for a freelancer who can do a simple project in Exploring Data and Building Regression Models with the R language (Budget: $10 - $30 USD, Jobs: Data Science, R Programming Language, Statistical Analysis, Statistics)
          Acknowledgements      Cache   Translate Page      

This report is a collaborative effort based on the input and analysis of the following individuals. Find related reports online at pewresearch.org/internet. Primary researchers: Aaron Smith, Associate Director, Research; Patrick van Kessel, Senior Data Scientist; Skye Toor, Data Science Assistant. Research team: Lee Rainie, Director, Internet and Technology Research; Andrew Perrin, Research Analyst; Jingjing Jiang, […]

The post Acknowledgements appeared first on Pew Research Center: Internet, Science & Tech.


          Source Control for Data Science – using Azure DevOps / VSTS with Jupyter Notebooks      Cache   Translate Page      
So many of you will know about https://mybinder.org/. Binder is an awesome tool that allows you to turn a Git repo into a collection of interactive Jupyter notebooks, and it allows you to open those notebooks in an executable environment, making your code immediately reproducible by anyone, anywhere. Jupyter Notebooks in the cloud: Another great interactive...
          Assistant Professor in Biomedical Data Science and Informatics - Clemson University - Barre, VT      Cache   Translate Page      
Clemson University is ranked 24th among public national universities by U.S. In Fall 2018, Clemson has over 18,600 undergraduate and 4,800 graduate students....
From Clemson University - Fri, 02 Nov 2018 14:09:49 GMT - View all Barre, VT jobs
          Data Scientist - ADNM International - Laval, QC      Cache   Translate Page      
The right candidate will have the passion to discover hidden solutions in large data sets and work with stakeholders to improve business results....
From ADNM International - Tue, 24 Jul 2018 18:32:44 GMT - View all Laval, QC jobs
          Vice President, Data Science - Machine Learning - Wunderman - Dallas, TX      Cache   Translate Page      
Goldman Sachs, Microsoft, Citibank, Coca-Cola, Ford, Pfizer, Adidas, United Airlines and leading regional brands are among our clients....
From Wunderman - Sat, 25 Aug 2018 05:00:40 GMT - View all Dallas, TX jobs
          Data Scientist - FedEx Services - Brookfield, WI      Cache   Translate Page      
Women’s Business Enterprise National Council “America’s Top Corporations for Women’s Business Enterprises” - 2016....
From FedEx - Tue, 30 Oct 2018 20:52:12 GMT - View all Brookfield, WI jobs
          Principal Technologist - Machine Learning and Data Science - Blue Origin - Kent, WA      Cache   Translate Page      
While in this role, you will leverage your extensive experience in machine learning and data science to accelerate and innovate across business areas to drive...
From Blue Origin - Thu, 13 Sep 2018 23:30:35 GMT - View all Kent, WA jobs
          Sr Content Developer, Azure Data Scientist - Microsoft - Redmond, WA      Cache   Translate Page      
Technical training, and/or instructional design knowledge or experience highly valued. Work with related Microsoft Technical and Business Groups to curate...
From Microsoft - Tue, 06 Nov 2018 12:37:40 GMT - View all Redmond, WA jobs
          Data Scientist - Acuity, Inc. - Reston, VA      Cache   Translate Page      
Participate in internal and external knowledge exchanges (conferences, workshops, webinars). Must be US Citizen and be able to obtain and maintain DHS...
From Acuity, Inc. - Mon, 20 Aug 2018 14:20:40 GMT - View all Reston, VA jobs
          Data Science Team Lead - Resource Technology Partners - Boston, MA      Cache   Translate Page      
Work with internal research teams and help build out internal DNA sequencing pipeline. Indigo is looking to disrupt farming through data....
From ReSource Technology Partners - Mon, 15 Oct 2018 06:56:10 GMT - View all Boston, MA jobs
          STARBUCK JAMES: Head of Legal      Cache   Translate Page      
Highly competitive plus executive benefits: STARBUCK JAMES: Head of Legal to lead the legal and compliance function for this world-famous data science and artificial intelligence research institute. Landmark offices in central London
          Data Scientist Lead - Mosaic North America - Jacksonville, FL      Cache   Translate Page      
Data Scientist Lead - (DataSciLe2090718) Description Overview: The Data Scientist will be a part of a team chartered to merge and mine large amounts of...
From Mosaic North America - Fri, 07 Sep 2018 18:33:08 GMT - View all Jacksonville, FL jobs
          Android Developer      Cache   Translate Page      
PROMORPH SOLUTIONS PRIVATE LIMITED - Ranchi, Jharkhand - About Us: Promorph Solutions Pvt. Ltd. is an Education-Centric company founded and mentored by a team of data scientists and professors from IIT Kanpur. We build game-changing applications which create large-scale impact. Our solution EmpowerU improves the Quality of Educationan...
          Data Scientist - Mosaic North America - Jacksonville, FL      Cache   Translate Page      
Data Scientist - (Data Scientist090618) Description Overview: The Data Scientist will be a part of a team chartered to merge and mine large amounts of...
From Mosaic North America - Fri, 07 Sep 2018 00:32:46 GMT - View all Jacksonville, FL jobs
          Scientist, Data Lead - Mosaic North America - Jacksonville, FL      Cache   Translate Page      
Overview The Data Scientist will be a part of a team chartered to merge and mine large amounts of retail execution, sales and other relevant data to develop...
From Mosaic North America - Wed, 22 Aug 2018 14:26:26 GMT - View all Jacksonville, FL jobs
          Data Scientist for Systems Engineering / Infrastructure      Cache   Translate Page      
NJ-Jersey City, We are seeking a data scientist to join our data management team. You have experience in building and implementing statistical or machine learning systems, as well as analytical tools and systems to perform in depth analysis in support of short and long range decision making. You develop strong working relationships with others and want to work in a collaborative team environment. You think deeply
          Data Scientist - Zillow Group - Seattle, WA      Cache   Translate Page      
Dive into Zillow's internal and third party data (think Hive, Presto, SQL Server, Redshift, Python, Mode Analytics, Tableau, R) to uncover real estate trends...
From Zillow Group - Thu, 01 Nov 2018 11:21:23 GMT - View all Seattle, WA jobs
          Data Scientist (Agent Pricing) - Zillow Group - Seattle, WA      Cache   Translate Page      
Dive into Zillow's internal and third-party data (think Hive, Presto, SQL Server, Python, Mode Analytics, Tableau, R) to make strategic recommendations....
From Zillow Group - Thu, 01 Nov 2018 11:21:14 GMT - View all Seattle, WA jobs
          Data Science in 30 Minutes with Jake Porway of DataKind      Cache   Translate Page      
This month, The Data Incubator is hosting another free webinar on data science. This time it features an interview with Jake Porway, founder of DataKind. Jake and DataKind are doing amazing things, so I hope you can check out the webinar. Below are the webinar details: Data Science in 30 Minutes: Using Data Science in … Continue reading Data Science in 30 Minutes with Jake Porway of DataKind
          Faculty Member, Computer Science (Databases and Data Science) - University of Saskatchewan - Saskatoon, SK      Cache   Translate Page      
The University of Saskatchewan is located on Treaty 6 territory and homeland of the Métis and is located in Saskatoon, Saskatchewan, a city with a diverse and...
From University of Saskatchewan - Fri, 27 Jul 2018 00:18:50 GMT - View all Saskatoon, SK jobs
          A. Hendorf - Databases for Data Science      Cache   Translate Page      
none
          D. Scardi - Serverless SQL queries from Python to AWS Athena... or power to Data Scientists!      Cache   Translate Page      
none
          Data Science Analyst      Cache   Translate Page      
Req Number: 15690BR. Position Description: We are seeking a Data Science Analyst to join the Oil Supply Planning & Scheduling Department (OSPAS) within the Marketing & Supply Planning Admin Area of Saudi Aramco. OSPAS is trusted to establish optimized supply plans for all the company's products, coordinate the operations to implement the plans safely and
          Data Science Manager - Micron - Boise, ID      Cache   Translate Page      
Create server based visualization applications that use machine learning and predictive analytic to bring new insights and solution to the business....
From Micron - Wed, 05 Sep 2018 11:18:49 GMT - View all Boise, ID jobs
          Intern - Data Scientist (NAND) - Micron - Boise, ID      Cache   Translate Page      
Machine learning and other advanced analytical methods. To ensure our software meets Micron's internal standards....
From Micron - Wed, 29 Aug 2018 20:54:50 GMT - View all Boise, ID jobs
          Intern - Data Scientist (DRAM) - Micron - Boise, ID      Cache   Translate Page      
Machine learning and other advanced analytical methods. To ensure our software meets Micron's internal standards....
From Micron - Mon, 20 Aug 2018 20:48:37 GMT - View all Boise, ID jobs
          Consultant, Business Analytics & Data Science - Lincoln Financial - Boston, MA      Cache   Translate Page      
Phoenix, AZ (Arizona). Knowledge and experience on applying statistical and machine learning techniques on real business data....
From Lincoln Financial Group - Fri, 02 Nov 2018 02:54:18 GMT - View all Boston, MA jobs
          Sr. Consultant, Business Analytics & Data Science - Lincoln Financial - Boston, MA      Cache   Translate Page      
Phoenix, AZ (Arizona). Implements and maintains predictive and statistical models to identify business opportunities and solve complex business problems....
From Lincoln Financial Group - Tue, 16 Oct 2018 20:54:14 GMT - View all Boston, MA jobs
          Data Scientist - State Farm - Bloomington, IL      Cache   Translate Page      
Bloomington, IL, Atlanta, GA, Dallas, TX, and Phoenix, AZ. Collaborates with business subject matter experts to select relevant sources of information....
From State Farm - Fri, 21 Sep 2018 22:31:56 GMT - View all Bloomington, IL jobs
          Data science reading list for Wednesday, November 7, 2018: The job — working together to build trust, the kinds of data scientist, why mothers should do data science, and why not to be a generalist      Cache   Translate Page      

To build trust in data science, work together From the Cornell Chronicle: As data science systems become more widespread, effectively governing and managing them has become a top priority for practitioners and researchers. While data science allows researchers to chart new frontiers, it requires varied forms of discretion and interpretation to ensure its credibility. Central […]

The post Data science reading list for Wednesday, November 7, 2018: The job — working together to build trust, the kinds of data scientist, why mothers should do data science, and why not to be a generalist appeared first on Global Nerdy - Joey deVilla's mobile/tech blog.


          What are the most commonly used distributions in Data Science?      Cache   Translate Page      
Comments
          Data Scientist - Deloitte - Springfield, VA      Cache   Translate Page      
Demonstrated knowledge of machine learning techniques and algorithms. We believe that business has the power to inspire and transform....
From Deloitte - Fri, 10 Aug 2018 06:29:44 GMT - View all Springfield, VA jobs
          Associate, Machine Learning AI Consultant, Financial Services - KPMG - Dallas, TX      Cache   Translate Page      
Broad, versatile knowledge of analytics and data science landscape, combined with strong business consulting acumen, enabling the identification, design and...
From KPMG LLP - Fri, 07 Sep 2018 02:02:14 GMT - View all Dallas, TX jobs
          Principal Data Scientist - Clockwork Solutions - Austin, TX      Cache   Translate Page      
Support Clockwork’s Business Development efforts. Evaluates simulation analysis output to reveal key insights about unstructured, chaotic, real-world systems....
From Clockwork Solutions - Mon, 27 Aug 2018 10:03:12 GMT - View all Austin, TX jobs
          Lead Data Scientist - Clockwork Solutions - Austin, TX      Cache   Translate Page      
Support Clockwork’s Business Development efforts. Evaluates simulation analysis output to reveal key insights about unstructured, chaotic, real-world systems....
From Clockwork Solutions - Mon, 27 Aug 2018 10:03:12 GMT - View all Austin, TX jobs
          Associate, Machine Learning AI Consultant, Financial Services - KPMG - New York, NY      Cache   Translate Page      
Broad, versatile knowledge of analytics and data science landscape, combined with strong business consulting acumen, enabling the identification, design and...
From KPMG LLP - Fri, 14 Sep 2018 08:38:34 GMT - View all New York, NY jobs
          Data Scientist: Medical VoC and Text Analytics Manager - GlaxoSmithKline - Research Triangle Park, NC      Cache   Translate Page      
Strong business acumen; 2+ years of unstructured data analysis/text analytics/natural language processing and/or machine learning application for critical...
From GlaxoSmithKline - Fri, 19 Oct 2018 23:19:12 GMT - View all Research Triangle Park, NC jobs
          Key Minority and Women Business Owner Influencers on the Internet      Cache   Translate Page      
This project is to identify the top influencers with diverse business owners. In the USA, diverse means Women, Minority (Black Owned, Asian Owned, Mexican Owned, Indian Owned, Native Owned), Gay/Lesbian, Disabled Owned and Veteran Owned... (Budget: $30 - $250 USD, Jobs: Analytics, Business Analysis, Data Science, Digital Marketing, Market Research)
          Technical Product Manager - Data Science - GoDaddy - Kirkland, WA      Cache   Translate Page      
Deliver the infrastructure to provide the business insights our marketing team needs. The small business market contains over a hundred million businesses...
From GoDaddy - Wed, 10 Oct 2018 21:04:32 GMT - View all Kirkland, WA jobs
          2019 Internship - Bellevue, WA- Data Science - Expedia - Bellevue, WA      Cache   Translate Page      
June 17 – September 6. As a Data Scientist Intern within Expedia Group, you will work with a dynamic teams of product managers and engineers across multiple...
From Expedia - Fri, 31 Aug 2018 21:36:06 GMT - View all Bellevue, WA jobs
          Remote Voice Biometrics Data Scientist      Cache   Translate Page      
A technology company has a current position open for a Remote Voice Biometrics Data Scientist. Core Responsibilities of this position include: collecting/processing large volumes of voice recordings and extracting voice features; constructing machine learning pipelines; writing scientific papers. Skills and Requirements include: experience with Tensorflow, Caffe2, Theano, CNTK, Torch, NLTK; understanding of statistical methods of data analysis, machine learning, etc.; experience with acoustic data processing, audio codecs, and speaker identification; knowledge of relevant programming languages: Python, Java, C++, R; background in computer science, mathematics, acoustics, or related
          Data Scientist - Sports Media      Cache   Translate Page      
DESCRIPTION: Showtime is seeking a data scientist to join its growing Data Strategy team. This role will report to the VP, Data Strategy & Consumer Analytics and will work alongside members of Research, Consumer and Digital Marketing, Media and Product teams. The Data Strategy team was formed to build a robust analytics capability that can transform billions of customer data points into actionable machine learning models, analytical products and consumer insights to support marketing automation (Technology)
          Ian Dunlop joins ContactEngine as Chief Product Officer      Cache   Translate Page      
ContactEngine are pleased to announce the appointment of Ian Dunlop as Chief Product Officer. Dunlop joins ContactEngine to lead the company's product management, engineering, data science, AI and Dev...
       

          Senior Analyst, Data Science (1 of 2) - Johnson & Johnson Family of Companies - Spring House, PA      Cache   Translate Page      
Consideration will be given to Raritan, NJ; Janssen Research &amp; Development LLC, a Johnson &amp; Johnson company, is recruiting for a Senior Analyst, Data Science....
From Johnson & Johnson Family of Companies - Mon, 05 Nov 2018 23:05:25 GMT - View all Spring House, PA jobs
          Senior Scientist, Data Science (1 of 2) - Johnson & Johnson Family of Companies - Spring House, PA      Cache   Translate Page      
Consideration will be given to Raritan, NJ; Janssen Research &amp; Development LLC, a Johnson &amp; Johnson company, is recruiting for a Senior Scientist, Data Science....
From Johnson & Johnson Family of Companies - Mon, 05 Nov 2018 23:05:25 GMT - View all Spring House, PA jobs
          Scientist, Data Science (1 of 2) - Johnson & Johnson Family of Companies - Spring House, PA      Cache   Translate Page      
Consideration will be given to Raritan, NJ; Janssen Research &amp; Development LLC, a Johnson &amp; Johnson company, is recruiting for a Scientist, Data Science....
From Johnson & Johnson Family of Companies - Mon, 05 Nov 2018 23:05:25 GMT - View all Spring House, PA jobs
          Data Scientist - W2O - Austin, TX      Cache   Translate Page      
Experience building statistical or machine learning models a plus. You will work side-by-side with our team of data scientists, account staff, and analysts to...
From W2O - Tue, 30 Oct 2018 20:09:38 GMT - View all Austin, TX jobs
          Economist on "Data scientist amazon uk"      Cache   Translate Page      

How quickly can you pack boxes and work under terrible conditions?


          Senior Data Scientist - CenturyLink - New Century, KS      Cache   Translate Page      
*Open to any major US City. Candidates must be eligible to work within the US without sponsorship* The Data Scientist is responsible for developing tools to...
From CenturyLink - Thu, 26 Jul 2018 16:08:50 GMT - View all New Century, KS jobs
          Senior Data Scientist - CenturyLink - Chicago, IL      Cache   Translate Page      
The Data Scientist is responsible for developing tools to collect, clean, analyze and manage the data used by strategic areas of the business. Employ...
From CenturyLink - Fri, 26 Oct 2018 06:12:22 GMT - View all Chicago, IL jobs
          A Chief Data Officer expected to be very much on the offensive at JCDecaux      Cache   Translate Page      
François-Xavier Pierrel has been appointed Chief Data Officer of JCDecaux effective November 5, 2018. He will lead the Data division, drawing on its teams of Data Scientists, Data Analysts and Data Engineers, which will be strengthened by recruitment in (more…)
          Data Scientist - Wade & Wendy - New York, NY      Cache   Translate Page      
Our team is backed by Slack, ffVC, Randstad and other great VCs, as we bring AI and machine learning to the recruiting/HR space - all in order to make the...
From Wade & Wendy - Thu, 04 Oct 2018 06:17:29 GMT - View all New York, NY jobs
          NLP Data Scientist - Wade & Wendy - New York, NY      Cache   Translate Page      
Our team is backed by Slack, ffVC, Randstad and other great VCs, as we bring AI and machine learning to the recruiting/HR space - all in order to make the...
From Wade & Wendy - Thu, 27 Sep 2018 14:36:48 GMT - View all New York, NY jobs
          Java Application Developer - ADP - Alpharetta, GA      Cache   Translate Page      
Engineer Analyst Architect Data Scientist Application Developer Design Implementation Chief Principal Enterprise Specialist Infrastructure Research Development...
From Automatic Data Processing - Fri, 26 Oct 2018 18:01:41 GMT - View all Alpharetta, GA jobs
          Principal Application Developer - ADP - Alpharetta, GA      Cache   Translate Page      
Engineer Analyst Architect Data Scientist Application Developer Design Implementation Chief Principal Enterprise Specialist Infrastructure Research Development...
From Automatic Data Processing - Thu, 25 Oct 2018 08:26:41 GMT - View all Alpharetta, GA jobs
          Sr. Director - Product Mgmt - My ADP - ADP - Alpharetta, GA      Cache   Translate Page      
Engineer Analyst Architect Data Scientist Application Developer Design Implementation Chief Principal Enterprise Specialist Infrastructure Research Development...
From Automatic Data Processing - Tue, 18 Sep 2018 06:36:47 GMT - View all Alpharetta, GA jobs
          Sr. Director - Product Mgmt-NAS Shared Services and Integrations - ADP - Alpharetta, GA      Cache   Translate Page      
Engineer Analyst Architect Data Scientist Application Developer Design Implementation Chief Principal Enterprise Specialist Infrastructure Research Development...
From Automatic Data Processing - Tue, 18 Sep 2018 06:35:48 GMT - View all Alpharetta, GA jobs
          Data Scientist Lead - Mosaic North America - Jacksonville, FL      Cache   Translate Page      
Data Scientist Lead - (DataSciLe2090718) Description Overview: The Data Scientist will be a part of a team chartered to merge and mine large amounts of...
From Mosaic North America - Fri, 07 Sep 2018 18:33:08 GMT - View all Jacksonville, FL jobs
          Data Scientist - Mosaic North America - Jacksonville, FL      Cache   Translate Page      
Data Scientist - (Data Scientist090618) Description Overview: The Data Scientist will be a part of a team chartered to merge and mine large amounts of...
From Mosaic North America - Fri, 07 Sep 2018 00:32:46 GMT - View all Jacksonville, FL jobs
          Scientist, Data Lead - Mosaic North America - Jacksonville, FL      Cache   Translate Page      
Overview The Data Scientist will be a part of a team chartered to merge and mine large amounts of retail execution, sales and other relevant data to develop...
From Mosaic North America - Wed, 22 Aug 2018 14:26:26 GMT - View all Jacksonville, FL jobs
          27: Descriptive Analytics - Discovering the Story behind the Data      Cache   Translate Page      

See the complete show analysis notes at: http://bit.ly/IoTpodcast27notes

Descriptive analytics is nothing new; however, IoT is applying evolutionary forces to make it adapt to unstructured sensor data and evolve into a mechanism of discovery rather than report generation. Tools that blend traditional business intelligence, analytical modeling and visualization now help data scientists discover the story behind the data, which can lead to valuable insights for the enterprise. In this episode of the IoT Business Show I speak with Dave Rubal about how to apply descriptive analytics to your Internet of Things.
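
As a small, hypothetical illustration of that shift from report generation to discovery, a few lines of Python/pandas are enough to build the descriptive layer over raw sensor readings; the file and column names below are invented, not from the episode.

    import pandas as pd

    # Hypothetical sensor feed: one row per reading.
    readings = pd.read_csv("sensor_readings.csv", parse_dates=["timestamp"])

    # Classic descriptive summary: central tendency, spread, extremes.
    print(readings[["temperature_c", "vibration_mm_s"]].describe())

    # Hourly means per device expose patterns that a raw dump hides.
    hourly = (readings.set_index("timestamp")
                      .groupby("device_id")["temperature_c"]
                      .resample("1H")
                      .mean())

    # Simple discovery step: flag readings more than 3 standard deviations from a device's mean.
    stats = readings.groupby("device_id")["temperature_c"].agg(["mean", "std"])
    flagged = readings.join(stats, on="device_id")
    flagged["anomaly"] = (flagged["temperature_c"] - flagged["mean"]).abs() > 3 * flagged["std"]

Visualization and BI tools layer on top of exactly this kind of summary to help tell the story behind the data.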

 

Read the rest of the show analysis notes at: http://www.iot-inc.com/internet-of-things-descriptive-analytics-discovery-podcast

 

Help Spread the Word

If you have been enjoying this podcast for a while, please give it a review.  Click here to open iTunes where you can leave a one-click 5-star review or add your thoughts if you have more to say. If you use Stitcher Radio, you can do the same, here.  Thanks, iTunes reviews really help podcasts get noticed.

 

Ways to Subscribe to the IoT Business Show

Like what you hear?  Subscribe to get each episode delivered to your device via iTunes, Google Play, Stitcher Radio or RSS (non-iTunes feed).

 

Of value? Help by sharing on:

 

Have an opinion? Join the discussion in our LinkedIn group

What descriptive analytics packages do you use?

Click here if you have an opinion on this podcast or want to see the opinion of others

 


          26: Predictive Analytics Deep Dive – the Shape of Things to Come      Cache   Translate Page      

See the complete show analysis notes at: http://www.iot-inc.com/deep-dive-internet-of-things-predictive-analytics-podcast

OK, get ready for it, we’re going to get down and dirty with predictive analytics and when I say dirty, I mean the mathematics of the different forms of predictive models dirty. Geek fest? Yes, but close your eyes and extrapolate how predictive analytics can be applied to your situation. By understanding how it works you will also understand the limits of what it can and cannot do. In this episode of the IoT Business Show I deep dive with Anil Gandhi and emerge with a better understanding of predictive analytics and how it really relates to real-time and descriptive analytics.
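
For readers who want a tangible anchor before the math discussion, the sketch below fits one of the simplest predictive models, an ordinary least-squares regression, in Python with scikit-learn. The sensor-style features and the hours-to-failure target are synthetic and purely illustrative; they are not from the episode.

    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import mean_absolute_error

    rng = np.random.default_rng(0)

    # Hypothetical history: two sensor-derived features and hours until failure.
    X = rng.normal(size=(500, 2))                      # e.g. mean vibration, temperature drift
    y = 120 - 15 * X[:, 0] - 8 * X[:, 1] + rng.normal(scale=5, size=500)

    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

    model = LinearRegression().fit(X_train, y_train)   # learn weights from history
    pred = model.predict(X_test)                       # predict remaining hours for unseen readings

    print("MAE in hours:", mean_absolute_error(y_test, pred))

Real-time analytics scores new readings with a model like this as they arrive, while descriptive analytics summarizes what already happened; the interview digs into how the three relate.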

 

Read the rest of the show analysis notes at: http://www.iot-inc.com/deep-dive-internet-of-things-predictive-analytics-podcast

 

Help Spread the Word

If you have been enjoying this podcast for a while, please give it a review.  Click here to open iTunes where you can leave a one-click 5-star review or add your thoughts if you have more to say. If you use Stitcher Radio, you can do the same, here.  Thanks, iTunes reviews really help podcasts get noticed.

 

Ways to Subscribe to the IoT Business Show

Like what you hear?  Subscribe to get each episode delivered to your device via iTunes, Google Play, Stitcher Radio or RSS (non-iTunes feed).

 

Of value? Help by sharing on:

 

Have an opinion? Join the discussion in our LinkedIn group

Does anything produce more value in IoT than data science?

Click here if you have an opinion on this podcast or want to see the opinion of others

 


          25: Sexy Data Science and its Analysis of IoT      Cache   Translate Page      

See the complete show analysis notes at: http://www.iot-inc.com/data-science-internet-of-things-analytics-podcast

First it was Big Data and now it's the Internet of Things; the science of data is becoming increasingly sexy, maybe not Victoria's Secret sexy but it certainly gets the juices flowing for business leaders in the know. Hot or not? Definitely hot. In this episode of the IoT Business Show I speak with Ajit Jaokar about his passion, data science, and the application of machine learning, deep learning and predictive analytics in IoT. 

 

Read the rest of the show analysis notes at: http://www.iot-inc.com/data-science-internet-of-things-analytics-podcast

 

Help Spread the Word

If you have been enjoying this podcast for a while, please give it a review.  Click here to open iTunes where you can leave a one-click 5-star review or add your thoughts if you have more to say. If you use Stitcher Radio, you can do the same, here.  Thanks, iTunes reviews really help podcasts get noticed.

 

Ways to Subscribe to the IoT Business Show

Like what you hear?  Subscribe to get each episode delivered to your device via iTunes, Google Play, Stitcher Radio or RSS (non-iTunes feed).

 

Of value? Help by sharing on:

 

Have an opinion? Join the discussion in our LinkedIn group

Does anything produce more value in IoT than data science?

Click here if you have an opinion on this podcast or want to see the opinion of others

 


          Principal Data Scientist (HCE)      Cache   Translate Page      
GA-Atlanta, Principal Data Scientist (HCE) Join a team recognized for leadership, innovation and diversity Honeywell is a Fortune 100 company with global sales surpassing $40B and has been one of Fortune's Most Admired Companies for over a decade. Through innovation, the Company brings together the physical and digital world to tackle some of the toughest societal and business problems - making the world a mo
          Sr Data Scientist Engineer (HCE)      Cache   Translate Page      
GA-Atlanta, Sr Data Scientist Engineer (HCE) Innovate to solve the world's most important challenges Honeywell is a leading software industrial business, harnessing the power of cloud, mobile, data & analytics, IoT and design thinking. With the addition of our state-of-the-art software and innovation center in Midtown Atlanta, we will incubate, deploy and scale breakthrough offerings that will impact the live
          Data Scientist Intern - Smartease - Woodlands      Cache   Translate Page      
Completed at least 1 year of studies as part of a Bachelor/Master course in a highly quantitative field (Data Science or Machine Learning strongly preferred).... $1,200 a month
From Indeed - Thu, 25 Oct 2018 11:07:42 GMT - View all Woodlands jobs
          Best Project Oriented Video Training On MS SQL DBA (Calgary)      Cache   Translate Page      
SQL School is one of the best training institutes for Microsoft SQL Server Developer Training, SQL DBA Training, MSBI Training, Power BI Training, Azure Training, Data Science Training, Python Training, Hadoop Training, Tableau Training, Machine Learning ...
          Product Manager - Myant - Toronto, ON      Cache   Translate Page      
We are a cross-functional team solving big challenges at the intersection of fashion, electronics, software, and data science....
From Myant - Fri, 02 Nov 2018 19:47:23 GMT - View all Toronto, ON jobs
          Senior Product Manager - Myant - Toronto, ON      Cache   Translate Page      
We are a cross-functional team solving big challenges at the intersection of fashion, electronics, software, and data science....
From Myant - Sat, 25 Aug 2018 23:15:44 GMT - View all Toronto, ON jobs
          Network Administrator & IT Support - Myant - Etobicoke, ON      Cache   Translate Page      
We are a cross-functional team solving big challenges at the intersection of fashion, electronics, software, and data science....
From Myant - Tue, 24 Jul 2018 00:01:58 GMT - View all Etobicoke, ON jobs
          Data Scientist / Algorithm Developer - Myant - Etobicoke, ON      Cache   Translate Page      
The focus of this position is on Biometric Algorithms. We are a cross-functional team solving big challenges at the intersection of fashion, electronics,...
From Myant - Thu, 09 Aug 2018 23:34:39 GMT - View all Etobicoke, ON jobs
          HR Intern - IFFCO Group - Dubai      Cache   Translate Page      
We are looking for an HR Intern for UAE, 3 months contractual.
From Akhtaboot - Wed, 24 Oct 2018 16:39:27 GMT - View all Dubai jobs
          Making life easier for your data scientists      Cache   Translate Page      

Phil Simon chimes in with some tips on how to set these folks loose.

The post Making life easier for your data scientists appeared first on SAS Blogs.


          Differential Privacy Synthetic Data Challenge      Cache   Translate Page      
Deadline: 2019-05-06
Are you a mathematician or data scientist interested in a new challenge? Then join this exciting data privacy competition with up to $150,000 in prizes, where participants will create new or improved differentially private synthetic data generation tools. When a data set has important public value but contains sensitive personal information and can’t be directly shared with the public, privacy-preserving synthetic data tools solve the problem. By mathematically proving that a synthetic data generator satisfies the rigorous Differential Privacy guarantee, we can be confident that the synthetic data it produces won’t contain any information that can be traced back to specific individuals in the original data. The “Differential Privacy Synthetic Data Challenge” will entail a sequence of three marathon matches run on the Topcoder platform, asking contestants to design and implement their own synthetic data generation algorithms, mathematically prove their algorithm satisfies differential privacy, and then enter it to compete against others’ algorithms on empirical accuracy over real data, with the prospect of advancing research in the field of Differential Privacy.
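
For readers new to the underlying idea, the basic building block in most differentially private releases is noise calibrated to a query's sensitivity and to the privacy budget epsilon. The Python sketch below shows the classic Laplace mechanism on a counting query; the data, epsilon value and function names are hypothetical illustrations, not the challenge's required solution.

    import numpy as np

    def laplace_mechanism(true_value, sensitivity, epsilon, rng=None):
        """Return an epsilon-differentially-private estimate of a numeric query.

        Adds Laplace noise with scale sensitivity/epsilon, which satisfies
        epsilon-differential privacy for a query with the given L1 sensitivity.
        """
        if rng is None:
            rng = np.random.default_rng()
        return true_value + rng.laplace(loc=0.0, scale=sensitivity / epsilon)

    # Hypothetical example: privately release the size of a sensitive dataset.
    records = np.ones(4200)                      # stand-in for one row per individual
    true_count = records.sum()                   # counting queries have sensitivity 1
    private_count = laplace_mechanism(true_count, sensitivity=1.0, epsilon=0.5)
    print(round(private_count))

Synthetic data generators entered in the challenge are far more elaborate, but the same accounting idea, noise scaled to sensitivity and to the privacy budget, is what lets their authors prove the differential privacy guarantee.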
          Business Relations Manager (Office Hours / East / Up to S$3,500) - Personnel Recruit LLP - East Singapore      Cache   Translate Page      
Post Facebook and Google Ads. We are a dedicated team of traders, data scientists and software engineers working to revolutionize Robo Investing for the retail... $2,500 - $3,500 a month
From Indeed - Mon, 05 Nov 2018 05:35:45 GMT - View all East Singapore jobs
          Get the Most Out of Your CMDB Investment      Cache   Translate Page      

Auto-populate and maintain your CMDB with the real-time, contextualized data ScienceLogic captures from your monitored environment.

Use that intelligence to drive automation. Because, without accurate data, you can’t automate your incident, change, and other ITSM processes.



Request Free!

          Sr Content Developer, Azure Data Scientist - Microsoft - Redmond, WA      Cache   Translate Page      
Technical training, and/or instructional design knowledge or experience highly valued. Work with related Microsoft Technical and Business Groups to curate...
From Microsoft - Tue, 06 Nov 2018 12:37:40 GMT - View all Redmond, WA jobs
          Consultant, Business Analytics & Data Science - Lincoln Financial - Boston, MA
Phoenix, AZ (Arizona). Knowledge of and experience in applying statistical and machine learning techniques to real business data....
From Lincoln Financial Group - Fri, 02 Nov 2018 02:54:18 GMT - View all Boston, MA jobs
          Sr. Consultant, Business Analytics & Data Science - Lincoln Financial - Boston, MA
Phoenix, AZ (Arizona). Implements and maintains predictive and statistical models to identify business opportunities and solve complex business problems....
From Lincoln Financial Group - Tue, 16 Oct 2018 20:54:14 GMT - View all Boston, MA jobs
          Data Scientist - State Farm - Bloomington, IL
Bloomington, IL, Atlanta, GA, Dallas, TX, and Phoenix, AZ. Collaborates with business subject matter experts to select relevant sources of information....
From State Farm - Fri, 21 Sep 2018 22:31:56 GMT - View all Bloomington, IL jobs


