
          A Command Line Tool for Aggregating Precipitation Amounts Using Grid Coordinates
The Weather Data Tool is a command line interface application written in Python that facilitates the acquisition of time series precipitation data from national forecast and historical weather data providers, and generates an output file to be consumed by the EPA Storm Water Management Model (SWMM). The application performs lightweight processing and formatting of the downloaded data to generate an output file that contains only precipitation values greater than 0 and conforms to the specified DSI-3240 file format.
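As a rough illustration of the filtering step described above, here is a minimal sketch assuming the downloaded series lands in a pandas DataFrame; the column names and CSV input are assumptions, not the tool's actual code.

    import pandas as pd

    def nonzero_precipitation(csv_path):
        """Load an hourly precipitation series and keep only nonzero readings."""
        # "timestamp" and "precip_in" are hypothetical column names.
        df = pd.read_csv(csv_path, parse_dates=["timestamp"])
        return df[df["precip_in"] > 0]  # surviving rows would be formatted as DSI-3240 records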
          Data Team Lead - 500px - Toronto, ON
Working experience with python libraries for data wrangling, visualization, mining and modelling (e.g., Pandas, NumPy, Matplotlib, Seaborn, Statsmodels, SciKit...
From 500px - Tue, 09 Oct 2018 20:47:33 GMT - View all Toronto, ON jobs
          Senior Back-End Ruby Developer - Hashtag Paid - Toronto, ON
Experience using Python data science related packages (NumPy, Pandas, SciKit-Learn, NLTK, etc). Senior Back-End Ruby Developer....
From Hashtag Paid - Wed, 12 Sep 2018 21:07:55 GMT - View all Toronto, ON jobs
          Python Developer - The Jonah Group - Toronto, ON
Hands-on experience using Python object-oriented programming, with proficient understanding to use NumPy, Pandas and Matplotlib for financial data analytics....
From The Jonah Group - Wed, 22 Aug 2018 23:27:42 GMT - View all Toronto, ON jobs
          Principal Data Scientist | IT - G2 PLACEMENTS TI - Montréal, QC
Python (Sci-kit Learn, numpy, pandas, Tensorflow, Keras), R, Matlab, SQL. Principal Data Scientist *....
From Indeed - Sun, 07 Oct 2018 19:29:37 GMT - View all Montréal, QC jobs
          Scientifique des données - Data Scientist - Gameloft - Montréal, QC
Knowledge of Python, Pandas and NumPy required. Knowledge of Python, pandas and Numpy are must-haves....
From Gameloft - Sat, 06 Oct 2018 03:08:15 GMT - View all Montréal, QC jobs
          Software Engineer - Valital Technologies Inc. - Montréal, QC
Experience programming in Python libraries (numpy, pandas, matplotlib, sci-kit learn); "Valital Technologies Inc."....
From Indeed - Mon, 01 Oct 2018 15:32:18 GMT - View all Montréal, QC jobs
          Principal Data Scientist - DMA Global - Montréal, QC
Python (Sci-kit Learn, numpy, pandas, Tensorflow, Keras) R, Matlab, SQL. You will ideally have a Master's or PhD in Statistics, Mathematics, Computer Science,...
From Indeed - Thu, 13 Sep 2018 17:03:53 GMT - View all Montréal, QC jobs
          Warehouse building
Warehouse building
Warehouse building. Description: A highly detailed warehouse building exterior scene with a basic interior. Day/Night option via a Python script. A pack of all that you need in order to give amazing renders to your figures! No need to worry about a 'plastic look' or washed-out shadows. All the props are there, most of them can fly around, and all the lights are at the correct position and give realistic shadows. For best results on shadows, use Raytrace rendering. -Python script to switch between DAY and NIGHT mode. -Python script to merge with schoolblock streets. NOTE: the Python scripts work ONLY on the Windows platform. In short, this is what you get: – 2 scene files in .pz3 format – More than 20 props in 16 separate files in .pp2 format – 18 texture, bump and transparency maps – 21 lights – 2 Python scripts https://www.renderosity.com/mod/bcs/?ViewProduct=97732
          Comment on Your First Machine Learning Project in Python Step-By-Step by Jason Brownlee
I expect the code will require some modification before it can be applied to new problems. I recommend that you follow this process: https://machinelearningmastery.com/start-here/#process Perhaps some of these tutorials will help: https://machinelearningmastery.com/start-here/#python
          Comment on How to Model Volatility with ARCH and GARCH for Time Series Forecasting in Python by Jason Brownlee
ARCH models are only useful when you want to forecast volatility, not a value.
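A minimal sketch of that point using the Python arch package (my choice of library, not one named in the comment): the fitted model forecasts variance, not the next value of the series.

    import numpy as np
    from arch import arch_model

    returns = 100 * np.random.standard_normal(1000)      # stand-in for real returns
    result = arch_model(returns, vol="Garch", p=1, q=1).fit(disp="off")
    forecast = result.forecast(horizon=5)
    print(forecast.variance.iloc[-1])                    # forecast volatility (variance), not a value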
          Comment on How to Implement the Backpropagation Algorithm From Scratch In Python by Jason Brownlee
Thanks for the suggestion, sorry, I don't have the capacity to make these changes for you.
          Comment on How to Make Out-of-Sample Forecasts with ARIMA in Python by Jason Brownlee
It really depends if you need to seasonally adjust the data or not. Learn more here: https://machinelearningmastery.com/remove-trends-seasonality-difference-transform-python/
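As a tiny illustration of the seasonal adjustment mentioned in the linked tutorial, a 12-period difference removes yearly seasonality from monthly data (the series here is invented):

    import pandas as pd

    sales = pd.Series(range(48))                       # stand-in for a monthly series
    seasonally_differenced = sales.diff(12).dropna()   # subtract the value from 12 months earlier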
          Comment on Time Series Forecast Study with Python: Monthly Sales of French Champagne by Jason Brownlee
Why are you making that change? I don't follow?
          Test Automation Associate Manager - Accenture - Camp Douglas, WI
Java / J2EE, Groovy, Python, Ruby, JavaScript, C#, VB.NET. Testing *professionals utilize Accenture delivery assets to plan and implement testing and quality...
From Indeed - Thu, 20 Sep 2018 05:29:51 GMT - View all Camp Douglas, WI jobs
          The DevOps tool GitLab reaches version 11.1 with enhanced security controls
The DevOps platform based on the Git version control system, GitLab, has been upgraded to version 11.1. The first visible change in GitLab 11.1 is the security dashboard, which lets you view the latest security status of the default branch for each project. It enables security teams to see whether there is a potential problem and take appropriate action. The dashboard also allows you to dismiss false positives or create issues to resolve vulnerabilities, and the security team can adjust the criticality level of vulnerabilities. The dashboard sits in the Project menu, in the project's side navigation.

GitLab 11.1 offers better security control options. It also adds Static Application Security Testing (SAST) for Node.js, which detects code vulnerabilities when changes are pushed to a repository; SAST support was already available for C, C++, Go, Java and Python. Filtering by file name, path, and extension in advanced syntax search has also extended the code search capabilities.

Runner performance improves with version 11.1, for example through a fix to the pagination of webhook POST requests that ensures the page display is not interrupted while editing webhooks. Delivered with GitLab, the Runner tool, which executes CI/CD continuous integration and continuous delivery jobs, is also upgraded to version 11.1, providing better handling of Docker idle timeouts and the ability to sign packages in RPM and DEB formats. The configurable issue board is now accessible via the GitLab API, which allows for custom workflows, and transferring projects between namespaces via an API is now possible.

The user interface of GitLab 11.1 also benefits from several improvements. First, the developers revised the merge request widget. Second, the contribution analysis page is more readable. A merge request panel added to the Web IDE allows the merge request and the code to be displayed side by side in the IDE. A drop-down menu has been added to the group link at the top of the navigation to switch from one group to another, making groups easier to access. The milestone overview pages have been redesigned, a first step in simplification work to facilitate team monitoring. An issue can be marked "confidential" directly from the comment field. Finally, the Kubernetes page now uses tabs for each option when adding a cluster. GitLab version 11.1 is available as a demo for on-site or public cloud deployment.
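Since the notes above mention that transferring projects between namespaces is now possible via the API, here is a hedged sketch of that call against GitLab's v4 REST API; the endpoint and "namespace" parameter follow GitLab's documented project-transfer API, while the host, token, and IDs are placeholders.

    import requests

    GITLAB = "https://gitlab.example.com/api/v4"            # placeholder host
    HEADERS = {"PRIVATE-TOKEN": "<personal-access-token>"}  # placeholder token

    def transfer_project(project_id, target_namespace):
        """Move a project into another namespace (group or user)."""
        resp = requests.put(
            f"{GITLAB}/projects/{project_id}/transfer",
            headers=HEADERS,
            data={"namespace": target_namespace},
        )
        resp.raise_for_status()
        return resp.json()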
          Trying to assign a numeric value input as length
Forum: Python Posted By: Ouroborous Post Time: Oct 7th, 2018 at 04:58 PM
          C++ Developer - 261499 - Procom - Mississauga, ON
Good experience in other languages such as Java, Python, Perl, KDB, relational databases, or experience in data analytics is a huge plus....
From Procom - Tue, 09 Oct 2018 21:06:20 GMT - View all Mississauga, ON jobs
          Consultant - Vulnerability and Penetration Tester - Valencia IIP Advisors - Toronto, ON
Perl, Python, Ruby, Bash, C or C++, C#, PHP, iOS, SQL, or Java, including scripting and editing existing code....
From Indeed - Tue, 09 Oct 2018 20:42:42 GMT - View all Toronto, ON jobs
          Consultant - Vulnerability and Penetration Tester - Valencia IIP Advisors - Ottawa, ON
Perl, Python, Ruby, Bash, C or C++, C#, PHP, iOS, SQL, or Java, including scripting and editing existing code....
From Indeed - Tue, 09 Oct 2018 20:42:57 GMT - View all Ottawa, ON jobs
          SOLVESYS(US) - Make Changes to A Python Scraper
I have a Python scraper that gets data from a website, and I need to modify its procedure. Currently, each run adds everything to the database over and over again, creating the same entries once it is stopped and run again. I want it to check whether a specific field exists before writing data. Specifically, I need this scraper to check a field called 'updated date' on the source page and then modify the entry if needed; when run again, it needs to skip the item if this 'updated date' field is the same as the database entry. Thanks (Prize: 10)
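One possible shape for that check, sketched with sqlite3; the table and column names are invented, since the poster's actual schema is unknown.

    import sqlite3

    def upsert_item(conn, item_id, updated_date, payload):
        """Skip unchanged items; insert or refresh everything else."""
        row = conn.execute(
            "SELECT updated_date FROM items WHERE id = ?", (item_id,)
        ).fetchone()
        if row is not None and row[0] == updated_date:
            return  # 'updated date' matches the stored entry: skip it
        conn.execute(
            "INSERT INTO items (id, updated_date, payload) VALUES (?, ?, ?) "
            "ON CONFLICT(id) DO UPDATE SET updated_date = excluded.updated_date, "
            "payload = excluded.payload",  # needs SQLite >= 3.24 for upserts
            (item_id, updated_date, payload),
        )
        conn.commit()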
          GIS Specialist - West, Inc. - Cheyenne, WY
(Python, R, SQL, etc.). Working knowledge of SQL queries. Western EcoSystems Technology, Inc....
From West, Inc. - Sat, 18 Aug 2018 10:30:53 GMT - View all Cheyenne, WY jobs
          Joel Sweeney's Grave in Appomattox, Virginia

Joel Walker Sweeney gravemarker.

Although today the banjo is a mainstay of American country music, the prototype originated in West Africa and was brought to the New World as a result of the slave trade in the 1600s. Variants were widely known in the Caribbean and rural America, but Appomattox, Virginia, native Joel Walker Sweeney is credited with popularizing the modern version.

As the story goes, Sweeney was born in a log cabin in 1810 and learned to play a gourd banjo at age 13 from the slaves who worked on his father’s farm. A natural Virginia ham, he developed as an entertainer on the five-string banjo and performed around the state, at one point becoming a star in a regional circus. Success took him to New York and then overseas to Europe, where he is said to have performed for Queen Victoria.

In the 1840s, with his brothers and African-American performers, he formed one of the early minstrel shows, which became an international success. As the frailer member of the group, Joel died in 1860 from dropsy, but his younger brother Sam flourished as one of the United States’ most famous banjo players. Eventually discovered by Confederate General J.E.B. Stuart, Sam was attached to Stuart’s staff and rode behind the cavalryman (a la Monty Python’s Brave Sir Robin) playing his favorite tunes including “Jine the Cavalry.”

In the years since the Civil War, Joel Sweeney’s legend has grown, with him having been credited as the inventor of the banjo, the originator of the five-string banjo, the first white man to play the banjo, and the originator of the “clawhammer” style of playing the instrument. But these claims aren't exactly true. The more disturbing truth is that such stories were created to distance the banjo from its African roots to make it acceptable to white middle-class Americans.

Despite the unsavory elements of Sweeney’s legend, he was indisputably influential in the deliverance of the banjo from anonymity, and today, Appomattox hosts the annual Joel Sweeney Banjo & Old-Time Music Festival celebrating the “strum und drang” of America’s cultural heritage.


          A Summary of Fuzzing Techniques and a List of Fuzzing Tools
Copyright notice: this is an original post by the author; reposting without the author's permission is prohibited. https://blog.csdn.net/wcventure/article/details/82085251

First, I recommend reading the 2018 ACM Computing Surveys paper "Fuzzing: Art, Science, and Engineering":
https://github.com/wcventure/wcventure/blob/master/Paper/Fuzzing_Art_Science_and_Engineering.pdf
Second, I recommend the 2018 Cybersecurity paper "Fuzzing: a survey":
https://www.researchgate.net/publication/325577316_Fuzzing_a_survey
Both contain detailed introductions to fuzzing techniques and fuzzing tools.

1. What Is Fuzzing?

"Fuzz" originally means fine hair, or to blur or become blurry; the term was later adopted in software testing, where English speakers say "fuzzing" or "fuzz testing". This article uses "fuzzing".

Fuzzing can be traced back to 1950, when computer data was stored mainly on punched cards and programs read those cards as input for computation and output. When a program hit garbage cards, or discarded cards that did not fit, it could produce errors, misbehave, or even crash, and thus a bug was born. So fuzzing is not some novel technique; it is an old testing technique that arose together with the computer itself.

Fuzzing is a black-box (or grey-box) testing technique that discovers unknown vulnerabilities in a product or protocol by automatically generating and executing large numbers of random test cases. As computing has developed, fuzzing has kept developing along with it.

2. Is Fuzzing Useful?

Fuzzing is "fuzz" testing: as the name implies, the test cases are indeterminate and fuzzy.

Computing is a precise science and technology, and testing should be the same: for a given input there is a corresponding output, and both should be well defined. How can there be fuzzy, indeterminate test cases, and what exactly are they good for?

Why do indeterminate test cases exist at all? I think the main reasons are the following:

1. We cannot enumerate every input as a test case. When we write test cases we usually consider positive tests, negative tests, boundary values, overly long and overly short inputs, and other common scenarios, but there is no way to run through every possible input.

2. We cannot think of every possible abnormal scenario. Human brainpower is limited, and today's software depends more and more on operating systems, middleware, and third-party components; bugs in those systems, or bugs formed by their combination, cannot be foreseen by the developers and testers of any single project.

3. Fuzzing software cannot traverse every abnormal scenario either. Software has become so complex that the possible inputs can be considered an infinite set of combinations, so exhaustive traversal is impossible even with software; otherwise your release might never ship. Fuzzing in essence relies on random functions to generate random test cases, and is therefore indeterminate.

Can these indeterminate test cases produce the test results we want? Can they find real bugs?

1. Fuzzing is first of all an automation technique: software automatically executes relatively random test cases. Because execution is driven by software, testing efficiency is orders of magnitude higher than a human's. An excellent tester can execute at most a few dozen test cases in a day, rarely reaching 100, while a fuzzing tool can easily run hundreds of test cases in a few minutes.

2. Fuzzing in essence relies on random functions to generate random test cases. Randomness means no repetition and no predictability, so there can be unexpected inputs and results.

3. By the "law of large numbers" in probability theory, as long as we repeat often enough with enough randomness, extremely low-probability chance events are bound to occur. Fuzzing is a model application of the law of large numbers: enough test cases plus enough randomness make even deeply hidden, rarely surfacing bugs inevitable. The quick calculation below makes this concrete.
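A back-of-the-envelope check of that claim, assuming each random test case independently triggers a given bug with probability one in a million:

    p = 1e-6  # chance that a single random test case triggers the bug
    for n in (10**5, 10**6, 10**7):
        print(n, 1 - (1 - p) ** n)  # ~0.095, ~0.632, ~0.99995: near-certain by 10 million runs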

Today, fuzzing is one of the most effective techniques in software testing and vulnerability discovery. It is particularly well suited to finding 0-day vulnerabilities and is the first-choice technique for many hackers and black hats when hunting for software vulnerabilities. Fuzzing cannot directly achieve an intrusion, but it makes it very easy to find vulnerabilities in software or systems; using those as a breakthrough point for deeper analysis makes it much easier to find an intrusion path. That is why hackers like fuzzing.

3. Generation-Based and Mutation-Based Fuzzing Algorithms

A fuzzing engine generates test cases in two main ways:
1) Mutation-based: generate new test cases by mutating known data samples;
2) Generation-based: model the known protocol or interface specification and generate test cases from the model.
Most fuzzing tools combine both approaches.

The core requirement of a mutation-based algorithm is to learn the existing data model: starting from existing data and an analysis of it, it generates randomized data as test cases, as sketched below.
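To make the mutation-based idea concrete, here is a minimal Python loop: flip a few random bits in a known-good seed and hand each mutant to the target. The target command is a placeholder; real fuzzers add coverage feedback, scheduling, and crash triage.

    import random
    import subprocess
    import tempfile

    def mutate(seed, n_flips=8):
        """Flip a few random bits in a known-good seed input."""
        buf = bytearray(seed)
        for _ in range(n_flips):
            buf[random.randrange(len(buf))] ^= 1 << random.randrange(8)
        return bytes(buf)

    def fuzz_once(seed, target_cmd):
        """Run the target program on one mutated input file."""
        with tempfile.NamedTemporaryFile(suffix=".bin") as f:
            f.write(mutate(seed))
            f.flush()
            return subprocess.run(target_cmd + [f.name]).returncode  # e.g. -11 on SIGSEGV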

4. The State of the Art: AFL

AFL is the best-known mutation-based fuzzer.
Here is some material on the state-of-the-art AFL:

  1. american fuzzy lop (2.52b)
    http://lcamtuf.coredump.cx/afl/
  2. Notes on AFL's internal implementation details
    http://rk700.github.io/2017/12/28/afl-internals/
  3. The afl-fuzz technical whitepaper
    https://blog.csdn.net/gengzhikui1992/article/details/50844857
  4. How to run a complete fuzzing session with AFL
    https://blog.csdn.net/abcdyzhang/article/details/53487683
  5. AFL (American Fuzzy Lop) implementation details and file mutation
    https://paper.seebug.org/496/
  6. Hands-on fuzzing with libFuzzer
    https://www.secpulse.com/archives/71898.html

[Figure 1: program-analysis techniques: static analysis, dynamic analysis, symbolic execution, fuzzing.]
[Table T1: generation-based fuzzing vs. mutation-based fuzzing.]
[Table T2: white-box, grey-box, and black-box fuzzing.]
[Table T3: key points of fuzzing techniques.]
[Table T4: caption truncated in the original.]
[Table T5: citation relationships among fuzzing-tool papers to date; a classification and history of fuzzing tools.]
[Figure F1: a survey of fuzzing tools, including a well-organized summary chart.]
[Figure F2: no caption in the original.]

Finally, here is a list of some open-source fuzzing tools.
The original list comes from [https://www.peerlyst.com/posts/resource-open-source-fuzzers-list], with 2018 additions such as CollAFL and SnowFuzz:
1. Open-source fuzzers
2. Fuzzing harnesses and frameworks
3. Other fuzzers that are free, but compared with the open-source ones barely worth mentioning
4. Fuzzing payloads
5. Blogs that will help you understand fuzzing better
6. Other blogs and resources about fuzzing
7. Commercial fuzzers

1. Open-Source Fuzzers

CollAFL
http://chao.100871.net/papers/oakland18.pdf
A path-sensitive fuzzer that fixes the bitmap path-collision problem in AFL, and proposes a seed-selection strategy that raises coverage faster.

SnowFuzz
https://arxiv.org/pdf/1708.08437.pdf

VUzzer
http://www.cs.vu.nl//~giuffrida/papers/vuzzer-ndss-2017.pdf
An application-aware, self-evolving fuzzer. In this paper we propose an application-aware evolutionary fuzzing strategy that needs no prior knowledge of the application or the input format. To maximize coverage and reach deeper paths, we leverage control- and data-flow features based on static and dynamic analysis to infer fundamental properties of the application. This enables much faster generation of interesting inputs compared with application-agnostic approaches. We implemented our fuzzing strategy in VUzzer and evaluated it on three different datasets: DARPA Cyber Grand Challenge (CGC) binaries, a set of real-world applications (binary input parsers), and the recently released LAVA dataset.

Afl-fuzz (American Fuzzy Lop)
http://lcamtuf.coredump.cx/afl/
Afl-fuzz is a security-oriented fuzzer that employs a novel approach (compile-time instrumentation and genetic algorithms) to automatically discover clean, interesting test cases that trigger new internal states in the targeted binary. This substantially improves the functional coverage of the fuzzed code. The compact synthetic corpora produced by the tool can also be used to seed other, more labor- or resource-intensive testing regimes.
Compared with other instrumented fuzzers, afl-fuzz is designed to be practical: it has modest performance overhead, uses a variety of highly effective fuzzing strategies and effort-minimization tricks, requires essentially no configuration, and seamlessly handles complex real-world cases such as common image parsing or file compression libraries.

Filebuster
A very fast and flexible web fuzzer.

TriforceAFL
AFL/QEMU fuzzing with full-system emulation. This is a patched version of AFL that supports full-system fuzzing using QEMU. The bundled QEMU has been updated to allow branch tracing when running an x86_64 system emulator. Additional instructions have been added to start AFL's forkserver, perform fuzzing setup, and mark the start and stop of test cases.

Nightmare:
https://github.com/joxeankoret/nightmare
A distributed fuzzing testing suite with web administration.

Grr
A high-throughput fuzzer and emulator for DECREE binaries.

Randy:
http://ptrace-security.com/blog/randy-random-based-fuzzer-in-python/
A random-based fuzzer in Python.

IFuzzer
An evolutionary interpreter fuzzer.

Dizzy:
https://github.com/ernw/dizzy
A Python-based fuzzing framework:
1. Can send at layer 2 as well as at upper layers (TCP/UDP/SCTP)
2. Can handle odd-length packet fields (no need to match byte boundaries, so even single flags or 7-bit-long fields can be represented and fuzzed)
3. Very easy protocol-definition syntax
4. Can do fully stateful multi-packet fuzzing, and can use data received from the target in responses

Address Sanitizer:
https://github.com/Google/sanitizers
AddressSanitizer, ThreadSanitizer, and MemorySanitizer.

Diffy:
https://github.com/twitter/diffy
Use Diffy to find potential bugs in your services.

Wfuzz:
https://github.com/xmendez/wfuzz
A web application fuzzer: http://www.edge-security.com/wfuzz.php

Go-fuzz:
https://github.com/Google/gofuzz
Fuzz testing for Go.

Sulley:
https://github.com/OpenRCE/sulley
Sulley is an actively developed fuzzing engine and fuzz testing framework consisting of multiple extensible components. Sulley (IMHO) exceeds the capabilities of most previously published fuzzing technologies, commercial and public domain. The goal of the framework is to simplify not only data representation but also data transmission and instrumentation. Sulley is named after the creature from Monsters, Inc., because, well, he is fuzzy. Written in Python.

Sulley_l2:
http://ernw.de/download/sulley_l2.tar.bz2
Some may remember sulley_l2, released in 2008: a modified version of the Sulley fuzzing framework, enhanced with Layer 2 sending capabilities and a bunch of (L2) fuzzing scripts. All the blinking, rebooting, and mem-corrupting drew some attention. Since then we have kept writing and using these fuzzing scripts, so the collection of holes it finds has grown.

CERT Basic Fuzzing Framework (BFF), for Linux and OS X
https://github.com/CERTCC-Vulnerability-Analysis/certfuzz
http://www.cert.org/vulnerability-analysis/tools/bff.cfm
The CERT Basic Fuzzing Framework (BFF) is a software testing tool that finds defects in applications running on Linux and Mac OS X. BFF performs mutational fuzzing on software that consumes file input (mutational fuzzing takes well-formed input data and corrupts it in various ways, looking for cases that cause crashes). BFF automatically collects test cases that cause software to crash in unique ways, along with debugging information about the crashes. The goal of BFF is to minimize the effort required of software vendors and security researchers to efficiently discover and analyze security vulnerabilities found via fuzzing.

CERT Failure Observation Engine (FOE), for Windows
http://www.cert.org/vulnerability-analysis/tools/foe.cfm
https://github.com/CERTCC-Vulnerability-Analysis/certfuzz
The CERT Failure Observation Engine (FOE) is a software testing tool that finds defects in applications running on the Windows platform. FOE performs mutational fuzzing on software that consumes file input (mutational fuzzing takes well-formed input data and corrupts it in various ways, looking for cases that cause crashes). FOE automatically collects test cases that cause software to crash in unique ways, along with debugging information about the crashes. The goal of FOE is to minimize the effort required of software vendors and security researchers to efficiently discover and analyze security vulnerabilities found via fuzzing.

Dranzer, for ActiveX controls
https://github.com/CERTCC-Vulnerability-Analysis/dranzer
Dranzer is a tool that enables users to examine effective techniques for fuzz testing ActiveX controls.

Radamsa, a general-purpose fuzzer
https://github.com/aoh/radamsa
Radamsa is a test-case generator for robustness testing, also known as a fuzzer. It can be used to test how well a program withstands malformed and potentially malicious input. It works by producing files that are interestingly different from the files it is given as examples, and then feeding the modified files to the target program, either directly or via a script. Radamsa's main selling points over other fuzzers are that it is very easy to run on most machines, very easy to script from the command line, and it has already been used to find a range of security problems in programs you may be using right now.

zzuf, an application fuzzer
https://github.com/samhocevar/zzuf
zzuf is a transparent application input fuzzer. It works by intercepting file operations and changing random bits in the program's input. zzuf's behavior is deterministic, making it easy to reproduce bugs. For instructions and examples on how to use zzuf, see the man page and the website http://caca.zoy.org/wiki/zzuf

Backfuzz
https://github.com/localh0t/backfuzz
Backfuzz is a fuzzing tool written in Python for various protocols (FTP, HTTP, IMAP, etc.). The general idea is that the script ships with several predefined functions, so anyone who wants to write a plugin for another protocol can do so in a few lines.

KEMUfuzzer
https://github.com/jrmuizel/kemufuzzer
KEmuFuzzer is a tool for testing system virtual machines via emulation or direct native execution. KEmuFuzzer currently supports BOCHS, QEMU, VMware, and VirtualBox.

Pathgrind
https://github.com/codelion/pathgrind
Pathgrind uses path-based dynamic analysis to fuzz Linux/Unix binaries. It is based on Valgrind and written in Python.

Wadi-fuzzer
https://www.sensepost.com/blog/2015/wadi-fuzzer/
https://gitlab.sensepost.com/saif/DOM-Fuzzer
Wadi is a web-browser fuzzer based on a grammar. The grammar describes how a browser should handle web content, and Wadi turns that around, using the grammar to break browsers.
Wadi is a fuzzing module for the NodeFuzz fuzzing harness and leverages AddressSanitizer (ASan) for testing on Linux and Mac OS X.
The World Wide Web Consortium (W3C) is an international organization that develops open standards to ensure the long-term growth of the Web. W3C standards allow us to search for grammars and use them in our test cases.

LibFuzzer, Clang-format-fuzzer, clang-fuzzer
http://llvm.org/docs/LibFuzzer.html
http://llvm.org/viewvc/llvm-project/cfe/trunk/tools/clang-format/fuzzer/ClangFormatFuzzer.cpp?view=markup
http://llvm.org/viewvc/llvm-project/cfe/trunk/tools/clang-fuzzer/ClangFuzzer.cpp?view=markup
We implemented two fuzzers on top of LibFuzzer: clang-format-fuzzer and clang-fuzzer. Clang-format is mostly a lexer, so feeding it random bytes works perfectly and has yielded more than 20 bugs. Clang, however, is far more than a lexer; feeding it random bytes barely scratches the surface, so in addition to testing random bytes we also fuzzed Clang in a token-aware mode. Bugs were found in both modes; some had previously been detected by AFL, others had not. We ran this fuzzer with AddressSanitizer and found that some of the bugs were not easy to detect without it.

Perf-fuzzer
http://www.eece.maine.edu/~vweaver/projects/perf_events/validation/
https://github.com/deater/perf_event_tests
http://web.eece.maine.edu/~vweaver/projects/perf_events/fuzzer/
A test suite for the Linux perf_event subsystem.

HTTP/2 Fuzzer
https://github.com/c0nrad/http2fuzz
An HTTP/2 fuzzer built in Golang.

QuickFuzz
http://quickfuzz.org/
QuickFuzz is a grammar fuzzer built from QuickCheck, Template Haskell, and Hackage-specific libraries that generates many complex file formats such as JPEG, PNG, SVG, XML, ZIP, TAR, and more! QuickFuzz is open source (GPL3) and can work with other bug-detection tools such as zzuf, radamsa, honggfuzz, and valgrind.

SymFuzz
https://github.com/maurer/symfuzz
http://ieeexplore.IEEE.org/xpls/abs_all.jsp?arnumber=7163057
Abstract: We present an algorithm design to maximize the number of bugs found in black-box mutational fuzzing, given a program and a seed input. The key intuition is to leverage white-box symbolic analysis on an execution trace for the given program-seed pair to detect dependencies among the bit positions of the input, and then use those dependencies to compute a probabilistically optimal mutation ratio for that program-seed pair. Our results are promising: using the same amount of fuzzing time, we found on average 38.6% more bugs than three previous fuzzers across 8 applications.

OFuzz
https://github.com/sangkilc/ofuzz
OFuzz is a fuzzing platform written in OCaml. OFuzz currently focuses on file-processing applications running on *nix platforms. The primary design principle of OFuzz is flexibility: it must be easy to add or replace fuzzing components (crash triage modules, test-case generators, etc.) or algorithms (mutation algorithms, scheduling algorithms).

Bed
http://www.snake-basket.de/
A network protocol fuzzer. BED is a program designed to check daemons for potential buffer overflows, format-string bugs, and the like.

Neural Fuzzer
https://cifasis.github.io/neural-fuzzer/
The Neural fuzzer is an experimental fuzzer designed to use state-of-the-art machine learning to learn from an initial set of files. It works in two phases: training and generation.

Pulsar
https://github.com/hgascon/pulsar
Protocol learning, simulation, and stateful fuzzing.
Pulsar is a network fuzzer with automatic protocol-learning and simulation capabilities. The tool models a protocol with machine-learning techniques such as clustering and hidden Markov models. These models can be used to simulate communication between Pulsar and a real client or server; combined with a series of fuzzing primitives, the generated messages allow an unknown protocol's implementation to be tested in the deeper states of its protocol state machine.

D-bus fuzzer:
https://github.com/matusmarhefka/dfuzzer
dfuzzer is a D-Bus fuzzer, a tool for fuzz testing processes that communicate over D-Bus. It can test processes connected to both the session bus and the system bus daemon. The fuzzer works as a client: it first connects to the bus daemon and then traverses and fuzz-tests all methods provided by a D-Bus service.

Choronzon
https://census-labs.com/news/2016/07/20/choronzon-public-release/
Choronzon is an evolutionary fuzzing tool. It tries to imitate the evolutionary process in order to keep producing better results; to achieve this, it has an evaluation system that classifies which fuzzed files are interesting and which should be discarded.
Furthermore, Choronzon is a knowledge-based fuzzer: it uses user-defined information to read and write files of the target file format. To become familiar with Choronzon's terminology, consider that each file is represented by a chromosome. Users describe the basic structure of the file format under consideration, preferably a high-level overview rather than every detail and aspect. Each of those user-defined basic structures is considered a gene; each chromosome contains a tree of genes and is able to build the corresponding file from it.

Exploitable
'exploitable' is a GDB extension that classifies Linux application bugs by severity. The extension examines the state of a crashed Linux application and outputs a summary of how difficult it would be for an attacker to exploit the underlying software bug to gain control of the system. The extension can be used to prioritize bugs for software developers so that they can address the most severe ones first.
The extension implements a GDB command called 'exploitable'. The command uses heuristics to describe the exploitability of the state of the application currently being debugged in GDB. The command is designed for Linux platforms and GDB versions that include the GDB Python API. Note that at this time the command will not run correctly on core-file targets.

Hodor

We wanted to design a generic fuzzer that can be configured to use known-good inputs and delimiters in order to fuzz specific locations, sitting somewhere between a completely dumb fuzzer and something smarter, with far less effort than implementing a proper smart fuzzer.

BrundleFuzz
https://github.com/carlosgprado/BrundleFuzz
BrundleFuzz is a distributed fuzzer for Windows and Linux that uses dynamic binary instrumentation.

Netzob
https://www.netzob.org/
An open-source tool for reverse engineering, traffic generation, and fuzzing of communication protocols.

PassiveFuzzFrameworkOSX
This framework fuzzes OS X kernel vulnerabilities based on a passive inline-hook mechanism running in kernel mode.

syntribos
A Python API security-testing tool from the OpenStack Security Group.

honggfuzz
http://google.github.io/honggfuzz/
A general-purpose, easy-to-use fuzzer with interesting analysis options. Supports feedback-driven fuzzing based on code coverage.

dotdotpwn
http://dotdotpwn.blogspot.com/
A directory-traversal fuzzer.

KernelFuzzer
A cross-platform kernel fuzzer framework. DEF CON 24 video:
https://www.youtube.com/watch?v=M8ThCIfVXow

PyJFuzz
PyJFuzz - Python JSON Fuzzer
PyJFuzz is a small, extensible, ready-to-use framework for fuzzing JSON inputs, such as mobile REST API endpoints, JSON implementations, browsers, CLI executables, and more.

RamFuzz
A fuzzer for individual method parameters.

EMFFuzzer
An enhanced metafile fuzzer based on the Peach fuzzing framework.

js-fuzz
An AFL-inspired genetic fuzz tester for JavaScript.

syzkaller
syzkaller is an unsupervised, coverage-guided Linux system-call fuzzer.

2. Fuzzing Harnesses and Frameworks That Make Fuzzers Better

FuzzFlow
Fuzzflow is a distributed fuzzing management framework from Cisco Talos that offers virtual-machine management, fuzz-job configuration, pluggable mutation engines, pre/post mutation scripts, crash collection, and pluggable crash analysis.

fuzzinator
Fuzzinator is a fuzz-testing framework that helps you automate the tasks usually needed in a fuzz session:
run your favorite test generator and feed the test cases to the system under test,
catch and save unique issues,
reduce failing test cases,
ease reporting issues in bug trackers (e.g., Bugzilla or GitHub),
update the SUT periodically if needed,
schedule multiple SUTs and generators without overloading your workstation.

Fuzzlabs
https://github.com/DCNWS/FuzzLabs
FuzzLabs is a modular fuzzing framework written in Python. It uses a modified version of the amazing Sulley fuzzing framework as its core engine. FuzzLabs is still under development.

Nodefuzz
https://github.com/attekett/NodeFuzz
For Linux and Mac OS X. NodeFuzz is a fuzzer harness for web browsers and browser-like applications. There are two main ideas behind NodeFuzz: first, to create a simple and fast way to fuzz different browsers; second, to provide a harness that can easily be extended with new test-case generators and client instrumentations without modifying the core.

Grinder
https://github.com/stephenfewer/grinder
For Windows.
Grinder is a system to automate the fuzzing of web browsers and the management of large numbers of crashes.

Kitty
https://github.com/Cisco-sas/kitty
Kitty is an open-source modular and extensible fuzzing framework written in Python, inspired by OpenRCE's Sulley and Michael Eddington's (now Deja Vu Security's) Peach Fuzzer.

Peach
http://community.peachfuzzer.com/
https://github.com/MozillaSecurity/peach
Peach is a SmartFuzzer capable of performing both generation-based and mutation-based fuzzing.

3. Fuzzers That Are Free but Not Open Source

SDL MiniFuzz File Fuzzer
https://www.Microsoft.com/en-us/download/details.aspx?id=21769
For Windows. SDL MiniFuzz File Fuzzer is a basic file-fuzzing tool designed to ease the adoption of fuzz testing by non-security developers who are unfamiliar with file-fuzzing tools or have never used them in their current software development processes.

Rfuzz
http://rfuzz.rubyforge.org/index.html
RFuzz is a Ruby library that lets you easily test web applications from the outside using a fast HttpClient and the wicked evil RandomGenerator, allowing ordinary programmers to use advanced fuzzing techniques every day.

Spike
http://www.immunitysec.com/downloads/SPIKE2.9.tgz
SPIKE is an API framework that allows you to write fuzzers.

Regex Fuzzer
http://go.microsoft.com/?linkid=9751929
SDL Regex Fuzzer is a verification tool that helps test regular expressions for potential denial-of-service vulnerabilities. Regular-expression patterns containing certain clauses that execute in exponential time (for example, grouping clauses with repetition that are themselves repeated) can be exploited by attackers to cause a denial-of-service (DoS) condition. SDL Regex Fuzzer integrates with the SDL Process Template and the MSF-Agile+SDL Process Template to help users track and eliminate any detected regex vulnerabilities in their projects.

4. Blogs That Will Help You Fuzz Better
Fuzzing workflows: a fuzz job from start to finish with AFL (a complete fuzz job, by foxglovesecurity)
http://foxglovesecurity.com/2016/03/15/fuzzing-workflows-a-fuzz-job-from-start-to-finish/

Fuzz smarter, not harder: an AFL primer from BSidesSF 2016
https://www.peerlyst.com/posts/bsidessf-2016-recap-of-fuzz-smarter-not-harder-an-afl-primer-claus-cramon

Fuzzing with AFL is an art
Fuzzing nginx with American Fuzzy Lop
You can leave suggestions in the comments here or in this Google Doc:
https://docs.google.com/document/d/17pZxfs8hXBCnhfHoKfJ7JteGziNB2V_VshsVxmNRx6U/edit?usp=sharing

BSidesLisbon 2016 keynote: The Smart Fuzzer Revolution
Windows kernel fuzzing for beginners - Ben Nagy

5. Other Fuzzing Blogs and Resources
Fuzzing packages with recycled compiler transformations
Google has introduced OSS-Fuzz (thanks to Dinko Cherkezov), a project aimed at continuously fuzzing open-source projects:
OSS-Fuzz is currently in testing and will soon accept suggestions for candidate open-source projects. For a project to be accepted into OSS-Fuzz, it needs to have a large user base and/or be critical to global IT infrastructure, a general heuristic that we intentionally leave open to interpretation at this early stage. See the details and the instructions on how to apply.
Once a project is signed up for OSS-Fuzz, newly reported bugs automatically land in our tracker and are publicly disclosed 90 days later (see the details here). This is in line with industry best practice and improves end-user security and stability by getting patches to users faster.
Help us make sure this program truly serves the open-source community and the Internet that depends on this critical software: contribute and leave your feedback on GitHub.


6. Commercial Fuzzers

beSTORM from Beyond Security
http://www.beyondsecurity.com/bestorm_and_the_SDL.html
Admin edit: find more awesome Peerlyst community-contributed resources in the resource directory here.

7. On Browser Fuzzing

Skyfire: a data-driven seed-generation tool for fuzzing
https://www.inforsec.org/wp/?p=2678
https://www.ieee-security.org/TC/SP2017/papers/42.pdf

A getting-started guide to fuzzing Chrome V8 with libFuzzer
http://www.4hou.com/info/news/6191.html


          310-btc/README.md at 3f91209e92c386102d4a4c20fe327bf4754b519d · ipassala/310-btc · GitHub

Introduction

So there's a guy who calls himself "pip", offering 310 BitCoins (BTC) to whoever can solve this image riddle. We're going to try to solve it, but because we're almost a week behind, first we'll replicate other people's findings and theories. Shout-outs to r/crypto_jedi_ninja for compiling the following list of theories and work already cooked up:

  • Creator calls himself "pip", some people believe this is hint to this riddle-problem.
  • According to pip: Can only partly be solved by printing the image and not using software.
  • The number 310 is the prize and it's written by hand on the image. There's also a QR code found on row 310 in the image. [As we get here](TODO: LINK TO THE MOMENT WE MAKE THE QR HE USES LSB FROM REDS).
  • Pip used Least Significant Bit (LSB) encoding to hide information in row 310; he could be using it in more places (see the sketch after this list).
  • According to pip, he expects you to message the SHA256 on a single line when you get to this page.
  • Some inferred you need to find three smaller keys with small rewards to access the big one. Those keys are 0.31, 0.2 and 0.1 BTC, a guy called Lustre was confirmed by pip to have found the 0.1 BTC key.
  • The values for each key could be another hint: 0.31, 0.2 and 0.1.
  • The original image used can be seen here (note: I don't really know what that means hehe xd)
  • Some say it could be a mosaic puzzle, others that it could be based on voronoi diagrams.
  • Curved lines and circle on the image may be alluding to this older riddle which used Bezier curves.
  • There are 21 characters in the image: L 3 C E O 2 7 5 K O D 8 9 9 D 4 F A 1 F 6 4
  • There're 3 letter groups in a grid, they could be words from the Bip-18 word algorithm or something else. The grid contains the following: 511 B20 332 328 410 530 | 22B 0FE 52E D0F 7A1 65B | 52C 7E7 511 2F6 56F C4B.
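A small sketch of the LSB theory from the list above: read the least significant bit of the red channel across pixel row 310 and pack the bits into bytes. This merely reproduces the community theory under stated assumptions; the file name is a placeholder, not a confirmed solution.

    from PIL import Image

    def lsb_of_red_row(path, row=310):
        """Collect red-channel LSBs along one pixel row and pack them into bytes."""
        img = Image.open(path).convert("RGB")
        bits = [img.getpixel((x, row))[0] & 1 for x in range(img.width)]
        out = bytearray()
        for i in range(0, len(bits) - 7, 8):
            out.append(int("".join(str(b) for b in bits[i:i + 8]), 2))
        return bytes(out)

    # print(lsb_of_red_row("riddle.png"))  # "riddle.png" is a placeholder name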

That's all, be sure to check the reddit in case pip drops another hint.

This is the riddle image

Getting Started

If you don't know much about Python or Jupyter, it's highly recommended to paste this repository link into Binder and interact with the code. You can do that by pressing this pink button: Binder

Contributing

There's a limit that breaks desire.


          Ping An Life Insurance of China Is Hiring Security Talent

Please send your resume to: zhangzhimin561@pingan.com.cn (please mention that you found this via Anquanke)

Company Profile

Ping An Life Insurance Company of China, Ltd. was founded in 2002 and is a key member of Ping An Insurance (Group) Company of China, Ltd. As of December 31, 2017, Ping An Life had registered capital of 33.8 billion RMB, 42 branch offices nationwide (including 7 telephone sales centers), more than 3,300 business outlets, and 1.386 million life-insurance agents. The company's individual-agent, bancassurance, telesales, and Internet channels advance together and develop in synergy, with operations management and customer experience that lead the market. Relying on the Group's "finance + technology" twin-engine strategy, and on the premise of compliant operation and risk prevention, the company is entering a new era of platform-based operation, continuously strengthening its two core competencies of products and technology, and driving sustained, healthy, and stable growth in embedded value and scale.

Work Location

Location: Ping An Finance Center, Shenzhen (I'll be waiting for you on the 118th floor)

How to Apply

Please send your resume to: zhangzhimin561@pingan.com.cn (please mention that you found this via Anquanke)

Job Details

Penetration Testing Security Engineer

Responsibilities:
1. Perform security reviews and penetration tests of business systems, and produce remediation plans for major vulnerabilities that are found;
2. Assist development and other departments with vulnerability remediation and system hardening;
3. Regularly provide security training for developers and others to raise security awareness.
Requirements:
1. Bachelor's degree or above, with at least 1 year of information-security work experience;
2. Familiar with common web attack methods such as SQL injection, XSS, command execution, SSRF, etc.;
3. Familiar with a scripting language such as Python or Ruby, and able to write vulnerability PoCs;
4. Familiar with common Java frameworks; Java development experience is preferred;
5. Having published quality articles or vulnerabilities on security sites is preferred.

Salary: 12k-30k (negotiable for outstanding candidates)

Benefits

Work perk: you get to use a Mac from day one!
Benefits: holiday allowances (at least 6,080 RMB/year) and other perks, paid leave (5-15 days), social insurance and housing fund, etc.


          Technical Support Analyst - CaseWare - Ottawa, ON
Demonstrated skill and experience in scripting, specifically VB Script or Python is strongly desired. CaseWare is a global software company providing solutions...
From CaseWare - Tue, 09 Oct 2018 21:47:54 GMT - View all Ottawa, ON jobs
          Modeling thinning effects on fire behavior with STANDFIRE
Primary Station: 
RMRS
Publication Series: 
Scientific Journal (JRNL)

Abstract

  • Key message: We describe a modeling system that enables detailed, 3D fire simulations in forest fuels. Using data from three sites, we analyze thinning fuel treatments on fire behavior and fire effects and compare outputs with a more commonly used model.
  • Context: Thinning is considered useful in altering fire behavior, reducing fire severity, and restoring resilient ecosystems. Yet, few tools currently exist that enable detailed analysis of such efforts.
  • Aims: The study aims to describe and demonstrate a new modeling system. A second goal is to put its capabilities in context of previous work through comparisons with established models.
  • Methods: The modeling system, built in Python and Java, uses data from a widely used forest model to develop spatially explicit fuel inputs to two 3D physics-based fire models. Using forest data from three sites in Montana, USA, we explore effects of thinning on fire behavior and fire effects and compare model outputs.
  • Results: The study demonstrates new capabilities in assessing fire behavior and fire effects changes from thinning. While both models showed some increases in fire behavior relating to higher winds within the stand following thinning, results were quite different in terms of tree mortality. These different outcomes illustrate the need for continuing refinement of decision support tools for forest management.
  • Conclusion: This system enables researchers and managers to use measured forest fuel data in dynamic, 3D fire simulations, improving capabilities for quantitative assessment of fuel treatments, and facilitating further refinement in physics-based fire modeling.
National Strategic Program Areas: 
Resource Management and Use; Wildland Fire and Fuels
Publication Year: 
2018

Citation

Parsons, Russell A.; Pimont, Francois; Wells, Lucas; Cohn, Greg; Jolly, W. Matt; de Coligny, Francois; Rigolot, Eric; Dupuy, Jean-Luc; Mell, William; Linn, Rodman R. 2018. Modeling thinning effects on fire behavior with STANDFIRE. Annals of Forest Science. doi: 10.1007/s13595-017-0686-2.
Complete title: 
Modeling thinning effects on fire behavior with STANDFIRE
Product id: 
93089
Entry status: 
Published to Web
Doi: 
10.1007/s13595-017-0686-2
Modified on: 
Friday, October 5, 2018 - 14:57
Pimont, Francois
Wells, Lucas
de Coligny, Francois
Rigolot, Eric
Dupuy, Jean-Luc
Mell, William
Linn, Rodman R.
treesearch_pub_id: 
57206
Source:
Annals of Forest Science. doi: 10.1007/s13595-017-0686-2.
referee_status_id: 
1

          Python Online Training
Open Source Technologies has been one of the best in the market since its inception. Among the main features of Open Source Technologies are its excellent infrastructure, real-time assignments, and top-notch faculty. One of the best trainings provided by Open Source Technologies is Python Training.
          The Author of 'Minna no Python' Considers Why Python Became Popular

With the seminar "The Right Way to Learn Programming" (organized by Kadokawa ASCII Research Laboratories), where you can learn Python and the basics of programming, coming up on Saturday, October 13, the seminar's instructor, 'Minna no Python' author Atsushi Shibata, analyzed why Python's popularity keeps rising, drawing on examples of its use in companies.



          Fabio Nelli - Python Data Analytics (2nd edition)
In a world increasingly centered on information technology, the amount of information grows not by the day but by the hour; it is produced, processed, and stored every day. This data can come from automated detection systems, sensors, and scientific instruments, or from any person, for example when they withdraw money from a bank or make a purchase, write a blog, or post on social networks. What is this data? It is not yet information: in a shapeless stream of bytes it is hard at first glance to grasp its essence, unless of course it is strictly formalized, for example as a number, a word, a time... Information is really the result of processing, and it can be used in various ways. This process of extracting information from raw data is called data analysis. The goal of data analysis is to extract the necessary information, formalized in some respect, that cannot be read or understood from the raw source, and that can then be used to investigate that aspect, making it possible to forecast situations connected with it as it evolves over time. Learn the latest Python tools and techniques to help you navigate the world of data collection and analysis. You will look at scientific computing with NumPy, visualization with matplotlib, and machine learning with scikit-learn. Your familiarity with these methods deepens with each new topic: analyzing social-media data, analyzing images with OpenCV, diving into machine-learning libraries. The author skillfully demonstrates the use of Python for data processing, management, and retrieval. The book is aimed above all at experienced Python specialists who need to learn about Python's capabilities for data analysis.
          Lead Software Developer - Intelligent Solution, Inc. - Morgantown, WV
Be Fluent in VB, Python, and C#. Experience with User Interface Development in VB and Python. Intelligent Solutions, Inc....
From Indeed - Wed, 19 Sep 2018 16:29:11 GMT - View all Morgantown, WV jobs
          Comment on Our Sentiments Exactly by Jonathan
You said:

"Is Jonathan claiming to be an expert on period KJV english, and whether “unicorn” was simply the translators’ intent to reference rhinoceros unicornis following the example of the Vulgate? Notice that Wilson’s argument was not that “unicorn” was specifically a horned horse, but that it was not a mere “wild ox”. So maybe Jonathan should cool his jets about pseudoscience, and stop heckling Wilson."

Pastor Wilson never mentioned King James English; he was writing in modern English when he made the reference to a unicorn (within his own sentence, not within a quote). Neither Pastor Wilson nor I referred to the rhinoceros, you added that out of nowhere. And I wasn't critiquing the Bible or claiming it to be wrong, I was critiquing Pastor Wilson. And you were the one who chose to engage with me, so you can't accuse me of not following your agenda properly; it was you who tried to distort my statement about Pastor Wilson's error. Pastor Wilson's argument was clearly not that the passage was referring to a rhinoceros; that's why he used the term "unicorn", never used the term "rhinoceros", mocked the 'feckless translators' who translate it "ox", and followed up with comparisons to Nephilim and a girl with the spirit of a python, demonstrating that it was obvious he was talking about something far more unusual than a mere rhinoceros. This was the exact phrase:

"Christians who revolt against this massive etiolation do well, in that they are wanting to return to a full-orbed biblical cosmology. The star of Bethlehem was a star, and showed the magi the way to a house. The rhetorical question in Job about taming the unicorn—what feckless translators have called a wild ox—is, when taken this latter way, seen to be a stupid question. “Is the wild ox willing to serve you? Will he spend the night at your manger? Can you bind him in the furrow with ropes, or will he harrow the valleys after you?” (Job 39:9–10, ESV). “Well, yes. That’s how I run my farm.” Who among you, oh sons of men, can tame the ox? Ummm . . . is this a trick question? The sons of God (bene elohim) took beautiful women from among men, and had Nephilim by them. The apostle Paul cast the spirit of a python out of the fortune telling girl at Philippi, thereby indicating that the existence of Apollo was not mere superstition. In short, the ancient scriptural cosmology is not what many Christians assume it to be."

You can play your internet word games all you want, but any observer with decent reading comprehension can see that the work of that paragraph is to speak of the unicorn reference as indicating a Biblical cosmology beyond the natural world, not to merely say, "Oh, some translator got it wrong, it's a rhino, a different perfectly natural animal."

"Since Jonathan was apparently so desperate to get Wilson on something, he didn’t interact very much with the evidence supporting the historical identification as the rhinoceros."

Anyone who has ever seen me post on here knows that someone claiming that I didn't interact very much with something is unlikely to be telling the truth. Especially since in the very conversation in question you accused me of "not knowing when to give up" after I explained in detail why anoch (wild ox) was a better translation than rhinoceros. Once again, the particular argument you wish to use shifts like the wind with the conclusion you wish to reach.

Neither Pastor Wilson nor I were identifying the passage with a rhinoceros (and if it were a rhinoceros, Pastor Wilson would still be wrong to think it was a unicorn), so your attempts there to divert the conversation were unhelpful to everyone. This is what I had written:

"The idea that the Hebrew “re’em” refers to the auroch, a huge, powerful species of wild ox that lived in Israel during Old Testament times but is now extinct, is rooted in Johann Ulrich Duerst’s linguistic work of the late 1800s, who found that the word was based on the Akkadian cognate “rimu”, which was shown to refer to the auroch via actual depictions found in Assyrian artifacts, which clearly show an ox, not a unicorn or a rhinoceros. Now, is such linguistic work infallible? Of course not. While the idea that “rimu” is the root of “re’em” is highly suggestive, while the frequent depiction of the bulls with one horn as they appeared from the side (due to the profound symmetry of their horns) and their references to “one-horn” in that context is highly suggestive, and while the depictions in the Bible work quite well for an enormous wild ox, it isn’t something that I’d bet my life or my faith on. As you may have seen, I already clicked “like” on your comments regarding the rhinoceros yesterday. While I think it more likely that the text referred to an animal actually known from Israel, like an auroch or an oryx, you are correct in pointing out that while rhinoceros were never known from any Hebrew-speaking region, their range was not so far away, and the idea that Hebrews knew of rhinoceroses is not ridiculous. Thus, of course, I would not view a good well-rooted argument for “wild ox” or “rhinoceros” as pseudo-science. However, in the English language, “unicorn” does NOT refer to either of those, it clearly refers to a mythical horned horse, and the comments show that many people agree with me in getting the impression that that’s exactly what Pastor Wilson intended it to refer to. To then argue that the text must mean “unicorn” and not “wild ox” by making an argument that shows the inability to distinguish between a domestic ox and a wild ox IS pseudoscience. Pastor Wilson’s entire argument mocking those who translate “wild ox” is based on false assumptions."

"First off, no, I am not an expert in period KJV, but that’s irrelevant because Pastor Wilson never mentioned the KJV. We’re talking about modern English right now, where I have never in my entire life heard an English speaker in a serious conversation call a rhinoceros a unicorn. And it is obvious that that’s not what Pastor Wilson is intending to do. You ignore that it would make no sense at all for Pastor Wilson to be saying, “dumb translators call it an ox, but it’s actually a rhinoceros, and therefore ancient scriptural cosmology is far more amazing than you think.” Unless you think that a rhinoceros is an especially mythical or spiritual being. His other examples were a star traveling across the sky to lead the magi to a house, the sons of God sleeping with women and producing the Nephilim, and Paul casting a spirit-python out of the fortune telling girl…and you want to add to that, “Plus rhinoceroses actually exist, which most Christians just wouldn’t believe!” I think you’re bright enough to know that Pastor Wilson is obviously saying no such thing, which makes it feel like you’re being malicious in trying to play with words or “plausible deniability” in order to gain not the truth, but the upper hand in an argument. And no, a wild ox is NOT a domesticated ox, no more than a wolf is a collie. Domestication is a process that takes hundreds of years, and as you point out, had been completed thousands of years before this passage was written. The auroch of the Middle East was NOT the same as their domestic cattle, it was its ancestor, and no Middle Eastern man reading that passage would say, “Oh, no, but I have a pasture full of tamed aurochs right now.” It’s possible the text doesn’t mean auroch – I’ve already given you the evidence that it does, and I already pointed out that you could be right and it could mean rhinoceros too. But it certainly doesn’t mean domesticated cattle in any translation."
          python-pynfft
Python wrapper to the NFFT library.
          python-pygsp
Graph Signal Processing in Python
          python-tensorboardx-git
Tensorboard for PyTorch
          python-pytorch-ignite-git
High-level library to help with training neural networks in PyTorch
          python-prompt_toolkit-2
Library for building powerful interactive command line applications in Python
          Sr. Business Intelligence Analyst
AZ-Scottsdale, Top Three Skills: Python, BI Visualization, freight Job Description: The Analyst will create methods to collect and report data related to things such as supply chain velocity, equipment usage, and company performance. The Analyst will also help to parse, segment, and analyze data from multiple sources that will help us drive informed decisions to our customers about their supply chain. A successf
          Database Architect with NoSQL, MySQL and Python
TX-Austin, Hi, Greetings, Hope you are doing great, please find the job description below and reply with your updated resume asap. Title: Data Architect Location: Austin, TX Duration: 12+ months contract Required Qualifications 12+ years of experience as a data developer for web-based large-scale enterprise applications. Developed and supported large-scale enterprise databases in production based on SQL (Mar
          Java production support
CA-Sunnyvale, Java production Support Sunnyvale,CA 12 Months Contract Telephonic/Skype Interview Mandatory Technical Skills Good hands-on experience on Java Technologies Good hands-on experience with Cassandra and Oracle. Good Linux/Unix hand-on experience. Shell/Python Scripting is a plus. Desirable Technical Skills Hands-on experience with splunk. Decent networking knowledge and understanding Mandatory Functi
          Python Automation Engineer
NC-Morrisville, A Fortune 500 Telecommunications company in RTP is looking for automation developers to provide automation and python development for their Network Solutions Testing Team. This team is responsible for building automation solutions and converting test cases from a manual state to a python-based, automated framework for top commercial customers. The ideal candidate will be an experienced python deve
          Infrastructure DevOps Engineer
NC-Morrisville, You will help us improve OpenStack in a full-scale deployment. We are improving the state of the art internal cloud in OpenStack. We have 3 today and want to build more on top of that in additional to new features and functionality we are trying to add. Ideally: you know clouds; you breathe clouds; you hack clouds. You should have proven experience with Linux, Openstack, Ansible and Python, you sh
          Reposurgeon’s Excellent Journey and the Waning of Python
Time to make it public and official. The entire reposurgeon suite (not just repocutter and repomapper, which have already been ported) is changing implementation languages from Python to Go. Reposurgeon itself is about 50% translated, with pretty good unit-test coverage. … Continue reading
          Comment on What next Hindi EBook you want on BccFalna.com by Vikas Datodiya
Sir, please do publish the Python book in Hindi.
          Biomedical Electrical Engineer (Industrial Postdoctoral Fellow) - Myant - Etobicoke, ON
Write and utilize scripts/programs for data acquisition and analysis (e.g., R / Python / MATLAB). A post-doctoral position is available at Myant Inc....
From Myant - Wed, 22 Aug 2018 17:08:34 GMT - View all Etobicoke, ON jobs
          Python Programming with Raspberry Pi
Download Python Programming with Raspberry Pi for free
Title: Python Programming with Raspberry Pi
Authors: Sai Yamanoor, Srihari Yamanoor
Pages: 306
Format: PDF
Size: 55 MB
Quality: Excellent
Language: English
Year of publication: 2017


   Become a master of Python programming using the small yet powerful Raspberry Pi Zero
    Raspberry Pi Zero is a super-small and super-affordable product from Raspberry Pi that is packed with a plethora of features and has grabbed the notice of programmers, especially those who use Python.

          ゼロからわかる Python超入門 (かんたんIT基礎講座) [A Python Super-Introduction, Understandable from Zero]


Title: ゼロからわかる Python超入門 (かんたんIT基礎講座)
Author: Midori Sato
Publisher: Gijutsu-Hyoronsha
Publication date: July 7, 2016
Pages: 239
Price: 2,380 yen + tax

This book is an introductory text for readers learning Python for the first time. Even complete beginners can pick up the basic syntax without difficulty!



          251 - Scientific Programmer - CANCERCARE MANITOBA - Winnipeg, MB
Comprehensive knowledge of MATLAB, C, C++, Java and Python. Experience working with major projects and comfortable consulting with senior management....
From Indeed - Thu, 04 Oct 2018 19:52:28 GMT - View all Winnipeg, MB jobs
          DevOps Software Developer - Great West Life Canada Insurance - Winnipeg, MB
Coding experience in C#, Java, Python and JavaScript. Two intermediate or senior DevOps Software Developers/engineers (eg....
From Indeed - Mon, 01 Oct 2018 14:20:15 GMT - View all Winnipeg, MB jobs
          Field Technician-PTC Locomotive - Canadian National Railway - Winnipeg, MB
Basic programming skills using Python, SQL, Java, PHP, C++, and HTML. Excellent English verbal, reading comprehension and writing skills....
From Canadian National Railway - Wed, 26 Sep 2018 20:09:36 GMT - View all Winnipeg, MB jobs
          Thoughtful Data Science

eBook Details: Paperback: 490 pages; Publisher: WOW! eBook (July 31, 2018); Language: English; ISBN-10: 178883996X; ISBN-13: 978-1788839969. eBook Description: Thoughtful Data Science: Bridge the gap between developer and data scientist by creating a modern open-source, Python-based toolset that...



          Dynamical Systems with Applications using Python
【Author (required)】Stephen Lynch
【Title (required)】Dynamical Systems with Applications using Python
【Year (required)】2018
【Full-text link or database name (optional)】https://link.springer.com/book/10.1007/978-3-319-78145-7
          Python Developer - The Jonah Group - Toronto, ON
Demonstrate effective mentorship and hands-on technical leadership to team members. You’ll have the opportunity to grow your leadership and development skills...
From The Jonah Group - Wed, 22 Aug 2018 23:27:42 GMT - View all Toronto, ON jobs
          Full Stack Developer (Java, Python, Scala, PostgreSQL) - Trigyn - Montréal, QC
The team is looking for a full-stack developer to create, improve and maintain a new set of tools to empower QA engineers, developers and devops working at the...
From Trigyn - Tue, 09 Oct 2018 22:02:23 GMT - View all Montréal, QC jobs
          setting up django admin template
Setting up a selected Django admin template with minor changes: import + export library, upload, creating actions (Budget: $30 - $250 USD, Jobs: CSS3, Django, Git, HTML5, Python)
          Systems Developer / Python Developer - Completely Managed Inc. - Newmarket, ON
*Completely Managed Inc. is looking to hire a full time Systems Developer that is very comfortable building within the Linux environment.* *\*\*\*\* PLEASE...
From Indeed - Tue, 09 Oct 2018 22:32:58 GMT - View all Newmarket, ON jobs
          AI Is "Just a Function," Math Is "One More Language," and a Humanities Background Is No Problem: Start Learning with Python from High-School Math
A series that relearns the mathematics essential for AI, starting from the high-school curriculum, using the Python programming language. This first installment covers the significance of relearning math to become an "AI engineer," the right mindset, and the scope the series will cover.
          Automation to create mail alias
I need a script such that every time an email is sent to a non-existing recipient, it creates an alias for that recipient. For example, given the premise that terrain@localhost exists: 1 - send an email to home.terrain@localhost 2 - home.terrain does not exist and will fail... (Budget: $250 - $750 USD, Jobs: Linux, Python, Software Architecture)
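A hedged sketch of one way to do this on a Postfix/Sendmail-style system: map the unknown local part onto the existing base account by appending to /etc/aliases. The bounce-detection hook, paths, and naming rule are assumptions drawn only from the example above.

    import re
    import subprocess

    ALIASES = "/etc/aliases"  # assumed alias file; varies by mail setup

    def add_alias(recipient):
        """Alias e.g. home.terrain@localhost onto the existing 'terrain' account."""
        local = recipient.split("@")[0]   # "home.terrain"
        base = local.split(".")[-1]       # "terrain" (assumed naming rule)
        with open(ALIASES, "r+") as f:
            if not re.search(rf"^{re.escape(local)}:", f.read(), re.M):
                f.write(f"{local}: {base}\n")
                f.flush()
                subprocess.run(["newaliases"], check=True)  # rebuild the alias database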
          Data Scientist - Mortgages - Zillow Group - Seattle, WA
Dive into our internal data (think Hive, Presto, SQL Server, MySQL, Redshift, Python, Mode Analytics, Tableau, R), combining disparate sources of information...
From Zillow Group - Fri, 05 Oct 2018 01:07:10 GMT - View all Seattle, WA jobs
          Sr. Analyst/Developer - USA-TX-Houston
Sr. Analyst / Developer C#, C OR Python Houston, TX - Greenway Plaza Email: ******* for details. MUST be able to work on Diversant's W2! Due to the nature of the position, DIVERSANT can only co...
          TPL, Discovery, and CMDB ah-ha moment (Application Lookup)

Hi, Everyone!

 

I recently got to meet Doug Mueller (BMC CMDB Architect) and Antonio Vargas (BMC Discovery Product Manager) in person; Engage 2016 gives you that kind of access. Here is my recent ah-ha moment with the CMDB and Discovery. I will start with BMC Discovery (ADDM) and then move to the CMDB topic of the service concept. There is a lot of confusion out there about what the difference between a technical and a business service should be within the CMDB. Some of you might say, "I knew that years ago!" But it took me a while to grasp the concepts even though we used, implemented, and developed with the various BMC products. I am a visual and tactile learner, and I am writing this blog for that type of student. Diagram 1 explains The Pattern Language (TPL) and how things are discovered.

 

Picture 1 explains the discovery concepts and how patterns are developed: you look for a pattern within the process(es) running on a device. Once you find that process, you write a pattern (or use Discovery) to find it via TPL. Using the discovered processes, you can then create software instance(s) by grouping the processes into software, as illustrated below.
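As an illustration only (plain Python, not actual TPL syntax), the grouping step could look like this: trigger on a process command line and collect matching processes on a host into a software instance. The pattern and field names are invented.

    import re
    from collections import defaultdict

    # Hypothetical trigger patterns: regex on the process command line.
    PATTERNS = {"Apache Tomcat": re.compile(r"org\.apache\.catalina\.startup\.Bootstrap")}

    def software_instances(processes):
        """Group [{'host': ..., 'cmdline': ...}] into (host, software) buckets."""
        groups = defaultdict(list)
        for proc in processes:
            for name, pattern in PATTERNS.items():
                if pattern.search(proc["cmdline"]):
                    groups[(proc["host"], name)].append(proc)
        return groups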

 

[Image: Screen Shot 2017-06-17 at 8.01.21 AM.png]

 

Diagram 2 shows the structure of the TPL based on Diagram 1. Notice the trigger is on a node of a node kind, based on a condition; you can now see the relationship between the pattern and the TPL. Next you group the software instances into business application instance(s) (BAIs). A BAI is moved into the CMDB CI called BMC.CORE:BMC_Application, and you have to make a logical entry for BMC.CORE:BMC_ApplicationSystem using non-instance names (the instance names are the production, development, and QA environments coming from BMC Discovery). The application model you create using the pattern is consumed by the CMDB Common Data Model in different ways. You also need to know that TPL's foundation is in Python; for those of you interested in patterns, machine learning, and artificial intelligence, that's another discussion/blog.

 

Let's look at BAIs and SIs coming from Discovery with SAAM, and at predefined SIs that become part of a larger model like BSM.

 

SAAM's Business Application Instances are consumed by these forms:

  • BMC.CORE:BMC_Application
  • BMC.CORE:BMC_ApplicationSystem

 

Let's look at the CDM for the CMDB forms BMC.CORE:BMC_ApplicationSystem and BMC.CORE:BMC_Application. You have to understand that the parent class is BMC.CORE:BMC_ApplicationSystem; its subclasses are BMC.CORE:BMC_Application, BMC.CORE:BMC_ApplicationInfrastructure, and BMC.CORE:BMC_SoftwareServer. (Basics)

 

CI Name: BMC Atrium Discovery and Dependency Mapping Active Directory Proxy 10.1 identified as Active Directory on %hostname%
CI Class: Parent: BMC.CORE:BMC_ApplicationSystem; Child: BMC.CORE:BMC_SoftwareServer
Description: The BMC_SoftwareServer class represents a single piece of software directly running (or otherwise deployed) on a single computer.

CI Name: manager module on Apache Tomcat Application Server 7.0 listening on 8005, 8080, 8009 on %hostname%
CI Class: Parent: BMC.CORE:BMC_SystemService; Child: BMC.CORE:BMC_ApplicationService
Description: Class that stores information about services that represent low-level modules of an application, for example, the components deployed within an application server. This class has no corresponding DMTF CIM class.

CI Name: BSM (Business Service Management; a pattern defined via TKU of software instances)
CI Class: Parent: BMC.CORE:BMC_System; Child: BMC.CORE:BMC_ApplicationSystem; Child: BMC.CORE:BMC_Application
Description: The BMC_Application class represents an instance of an end-user application that supports a particular business function and that can be managed as an independent unit.

 

By understanding the above and what's documented by Discovery, the ITSM team is left with a decision to make between BMC_SoftwareServer and BMC_ApplicationSystem. Why do you have to make a decision? Because BMC Discovery syncs with both of these CIs. (BMC did not make the decision for you.) To understand why, let's review and understand the model. FACTS:

  • ApplicationSystem is the parent CI.
  • SoftwareServer is a child CI of ApplicationSystem.
  • BMC syncs the Business Application instance into the BMC_Application CI, which is a child CI of ApplicationSystem out of the box (OOTB).

 

To be continued.... It is not consumed by the following forms:

  • BMC.CORE:BMC_SystemSoftware
  • BMC.CORE:BMC_ApplicationInfrastructure
  • BMC.CORE:BMC_SystemService

 

The way I understood @Doug Mueller: there is no direct relationship between business and technical services that relates into BMC.CORE:BMC_ApplicationSystem.  These definitions are driven by how your business generates revenue with a business service. (If your company makes cars, any system that supports selling cars is tied to business services.)  Technical services are defined as supporting business services.  You can define technical services without a business service.  These are logical breakdowns of your services based on your organization.

 

The confusion comes from the type of business your company provides to its customers and the way BMC represents examples of technical vs. business services. BMC is a company that sells software, so a lot of its business services sound like technical services, but they are not, because those services help generate revenue for BMC Software.

 

Let's review why CMDB and Discovery projects fail.

 

  • Project Scope
    CMDB: The scope of these projects starts out as "let's map the services," but the reality is that there is a lot of scope creep. The value creation is loosely scoped, based on my experience; the value creation for a CMDB needs to be understood and measured for each org.
    Discovery: Discovery covers the automation of discovering IT infrastructure at the data-center level, but does not cover end-to-end communications at the network level. The mapping of BAIs isn't scoped right. BMC has recognized this issue by adding a managed service to map applications in the CMDB.
    Suggestions: To realize value and reduce the education needed to use the CMDB, we need a quick application-lookup solution until the whole CMDB and Discovery project is completed in scope.
  • Project Constraints
    CMDB: Human resources, knowledge base, wisdom base, and where to start the value creation for an org.
    Discovery: There isn't a good way to resolve and track access issues at release in a large enterprise environment.

 

Draft thoughts: Service Modeling Brain Dump, Service Modeling Best Practices, comparable CDM fields.

If you want to learn Discovery in detail and how you can answer the debated questions, please start here: ADDM Support Guide.

When you create an application mapping in Discovery, you have to create dev, QA, and production instances that sync into the CMDB. Those instances have to be grouped into relationships and a parent class. The parent CI is ApplicationSystem; use an impact relationship to BMC.CORE:BMC_ConcreteCollection, a CI used to store a generic and instantiable collection, such as a pool of hosts available for running jobs. This class is defined as a concrete subclass of BMC_Collection and was added rather than changing BMC_Collection from an abstract class to a concrete class.

I often get questions about how Discovery provides value to application owners vs. management. Here are some key thoughts about the value Discovery delivers.

System Administrator & IT Architecture Value

  • Ability to produce a DR plan with Discovery data
  • Ability to understand the impact of shared applications and infrastructure
  • Provides a common understanding of the Business Application Instances for the company
  • Ability to produce an up-to-date diagram of your application
  • Reduces the work of producing current infrastructure diagrams and inventory for management

Management & C-Suite Value

  • Ability to audit processes, people, data, and tools
    • For example: if the plan says the datacenter should have 50 hosts but Discovery finds 100,
      • management can ask how the other 50 were created and who is paying for them
  • Understand the shared impact and risk management of applications
  • Ability to budget datacenter or cloud moves

          The world's largest bank will make its employees learn to code

JPMorgan decided to introduce mandatory courses for its employees. JPMorgan, the American financial holding company, decided to introduce mandatory programming courses for employees who work in asset management. As a result, about a third of the company's analysts and employees will learn to program in Python, and will also take courses in computer science and machine learning. JPMorgan is allocating $10.8 billion to technology development, and […]

The post "The world's largest bank will make its employees learn to code" first appeared in PaySpace Magazine.


          QNX Software Development Engineer - Envorso, LLC - Waterloo, ON
Java, Python or Perl experience. We take pride in offering a comprehensive variety of programs and resources to support your health and well-being needs....
From Envorso, LLC - Tue, 10 Jul 2018 21:57:34 GMT - View all Waterloo, ON jobs
          QNX Software Development Engineer - Envorso, LLC - Oakville, ON
Java, Python or Perl experience. We take pride in offering a comprehensive variety of programs and resources to support your health and well-being needs....
From Envorso, LLC - Tue, 10 Jul 2018 21:57:34 GMT - View all Oakville, ON jobs
          QNX Software Development Engineer - Envorso, LLC - Ottawa, ON
Java, Python or Perl experience. We take pride in offering a comprehensive variety of programs and resources to support your health and well-being needs....
From Envorso, LLC - Tue, 10 Jul 2018 21:57:33 GMT - View all Ottawa, ON jobs
          DevOps Co-op (8 months) - Visier Inc. - Vancouver, BC
Coding skills in relevant languages (e.g Java, Python, Go, Ruby, Bash). Our co-op experience is unique and designed to prepare you for professional success....
From Visier Inc. - Wed, 05 Sep 2018 17:02:50 GMT - View all Vancouver, BC jobs
          QA Automation with python scripting - Evolution infosoft - Redwood City, CA
*Job Summary* Job Title: QA Automation with Python scripting. Location: Redwood City, CA. Duration: 6 months. Job description: QA - data automation and...
From Indeed - Mon, 01 Oct 2018 20:31:21 GMT - View all Redwood City, CA jobs
          Site Reliability Engineer - Visier Inc. - Vancouver, BC
Python, Go, Java, Scala). A Bachelor’s Degree in Computer Science, Engineering, Mathematics or similar field with an excellent academic record in Computer...
From Visier Inc. - Fri, 31 Aug 2018 23:02:36 GMT - View all Vancouver, BC jobs
          How to pass pointer arguments when calling a C function from Python?
There's a C function, int test(int* a1, char* a2). How should these arguments be passed from Python?
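
A common approach is the standard library's ctypes module. Below is a minimal sketch; the shared library name libtest.so and the way the C side uses the two pointers are assumptions:

import ctypes

# Load the shared library the C function was compiled into (assumed name).
lib = ctypes.CDLL("./libtest.so")

# Declare the signature of: int test(int* a1, char* a2)
lib.test.argtypes = [ctypes.POINTER(ctypes.c_int), ctypes.c_char_p]
lib.test.restype = ctypes.c_int

# byref() passes a pointer to a c_int; a bytes object maps to char*.
a1 = ctypes.c_int(42)
result = lib.test(ctypes.byref(a1), b"hello")

# If the C function wrote through a1, the new value is visible here.
print(result, a1.value)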
          Markus Koschany: My Free Software Activities in September 2018

Welcome to gambaru.de. Here is my monthly report that covers what I have been doing for Debian. If you’re interested in Java, Games and LTS topics, this might be interesting for you.

Debian Games

  • Yavor Doganov continued his heroics in September and completed the port to GTK 3 of teg, a risk-like game. (#907834) Then he went on to fix gnome-breakout.
  • I packaged a new upstream release of freesweep, a minesweeper game, which fixed some minor bugs but unfortunately not #907750.
  • I spent most of the time this month on packaging a newer upstream version of unknown-horizons, a strategy game similar to the old Anno games. After also upgrading the fife engine, fifechan and NMUing python-enet, the game is up-to-date again.
  • More new upstream versions this month: atomix, springlobby, pygame-sdl2, and renpy.
  • I updated widelands to fix an incomplete appdata file (#857644) and to make the desktop icon visible again.
  • I enabled gconf support in morris (#908611) again because gconf will be supported in Buster.
  • Drascula, a classic adventure game, refused to start because of changes to the ScummVM engine. It is working now. (#908864)
  • In other news I backported freeorion to Stretch and sponsored a new version of the runescape wrapper for Carlos Donizete Froes.

Debian Java

  • Only late in September I found the time to work on JavaFX but by then Emmanuel Bourg had already done most of the work and upgraded OpenJFX to version 11. We now have a couple of broken packages (again) because JavaFX is no longer tied to the JRE but is designed more like a library. Since most projects still cling to JavaFX 8 we have to fix several build systems by accommodating those new circumstances.  Surely there will be more to report next month.
  • A Ubuntu user reported that importing furniture libraries was no longer possible in sweethome3d (LP: #1773532) when it is run with OpenJDK 10. Although upstream is more interested in supporting Java 6, another user found a fix which I could apply too.
  • New upstream versions this month: jboss-modules, libtwelvemonkeys-java, robocode, apktool, activemq (RC #907688), cup and jflex. The cup/jflex update required a careful order of uploads because both packages depend on each other. After I confirmed that all reverse-dependencies worked as expected, both parsers are up-to-date again.
  • I submitted two point updates for dom4j and tomcat-native to fix several security issues in Stretch.

Misc

  • Firefox 60 landed in Stretch which broke all xul-* based browser plugins. I thought it made sense to backport at least two popular addons, ublock-origin and https-everywhere, to Stretch.
  • I also prepared another security update for discount (DSA-4293-1) and uploaded  libx11 to Stretch to fix three open CVE.

Debian LTS

This was my thirty-first month as a paid contributor and I have been paid to work 29,25 hours on Debian LTS, a project started by Raphaël Hertzog. In that time I did the following:

  • From 24.09.2018 until 30.09.2018 I was in charge of our LTS frontdesk. I investigated and triaged CVE in dom4j, otrs2, strongswan, python2.7, udisks2, asterisk, php-horde, php-horde-core, php-horde-kronolith, binutils, jasperreports, monitoring-plugins, percona-xtrabackup, poppler, jekyll and golang-go.net-dev.
  • DLA-1499-1. Issued a security update for discount fixing 4 CVE.
  • DLA-1504-1. Issued a security update for ghostscript fixing 14 CVE.
  • DLA-1506-1. Announced a security update for intel-microcode.
  • DLA-1507-1. Issued a security update for libapache2-mod-perl2 fixing 1 CVE.
  • DLA-1510-1. Issued a security update for glusterfs fixing 11 CVE.
  • DLA-1511-1. Issued an update for reportbug.
  • DLA-1513-1. Issued a security update for openafs fixing 3 CVE.
  • DLA-1517-1. Issued a security update for dom4j fixing 1 CVE.
  • DLA-1523-1. Issued a security update for asterisk fixing 1 CVE.
  • DLA-1527-1 and DLA-1527-2. Issued a security update for ghostscript fixing 2 CVE and corrected an incomplete fix for CVE-2018-16543 later.
  • I reviewed and uploaded strongswan and otrs2 for Abhijith PA.

ELTS

Extended Long Term Support (ELTS) is a project led by Freexian to further extend the lifetime of Debian releases. It is not an official Debian project but all Debian users benefit from it without cost. The current ELTS release is Debian 7 „Wheezy“. This was my fourth month and I have been paid to work 15  hours on ELTS.

  • I was in charge of our ELTS frontdesk from 10.09.2018 until 16.09.2018 and I triaged CVE in samba, activemq, chromium-browser, curl, dom4j, ghostscript, firefox-esr, elfutils, gitolite, glib2.0, glusterfs, imagemagick, lcms2, lcms, jhead, libpodofo, libtasn1-3, mgetty, opensc, openafs, okular, php5, smarty3, radare, sympa, wireshark, zsh, zziplib and intel-microcode.
  • ELA-35-1. Issued a security update for samba fixing 1 CVE.
  • ELA-36-1. Issued a security update for curl fixing 1 CVE.
  • ELA-37-2. Issued a regression update for openssh.
  • ELA-39-1. Issued a security update for intel-microcode addressing 6 CVE.
  • ELA-42-1. Issued a security update for libapache2-mod-perl2 fixing 1 CVE.
  • ELA-45-1. Issued a security update for dom4j fixing 1 CVE.
  • I started to work on a security update for the Linux kernel which will be released shortly.

Thanks for reading and see you next time.


          Reproducible builds folks: Reproducible Builds: Weekly report #180

Here’s what happened in the Reproducible Builds effort between Sunday September 30 and Saturday October 6 2018:

Packages reviewed and fixed, and bugs filed

Test framework development

There were a huge number of updates to our Jenkins-based testing framework that powers tests.reproducible-builds.org by Holger Levsen this month, including:

In addition, Alexander Couzens added a comment regarding OpenWrt/LEDE which was subsequently amended by Holger.

Misc.

This week’s edition was written by Bernhard M. Wiedemann, Chris Lamb, heinrich5991, Holger Levsen, Marek Marczykowski-Górecki, Vagrant Cascadian & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.


          Web Scraping
Hello, I want a web scraper in Python; contact me for further details. (Budget: ₹1500 - ₹12500 INR, Jobs: Data Mining, Python, Web Scraping)
          Python Data Engineer, Minsk
AppCraft is a product IT company, a resident of the Hi-Tech Park, founded with the goal of becoming one of the leading publishers in the growing mobile games market and...

Salary: not specified

Company: AppCraft LLC




          Lead Software Developer - Intelligent Solution, Inc. - Morgantown, WV
Be Fluent in VB, Python, and C#. Experience with User Interface Development in VB and Python. Intelligent Solutions, Inc....
From Indeed - Wed, 19 Sep 2018 16:29:11 GMT - View all Morgantown, WV jobs
          Test and Validation Engineer (M/F)
You are a Test and Validation Engineer working within the research and development center, in contact with the Software, Embedded Software, and Systems Engineers. Your role is decisive both internally and externally. You report directly to the Technical Director. Your missions as a Test and Validation Engineer revolve around the following activities:

  • Analyze user requirements and specifications,
  • Set up traceability of requirements with the ad hoc tests,
  • Develop the test strategy,
  • Set up the validation campaigns,
  • Execute the validation tests,
  • Manage anomalies: detection, analysis, traceability,
  • Write the validation reports at the end of each test campaign,
  • Propose and implement quality actions (preventive or corrective),
  • Update the test repositories,
  • Develop test scripts (mainly in C# or Python), design test campaigns, and manage the integration of the external tools needed for qualification by proposing test-automation solutions,
  • Propose solutions to maintain optimal and lasting test coverage.

Technical skills: Python, C#, C, C++; TestLink or equivalent; fluent English.
          Scott Lewis: ECF 3.14.3 released
ECF 3.14.3 has been released.  This is a bug fix release, but in addition to the fixes a number of usage examples have been created:

Remote Services Between Java and Python and Python for OSGi Remote Services

RESTful Remote Services Using either CXF or Jersey

Using OSGi R7 Async Remote Services

Using Gogo Commands and RSA for Controlling Remote Services

Using BndTools 4 for Developing OSGi Remote Services
          PYTHON FACTS
Coastal carpet pythons are one of the largest snakes to inhabit Australia’s east coast.
          Re: Stuck in audit. Where should I go?
I have a technical education; further development in audit looks to me like nothing but liberal-arts fluff, and that doesn't interest me.


Another "techie" has arrived, so sure that he dominates the humanities folks. One can only envy the way you apply your deep knowledge in audit.

Judging by the entry barrier into this complicated industry, the Big 4 must be overpaying you tenfold for your Python/VBA/etc. aspirations.

Let me reveal a "secret" to you: audit was (to use your language) "liberal-arts fluff" from the very start of your path. You're just so legendary that you couldn't get into the Big 3, where they pay decent money to those who optimize
          Re: Stuck in audit. Where should I go?
Go work as a chief accountant at a small foreign company; a salary of 200-250 thousand RUB/month is guaranteed, which is in any case more than a third-rank senior at a Big 4 firm makes (he gets 120 gross).

Hi everyone!

I work in the BIG4 as an auditor, senior 3.
More and more I realize that I'm fed up with audit; further development in audit doesn't appeal to me.

This is my only place of work so far, so I have only a vague idea of what people do in a finance department, controlling, or internal audit, the places people usually move to from external audit.
So the situation is: I want to work outside of audit, but where exactly is unclear.

I have a technical education; further development in audit looks to me like nothing but liberal-arts fluff, and that doesn't interest me.

What I've always liked about the job is not the audit process itself but figuring out how to optimize the team's work and get a task done faster.
There's practically no overtime on my projects; the team rarely works more than 45 hours a week, sometimes fewer than 40 :)
I'm drawn to analyzing large amounts of data, even though in audit that analysis is superficial.
In my free time I'm learning Python, with VBA in the plans. On the side, I'm optimizing the bookkeeping in our family micro-business. That kind of work satisfies me :)

I'd be grateful for any outside advice on which direction is worth developing in with my background, and which positions I could apply for (very preferably without a pay cut).

Thanks!

          Eric Idle on the Enduring, Powerful Legacy of the Song He Wrote As a Joke

Monty Python’s Eric Idle has written a memoir, Always Look on the Bright Side of Life: A Sortabiography. The book, in which Idle recounts his adventures as a legendary comedian and actor in projects like the influential Monty Python’s Flying Circus, the savage Beatles parody band the Rutles, and the ...

          Top programming languages: Apple's Swift surges in popularity while Python falls back
Only a month after becoming a top-three language, Python loses the title, but interest in it is still growing.
          Senior System Analyst (System Administrator) - TELUS Health - TELUS Communications - Montréal, QC
Bash, Ksh, Python. 5 years programming experience with Bash, KSH, python. Join our team....
From TELUS Communications - Wed, 15 Aug 2018 18:07:59 GMT - View all Montréal, QC jobs
          Review: Smash Up: What Were We Thinking?:: That Teddy Bear's got a vicious streak a mile wide!

by hist

For the full review, including more pictures, check out my blog, Dude Take Your Turn.

To find out when new reviews are posted on the blog, check out my geeklist!

Another day, another Smash Up expansion.

This is becoming almost a weekly thing! Or maybe it just seems that way.

Anyway, on to my latest expansion acquisition (not the latest expansion, period, since I am nothing if not eclectic (Editor: You mean random, right?) in my Smash Up buying habits).

Still haven’t come up with the meta joke yet to open these reviews, but I guess that will probably happen with the last one.

Ain’t that always the way!

My latest expansion is What Were We Thinking? The expansion is once again designed by the illustrious (and probably extremely handsome) Paul Peterson with art this time by Alberto Tavira, Marcel Stobinski, Gong Studios, and Francisco Rico Torres. It is once again published by Alderac Entertainment Group (AEG) and was released in 2017.


As with previous expansion reviews, I’m not going to get into how to play Smash Up. See the review for all of that (and just for more of my excellent writing). (Editor: I’m surprised you fit through the doorway with that ego).

I love the tagline for this expansion: “We really shouldn’t pick factions when we’re…tired.”

This is a wonderful mix of factions that do a lot of interesting things, including some new variations on some of the old stand-bys.

The factions are:

Explorers
Grannies
Rock Stars
Teddy Bears

I want some of the acid Paul and the people at AEG are doing. That is amazing.

Age before beauty (but they’re beautiful too), let’s start with the Grannies.



The Grannies love deck manipulation (are they commenting on common family dynamics?)

Looking at all the cards above, you are often looking at the top or bottom of your deck, and then playing (or drawing) the card if it’s either a minion or an action. Or sometimes you are placing cards on the top or bottom of your deck, so that later on you can play/draw them.

Grandma always knows what she’s doing.

As you can see, Family Reunion really combos well with Nana. Family Reunion has you reveal the bottom card of your deck. If it’s a minion, you draw it. But if it’s an action, you place it on top of your deck.

Then, lo and behold, playing Nana lets you reveal the top card of your deck. If it’s an action, draw it or play it as an extra action. What could be better?

You’d think Monty Python had come up with it.


I love the artwork in this faction. It’s a beautiful style that just brings to mind going to Grandma’s house for Christmas.

The Explorer faction loves new bases. Many of the cards allow you to manipulate the base deck and, very possibly, play a minion to a new base after the old base scores, getting the jump on everyone else.

In fact, they like exploring new bases so much that they can really fill the table with bases!

Idaho Smith plays a new base whenever you play him, and it’s not like one of the cards in the Cease and Desist expansion that plays a new base but then does not replace the next scored base. With that card, you’ll always eventually go back to the normal amount of bases.

No, these bases are permanent. There will always be more bases. More, more more! And all of that treasure to get!

The Explorers can be a fun faction because they never have to be surprised at what’s coming next, and they get a head start on them too. They can move their minions around to make sure they are in position to benefit from bases too.

They do have great combo abilities too. One action lets you look at the top two base cards and put them back in any order. Make sure you have a bunch of minions scattered out on the table. Make sure you have a low break-point base on top of the base deck. Then play Idaho Smith.

Boom! You score the base before anybody else even has a chance to play anything there. Nothing like locking out your opponents!

That may be hard to pull off, but if you can, it’s a wonderful feeling.

The art on this faction is brilliant as well. It’s very pulpy, bringing back Indiana Jones memories and the other serials that it was based on.



Rock Stars love it big and broad. Everything needs to be turned up to 11!

If a base has a break point of at least 21, many Rock Star cards will benefit from it. So much so that other cards will make sure that the break point is 21 or higher.

The Groupie minion enables you to just swarm a base if you have a bunch in hand (there are five in your deck, so that’s very possible!). I’ve seen that swarm happen to me, and the Guest Star action above will enable you to play a regular minion (maybe the Monarch?), then grab a Groupie from your discard pile/deck and play it. Voila! All those other Groupies in your hand suddenly flood the stage like a Tom Jones concert (I’m not sure why there isn’t a Tom Jones card, now that I mention it).

Rock Stars can be muted a bit if there aren’t any high break-point bases, though at least they’re guaranteed to have two of them in the deck (their bases have break points of 26 or 27), but even without that, they can be quite a good faction. Hot Venue still gives +1 power to all the player’s minions at that base, but that draw card action may not happen much if the bases are low power.

Once again, the artwork on this faction is brilliant. I love all of the lights in the background of most of the cards. The Monarch is a wonderful piece of work too, just to name one.

Last, but certainly not least, we get to the Teddy Bears, one of my favourite factions (that could be because playing one minion and an action ended up ramping my power on a base to 17 from the original 5 and won me a game).

The bear cuteness is often overwhelming to the other players, either cancelling abilities or limiting their power to play certain minions.

Who can resist a Bear Picnic? This causes players to not be able to play low-power minions anywhere but the base where that action is played. Because really, if you can come to the picnic, why would you go anywhere else?

Snuggly Bear allows a bit of a swarm tactic, though not quite like the Groupies (there are only 4 of them instead of 5 like the Groupies). Also, they only have one power. However, they’re triggered by you playing other minions and not having played a Snuggly Bear first (which the Groupie requires). This lets you play Sir Squeezes (for example) for some real swarming action! Assuming you have a bunch of minions in your hand, of course.

This can be made possible with the Square Deal action, which allows you to draw cards until at least one player has fewer cards than you. If you’re lucky, this could pack your hand with minions!

Group Hug is the action that won me the game previously referenced, since there were 7 other minions there when I played it. Yikes! It can be quite powerful when played on a heavily-contested base.

The main ability of the Teddy Bears is to affect other players’ minions and to use their collective abilities to gain power from those minions as well.

I love that combination of messing with people and increasing my power. I think this goes well with most factions (except the Ghosts, which benefit from having a small hand size while Teddy Bears love a bigger size).

Need I say it? The artwork on this faction is also wonderful. Cute and cuddly, and I love how it incorporates other expansions in the artwork. The “Cuddle” action has a bear hugging a vampire, for instance.



The bases are a very good mix with some interesting abilities.

Looking at the 16-power City of Gold, that’s the exact kind of base I was talking about for my master Idaho Smith maneuver. You can lock in 3 points without giving your opponents a chance to get the 1 point.

However, getting 1 point a turn may be a good thing as well, so maybe leave it out on the table?

Palooza has the highest break point I’ve seen, and it really benefits the Rock Stars (but of course it does).

Retirement Community gives each player a taste of what the Grannies faction can do, placing a minion from the base on the top or bottom of the owner’s deck. Of course, while the Grannies may want to do that, I don’t think anybody else will be putting them on the bottom of the deck.

And Under the Bed adds to the Teddy Bear ability of playing low-power minions by letting players play a minion there if they’ve played a minion anywhere else that turn.

Because Teddy Bears are good at hiding.

What Were We Thinking is my third expansion, and I’m really on a roll as I think this is the best one yet.

Not only does each faction have its unique style that may have similarities to previous ones but turns them on their head, but the artwork is amazing and I can’t really find one that I would say “yeah, that doesn’t work for me.”

The factions continue the humour found in the rest of the series, and while it’s not as spot on as the parodies in Cease and Desist, it goes over the top in other ways (of course the Rock Star faction has a Rick Roll card).

While there are definitely some factions that will not work well with these (I already noted the Ghosts and the Teddy Bears, which is sad because Ghostly Teddy Bears would be a great combination otherwise), overall the factions are fun to play. I’ve seen Super Spy Rock Stars benefit greatly from the deck manipulation that Spies have, and my own Shapeshifting Teddy Bears fit together really nicely.

If this is an example of what happens when AEG personnel are on acid (er, really tired), then please do it more often!

What Were We Thinking? We were thinking of total awesomeness.

(These expansion factions were played with twice before writing the review, though since I posted this on my blog, I've played with them many more times and my opinion hasn't changed)
          Site Reliability Engineer (SRE) - Contract - Citco - Toronto, ON
Ruby, Go, Python, Perl, bash, ksh. Site Reliability Engineer (SRE) - Contract....
From Citco - Sun, 23 Sep 2018 09:22:52 GMT - View all Toronto, ON jobs
          Site Reliability Engineer - 6 Month Contract - Lannick Group - Toronto, ON
Ruby, Go, Python, Perl, bash, ksh. ARE YOU A DEVOPS ENGINEER WITH A STRONG BACKGROUND IN DEVELOPMENT?...
From Lannick Group - Fri, 24 Aug 2018 22:22:34 GMT - View all Toronto, ON jobs
          Home Assistant - Open source Python3 home automation
Replies: 6872 Last poster: Hmmbob at 09-10-2018 23:44 Topic is Open

koelkast wrote on Tuesday, October 9, 2018 @ 22:50: Aha @Hmmbob. I'm trying to convert it from string to integer with this:

alias: 'vaatwasser start'
trigger:
  - platform: numeric_state
    value_template: "{{ state.sensor.vaatwasser_huidig.state | float }}"
    above: 10
condition:
  condition: or
  conditions:
    - condition: state
      entity_id: input_select.vaatwasser_status
      state: uit
    - condition: state
      entity_id: input_select.vaatwasser_status
      state: klaar
    - condition: state
      entity_id: input_select.vaatwasser_status
      state: "bijna klaar"
action:
  - service: input_select.select_option
    data:
      entity_id: input_select.vaatwasser_status
      option: draait

...but that doesn't seem to work either. Should I tackle it differently? I'm traveling, so I don't have the means with me to test.

[Hmmbob's reply:] What is the state of input_select.vaatwasser_status? If it isn't uit/klaar/"bijna klaar", it naturally won't work, because the condition isn't correct.
          Software Engineer w/ Active DOD Secret Clearance (Contract) - Preferred Systems Solutions, Inc. - Cheyenne, WY
Experience with Java, JavaScript, C#, PHP, Visual Basic, Python, HTML, XML, CSS, and AJAX. Experience with software installation and maintenance, specifically...
From Preferred Systems Solutions, Inc. - Tue, 25 Sep 2018 20:46:05 GMT - View all Cheyenne, WY jobs
          Senior Data Engineer - DISH Network - Cheyenne, WY
4 or more years of experience in programming and software development with Python, Perl, Java, and/or other industry standard language....
From DISH - Wed, 15 Aug 2018 05:17:45 GMT - View all Cheyenne, WY jobs
          IT Manager - Infrastructure - DISH Network - Cheyenne, WY
Scripting experience in one or more languages (Python, Perl, Java, Shell). DISH is a Fortune 200 company with more than $15 billion in annual revenue that...
From DISH - Sun, 15 Jul 2018 05:30:30 GMT - View all Cheyenne, WY jobs
          Software Developer - Matric - Morgantown, WV
Application development with Java, Python, Scala. Enterprise level web applications. MATRIC is a strategic innovation partner providing deep, uncommon expertise...
From MATRIC - Tue, 11 Sep 2018 00:02:33 GMT - View all Morgantown, WV jobs
          boltons – over 160 BSD-licensed, pure-Python utilities
Comments
          looking for a python full stack django developer
Should be familiar with AJAX, jQuery, and Django. (Budget: $10 - $30 USD, Jobs: AJAX, Django, Javascript, jQuery / Prototype, Python)
          Big Data-2: Move into the big league: Graduate from R to SparkR
This post is a continuation of my earlier post Big Data-1: Move into the big league: Graduate from Python to Pyspark. While the earlier post discussed parallel constructs in Python and Pyspark, this post elaborates similar and key constructs in R and SparkR. While this post just focuses on the programming part of R and SparkR it … Continue reading Big Data-2: Move into the big league: Graduate from R to SparkR
          RStudio 1.2 Preview: Reticulated Python
One of the primary focuses of RStudio v1.2 is improved support for other languages frequently used with R. Last week on the blog we talked about new features for working with SQL and D3. Today we’re taking a look at enhancements we’ve made around the reticulate package (an R interface to Python). The reticulate package makes it possible to embed a Python session within an R process, allowing you to import Python modules and call their functions directly from R. If you are an R developer that uses Python for some of your work or a member of a data science team that uses both languages, reticulate can dramatically streamline your workflow. New features in RStudio v1.2 related to reticulate include:

1) Support for executing reticulated Python chunks within R Notebooks.
2) Display of matplotlib plots within both notebook and console execution modes.
3) Line-by-line execution of Python code using the reticulate repl_python() function.
4) Sourcing Python scripts using the reticulate source_python() function.
5) Code completion and inline help for Python.

Note that for data science projects that are Python-only, we still recommend IDEs optimized for that, such as JupyterLab, PyCharm, Visual Studio Code, Rodeo, and Spyder. However, if you are using reticulated Python within an R project then RStudio provides a set of tools that we think you will find very useful.

Installation

You can download the RStudio v1.2 preview release here: https://www.rstudio.com/rstudio/download/preview/. All of the features described below require that you have previously installed the reticulate package, which you can do as follows: install.packages("reticulate")

R Notebooks

R Notebooks have been enhanced to support executing Python chunks using the reticulate Python engine. For example, here we use pandas to do some data manipulation then plot the results with ggplot2. Python objects all exist in a single persistent session so are usable across chunks just like R objects. R and Python objects are also shared across languages with conversions done automatically when required (e.g. from Pandas data frame to R data frame or NumPy 2D array to R matrix). The article on Calling Python from R describes the various ways to access Python objects from R as well as functions available for more advanced interactions and conversion behavior. R Notebooks can also display matplotlib plots inline when they are printed from Python chunks. See the article on the reticulate R Markdown Python Engine for full details on using Python chunks within R Markdown documents, including how to call Python code from R chunks and vice-versa.

Python Scripts

You can execute code from Python scripts line-by-line using the Run button (or Ctrl+Enter) in the same way as you execute R code line-by-line. RStudio will automatically switch into reticulate’s repl_python() mode whenever you execute lines from a Python script. Type exit from the Python REPL to exit back into R (RStudio will also automatically switch back to R mode whenever you execute code from an R script). Any Python objects created within the REPL are immediately available to the R session via the reticulate::py object (e.g. in the example above you could access the pandas object via py$s).
In addition, RStudio now provides code completion and inline help for Python scripts.

Sourcing Scripts

Click the editor’s Source Script button (or the Ctrl+Shift+Enter shortcut) within a Python source file to execute a script using reticulate’s source_python() function. Objects created within the script will be made available as top-level objects in the R global environment.

Why reticulate?

Since we released the package, we’re often asked what the source of the name “reticulate” is. Here’s what Wikipedia says about the reticulated python:

The reticulated python is a species of python found in Southeast Asia. They are the world’s longest snakes and longest reptiles... The specific name, reticulatus, is Latin meaning “net-like”, or reticulated, and is a reference to the complex colour pattern.

And here’s the Merriam-Webster definition of reticulate:

1: resembling a net or network; especially: having veins, fibers, or lines crossing a reticulate leaf.
2: being or involving evolutionary change dependent on genetic recombination involving diverse interbreeding populations.

The package enables you to reticulate Python code into R, creating a new breed of project that weaves together the two languages. The RStudio v1.2 Preview Release provides lots of enhancements for reticulated Python. Check it out and let us know what you think on RStudio Community and GitHub.
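
As a small concrete illustration of source_python() (my own sketch; the file name, function, and column names are made up): save a Python file like the one below, run reticulate::source_python("flights.py") from R, and read_flights() becomes callable from the R session, with the returned pandas data frame converted to an R data frame automatically.

# flights.py - a hypothetical helper to be sourced from R via
# reticulate::source_python("flights.py")
import pandas as pd

def read_flights(path):
    # Read a CSV and keep a few columns; the column names here
    # are assumptions for the sake of the example.
    flights = pd.read_csv(path)
    return flights[["carrier", "dep_delay", "arr_delay"]]
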
          How to build your own Neural Network from scratch in R
Last week I ran across this great post on creating a neural network in Python. It walks through the very basics of neural networks and creates a working example using Python. I enjoyed the simple hands-on approach the author used, and I was interested to see how we might make the same model using R.

In this post we recreate the above-mentioned Python neural network from scratch in R. Our R refactor is focused on simplicity and understandability; we are not concerned with writing the most efficient or elegant code. Our very basic neural network will have 2 layers. Below is a diagram of the network.

For background information, please read over the Python post. It may be helpful to open the Python post and compare the chunks of Python code to the corresponding R code below. The full Python code to train the model is not available in the body of the Python post, but fortunately it is included in the comments; so, scroll down on the Python post if you are looking for it. Let’s get started with R!

Create Training Data

First, we create the data to train the neural network.

# predictor variables X
          Full Stack Software Engineer - Parsons - Columbia, MD
Parsons Cyber Operations is seeking Software Engineers with experience in Python, JavaScript, and Linux systems to join our team of exceptional individuals....
From Parsons - Fri, 21 Sep 2018 07:27:23 GMT - View all Columbia, MD jobs
          Wanted for a permanent position: Full Stack developer
To develop video and animation products for television broadcast, full-time, wanted: a Full Stack developer with a front-end specialization. 2-3 years of experience. Strong self-management skills. Ability to work in a team. Willingness to take on challenging, high-profile projects in Israel and abroad. Required: proficiency in HTML5, CSS3, JavaScript, NodeJS; proficiency in React or Angular; familiarity with WebGL; experience working with databases (SQL, NoSQL). A huge advantage: proficiency in additional programming languages (C++, Python, Logo); work experience in motion graphics / a design or animation background; the ability and willingness to learn new tools and fields...
          Secure development: a security analysis of passing arguments as arrays to several functions of the subprocess library
0x00. Introduction. At present, security issues related to Python are attracting more and more attention. This article takes the most commonly used library for calling external programs (also known as […]
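
To illustrate the article's topic with my own sketch (not taken from the original): passing arguments as an array avoids shell interpretation, whereas shell=True with string concatenation is injectable.

import subprocess

filename = "report.txt; rm -rf /tmp/x"  # hostile input

# Unsafe: the whole string is handed to the shell, so the ';'
# starts a second command.
# subprocess.run("cat " + filename, shell=True)

# Safer: the array form passes each element as a literal argv entry,
# so the malicious filename is treated as a single (odd) file name.
subprocess.run(["cat", filename])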
          Python in Visual Studio Code – September 2018 Release
We are pleased to announce that the September 2018 release of the Python Extension for Visual Studio Code is now available. You can download the Python extension from the marketplace, or install it directly from the extension gallery in Visual Studio Code. You can learn more about Python support in Visual Studio Code in the documentation. In this release...
          Python Developer - MJDP Resources, LLC - Radnor, PA
Assemble large, complex data sets that meet business requirements and power machine learning algorithms. EC2, Lambda, ECS, S3.... $30 - $40 an hour
From Indeed - Tue, 18 Sep 2018 14:44:55 GMT - View all Radnor, PA jobs
          Come with me on a journey through this website's source code by way of a bug

This article started because I thought that researching a particular bug could be useful to understand devto's source code a little better. So I literally wrote this while doing the research and I ended up writing about other things as well. Here we go.

The bug I'm referring to is called Timeouts when deleting a post on GitHub. As the title implies, removing a post causes a server timeout, which surfaces as an error to the user. @peter in his bug report added a couple of details we need to keep in mind for our "investigation": this bug doesn't always happen (a deterministic bug would be easier to fix) and it more likely presents itself with articles that have many reactions and comments.

First clues: it happens sometimes and usually with articles with a lot of data attached.

Let's see if we can dig up more information before diving into the code.

A note: I'm writing this post to explain (and expand) the way I researched this while it happened at the same time (well, over the course of multiple days but still at the same time :-D), so all discoveries were new to me when I wrote about them as they are to you if you read this.

Another note: I'm going to use the terms "async" and "out of process" interchangeably here. Async in this context means "the user doesn't wait for the call to be executed," not "async" as in JavaScript. A better term is "out of process," because these asynchronous calls are executed by an external process, through a queue on the database, with a library/gem called delayed job.

Referential integrity

ActiveRecord (Rails's ORM), like many other object relational mappers, is an object layer that sits on top of a relational database system. Let's take a little detour and talk about a fundamental feature for preserving data meaningfulness in database systems: referential integrity. Why not, the bug can wait!

Referential integrity, to simplify a lot, is a defense against developers with weird ideas on how to structure their relational data. It forbids the insertion of rows that have no correspondence in the primary table of the relationship. In layman's terms, it guarantees that there is a corresponding row in the relationship: if you have a table with a list of 10 cities, you shouldn't have a customer whose address belongs to an unknown city. Funnily enough, it took more than a decade for MySQL to activate referential integrity by default, while PostgreSQL had already had it for 10 years at the time. Sometimes I think that MySQL in its early incarnations was a giant collection of CSV files with SQL on top. I'm joking, maybe.

With referential integrity in place you can rest (mostly) assured that the database won't let you add zombie rows, will keep the relationship updated and will clean up after you if you tell it to.

How do you instruct the database to do all of these things? It's quite simple. I'll use an example from PostgreSQL 10 documentation:

CREATE TABLE products (
    product_no integer PRIMARY KEY,
    name text,
    price numeric
);

CREATE TABLE orders (
    order_id integer PRIMARY KEY,
    shipping_address text,
);

CREATE TABLE order_items (
    product_no integer REFERENCES products ON DELETE RESTRICT,
    order_id integer REFERENCES orders ON DELETE CASCADE,
    quantity integer,
    PRIMARY KEY (product_no, order_id)
);

The table order_items has two foreign keys, one towards orders and another that points to products (a classic example of many-to-many in case you're wondering).

When you design tables like this you should ask yourself the following questions (in addition to the obvious ones like "what am I really doing with this data?"):

  • what happens if a row in the primary table is deleted?

  • do I want to delete all the related rows?

  • do I want to set the referencing column to NULL? in that case what does it mean for my business logic? does NULL even make sense for my data?

  • do I want to set the column to its default value? what does it mean for my business logic? does this column even have a default value?

If you look back at the example what we're telling the database are the following two things:

  • products cannot be removed, unless they do not appear in any order

  • orders can be removed at all times, and they take the items with them to the grave 😀

Keep in mind that removal in this context is still a fast operation: even in a context like dev.to's, if an article had tables linked with a cascade directive, deleting it should still be fast. DBs tend to become slow when a single DELETE triggers millions (or tens of millions) of other removals. I assume this is not the case (yet or in the future), but since the point of this whole section is to expand our knowledge about referential integrity and not to actually investigate the bug, let's keep on digging.
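
If you want to see the two behaviors for yourself without setting up PostgreSQL, here is a self-contained sketch of mine using Python's sqlite3 module (an assumption worth flagging: SQLite only enforces foreign keys after PRAGMA foreign_keys = ON, and the schema is a trimmed version of the one above):

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # off by default in SQLite
conn.executescript("""
CREATE TABLE products (product_no INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE orders (order_id INTEGER PRIMARY KEY, shipping_address TEXT);
CREATE TABLE order_items (
  product_no INTEGER REFERENCES products ON DELETE RESTRICT,
  order_id INTEGER REFERENCES orders ON DELETE CASCADE,
  quantity INTEGER,
  PRIMARY KEY (product_no, order_id)
);
INSERT INTO products VALUES (1, 'widget');
INSERT INTO orders VALUES (10, 'somewhere');
INSERT INTO order_items VALUES (1, 10, 3);
""")

# Deleting the order cascades: its items disappear with it.
conn.execute("DELETE FROM orders WHERE order_id = 10")
print(conn.execute("SELECT count(*) FROM order_items").fetchone())  # (0,)

# Deleting a product still referenced by an item is restricted.
conn.execute("INSERT INTO orders VALUES (11, 'elsewhere')")
conn.execute("INSERT INTO order_items VALUES (1, 11, 1)")
try:
    conn.execute("DELETE FROM products WHERE product_no = 1")
except sqlite3.IntegrityError as exc:
    print("blocked:", exc)  # FOREIGN KEY constraint failed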

Next we open the console and check if the tables are linked to each other, using psql:

$ rails dbconsole
psql (10.5)
Type "help" for help.

PracticalDeveloper_development=# \d+ articles
...
Indexes:
    "articles_pkey" PRIMARY KEY, btree (id)
    "index_articles_on_boost_states" gin (boost_states)
    "index_articles_on_featured_number" btree (featured_number)
    "index_articles_on_hotness_score" btree (hotness_score)
    "index_articles_on_published_at" btree (published_at)
    "index_articles_on_slug" btree (slug)
    "index_articles_on_user_id" btree (user_id)

This table has a primary key, a few indexes but apparently no foreign key constraints (the indicators for referential integrity). Compare it with a table that has both:

PracticalDeveloper_development=# \d users
...
Indexes:
    "users_pkey" PRIMARY KEY, btree (id)
    "index_users_on_confirmation_token" UNIQUE, btree (confirmation_token)
    "index_users_on_reset_password_token" UNIQUE, btree (reset_password_token)
    "index_users_on_username" UNIQUE, btree (username)
    "index_users_on_language_settings" gin (language_settings)
    "index_users_on_organization_id" btree (organization_id)
Referenced by:
    TABLE "messages" CONSTRAINT "fk_rails_273a25a7a6" FOREIGN KEY (user_id) REFERENCES users(id)
    TABLE "badge_achievements" CONSTRAINT "fk_rails_4a2e48ca67" FOREIGN KEY (user_id) REFERENCES users(id)
    TABLE "chat_channel_memberships" CONSTRAINT "fk_rails_4ba367990a" FOREIGN KEY (user_id) REFERENCES users(id)
    TABLE "push_notification_subscriptions" CONSTRAINT "fk_rails_c0b1e39717" FOREIGN KEY (user_id) REFERENCES users(id)

This is what we learned so far: why referential integrity can come into play when you remove rows from a DB and that the articles table has no apparent relationships with any other table at the database level. But is this true in the web app? Let's move up one layer, diving into the Ruby code.

P.S. In the case of Rails (I don't remember since which version) you can also see which foreign keys you have defined by looking in the schema.rb file.

ActiveRecord, associations and callbacks

Now that we know what referential integrity is, how to identify it, and that it's not at play in this bug, we can move up a layer and check how the Article object is defined (I'll skip stuff that I think is not related to this article and the bug itself, although I might be wrong because I don't know the code base well):

class Article < ApplicationRecord
  # ...

  has_many :comments,       as: :commentable
  has_many :buffer_updates
  has_many :reactions,      as: :reactable, dependent: :destroy
  has_many  :notifications, as: :notifiable

  # ...

  before_destroy    :before_destroy_actions

  # ...

  def before_destroy_actions
    bust_cache
    remove_algolia_index
    reactions.destroy_all
    user.delay.resave_articles
    organization&.delay&.resave_articles
  end
end

A bunch of new information from that piece of code:

  • Rails (but not the DB) knows that an article can have many comments, buffer updates, reactions and notifications (these are called "associations" in Rails lingo)

  • Reactions are explicitly dependent on the article and will be destroyed if the article is removed

  • There's a callback that does a bunch of stuff (we'll explore it later) before the object and its row in the database are destroyed

  • Three out of four associations are of the *-able type; Rails calls these polymorphic associations because they allow the programmer to associate multiple types of objects to the same row, using two different columns (a string with the name of the model type the object belongs to and an id). They are very handy, though I always felt they make the database very dependent on the domain model (set by Rails). They can also require a composite index in the associated table to speed up queries

Similarly to what the underlying database system can do, ActiveRecord allows the developer to specify what happens to the related objects when the primary one is destroyed. According to the documentation Rails supports: destroying all related objects, deleting all related objects, setting the foreign key to NULL or restricting the removal with an error. The difference between destroy and delete is that in the former case all related callbacks are executed prior to removal, in the latter one the callbacks are skipped and only the row in the DB is removed.

The default strategy for relationships without a dependent option is to do nothing, which means leaving the referenced rows in place. If it were up to me, the default would be that the app doesn't start until you decide what to do with the linked models, but I'm not the person who designed ActiveRecord.

Keep in mind that the database trumps the code: if you define nothing at the Rails level but the database is configured to automatically destroy all related rows, then the rows will be destroyed. This is one of the many reasons why it's worth taking the time to learn how the DB works :-)

The last bit of the model layer we haven't talked about is the callback, which is probably where the bug manifests itself.

The infamous callback

This before_destroy callback will execute prior to issuing the DELETE statement to the DB:

def before_destroy_actions
  bust_cache
  remove_algolia_index
  reactions.destroy_all
  user.delay.resave_articles
  organization&.delay&.resave_articles
end

Cache busting

The first thing the callback does is call the method bust_cache, which in turn calls the Fastly API six times sequentially to purge the article's cache (each call to bust is two HTTP calls). It also makes a conspicuous number of out-of-process calls to the same API (around 20-50, depending on the status of the article and the number of tags), but these don't matter because the user won't wait for them.

One thing to note: six HTTP calls always go out after you press the button to delete an article.

Index removal

dev.to uses Algolia for search; the call to remove_algolia_index does the following:

  • calls algolia_remove_from_index!, which in turn calls the "async" version of the Algolia HTTP API, which in reality makes a (fast) synchronous call to Algolia without waiting for the index to be cleared on their side. It's still a synchronous call that adds to the user's latency

  • calls Algolia's HTTP API two more times for other indexes

So, adding the previous 6 HTTP calls for Fastly, we're at 9 API calls made in process.

Reactions destruction

The third step is reactions.destroy_all which, as the name implies, destroys all the reactions to the article. In Rails, destroy_all simply iterates over all the objects and calls destroy on each of them, which in turn activates all the "destroy" callbacks for proper cleanup. The Reaction model has two before_destroy callbacks:

class Reaction < ApplicationRecord
  # ...

  before_destroy :update_reactable_without_delay
  before_destroy :clean_up_before_destroy

  # ...
end

I had to dig a little bit to find out what the first one does (one of the things I dislike about the Rails way of doing things is the magical methods popping up everywhere; they make refactoring harder and they encourage coupling between the model and the various gems). update_reactable_without_delay calls update_reactable (which has been declared as an async function by default) bypassing the queue. The result is a standard inline call the user waits for.

  • update_reactable recalculates (this time out of process) the scores of the Article (a thing that should probably be avoided since the Article is up for removal) if the article has been published. Then (back inline) it reindexes the article (twice) calling Algolia, removes the reactions from Fastly's cache (each call to bust the cache is two Fastly calls), busts another cache (two more HTTP calls) and possibly updates a column on the Article (which is probably not needed since it's going to be removed). The total is 6 HTTP calls: one async HTTP call (the first one to Algolia), one other call to Algolia and four to Fastly. Let's note down the 5 the user has to wait for.

  • clean_up_before_destroy reindexes the article on Algolia (a third time).

Let's sum up: the removal of a reaction amounts to 6 HTTP calls. If the article has 100 reactions... well, you can do the math.

Let's say the article had 1 reaction; adding the calls tallied before, we're at around 15 HTTP calls:

  • 6 to bust the cache of the article

  • 3 to remove the article from the index

  • 6 for the reaction attached to the article

There's an additional bonus HTTP call that I identified by chance using a gist to debug net/http calls; it calls the Stream.io API to delete the reaction from the user's feed. That makes a total of 16 HTTP calls.

This is what happens when a reaction is destroyed (I added the awesome gem httplog to my local installation):

[httplog] Sending: PUT https://REDACTED.algolia.net/1/indexes/Article_development/25
[httplog] Data: {"title":" The Curious Incident of the Dog in the Night-Time"}
[httplog] Connecting: REDACTED.algolia.net:443
[httplog] Status: 200
[httplog] Benchmark: 0.357128 seconds
[httplog] Response:
{"updatedAt":"2018-10-09T17:23:44.151Z","taskID":945887592,"objectID":"25"}

[httplog] Sending: PUT https://REDACTED.algolia.net/1/indexes/searchables_development/articles-25
[httplog] Data: {"title":" The Curious Incident of the Dog in the Night-Time","tag_list":["discuss","security","python","beginners"],"main_image":"https://pigment.github.io/fake-logos/logos/medium/color/8.png","id":25,"featured":true,"published":true,"published_at":"2018-09-30T07:44:48.530Z","featured_number":1538293488,"comments_count":1,"reactions_count":0,"positive_reactions_count":0,"path":"/willricki/-the-curious-incident-of-the-dog-in-the-night-time-3e4b","class_name":"Article","user_name":"Ricki Will","user_username":"willricki","comments_blob":"Waistcoat craft beer pickled vice seitan kombucha drinking. 90's green juice hoodie.","body_text":"\n\nMeggings tattooed normcore kitsch chia. Fixie migas etsy hashtag jean shorts neutra pork belly. Vice salvia biodiesel portland actually slow-carb loko chia. Freegan biodiesel flexitarian tattooed.\n\n\nNeque. \n\n\nBefore they sold out diy xoxo aesthetic biodiesel pbr\u0026amp;b. Tumblr lo-fi craft beer listicle. Lo-fi church-key cold-pressed.\n\n\n","tag_keywords_for_search":"","search_score":153832,"readable_publish_date":"Sep 30","flare_tag":{"name":"discuss","bg_color_hex":null,"text_color_hex":null},"user":{"username":"willricki","name":"Ricki Will","profile_image_90":"/uploads/user/profile_image/6/22018b1a-7afa-47c1-bbae-b829977828e4.png"},"_tags":["discuss","security","python","beginners","user_6","username_willricki","lang_en"]}
[httplog] Status: 200
[httplog] Benchmark: 0.031995 seconds
[httplog] Response:
{"updatedAt":"2018-10-09T17:23:44.426Z","taskID":945887612,"objectID":"articles-25"}

[httplog] Sending: PUT https://REDACTED.algolia.net/1/indexes/ordered_articles_development/articles-25
[httplog] Data: {"title":" The Curious Incident of the Dog in the Night-Time","path":"/willricki/-the-curious-incident-of-the-dog-in-the-night-time-3e4b","class_name":"Article","comments_count":1,"tag_list":["discuss","security","python","beginners"],"positive_reactions_count":0,"id":25,"hotness_score":153829,"readable_publish_date":"Sep 30","flare_tag":{"name":"discuss","bg_color_hex":null,"text_color_hex":null},"published_at_int":1538293488,"user":{"username":"willricki","name":"Ricki Will","profile_image_90":"/uploads/user/profile_image/6/22018b1a-7afa-47c1-bbae-b829977828e4.png"},"_tags":["discuss","security","python","beginners","user_6","username_willricki","lang_en"]}
[httplog] Status: 200
[httplog] Benchmark: 0.047077 seconds
[httplog] Response:
{"updatedAt":"2018-10-09T17:23:44.494Z","taskID":945887622,"objectID":"articles-25"}

[httplog] Sending: PUT https://REDACTED.algolia.net/1/indexes/Article_development/25
[httplog] Data: {"title":" The Curious Incident of the Dog in the Night-Time"}
[httplog] Status: 200
[httplog] Benchmark: 0.029352 seconds
[httplog] Response:
{"updatedAt":"2018-10-09T17:23:44.541Z","taskID":945887632,"objectID":"25"}

[httplog] Sending: PUT https://REDACTED.algolia.net/1/indexes/searchables_development/articles-25
[httplog] Data: {"title":" The Curious Incident of the Dog in the Night-Time","tag_list":["discuss","security","python","beginners"],"main_image":"https://pigment.github.io/fake-logos/logos/medium/color/8.png","id":25,"featured":true,"published":true,"published_at":"2018-09-30T07:44:48.530Z","featured_number":1538293488,"comments_count":1,"reactions_count":0,"positive_reactions_count":1,"path":"/willricki/-the-curious-incident-of-the-dog-in-the-night-time-3e4b","class_name":"Article","user_name":"Ricki Will","user_username":"willricki","comments_blob":"Waistcoat craft beer pickled vice seitan kombucha drinking. 90's green juice hoodie.","body_text":"\n\nMeggings tattooed normcore kitsch chia. Fixie migas etsy hashtag jean shorts neutra pork belly. Vice salvia biodiesel portland actually slow-carb loko chia. Freegan biodiesel flexitarian tattooed.\n\n\nNeque. \n\n\nBefore they sold out diy xoxo aesthetic biodiesel pbr\u0026amp;b. Tumblr lo-fi craft beer listicle. Lo-fi church-key cold-pressed.\n\n\n","tag_keywords_for_search":"","search_score":154132,"readable_publish_date":"Sep 30","flare_tag":{"name":"discuss","bg_color_hex":null,"text_color_hex":null},"user":{"username":"willricki","name":"Ricki Will","profile_image_90":"/uploads/user/profile_image/6/22018b1a-7afa-47c1-bbae-b829977828e4.png"},"_tags":["discuss","security","python","beginners","user_6","username_willricki","lang_en"]}
[httplog] Status: 200
[httplog] Benchmark: 0.028819 seconds
[httplog] Response:
{"updatedAt":"2018-10-09T17:23:44.612Z","taskID":945887642,"objectID":"articles-25"}

[httplog] Sending: PUT https://REDACTED.algolia.net/1/indexes/ordered_articles_development/articles-25
[httplog] Data: {"title":" The Curious Incident of the Dog in the Night-Time","path":"/willricki/-the-curious-incident-of-the-dog-in-the-night-time-3e4b","class_name":"Article","comments_count":1,"tag_list":["discuss","security","python","beginners"],"positive_reactions_count":1,"id":25,"hotness_score":153829,"readable_publish_date":"Sep 30","flare_tag":{"name":"discuss","bg_color_hex":null,"text_color_hex":null},"published_at_int":1538293488,"user":{"username":"willricki","name":"Ricki Will","profile_image_90":"/uploads/user/profile_image/6/22018b1a-7afa-47c1-bbae-b829977828e4.png"},"_tags":["discuss","security","python","beginners","user_6","username_willricki","lang_en"]}
[httplog] Status: 200
[httplog] Benchmark: 0.02821 seconds
[httplog] Response:
{"updatedAt":"2018-10-09T17:23:44.652Z","taskID":945887652,"objectID":"articles-25"}

[httplog] Connecting: us-east-api.stream-io-api.com:443
[httplog] Sending: DELETE http://us-east-api.stream-io-api.com:443/api/v1.0/feed/user/10/Reaction:7/?api_key=REDACTED&foreign_id=1
[httplog] Data:
[httplog] Status: 200
[httplog] Benchmark: 0.336152 seconds
[httplog] Response:
{"removed":"Reaction:7","duration":"17.84ms"}

If you count them, they are 7, not 16. That's because the calls to Fastly are only executed in production.

Resaving articles

User.resave_articles refreshes the user's other articles and is called out of process, so it's not interesting to us right now. The same happens for the organization, if the article belongs to one, so again we don't care.

Let's recap what we know so far. Each article removal triggers a callback that does a lot of things touching the third-party services that help this website be as fast as it is, and it also updates various counters I didn't really investigate :-D.

What happens when the article is removed

After the callback has been dealt with and all the various caches are up to date and the reactions are gone from the database, we still need to check what happens to the other associations of the article we're removing. As you recall every article can possibly have comments, reactions (gone by now), buffer updates (not sure what they are) and notifications.

Let's see what happens when we destroy an article to see if we can get other clues. I replaced a long log with my summaries:

> art = Article.last # article id 25
> art.destroy!
# its tags are destroyed, this is handled by "acts_as_taggable_on :tags"...
# a bunch of other tag related stuff happens, 17 select calls...
# the aforementioned HTTP calls for each reaction are here too...
# there's a SQL DELETE for each reaction...
# the user object is updated...
# a couple of other UPDATEs I didn't investigate but which seem really plausible...
# the HTTP calls to remove the article itself from search...
# the article is finally deleted from the db...
# the article count for the user is updated

Aside from the fact that in my initial overview I totally forgot about the destruction of the tags (they amount to a DELETE and an UPDATE each in the database), I would say there's a lot going on when an article is removed.

What happens to the rest of the objects we didn't find in the console?

If you remember from what I said earlier, in Rails relationships everything not marked explicitly as "dependent" survives the destruction of the primary object, so they are all still in the DB:

PracticalDeveloper_development=# select count(*) from comments where commentable_id = 25;
 count
-------
     3
PracticalDeveloper_development=# select count(*) from notifications where notifiable_id = 25;
 count
-------
     2
PracticalDeveloper_development=# select count(*) from buffer_updates where article_id = 25;
 count
-------
     1

I think we can be fairly confident that the issue that sparked this article is likely to manifest when an article is really popular before being removed, having many reactions, comments and notifications.

Timeout

Another factor that I mentioned in a comment on the issue is Heroku's default timeout setting. dev.to, IIRC, runs on Heroku, which has a 30 second timeout for HTTP calls once the router has processed them (so it's a timer for your app code). If the app doesn't respond in 30 seconds, it times out and sends an error.

dev.to, savvily, cuts this timeout in half using rack-timeout's default service timeout, which is 15 seconds.

In brief: if after hitting the "remove article" button the server doesn't finish in 15 seconds, a timeout error is raised. Having seen that a popular article can trigger dozens of HTTP calls, you can understand why in some cases the 15 second wall can be hit.

Recap

Let's recap what we learned so far about what happens when an article is removed:

  • referential integrity can be a factor if the article has millions of related rows (unlikely in this scenario)

  • Rails removing associated objects sequentially is a factor (considering that it also has to load those objects from the DB into the ORM before removing them, because that's required to trigger the various callbacks)

  • Inline callbacks and HTTP calls are another factor

  • Rails is not smart at all here, because it could decrease the number of calls to the DB (for example by buffering the DELETE statements for all reactions into a single statement with an IN clause, e.g. DELETE FROM reactions WHERE id IN (1, 2, 3))

  • Rails magic is sometimes annoying 😛

Possible solutions

This is where I stop for now because I'm not familiar with the code base (well, after this definitely more :D) and because I think it could be an interesting "collective" exercise since it's not a critical bug that needs to be fixed "yesterday".

At first, the simplest solution that could pop into one's mind is to move everything that happens inline when an article is removed into an out-of-process call, by delegating everything to a job that is picked up by the queue manager. The user just needs the article gone from their view, after all. The proper removal can happen in a worker process. Aside from the fact that I'm not sure I considered everything that's going on (I found out about tags by chance, as you saw) and all the implications, I think this is just a quick win. It would fix the user's problem by sweeping the reported issue under the rug.

Another possible solution is to split the removal into its two main parts: the caches need to be updated or emptied, and the rows need to be removed from the DB. The caches can all be destroyed out of process, so the user doesn't have to wait for Fastly or Algolia (maybe only for Stream.io? I don't know). This requires a bit of refactoring, because some of the code I talked about is also used by other parts of the app.

A more complete solution is to go a step further than the second solution and also clean up all the leftovers (comments, notifications and buffer updates), but there might be a reason why they are left there in the first place. All three of these entities can be removed in a separate job, because two out of the three have before_destroy callbacks which trigger other stuff I haven't looked at.

This should definitely be enough for the user to never encounter the pesky timeout error again. To go the extra mile we could also look into the fact that ActiveRecord issues a single DELETE for each object it removes from the database, but this is definitely too much for now. I would note this down somewhere and come back to it after the refactoring, if needed.

Conclusions

If you are still with me, thank you. It took me quite a while to write this :-D

I don't have any mighty conclusions. I hope this deep dive into dev.to's source code served at least a purpose. For me it has been a great way to learn a bit more and write about something that non-Rails developers in here might not know, but more importantly to help potential contributors.

I'm definitely hoping for some feedback ;-)


          Using PM2 to manage NodeJS cluster (3/4)

The cluster module allows us to create worker processes to improve our NodeJS applications' performance. This is especially important in web applications, where a master process receives all the requests and load-balances them among the worker processes.

But all this power comes at a cost: the application itself must manage all the complexity associated with process management: what happens if a worker process exits unexpectedly, how to exit the worker processes gracefully, what to do if you need to restart all your workers, etc.

In this post we present the PM2 tool. Although it is a general process manager (that is, it can manage any kind of process, like Python or Ruby programs, and not only NodeJS processes), the tool is especially designed to manage NodeJS applications that want to work with the cluster module.

More on this series:

  1. Understanding the NodeJS cluster module
  2. Using cluster module with HTTP servers
  3. Using PM2 to manage a NodeJS cluster
  4. Graceful shutdown NodeJS HTTP server when using PM2

Introducing PM2

As said previously, PM2 is a general process manager, that is, a program that controls the execution of other processes (like a Python program that checks if you have new emails) and does things like: check that your process is running, re-execute your process if for some reason it exits unexpectedly, log its output, etc.

The most important thing for us is that PM2 simplifies running NodeJS applications as a cluster. Yes, you write your application without worrying about the cluster module, and it is PM2 that creates a given number of worker processes to run your application.
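As an aside, since PM2 is language-agnostic, the same supervision works for a plain Python worker too. A minimal sketch, assuming a hypothetical file mailcheck.py started with $ pm2 start mailcheck.py (PM2 infers the interpreter from the file extension, and --interpreter can override it):

# mailcheck.py - toy long-running worker, only to illustrate that PM2
# can supervise non-Node processes as well
import time

while True:
    # a real worker would poll a mailbox here
    print("checking mailbox...")
    time.sleep(60)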

The hard part of cluster module

Let's see an example where we create a very basic HTTP server using the cluster module. The master process will spawn as many workers as there are CPUs, and will watch the workers so that if any of them exits, a new worker is spawned.

const cluster = require('cluster');
const http = require('http');
const numCPUs = require('os').cpus().length;

if (cluster.isMaster) {
  masterProcess();
} else {
  childProcess();  
}

function masterProcess() {
  console.log(`Master ${process.pid} is running`);

  for (let i = 0; i < numCPUs; i++) {
    console.log(`Forking process number ${i}...`);

    cluster.fork();
  }

  cluster.on('exit', (worker, code, signal) => {
    console.log(`Worker ${worker.process.pid} died`);
    console.log(`Forking a new process...`);

    cluster.fork();
  });
}

function childProcess() {
  console.log(`Worker ${process.pid} started...`);

  http.createServer((req, res) => {
    res.writeHead(200);
    res.end('Hello World');

    process.exit(1);
  }).listen(3000);
}

The worker process is a very simple HTTP server listening on port 3000 and programmed to return a Hello World and exit (to simulate a failure).

If we run the program with $ node app.js the output will show something like:

$ node app.js

Master 2398 is running
Forking process number 0...
Forking process number 1...
Worker 2399 started...
Worker 2400 started...

If we point a browser at the URL http://localhost:3000 we will get a Hello World, and in the console we'll see something like:

Worker 2400 died
Forking a new process...
Worker 2401 started...

That's very nice. Now let's see how PM2 can simplify our application.

The PM2 way

Before continuing, you need to install PM2 on your system. Typically it is installed as a global module with $ npm install pm2 -g or $ yarn global add pm2.

When using PM2 we can forget the part of the code related to the master process (that will be PM2's responsibility), so our very basic HTTP server can be rewritten as:

const http = require('http');

console.log(`Worker ${process.pid} started...`);

http.createServer((req, res) => {
  res.writeHead(200);
  res.end('Hello World');

  process.exit(1);
}).listen(3000);

Now run PM2 with $ pm2 start app.js -i 3. Note the -i option, used to indicate the number of instances to create; the idea is for that number to match your number of CPU cores. If you don't know it, you can set -i 0 to let PM2 detect it automatically. You will see an output similar to:

$ pm2 start app.js -i 3

[PM2] Starting /Users/blablabla/some-project/app.js in cluster_mode (3 instances)
[PM2] Done.

| Name      | mode    | status | ↺ | cpu | memory    |
| ----------|---------|--------|---|-----|-----------|
| app       | cluster | online | 0 | 23% | 27.1 MB   |
| app       | cluster | online | 0 | 26% | 27.3 MB   |
| app       | cluster | online | 0 | 14% | 25.1 MB   |

We can see the application logs by running $ pm2 logs. Now when accessing the http://localhost:3000 URL we will see logs similar to:

PM2        | App name:app id:0 disconnected
PM2        | App [app] with id [0] and pid [1299], exited with code [1] via signal [SIGINT]
PM2        | Starting execution sequence in -cluster mode- for app name:app id:0
PM2        | App name:app id:0 online
0|app      | Worker 1489 started...

We can see how the PM2 process detects that one of our workers has exited and automatically starts a new instance.

Conclusions

Although the NodeJS cluster module is a powerful mechanism to improve performance, it comes at the cost of the complexity required to manage all the situations an application can encounter: what happens if a worker exits, how can we reload the application cluster without downtime, etc.

PM2 is a process manager specially designed to work with NodeJS clusters. It allows us to cluster an application and restart or reload it without the code complexity, in addition to offering tools to view log output, monitoring, and more.

References

Node.js clustering made easy with PM2


          python,mysql,qt programmer needed
Hello, I have an ongoing project in Python using PyQt4. I need it done ASAP. The project is 75% complete. The database is designed, the UI is there. If you have 100+ hours of experience programming in Python, it is an easy job... (Budget: $250 - $750 USD, Jobs: Python)
          Python: World’s Most Popular Language in 2018
According to The Economist, Python is “becoming the world’s most popular coding language”. The article includes a chart showing just how popular the language has become. There’s a lot of interesting information in the article, and there’s some interesting conversation going on in a related Reddit thread.
          How to Export Jupyter Notebooks into Other Formats
When working with Jupyter Notebook, you will find yourself needing to distribute your Notebook as something other than a Notebook file. The most likely reason is that you want to share the content of your Notebook with non-technical users who don't want to install Python or the other dependencies necessary to use your Notebook.
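For what it's worth, the standard tool for this is nbconvert, which also ships with the jupyter nbconvert command-line interface. A minimal sketch using its Python API (filenames hypothetical):

from nbconvert import HTMLExporter

exporter = HTMLExporter()
# from_filename returns the converted document plus any extracted resources
body, resources = exporter.from_filename("analysis.ipynb")

with open("analysis.html", "w", encoding="utf-8") as f:
    f.write(body)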
          Lynda.com: Eclipse Essential Training
Eclipse is an industry-standard IDE and a critical tool for developers who want to build projects in multiple languages. In this course, Todd Perkins shows how to effectively use Eclipse's built-in tools and extensions to create, code, test, and debug projects in Java, PHP, C/C++, Perl, and Python. He'll show how to adapt the Eclipse workflow to the nuances of each language, and integrate with Git for version control. By the end of the course, developers will be able to wield all of Eclipse's most essential features with confidence.
          Lynda.com: Building a Personal Portfolio with Django
Django—an open-source web framework that's designed on top of Python—can help you quickly bring your website ideas to life. In this course, learn the basics of Django for web development by building your own website—a personal portfolio—from the ground up. Instructor Nick Walter steps through how to create a database, design the layout for your website, and add and update URL paths. Learn how to connect your Django project to Postgres, add static files and URLs, and more.
          Programming Atmel with atprogram in LabVIEW

Hi everyone,

 

I have created a VI for programming an Atmel processor and it somehow works, but I don't know why. I have installed Atmel Studio, which comes with a command prompt utility. The utility itself opens another command-line program called atprogram.exe, which can be called with the commands and arguments I need for the System Exec.vi. An example command is: atprogram -t samice -i JTAG -d ATSAM4S8C chiperase. If I run this command I get the error WindowsError: [Error 6] The handle is invalid, together with some Python errors at line x.

 

I managed to get rid of this error by simply connecting the commands and arguments to the standard input of the System Exec.vi too (lucky guess). Does anyone know why this only works when the commands and arguments are connected to both inputs of the System Exec.vi?

 

I have placed the VI in the attachment, I could not find a similar solution so maybe it can be helpful to somebody.
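For what it's worth, the symptom can be reproduced outside LabVIEW. One plausible explanation (a guess, not Atmel-documented behaviour) is that atprogram, which apparently involves Python internally given the errors quoted above, expects a valid standard-input handle; supplying one explicitly avoids the invalid-handle error, loosely what wiring the string into "standard input" on System Exec.vi achieves. A rough Python analogue:

import subprocess

# the same chiperase command from the post; assumes atprogram is on PATH
cmd = ["atprogram", "-t", "samice", "-i", "JTAG", "-d", "ATSAM4S8C", "chiperase"]

# hand the child an explicit stdin handle (here the null device)
result = subprocess.run(cmd, stdin=subprocess.DEVNULL,
                        capture_output=True, text=True)
print(result.returncode)
print(result.stdout)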


          Missing zlib module

I have compiled and installed Python 2.7 on my Ubuntu Lucid.

But I am unable to install setuptools for Python 2.7 because the data decompression module zlib is not present. This is the exact error:

Traceback (most recent call last):
  File "setup.py", line 94, in <module>
    scripts = scripts,
  File "/usr/local/lib/python2.7/distutils/core.py", line 152, in setup
    dist.run_commands()
  File "/usr/local/lib/python2.7/distutils/dist.py", line 953, in run_commands
    self.run_command(cmd)
  File "/usr/local/lib/python2.7/distutils/dist.py", line 972, in run_command
    cmd_obj.run()
  File "/home/rohan/setuptools-0.6c11/setuptools/command/install.py", line 76, in run
    self.do_egg_install()
  File "/home/rohan/setuptools-0.6c11/setuptools/command/install.py", line 96, in do_egg_install
    self.run_command('bdist_egg')
  File "/usr/local/lib/python2.7/distutils/cmd.py", line 326, in run_command
    self.distribution.run_command(command)
  File "/usr/local/lib/python2.7/distutils/dist.py", line 972, in run_command
    cmd_obj.run()
  File "/home/rohan/setuptools-0.6c11/setuptools/command/bdist_egg.py", line 236, in run
    dry_run=self.dry_run, mode=self.gen_header())
  File "/home/rohan/setuptools-0.6c11/setuptools/command/bdist_egg.py", line 527, in make_zipfile
    z = zipfile.ZipFile(zip_filename, mode, compression=compression)
  File "/usr/local/lib/python2.7/zipfile.py", line 651, in __init__
    "Compression requires the (missing) zlib module"
RuntimeError: Compression requires the (missing) zlib module

Also when i try to use setuptools 2.7 .egg, it gives this error:

Traceback (most recent call last):
  File "<string>", line 1, in <module>
zipimport.ZipImportError: can't decompress data; zlib not available
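The usual cause is that the zlib development headers were not present when Python 2.7 was compiled, so the interpreter was built without the zlib module; on Ubuntu the common fix is to install zlib1g-dev and rebuild. A quick check for any interpreter:

# quick check: was this interpreter built with zlib support?
try:
    import zlib
    print("zlib OK:", zlib.ZLIB_VERSION)
except ImportError:
    # typical Ubuntu fix, then rebuild the interpreter from source:
    #   sudo apt-get install zlib1g-dev
    #   ./configure && make && sudo make install
    print("zlib missing - rebuild Python after installing the zlib headers")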


          Why GPIO Zero Is Better Than RPi.GPIO for Raspberry Pi Projects

The Raspberry Pi is the perfect computer for learning. The Linux-based Raspbian OS has Python built in, which makes it a great first system for beginner coders. Its General Purpose Input/Output (GPIO) pins make it easy for budding makers to experiment with DIY electronics projects.

It’s especially easy when you use code libraries that control these pins, and the popular RPi.GPIO Python library is an excellent example of such a library. But is it the best path for beginners? Join us as we investigate.

What Is GPIO Zero?

The GPIO Zero library is a Python library for working with GPIO pins. It was written by Raspberry Pi community manager Ben Nuttall. Aimed at being intuitive and “friendly,” it streamlines Python code for most regular Raspberry Pi use cases.

Combining simple naming practices and descriptive functions, GPIO Zero is more accessible for beginners to understand. Even seasoned users of the RPi.GPIO library may prefer it―and to understand why, let’s take a look at how RPi.GPIO compares to GPIO Zero.

What’s Wrong With RPi.GPIO?

Nothing. Nothing at all. RPi.GPIO was released in early 2012 by developer Ben Croston. It is a robust library allowing users to control GPIO pins from code. It features in almost every beginner Raspberry Pi project we’ve covered.

Despite its extensive use, RPi.GPIO was never designed for end users. It is a testament to RPi.GPIO’s good design that so many beginners use it nonetheless.

What’s So Good About GPIO Zero?

When you are learning Python, you learn that code should be easy to read and as short as possible. GPIO Zero aims to cover both points. Built on top of RPi.GPIO as a front-end language wrapper, it simplifies GPIO setup and usage.

Consider the following example, setting up and turning on an LED:


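(The original article embeds its code listings as images, which did not survive here; a representative RPi.GPIO sketch matching the surrounding description, with the pin number assumed, is:)

import RPi.GPIO as GPIO

LED_PIN = 18                     # BCM pin the LED is wired to (assumed)

GPIO.setmode(GPIO.BCM)           # select the BCM pin-numbering scheme
GPIO.setup(LED_PIN, GPIO.OUT)    # declare the pin as an output
GPIO.output(LED_PIN, GPIO.HIGH)  # turn the LED on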

The above code should be pretty familiar to anyone who has used their Pi to control LEDs.

The RPi.GPIO library is imported, and a pin for the LED is declared. The pin layout type is set up (BCM and BOARD mode are explained in our GPIO guide), and the pin is set up as an output. Then, the pin is turned on.

This approach makes sense, but the GPIO Zero way of doing it is much simpler:


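(Again the original listing was an image; the equivalent GPIO Zero sketch, same assumed pin, is:)

from gpiozero import LED

led = LED(18)  # pin setup happens behind the scenes
led.on()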

GPIO Zero has a module for LEDs, imported at the start. This means you can declare the pin number, and call the led.on() method.

Why Is GPIO Zero’s Approach Better?

There are some reasons why this method of working is an improvement on RPi.GPIO.

Firstly, it meets the “easy to read, short as possible” requirement. While the RPi.GPIO setup statements are easy enough to understand, they’re not necessary. An LED will always be an output, so GPIO Zero sets up the pins behind the scenes. The result is just three lines of code to set up, then light an LED.

You might notice that there is no board mode setup in the GPIO Zero example. The library only uses Broadcom (BCM) numbering for the pins. Library designer Ben Nuttall explains why in a 2015 RasPi.tv interview:

“BOARD numbering might seem simpler but I’d say it leads new users to think all the pins are general purpose―and they’re not. Connect an LED to pin 11, why not connect some more to pins 1, 2, 3 and 4? Well 1 is 3V3. 2 and 4 are 5V. A lack of awareness of what the purpose of the pins is can be dangerous.”

Put this way, it makes absolute sense to use the BCM numbers. Given that GPIO Zero will be standard in the Raspberry Pi documentation going forward, it’s worth learning!

Is GPIO Zero Actually Better?

While it seems more straightforward on the surface, does the new library have any problems? As with any new coding library, it is a matter of opinion. On the one hand, removing the setup code is excellent for beginners and seasoned coders alike. Writing code is more straightforward and quicker.

On the other hand, knowing exactly what is going on is important for learning. Take the example of setting up a button from the GPIO Zero documentation:


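(The original listing was an image; the button example from the GPIO Zero documentation is along these lines, pin number assumed:)

from gpiozero import Button

button = Button(2)  # BCM pin; internal pull-up configured for you

while True:
    if button.is_pressed:
        print("Button is pressed")
    else:
        print("Button is not pressed")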

The button module simplifies setup for push buttons. It knows buttons are inputs, so it uses the declared pin number for setup. Checking for a button press is easier too, with the .is_pressed property to detect button presses.

We used this exact functionality in our Raspberry Pi button tutorial, which is a great way to familiarize yourself with the differences between the libraries.

Users of the RPi.GPIO library will notice that the internal pull-up/pull-down resistors of the Pi are not set up in code. This raises an interesting question. Is it essential for beginners to know about pull-up/down resistors? Again, Ben Nuttall has an answer to this question:

“You might argue that it’s good to know about pull ups and pull downs, and you’d be right―but why do I have to teach that on day one? […] If you want to teach the electronics in more depth there’s plenty of scope for that―but it shouldn’t be mandatory if you’re just getting started.”

On the whole, the simple approach of GPIO Zero is likely a good thing for beginners and veterans alike. Besides, RPi.GPIO isn’t going anywhere. It will always be there to switch back to if needed.

Is Python the Only Option?

Python is the language the Pi is known for, but it’s not the only option. If you are already familiar with programming in the C language, then Wiring Pi has you covered.

Alternatively, if you already program in JavaScript, Node.js can easily be installed on the Pi. GPIO access is available through the rpi-gpio npm library. Ruby on Rails can also be installed on the Raspberry Pi, though the Pi might not be the best way to learn Rails!

All of these alternatives, along with multi-language libraries like the excellent pigpio can make choosing a library confusing. This is where GPIO Zero excels: for beginners wondering how and where to start.

If you are at a point where you need something it does not provide, you will be more than ready to dive into these other libraries at your own pace.

Getting Started With GPIO Zero Yourself

GPIO Zero is the newest library to make a splash for the Pi, and with good reason. For most users, it makes coding for GPIO pins simpler to read and quicker to write.

Given the Raspberry Pi’s usage in education, anything that makes learning more natural is a good thing. While RPi.GPIO has been perfect up until now, GPIO Zero takes a good idea and makes it even better.

A great way to get started with GPIO Zero is to take a beginner project like the Musical Door Sensor and port it to the new library.


          Non-greedy correspondence with grep script
Complex non-greedy correspondence with regular expressions

I'm trying to parse rows from a HTML table with cells containing specific values with regular expressions in python. My aim in this (contrived) example is to get the rows with "cow". import re response = ''' <tr class="someClass">

How to make a non greedy match in grep?

I want to grep the shortest match and the pattern should be something like: <car ... model=BMW ...> ... ... ... </car> ... means any character and the input is multiple lines.You're looking for a non-greedy (or lazy) match. To get a non-greedy
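(Editor's aside: lazy quantifiers are a PCRE feature, so with GNU grep the .*? pattern only works under grep -P, not -E. A minimal Python illustration of the greedy vs. non-greedy distinction these snippets keep circling:)

import re

html = "<tr>cow</tr><tr>pig</tr>"

# greedy: .* runs to the LAST </tr>, swallowing both rows in one match
print(re.findall(r"<tr>.*</tr>", html))   # ['<tr>cow</tr><tr>pig</tr>']

# non-greedy (lazy): .*? stops at the FIRST </tr> after each <tr>
print(re.findall(r"<tr>.*?</tr>", html))  # ['<tr>cow</tr>', '<tr>pig</tr>']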

Perl regex non greedy correspondence

I am piping ls into Perl looking for lines that contain some any characters followed by ".mp4.mp3" at the end of the line. I want to remove the ".mp4" from the middle of the line. Here is my command: ls | perl -pe 's|(.+?)\.mp4\.mp3$|\

Replace the text using a non-greedy correspondence?

I have a SOAP Call that looks like this: <?xml version="1.0" encoding="utf-8"?><soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/" xmlns:soapenc="http://schemas.xmlsoap.org/soap/encoding/"

Keep TRUE non-greedy correspondence using regex Perl

From the following word "tacacatac", I want to match "cat". It seems like the regex c.*?t should give me this, but I guess it starts with the first occurrence of "c" and then from there finds the next "t", and thus,

Non-greedy correspondence

In Geany, I want to match the titles of books. One example: Michael Lewis, Liar's Poker, Hodder & Stoughton Ltd, London, 1989 I try to do so with this regex code: ,\s.*?, This regex matches too much. It matches: [, Liar's Poker,] and [,London,]. I wa

Non-greedy searches with Hpricot?

I'm using Hpricot for traversing an XML packet. For each node I'm on, I want to get a list of the immediate children . However when using (current_node/:section) I'm getting ALL descendant sections, not just the immediate children. How can I get arou

Non-greedy combination? with grep

I'm writing a bash script which analyses a html file and I want to get the content of each single <tr>...</tr>. So my command looks like: $ tr -d \\012 < price.html | grep -oE '<tr>.*?</tr>' But it seems that grep gives me the r

I need to look for a non-exact match on unix and I can not do it with grep

I know that file A has a string that contains dc034; however, I cannot get that with the grep command, either using the word count or by searching for the string. What am I doing wrong? Suggestions grep "dc034" filedirectoryA | wc 0 0 0 grep -Fv &q

Non-greedy match since the end of the channel with regsub

I have a folder path like following: /h/apps/new/app/k1999 I want to remove the /app/k1999 part with the following regular expression: set folder "/h/apps/new/app/k1999" regsub {\/app.+$} $folder "" new_folder But the result is /h: too

Is it possible to write a script that will allow windows 7 to open a non-xls file with Excel?

I'd like to know if it is possible to write a python script which will make Windows 7 open a non-xls file with Excel, such as: file_path = somefile open_file_with_excel(filepath) The script should find Excel application, because Excel's installation

How to use echo with grep in a Unix shell script?

I need to use echo with grep in a shell script. Can I use it? I tried this, but is incorrect: echo linux: grep "Linux" ~/workspace/ep-exercicios/m1/e2/intro-linux.html | wc -w I need show the message: Linux: (number of Linux word on the document

Problems with a non-greedy C++ regex

I want to parse the following Lua code: [1]={['x']=198;['y']=74;['width']=99;['height']=199;};[2]={['x']=82;['y']=116;['width']=82;['height']=164;}; Notice that there are two keys in the table: [1] and [2]. I want to get only the value for the [1] ke

Batch script to return non-duplicate files with a different extension

I don't have much experience with batch scripting but this seems like a suitable task for it: I have a very large directory with recordings of a specific extension, say '.wav'. In the same folder, I'm supposed to have, for each of these recordings, a


          AIGIO: The Greek National Opera returns to the 'Apollon' municipal cinema
The Greek National Opera returns to the screen of the 'Apollon' municipal cinema on Saturday 13 October at 20:00, with Gilbert and Sullivan's hilarious exotic opera 'The Mikado'. In addition, from Monday 15 October there will be morning screenings for schools of 'Prince Ivan and the Firebird' by Thodoris Abazis, a thrilling opera for children and young people. With these two productions of the GNO Alternative Stage, both of which were great successes, the Greek National Opera's programme of cinema screenings of its performances, entitled 'The Opera on the Big Screen', continues, hosted in 13 cities across the Greek regions.

It is a Greek National Opera programme that aims to take the art of opera to every corner of Greece through cinema screenings of opera, operetta and ballet performances; it was funded by the Operational Programme 'Digital Convergence' (NSRF 2007-2013), under the title 'Virtual Opera Premieres in the Greek Regions and on the Internet & Information on the work and activities of the Greek National Opera through modern digital systems'.

The production of 'The Mikado' by Gilbert and Sullivan, a co-production with the Rafi music theatre company, one of the most active ensembles of recent years, was recorded in December 2017 at the GNO Alternative Stage at the Stavros Niarchos Foundation Cultural Center. It is a wonderful comic opera from the forefathers of Monty Python and the Marx Brothers, with outstanding performances, impressive costumes and plenty of laughter, under the musical direction and orchestration of Michalis Papapetrou, directed by Akillas Karazisis. The Greek rendering of the libretto, commissioned for the co-production of the GNO Alternative Stage and the Rafi music theatre company, belongs to two distinguished translators: Giorgos Tsaknias, who rendered the prose parts, and Katerina Schina, who rendered the sung parts into Greek.

The creators of The Mikado, William Schwenck Gilbert and Arthur Sullivan, pioneers of English musical comedy, drew inspiration from the atmospheric Far East to compose a sharp satire of the socio-political mores of the 19th century. The opera 'The Mikado' thrilled audiences and critics alike, and all the contributors were warmly applauded at the sold-out performances at the GNO Alternative Stage.

Alongside the public screenings of 'The Mikado', the 13 cities will also host special screenings for school pupils of 'Prince Ivan and the Firebird', the opera for children and young people by Thodoris Abazis with a libretto by Sofianna Theofanous. It is a thrilling musical fairy tale with wonderful melodies, colourful sets and costumes, generous doses of humour, magic tricks and a top-flight cast. For its youngest viewers it proved an ideal introduction to the wonderful world of opera.

Based on a well-known Russian fairy tale, on which Igor Stravinsky's famous ballet was also based, 'Prince Ivan and the Firebird' recounts Prince Ivan's thrilling journey into the Black Forest of the Sorcerer King Koschei. With the help of Volk the Wolf, he searches for the golden Firebird to prove to his father that he deserves to become the next tsar. The star of the production is Thodoris Abazis's wonderful music, brimming with lyricism and emotion, humbly winking at the master, Stravinsky.

          Golang developer needed for a few web applications.
REST server written in Golang. Write good quality code, and create a robust and scalable Golang application. React and Ant Design experience is a plus. Please provide at least 2 work cases where you have used Golang, and describe what you did in those 2 cases using Go... (Budget: $250 - $750 USD, Jobs: Golang, node.js, Python)
          Deal with algorithms
I have a few tasks solving algorithms; more details in the private chat box (Budget: $10 - $30 AUD, Jobs: Algorithm, C Programming, Java, Python)
          Attention: Temporary Ball Python Care Needed In Ottawa K1G - Posted By:Tara D. - Ottawa, ON
Hi, I’m interested in someone coming to my home a couple times in the next week to look after a ball python. Basically, I would need 2 visits where you can...
From CA.Care.com - Tue, 25 Sep 2018 15:41:25 GMT - View all Ottawa, ON jobs
          Senior Data Analyst - William E. Wecker Associates, Inc. - Jackson, WY
Experience in data analysis and strong computer skills (we use SAS, Stata, R and S-Plus, Python, Perl, Mathematica, and other scientific packages, and standard...
From William E. Wecker Associates, Inc. - Sat, 23 Jun 2018 06:13:20 GMT - View all Jackson, WY jobs
          Mean stack developer needed
I prefer this web app to be built using a MEAN stack; however, I’m open to other development stacks. Requirements: The UI shall be built with the following front end tools 1. Angular, React The UI shall be built with the following back end tools 1... (Budget: $15 - $25 USD, Jobs: Django, Javascript, node.js, NoSQL Couch & Mongo, Python)
          Django customer+retailer site
Require a Django app which will allow customers and retailers to sign up. Will also need a blog feature which will list all blog posts from a database table. 3 tables to start (Customer, Retailers, Blogposts)... (Budget: $30 - $250 CAD, Jobs: Django, HTML, Python)
          Need to simulate the sim-outorder simulator from the SimpleScalar suite. You will need a Unix system for this exercise.
The simulator can be downloaded from here: http://www.simplescalar.com/. You need to download the simplesim-3v0e.tgz file. The benchmarks can be downloaded from here: http://faculty.cse.tamu.edu/djimenez/614-spring14/hw4/benchmarks/index.html... (Budget: ₹600 - ₹1500 INR, Jobs: C Programming, C++ Programming, Java, Linux, Python)
          GStreamer: GStreamer Conference 2018: Talks Abstracts and Speakers Biographies now available

The GStreamer Conference team is pleased to announce that talk abstracts and speaker biographies are now available for this year's lineup of talks and speakers, covering again an exciting range of topics!

The GStreamer Conference 2018 will take place on 25-26 October 2018 in Edinburgh (Scotland) just after the Embedded Linux Conference Europe (ELCE).

Details about the conference and how to register can be found on the conference website.

This year's topics and speakers:

Lightning Talks:

  • gst-mfx, gst-msdk and the Intel Media SDK: an update (provisional title)
    Haihao Xiang, Intel
  • Improved flexibility and stability in GStreamer V4L2 support
    Nicolas Dufresne, Collabora
  • GstQTOverlay
    Carlos Aguero, RidgeRun
  • Documenting GStreamer
    Mathieu Duponchelle, Centricular
  • GstCUDA
    Jose Jimenez-Chavarria, RidgeRun
  • GstWebRTCBin in the real world
    Mathieu Duponchelle, Centricular
  • Servo and GStreamer
    Víctor Jáquez, Igalia
  • Interoperability between GStreamer and DirectShow
    Stéphane Cerveau, Fluendo
  • Interoperability between GStreamer and FFMPEG
    Marek Olejnik, Fluendo
  • Encrypted Media Extensions with GStreamer in WebKit
    Xabier Rodríguez Calvar, Igalia
  • DataChannels in GstWebRTC
    Matthew Waters, Centricular
  • Me TV – a journey from C and Xine to Rust and GStreamer, via D
    Russel Winder
  • ...and many more
  • ...
  • Submit your lightning talk now!

Many thanks to our sponsors, Collabora, Pexip, Igalia, Fluendo, Facebook, Centricular and Zeiss, without whom the conference would not be possible in this form. And to Ubicast who will be recording the talks again.

Considering becoming a sponsor? Please check out our sponsor brief.

We hope to see you all in Edinburgh in October! Don't forget to register!


          CircuitPython 3.0.3 released!
From the GitHub release page: This is a bug fix and minor feature release for the 3.x stable series. There is one fix in this release. Please check out the 3.0.0 release notes for full details on what’s new in 3.0.0. Changes since 3.0.2: atmel-samd: Fix AudioOut playback on the SAMD21. Thanks to jct4764 […]
          400 of these are going to the Hackaday Superconference @hackaday @hackadayio #supercon @adafruit
400 of these hackable Python powered devices are going to the Hackaday Superconference | Pasadena Nov 2-4 2018. Adafruit is a sponsor. The Hackaday Superconference is the greatest gathering of hardware hackers, builders, engineers and enthusiasts in the world. Supercon 2018 is 3 full days! Join us November 2-4 (2018) in Pasadena, CA. The conference begins on […]
          Economics Nobel laureate Paul Romer – Python & Jupyter notebooks @ThePSF @projectjupyter @paulmromer
Economics Nobel laureate Paul Romer is a Python programming convert — Quartz. Economist Paul Romer, a co-winner of the 2018 Nobel Prize in economics, is many things. He is one of most important theorists on the drivers of economic growth. He is an ex-World Bank chief economist. He is a supporter of clear academic writing. He is […]
          ICYMI: Latest newsletter – CircuitPython creates new assistive tech opportunities @adafruit @circuitpython @micropython
ICYMI (In case you missed it) – Today’s Python from microcontrollers newsletter from AdafruitDaily.com went out – if you did miss it, subscribe now! The next one goes out in a week and it’s the best way to keep up with all things Python for hardware; it’s the fastest growing newsletter out of ALL the Adafruit newsletters! […]
          Tooling Tuesday – Glob: find files in a directory with #Python @biglesp
CircuitPythoner Les writes about glob – Unix-style pathname pattern expansion… or, in simpler terms, it is a library that we can use to search drives and directories for files. Why is this useful? Well, you might be writing a script that looks for certain files and then creates backups in a remote location. You can use glob […]
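As a taster of what the post covers, Python's built-in glob module in a few lines (paths hypothetical):

import glob

# non-recursive: every .jpg in the current directory
print(glob.glob("*.jpg"))

# recursive (Python 3.5+): every .py anywhere under src/
for path in glob.glob("src/**/*.py", recursive=True):
    print(path)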
          Remote Senior DevOps Engineer
An IT firm is seeking a Remote Senior DevOps Engineer. Must be able to: Provide insight and expertise that will accelerate the adoption of continuous delivery tools, processes, and culture Provide on-call support on a rotation basis during non-US working hours in a 24/7/365 environment Leverage scripting, frameworks, and modern tools to automate the provisioning of cloud infrastructure and security Qualifications for this position include: 6+ years’ experience in software engineering 1+ year hands on experience with Chef 5+ years’ experience scripting and development skills with one or more languages such as Ruby, Python, Golang, etc. Experience designing, building, and automating on AWS Development experience working with SaaS architecture and continuous delivery Bachelor's degree in Computer Science, Information Systems, or compensating experience
          Remote Industrial Processes Data Analytics Scientist in Houston
A software company has a current position open for a Remote Industrial Processes Data Analytics Scientist in Houston. Must be able to: Consult with our customers and clients on data architecture strategies; Utilize your deep knowledge of key algorithms and processes in the burgeoning fields of big data and data analytics; Assist us in prototyping, specifying, and testing new algorithms for our toolset. Qualifications include: Travel for team link-ups several times a year, as well as to customer sites for 2-day visits about once per month; Must have an educational background in one or more related disciplines; Previous experience working as an engineer involved with analyzing time-history and process data; Experience using enterprise historians; Programming experience in Python and other programming languages is expected
          MySQL Books - 2018 has been a very good year
Someone once told me you can tell how healthy a software project is by the number of new books each year. For the past few years the MySQL community has been blessed with one or two books each year. Part of that was the major shift with the MySQL 8 changes, but part of it was that the vast majority of the changes were fairly minor and did not need detailed explanations. But this year we have been blessed with four new books. Four very good books on new facets of MySQL.

Introducing the MySQL 8 Document Store is the latest book from Dr. Charles Bell on MySQL. If you have read any of Dr. Chuck's other books, you know they are well written with lots of examples. This is more than a simple introduction, with many intermediate and advanced concepts covered in detail.

MySQL & JSON - A Practical Programming Guide by yours truly is a guide for developers who want to get the most out of the JSON data type introduced in MySQL 5.7 and improved in MySQL 8. While I love MySQL's documentation, I wanted to provide detailed examples of how to use the various functions and features of the JSON data type.

Jesper Wisborg Krogh is a busy man at work and somehow found the time to author and co-author two books. The newest is MySQL Connector/Python Revealed: SQL and NoSQL Data Storage Using MySQL for Python Programmers, which I have only just received. If you are a Python programmer (or want to be) then you need to order your copy today. A few chapters in and I am already finding it a great, informative read.

Jesper and Mikiya Okuno produced a definitive guide to the MySQL NDB cluster with Pro MySQL NDB Cluster. NDB cluster is often confusing and just different enough from 'regular' MySQL to make you want to have a clear, concise guidebook by your side. And this is that book.

Recommendation

Each of these books has its own primary MySQL niche (Docstore, JSON, Python & Docstore, and NDB Cluster) but also goes deeper, in that they cover material you either will not find in the documentation or would otherwise have to distill for yourself. They not only provide valuable tools for learning their primary facets of the technology but also do double service as reference guides.
          Around The World In One Hour
Video: Around The World In One Hour
Watch This Video!
Studio: Global Video Pro
WORLD MONTAGE, One minute of various images from around the world. DEAD SEA, ISRAEL, Float on the Dead Sea. Eight times more salt than the ocean. Visitors from worldwide come to seek wellness from the water and healing black mud.
SNAKE CHARMER OF MALAYSIA, A dying breed, these snake charmers risk their lives to entertain audiences. Frequently bitten by cobras and pit vipers, they still play a dangerous game! Have you had a 22 foot long python coiled around your body lately???
DIVE PHILIPPINES, the Philippines is known for its spectacular dive sites. Explore the beautiful undersea world around Cebu Island, teeming with a vast array of exotic sea creatures, caves and cliffs.... LAS VEGAS PREVIEW, tour of Las Vegas, aerials, casinos and Hoover Dam, etc.
HAWAII KAYAK ADVENTURE, paddle through the Big Islands ten tunnels high in the Kohala Mountains, by kayak. Some tunnels one mile long. The ultimate eco-tourism adventure!
ELEPHANT SHOW, THAILAND, see elephants perform amazing feats in Phuket, Thailand. Dancing, playing music, tricks, headstands, playing soccer and carrying boy with his head in the elephant's mouth. Daring stuff!

          Principal Data Scientist | IT - G2 PLACEMENTS TI - Montréal, QC
Python (Sci-kit Learn, numpy, pandas, Tensorflow, Keras), R, Matlab, SQL. Principal Data Scientist *....
From Indeed - Sun, 07 Oct 2018 19:29:37 GMT - View all Montréal, QC jobs
          Data Scientist (Scientifique des données) - Gameloft - Montréal, QC
Knowledge of Python, pandas and NumPy are must-haves....
From Gameloft - Sat, 06 Oct 2018 03:08:15 GMT - View all Montréal, QC jobs
          Software Engineer - Valital Technologies Inc. - Montréal, QC
Experience programming in Python libraries (numpy, pandas, matplotlib, sci-kit learn); "Valital Technologies Inc."....
From Indeed - Mon, 01 Oct 2018 15:32:18 GMT - View all Montréal, QC jobs
          Principal Data Scientist - DMA Global - Montréal, QC
Python (Sci-kit Learn, numpy, pandas, Tensorflow, Keras) R, Matlab, SQL. You will ideally have a Master's or PhD in Statistics, Mathematics, Computer Science,...
From Indeed - Thu, 13 Sep 2018 17:03:53 GMT - View all Montréal, QC jobs
          Lead Software Engineer, AI/data science - IVADO Labs - Montréal, QC
Understanding of one or more of the modern AI/data science and data manipulation programming languages/libraries (e.g., Python, Scikit-Learn, Pandas, etc.)....
From IVADO Labs - Sat, 11 Aug 2018 03:14:21 GMT - View all Montréal, QC jobs
          Data Scientists / AI & Machine Learning Engineer - IVADO Labs - Montréal, QC
Experience implementing AI/data science algorithms using one or more of the modern programming languages/frameworks (e.g., Python, Pandas, Scikit-learn,...
From IVADO Labs - Sat, 11 Aug 2018 03:14:21 GMT - View all Montréal, QC jobs
          Anaconda - Sage conflict in .bashrc
I can't get Anaconda and Sage to play nicely with each other. This is on a new install of Linux Mint 19. I installed Sage from the repository and everything was working fine. Then I installed Anaconda as directed in Anaconda's Linux installation instructions, including having the installer add to the PATH in ~/.bashrc. This broke Sage. When I try to run it now, I get the error

Traceback (most recent call last):
  File "/usr/share/sagemath/bin/sage-ipython", line 6, in <module>
    from sage.repl.interpreter import SageTerminalApp
ImportError: No module named 'sage'

When I comment out the lines Anaconda added to my .bashrc file where it's adding to the PATH, Sage works again, but Anaconda is broken. What's causing this problem? How can I get the two to work at the same time?
          Is it possible to run (may be partially) Sage with Python 3?
I want to run Sage with Python 3. I know that it isn't fully ported, but I want to use the already-ported functionality and hope it covers my needs. One particular reason is that I need to use `multiprocessing.Pool` with a `lambda` function, which doesn't work with Python 2, and [both workarounds](http://stackoverflow.com/questions/4827432/how-to-let-pool-map-take-a-lambda-function) also seem not to work. P.S. I found a [third workaround](http://stackoverflow.com/a/37976180/359866) which seems to be working.
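(Editor's note: the standard library's multiprocessing rejects lambdas under both Python 2 and 3, because they cannot be pickled; the usual workaround, besides third-party libraries such as pathos, is a named module-level function. A minimal sketch:)

from multiprocessing import Pool

def square(x):      # a named, module-level function pickles cleanly;
    return x * x    # a lambda here would raise a pickling error

if __name__ == "__main__":
    with Pool(4) as pool:
        print(pool.map(square, range(10)))  # [0, 1, 4, ..., 81]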
          Is it possible to update the Sage Python?
I noticed Sage can run a Python notebook, but it's 2.7 when I'm used to using 3.x. Is it possible to update the Python in Sage? Also, I'm on Windows so I'm using the Sage VirtualBox appliance, which might make it impossible. But is it possible on a Sage native host like Linux?
          python-sqlalchemy 1.2.12-1 x86_64
Python SQL toolkit and Object Relational Mapper
          python2-sqlalchemy 1.2.12-1 x86_64
Python 2 SQL toolkit and Object Relational Mapper
          Software Engineer w/ Active DOD Secret Clearance (Contract) - Preferred Systems Solutions, Inc. - Cheyenne, WY
Experience with Java, JavaScript, C#, PHP, Visual Basic, Python, HTML, XML, CSS, and AJAX. Experience with software installation and maintenance, specifically...
From Preferred Systems Solutions, Inc. - Tue, 25 Sep 2018 20:46:05 GMT - View all Cheyenne, WY jobs
          Senior Data Engineer - DISH Network - Cheyenne, WY
4 or more years of experience in programming and software development with Python, Perl, Java, and/or other industry standard language....
From DISH - Wed, 15 Aug 2018 05:17:45 GMT - View all Cheyenne, WY jobs
          IT Manager - Infrastructure - DISH Network - Cheyenne, WY
Scripting experience in one or more languages (Python, Perl, Java, Shell). DISH is a Fortune 200 company with more than $15 billion in annual revenue that...
From DISH - Sun, 15 Jul 2018 05:30:30 GMT - View all Cheyenne, WY jobs
          Software Developer - Matric - Morgantown, WV
Application development with Java, Python, Scala. Enterprise level web applications. MATRIC is a strategic innovation partner providing deep, uncommon expertise...
From MATRIC - Tue, 11 Sep 2018 00:02:33 GMT - View all Morgantown, WV jobs
          Configure AWS Instance and Install Python Script
I would like to have a script from GitHub (https://github.com/wjnoh/nostagram) installed on an Amazon EC2 instance. Please tell me what type of EC2 instance you would need, and also do a quick test to make... (Budget: $10 - $30 USD, Jobs: Amazon Web Services, Python)
          AMcom, Grupo Fleury and Localiza Hertz open job vacancies
Opportunities are available in various locations around the country; positions and requirements vary. Three companies have job vacancies open in several parts of the country: AMcom, Grupo Fleury and Localiza Hertz.

AMcom

AMcom, an information technology company specializing in custom development, systems support, consulting and staff placement, has 50 openings for IT professionals. The requirements and application process can be found on the company's website, http://amcom.com.br/vagas. The demand, driven by the arrival of new clients, calls mainly for Java, front-end and Python developers, pre-sales consultants and account managers, among other roles. The vacancies are also open to people with disabilities. The selection covers the whole country, and the available positions are mainly in the city of Blumenau, plus other locations in the state of Santa Catarina, such as Florianópolis, Joinville, Gaspar and Biguaçu, as well as São Paulo. Besides a salary in line with the market, the positions include benefits such as health and dental plans and life insurance covered at 100%, partial subsidies for undergraduate and postgraduate degrees and MBAs, and full subsidies for certifications.

Grupo Fleury

Grupo Fleury has opened 6 vacancies for collection assistants at its partner hospitals in the cities of São Paulo and the ABC region. Interested candidates can apply through the site http://bit.ly/2OsKzBH. The vacancies will remain open until filled. The selected professionals will collect biological material using the equipment, chemical solutions, support materials and instruments appropriate to each type of exam, involving adult and child patients, as well as providing first-aid care, in accordance with the training and standards of the Emergency Group, and support for technical problems. The duties of the role also include storing samples to guarantee the viability of the material, recording service information in tracking systems, and carrying out administrative and reception activities related to patient care and Distribution, in line with specific training, processes, current standards and management direction. To take part in the selection process it is necessary to have completed a technical course in clinical pathology, laboratory work, clinical analysis or biodiagnostics. Experience in the area is not required, but knowledge of cultures and scrapings, as well as hospital routines, is desirable. The monthly workload can be 180 or 220 hours, and availability to work on weekends and holidays is required.

Localiza Hertz

Localiza Hertz, a car rental company, has openings for professionals with disabilities in administrative roles in the city of São Paulo. The working day is 8 hours, and candidates must have completed secondary education. Experience in the role is desirable. Localiza offers benefits: health care, dental care, life insurance, a private pension, transport vouchers, meal vouchers, food vouchers, childcare assistance and profit sharing. Interested candidates should register their CV at www.vagas.com.br/localiza
GOB - Scala/Python Programming Internship
GOB - Scala/Python Programming Internship - REF: 4173011 - Date: 9-10-2018 14:17:37
Economics Nobel Laureate Paul Romer Is a Python Programming Convert
Economist Paul Romer, a co-winner of the 2018 Nobel Prize in economics, uses the programming language Python for his research, according to Quartz. Romer reportedly tried using Wolfram Mathematica to make his work transparent, but it didn't work, so he converted to a Jupyter notebook instead. From the report:

Romer believes in making research transparent. He argues that openness and clarity about methodology are important for scientific research to gain trust. As Romer explained in an April 2018 blog post, in an effort to make his own work transparent, he tried to use Mathematica to share one of his studies in a way that anyone could explore every detail of his data and methods. It didn't work. He says that Mathematica's owner, Wolfram Research, made it too difficult to share his work in a way that didn't require other people to use the proprietary software, too. Readers also could not see all of the code he used for his equations.

Instead of using Mathematica, Romer discovered that he could use a Jupyter notebook for sharing his research. Jupyter notebooks are web applications that allow programmers and researchers to share documents that include code, charts, equations, and data. Jupyter notebooks allow for code written in dozens of programming languages. For his research, Romer used Python -- the most popular language for data science and statistics. Importantly, unlike notebooks made from Mathematica, Jupyter notebooks are open source, which means that anyone can look at all of the code that created them. This allows for truly transparent research. In a compelling story for The Atlantic, James Somers argued that Jupyter notebooks may replace the traditional research paper typically shared as a PDF.


Podcast.__init__: Building A Game In Python At PyWeek with Daniel Pope
Summary

Many people learn to program because of their interest in building their own video games. Once the necessary skills have been acquired, it is often the case that the original idea of creating a game is forgotten in favor of solving the problems we confront at work. Game jams are a great way to get inspired and motivated to finally write a game from scratch. This week Daniel Pope discusses the origin and format for PyWeek, his experience as a participant, and the landscape of options for building a game in python. He also explains how you can register and compete in the next competition.


Do you want to try out some of the tools and applications that you heard about on Podcast.__init__? Do you have a side project that you want to share with the world? Check out Linode at linode.com/podcastinit or use the code podcastinit2018 and get a $20 credit to try out their fast and reliable Linux virtual servers. They’ve got lightning fast networking and SSD servers with plenty of power and storage to run whatever you want to experiment on.

Preface

Hello and welcome to Podcast.__init__, the podcast about Python and the people who make it great. When you’re ready to launch your next app you’ll need somewhere to deploy it, so check out Linode. With private networking, shared block storage, node balancers, and a 40Gbit network, all controlled by a brand new API, you’ve got everything you need to scale up. Go to podcastinit.com/linode to get a $20 credit and launch a new server in under a minute. Visit the site to subscribe to the show, sign up for the newsletter, and read the show notes. And if you have any questions, comments, or suggestions I would love to hear them. You can reach me on Twitter at @Podcast__init__ or email [emailprotected]. To help other people find the show please leave a review on iTunes, or Google Play Music, tell your friends and co-workers, and share it on social media. Join the community in the new Zulip chat workspace at podcastinit.com/chat.

Your host as usual is Tobias Macey and today I’m interviewing Daniel Pope about PyWeek, a one week challenge to build a game in Python.

Interview

- Introductions
- How did you get introduced to Python?
- Can you start by describing what PyWeek is and how the competition got started?
- What is your current role in relation to PyWeek and how did you get involved?
- What are the strengths of the Python language and ecosystem for developing a game?
- What are some of the common difficulties encountered by participants in the challenge?
- What are some of the most commonly used libraries and tools for creating and packaging the games?
- What are some shortcomings in the available tools or libraries for Python when it comes to game development?
- What are some examples of libraries or tools that were created and released as a result of a team’s efforts during PyWeek?
- How often do games that get started during PyWeek continue to be developed and improved?
- Have there ever been games that went on to be commercially viable?
- What are some of the most interesting or unusual games that you have seen submitted to PyWeek?
- Can you describe your experience as a competitor in PyWeek?
- How do you structure your time during the competition week to ensure that you can complete your game?
- What are the benefits and difficulties of the one week constraint for development?
- How has PyWeek changed over the years that you have been involved with it?
- What are your hopes for the competition as it continues into the future?

Keep In Touch
Building a Personal Blog with Django: Writing the Blog Article Model
The Django framework is mainly concerned with models (Model), templates (Template) and views (Views), known as the MTV pattern.

Their respective responsibilities are as follows:

- Model, the data access layer: handles everything related to data: how to store and retrieve it, how to validate it, which behaviours it has, and the relationships between pieces of data.
- Template, the business logic layer: handles presentation-related decisions: how things are displayed on a page or in other types of document.
- View, the presentation layer: the logic that accesses the model and selects the appropriate template; the bridge between models and templates.

Simply put, the Model stores and retrieves data, the View decides which data needs to be fetched, and the Template presents the fetched data in a sensible way.

The first step in writing a database-driven web application in Django is to define the Model, i.e. the database layout plus additional metadata.

A model contains the essential fields and behaviours of the data being stored. Django's goal is that you only need to define your data model; the rest of the miscellaneous code you don't have to care about, as it is generated automatically from the model.

So let's take care of the Model first.

Writing the Model

As mentioned above, in Django a model (Model) usually maps to one database table and handles data-related matters.

For a blog site, the most important data is the articles. So let's first build a data model to hold articles.

Open the article/models.py file and enter the following code:

article/models.py

from django.db import models
# Import the built-in User model.
from django.contrib.auth.models import User
# timezone is used for time-related matters.
from django.utils import timezone

# Blog article data model
class ArticlePost(models.Model):
    # Article author. The on_delete parameter specifies how deletions are handled,
    # avoiding inconsistent data between the two related tables.
    author = models.ForeignKey(User, on_delete=models.CASCADE)

    # Article title. models.CharField is a string field for short strings such as titles.
    title = models.CharField(max_length=100)

    # Article body. Use TextField for large amounts of text.
    body = models.TextField()

    # Article creation time. default=timezone.now writes the current time when the record is created.
    created = models.DateTimeField(default=timezone.now)

    # Article update time. auto_now=True writes the current time automatically on every update.
    updated = models.DateTimeField(auto_now=True)

The code is quite straightforward. Each model is represented as a subclass of the django.db.models.Model class. Each model has a number of class variables, each of which represents a database field in the model.

Each field is an instance of a Field class. For example, character fields are represented as CharField and datetime fields as DateTimeField. This tells Django what type of data each field holds.

Defining some Field instances requires arguments. CharField, for example, needs a max_length argument. This argument is used not only to define the database schema, but also for validating data.

Use a ForeignKey to define a relationship. This tells Django that each ArticlePost object (or several of them) is related to one User object. Django itself ships with a simple, complete account system (User) that is sufficient for a typical site's basic needs such as registration, account creation, permissions and groups.

The ArticlePost class defines all the elements an article must have: author, title, body, creation time and update time. We can also define a few extra things to regulate the behaviour of the data in ArticlePost. Add the following code:

article/models.py

...
class ArticlePost(models.Model):
    ...

    # The inner class Meta defines metadata for the model
    class Meta:
        # ordering specifies the order in which the model's records are returned
        # '-created' means the records should be sorted in reverse order
        ordering = ('-created',)

    # __str__ defines what is returned when str() is called on the object
    def __str__(self):
        # return self.title returns the article title
        return self.title

The ordering attribute in the inner class Meta defines how the records are sorted. '-created' means sorting by creation time in reverse, guaranteeing the newest articles always sit at the top of the page. Note that ordering is a tuple; when it contains a single element, don't forget the trailing comma.

The __str__ method defines the name to display when the object needs a representation. Adding a __str__ method to your models is important; most commonly it is used as the object's display value in the Django admin. It should therefore always return a friendly, readable string. We'll see its benefits later.

Tidied up and with the comments removed, all the code together looks like this:

article/models.py

from django.db import models
from django.contrib.auth.models import User
from django.utils import timezone

class ArticlePost(models.Model):
    author = models.ForeignKey(User, on_delete=models.CASCADE)
    title = models.CharField(max_length=100)
    body = models.TextField()
    created = models.DateTimeField(default=timezone.now)
    updated = models.DateTimeField(auto_now=True)

    class Meta:
        ordering = ('-created',)

    def __str__(self):
        return self.title

Congratulations, you have completed most of the core data model of the blog site.
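As a quick sanity check (my own addition, not part of the original tutorial), you can poke at the finished model in the Django shell (python manage.py shell), assuming at least one user account already exists in the database:

# A sketch (not from the tutorial) of the model in the Django shell;
# it assumes at least one User already exists.
from django.contrib.auth.models import User
from article.models import ArticlePost

user = User.objects.first()
post = ArticlePost.objects.create(author=user, title='First post', body='Hello')

print(post)                          # "First post", thanks to __str__
print(ArticlePost.objects.all())     # newest first, thanks to Meta.ordering
print(user.articlepost_set.count())  # reverse lookup through the ForeignKey

The readable output and newest-first ordering come directly from the __str__ and Meta definitions above.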

Fewer than 20 lines of code; it probably doesn't feel like much yet. You'll gradually come to appreciate Django's power later on.

Also, beginners are advised not to copy-paste code. Science shows that typing the characters in slowly helps improve your programming.

Code breakdown

It's fine if you can't follow this part yet; skip it for now and come back once your skills have improved.

Import

The Django framework is based on the Python language, and in Python you use import or from ... import to import modules.

A module is simply a file collecting functions and classes that implement some functionality. When we need that functionality, we just import the corresponding module into our program and use it.

import imports an entire module. In practice you often need only a single piece of it, and importing the whole module for that is overkill, so from a import b says: from module a, import just b for me to use.

Classes (Class)

As an object-oriented programming language, Python's most important concepts are classes (Class) and instances (Instance).

A class is an abstract template, while instances are the concrete "objects" created from that class. Every object has the same methods, but its own data may differ. These methods, packaged and encapsulated together, make up the class.

Take the ArticlePost class we just wrote: its role is to provide a template for blog article content. Whenever a new article is created, it is built against the ArticlePost class: author, title, body, and so on. The concrete content of each article may differ, but they all must follow the same rules.

In Django, data is handled by models, and the vehicle of a model is the class (Class).

Fields (Field)

A field (field) represents an abstraction of a database table column. Django uses field classes to create the database tables and to map Python types to the database.

In a model, fields are instantiated as class attributes representing particular columns, and they carry the attributes and methods for mapping field values to the database.

For example, the ArticlePost class has a title attribute, and this attribute holds CharField data: a short string.

The ForeignKey

ForeignKey is used to solve the "one-to-many" problem and is used for relational queries.

What is "one-to-many"?

In our ArticlePost model, an article can have only one author, while an author can have many articles; that is a "one-to-many" relationship.

Likewise, among the students of a class, each student has only one gender, while each gender corresponds to many students; that is also "one-to-many".

So through the ForeignKey, User and ArticlePost are linked together, which ultimately links the blog articles' authors to the site's users.

Since there is "one-to-many", there are naturally also "one-to-one" (OneToOneField) and "many-to-many" (ManyToManyField). We don't need those yet; we'll come back later to compare the differences.

Note a small pitfall here: before Django 2.0 the on_delete parameter could be omitted; from Django 2.0 on, on_delete is required, and leaving it out raises an error.

The inner class (Meta)

The inner class Meta supplies the model's metadata. Model metadata is "anything that's not a field", for example the ordering option, the database table name db_table, and the singular and plural names verbose_name and verbose_name_plural. Writing the inner class is entirely optional, but having it helps understanding and regulates the class's behaviour.

In class ArticlePost we used the metadata ordering = ('-created',), which says that whenever the article list is fetched, e.g. for the blog home page, it is ordered by -created (article creation time, the minus sign meaning descending), guaranteeing the newest article is always at the very top.

Data migrations (Migrations)

Once the Model is written, the next step is a data migration.

Migrations are how Django propagates changes made to your models into the database. So whenever the database is changed (fields added, modified, deleted, etc.), a data migration is needed.

Django's migration code is generated automatically from your model files. It is essentially just a history that Django can use to roll the database forward so that it matches the current models.

In the virtual environment, enter the my_blog directory (if venv is still unfamiliar, review: setting up a Django development environment on Windows), and type python manage.py makemigrations to create a new migration for the model changes:

(env) e:\django_project\my_blog>python manage.py makemigrations
Migrations for 'article':
  article\migrations\0001_initial.py
    - Create model ArticlePost

(env) e:\django_project\my_blog>

By running the makemigrations command, Django detects your changes to the model files and stores the changed parts as a migration.

Then type python manage.py migrate to apply the migrations to the database:

(env) e:\django_project\my_blog>python manage.py migrate
Operations to perform:
  Apply all migrations: admin, article, auth, contenttypes, sessions
Running migrations:
  Applying contenttypes.0001_initial... OK
  ...
  Applying sessions.0001_initial... OK

(env) e:\django_project\my_blog>

The migrate command picks all migrations that haven't been applied yet and runs them against the database, i.e. it synchronizes your model changes with the database schema. Migrations are a very powerful feature: they let you keep changing the database schema during development without having to drop and recreate tables, and they focus on upgrading the database smoothly without losing data.

A bit of a mouthful; it's fine if it didn't all sink in. In any case, after the migration, the work of writing the Model is done.


Production-Grade Deployment of Python Scripts: Log Collection and Crash Restarts in One Step

Today I'd like to introduce PM2, a production-grade process management tool. PM2 usually comes up in discussions of deploying Node.js programs, but it is actually far more capable: besides Node.js, it can also manage Python, PHP, Ruby, Perl and more.

Taking Python as the example, let's look at how PM2 deploys and manages Python scripts.

PM2-Python

PM2 is a production-grade process manager that makes managing background processes easy. In the Python world, PM2 is comparable to Supervisord, and it has some excellent extra features.

With PM2, restarting after crashes, watching processes, inspecting logs and even deploying applications all become simple. PM2 also puts great emphasis on the command-line experience, which makes it very easy to use and master.


PM2 has now been around for 5 years, has more than 65 million downloads on GitHub, and has become one of the preferred ways to run Node.js on production servers. But it supports Python too.

Installing PM2

PM2 depends on Node.js, so Node must be installed first. This step is very simple:

curl -sL https://deb.nodesource.com/setup_10.x | sudo -E bash -
sudo apt-get install -y nodejs

For other platforms, look up the Node.js installation instructions yourself.

With the Node environment in place, PM2 can be installed through npm:

$ sudo npm install pm2 -g

To initialize PM2, run the pm2 ls command; you will then see a very friendly banner.


PM2 is now installed; let's start a Python application.

Starting Python

Starting an application with PM2 is very simple: it automatically matches an interpreter to the script's file extension and uses it to run the specified application.

Let's first create a simple Python application, for example hello.py:

#!/usr/bin/python
import time

while 1:
    print("Start: %s" % time.ctime())
    time.sleep(1)

With this simple Python script in hand, start it with PM2:

$ pm2 start hello.py

The process then shows up in the terminal.


From this point on, the Python program will run forever, meaning that if the process exits or throws an exception, it will be restarted automatically.
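If you want to see the auto-restart for yourself, a throwaway script like the following (my own example, not from the original article) dies every few seconds, and the restart counter in pm2 ls climbs accordingly:

#!/usr/bin/python
# crash.py: exits with an error after 5 seconds; under PM2 the
# process is restarted automatically each time it dies.
import sys
import time

time.sleep(5)
sys.exit(1)  # a non-zero exit code simulates a crash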

The mode here is fork; even if you close the current terminal window, the application's status can still be checked.

To see which applications PM2 is running and managing, use the pm2 ls command.

Checking logs

To view the logs of a program run through PM2, use the pm2 logs command.


To view the log of a specific process, use pm2 logs <app_name>.

PM2 also provides automatic log rotation, but this requires installing pm2-logrotate:

$ pm2 install pm2-logrotate

pm2-logrotate rotates the logs daily and keeps the total log size at 10M.

Inspecting a process

To view detailed information about a program started with PM2, use the pm2 describe <app_name> command.


The output shows the log file paths, the interpreter, and other information.

Managing PM2 process state

After covering starting and viewing logs, here are a few more simple management commands.

1. Stop a program

$ pm2 stop hello

2. Restart a program

$ pm2 restart hello

3. Stop a program and remove it from the process list

$ pm2 delete hello

For more commands, see the official documentation.

Keeping things running when the server reboots

After starting a Python program with PM2, PM2 only guarantees that the program is restarted when it crashes unexpectedly. If you want your applications to stay online across server reboots, you need to set up an init script that tells the system to start PM2 together with your applications.

To have PM2 follow the system startup, just run this command:

$ pm2 startup

startup generates a command that sets the required environment variables.

- " - 2018-09-19-13-05-39

Copy/paste and execute the last line of that command's output; PM2 will then start automatically when the system reboots.

PM2 itself can now survive a reboot, but you still need to tell PM2 which process state should be preserved across restarts. Just run:

$ pm2 save

This creates a dump file recording the state of the processes currently managed by PM2; on reboot, PM2 restores them to their previous state.


Monitoring CPU/memory

To monitor CPU/memory and check some information about the processes, use the pm2 monit command.

This opens a termcaps interface that gives a real-time view of the running applications.


You can also use pm2 show <app_name> to get all the available information about an application.

Using an Ecosystem file

If you need to start multiple programs, or pass different arguments or options at startup, you can configure your applications with an ecosystem file.

The ecosystem is configured through an ecosystem.config.js file, which can be generated with the pm2 init command. Once generated, we can put our configuration in it.

module.exports = {
  apps: [{
    name: 'echo-python',
    cmd: 'hello.py',
    args: 'arg1 arg2',
    autorestart: false,
    watch: true,
    pid: '/path/to/pid/file.pid',
    instances: 4,
    max_memory_restart: '1G',
    env: {
      ENV: 'development'
    },
    env_production: {
      ENV: 'production'
    }
  }, {
    name: 'echo-python-3',
    cmd: 'hello.py',
    interpreter: 'python3'
  }]
};

In this example we declare two applications and use interpreter to configure which interpreter runs each of them: one runs under Python 2 (the default), the other under Python 3.

To start it, use the pm2 start command as before:

$ pm2 start ecosystem.config.js

To restart only the “production” environment (env_production):

$ pm2 restart ecosystem.config.js --env production

Many of the settings in ecosystem.config.js can also be specified on the command line; for example, --interpreter selects the interpreter.

Python 2.x and Python 3.x are often installed side by side, and by default PM2 decides which interpreter to use from the script's file extension; if there is no extension, you have to force it with --interpreter.

{
  ".sh": "bash",
  ".py": "python",
  ".rb": "ruby",
  ".coffee": "coffee",
  ".php": "php",
  ".pl": "perl",
  ".js": "node"
}

This mapping also shows which script types PM2 supports.

So if you need to run a script under Python 3.x, --interpreter is required:

$ pm2 start hello.py --interpreter=python3

Summary

That covers the basics of using PM2. Although Python is used as the example here, all the commands in this article apply to the other script types PM2 supports.

PM2 has many more powerful features, such as easy deployment to servers over SSH, load balancing and so on; if you are interested, consult the documentation. The PM2 documentation is thorough, and most questions are answered there.

If you have any questions, feel free to discuss them in the comments; if this was useful, please share it. Thanks!

References:

https://blog.pm2.io/managing-python-application-with-pm2

https://pm2.io/doc/en/runtime/quick-start/



Learn to code with C++, Python, & Java from the comfort of home [DEALS]


Knowing how to code can mean the difference between getting the tech job you want and not getting it. If you are like most, though, you probably don’t have the time ― or cash ― that’s required to head back to school full time. That doesn’t mean, however, that you’ll have to go without. On the contrary, you can learn to code from home with The Complete Learn to Code Masterclass Bundle, offered to readers of Android Community for just $39 ― a savings of over 90% off the regular price.

This package, which is normally valued at over $1370, includes nine courses and over 73 hours of content that introduce students to popular programming languages like C++, Java, and Python. The courses are delivered online so you can learn from the comfort of home, the content is available 24/7 so you can set your own schedule, and you’ll enjoy lifetime access so you can take as long as you like to finish the whole thing.

Software runs the world and now you can get a grasp on it with The Complete Learn to Code Masterclass Bundle, only $39 here at Android Community Deals.


Data Processing Using Python (用Python玩转数据)
Description

Featured on: Oct 9, 2018

About this course: This course (please click https://www.coursera.org/learn/python-data-processing for the English version) is aimed mainly at students from non-computer-science majors. It progresses step by step: from basic Python syntax, to acquiring data locally and over the network in Python, parsing and representing the data, using Python's open-source SciPy ecosystem for basic and advanced statistical analysis and visualization, and finally designing a simple GUI to present and process data. The whole course is built on financial data; by constructing a series of approachable and entertaining cases, it lets you experience Python's simplicity, elegance and robustness in an intuitive way, while also exploring how, beyond the business domain, Python offers equally convenient and efficient data processing in humanities and social sciences such as literature, sociology and journalism, as well as in science and engineering fields such as mathematics and biology, so it can be applied flexibly across disciplines. The course was recently fully revamped (the update finished rolling out in the week of August 14, 2017), with the main changes being: 1. a switch from Python 2.x to Python 3.x; 2. new content on web-crawling basics, including page fetching and parsing and Web APIs; 3. other changes, including reordering parts of the course and enriching the content, especially the hands-on project sections.


Python GUI: From A-to-Z With 2 Final Projects

Description

Featured on: Oct 9, 2018

Learn how to build a powerful GUI in Python using Tkinter, and build your own GUI programs in Python. This course is for those who want to learn GUI programming with Python: it teaches you from scratch, and it also suits those who already know Tkinter and want to learn how to write the code to build programs. The course is ideal for people who haven't programmed before, but great for other programmers as well, as long as they don't mind a bit of extra explanation. It teaches you everything in GUI programming, from creating windows to creating buttons, and how to create many advanced functions.
Introduction to Python and Hacking with Python

Description

Featured on: Oct 9, 2018

Create your own hacking scripts. In this course you learn to create your own scripts for hacking. The course has two advantages: first, you will learn Python; second, you will be able to create your own hacking tool using Python. It is a complete basics course, so you can enroll even if you know nothing about Python. Trying a particular injection manually everywhere is very difficult, and you won't find software that performs such an injection exactly the way you want; by writing your own Python script you can save a lot of your time.
Running the Same Task in Python and R

According to a KDD poll, fewer respondents used only R in 2017 than in 2016; at the same time, more respondents used only Python in 2017 than in 2016.

Let’s take a quick look at what happens when we try a task in both systems.

For our task we picked the painful exercise of directly reading a 50,000,000 row by 50 column data set into memory on a machine with only 8GB of RAM.

In Python the Pandas package takes around 6 minutes to read the data, and then one is ready to work.
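For reference, the Pandas side is essentially a one-liner; the sketch below is mine, and the file name is a stand-in since the post doesn't give one:

# Minimal sketch of the Pandas read (file name is hypothetical):
import pandas as pd

df = pd.read_csv('big_50m_by_50.csv')  # roughly 6 minutes on the 8GB machine
print(df.shape)                        # (50000000, 50)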



In R both utils::read.csv() and readr::read_csv() fail with out-of-memory messages. So if your view of R is “base R only”, or “base R plus tidyverse only”, or “tidyverse only”: reading this file is a “hard task.”



With the above narrow view one would have no choice but to move to Python if one wants to get the job done.

Or, we could remember data.table. While data.table is obviously not part of the tidyverse, data.table has been a best practice in R for around 12 years. It can read the data and is ready to work in R in under a minute.



In conclusion, to get things done in a pinch: learn Python or learn data.table. And, in my opinion, “tidyverse first teaching” (commonly code for “tidyverse only teaching”) may not serve the R community in the long run.


Less Talk, More Code: Python Learning 020 - Printing with Commas

In the earlier example code we used print a lot to produce output; both strings and objects are output in string form. print can also print multiple expressions at once, separated by commas. For example,

print('Age:', 42)

outputs

Age: 42

Another example, printing a series of numbers:

print(1, 2, 3)

outputs

1 2 3

If we want to output a string that contains variables, string formatting usually comes to mind first. In fact, print can do this too.

name = 'Green'
salutation = 'Mr,'
greeting = 'Hello,'
print(greeting, salutation, name)

outputs

Hello, Mr, Green

Of course we could also do this with string concatenation; for instance, dropping the comma from 'Hello,' and concatenating it works as well.

greeting = 'Hello'
print(greeting + ',', salutation, name)

outputs

Hello, Mr, Green
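As an aside (not in the original article), the space that the comma inserts between values is just print's default separator, and it can be changed with the sep parameter:

print(1, 2, 3)             # 1 2 3  (the separator defaults to a space)
print(1, 2, 3, sep='-')    # 1-2-3
print('Age:', 42, sep='')  # Age:42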

Project files download: https://download.csdn.net/download/yysyangyangyangshan/10707305


Reposurgeon’s Excellent Journey and the Waning of Python

Time to make it public and official. The entire reposurgeon suite (not just repocutter and repomapper, which have already been ported) is changing implementation languages from Python to Go. Reposurgeon itself is about 37% translated, with pretty good unit-test coverage. Three of my collaborators on the project (Daniel Brooks, Eric Sunshine, and Edward Cree) have stepped up to help with code and reviews.

I’m posting about this because the pressures driving this move are by no means unique to the reposurgeon suite. Python, my favorite working language for twenty years, can no longer cut it at the scale I now need to operate: it can’t handle large enough working sets, and it’s crippled in a world of multi-CPU computers. I’m certain I’m not alone in seeing these problems; if I were, Google, which used to invest heavily in Python (they had Guido on staff there for a while) wouldn’t have funded Go.

Some of Python’s issues can be fixed. Some may be unfixable. I love Guido and the gang and I am vastly grateful for all the use and pleasure I have gotten out of Python, but, guys, this is a wake-up call. I don’t think you have a lot of time to get it together before Python gets left behind.

I’ll first describe the specific context of this port, then I’ll delve into the larger issues about Python, how it seems to be falling behind, and what can be done to remedy the situation.

The proximate cause of the move is that reposurgeon hit a performance wall on the GCC Subversion repository. 259K commits, bigger than anything else reposurgeon has seen by almost an order of magnitude; Emacs, the runner-up, was somewhere a bit north of 33K commits when I converted it.

The sheer size of the GCC repository brings the Python reposurgeon implementation to its knees. Test conversions take more than nine hours each, which is insupportable when you’re trying to troubleshoot possible bugs in what reposurgeon is doing with the metadata. I say “possible” because we’re in a zone where defining correct behavior is rather murky; it can be difficult to distinguish the effects of defects in reposurgeon from those of malformations in the metadata, especially around the scar tissue from CVS-to-SVN conversion and near particularly perverse sequences of branch copy operations.

I was seeing OOM crashes, too, on a machine with 64GB of RAM. Alex, I’ll take “How do you know you have a serious memory-pressure problem?” for $400, please. I was able to head these off by not running a browser during my tests, but that still told me the working set is so large that cache misses are a serious performance problem even on a PC design specifically optimized for low memory-access latency.

I had tried everything else. The semi-custom architecture of the Great Beast, designed for this job load, wasn’t enough. Nor were accelerated Python implementations like cython (passable) or pypy (pretty good). Julien Rivaud and I did a rather thorough job, back around 2013, of hunting down and squashing O(n^2) operations; that wasn’t good enough either. Evidence was mounting that Python is just too slow and fat for work on really large datasets made of actual objects.

That “actual objects” qualifier is important because there’s a substantial scientific-Python community working with very large numeric data sets. They can do this because their Python code is mostly a soft layer over C extensions that crunch streams of numbers at machine speed. When, on the other hand, you do reposurgeon-like things (lots of graph theory and text-bashing) you eventually come nose to nose with the fact that every object in Python has a pretty high fixed minimum overhead.

Try running this program:

from __future__ import print_function
import sys
print(sys.version)
d = {
"int": 0,
"float": 0.0,
"dict": dict(),
"set": set(),
"tuple": tuple(),
"list": list(),
"str": "",
"unicode": u"",
"object": object(),
}
for k, v in sorted(d.items()):
print(k, sys.getsizeof(v))

Here’s what I get when I run it under the latest greatest Python 3 on my system:

3.6.6 (default, Sep 12 2018, 18:26:19)
[GCC 8.0.1 20180414 (experimental) [trunk revision 259383]]
dict 240
float 24
int 24
list 64
object 16
set 224
str 49
tuple 48
unicode 49

There’s a price to be paid for all that dynamicity and duck-typing that the scientific-Python people have evaded by burying their hot loops in C extensions, and the 49-byte per-string overhead is just the beginning of it. The object() size in that table is actually misleadingly low; an object instance's attribute store is a dictionary with its own hash table, not a nice tight C-like struct with fields at fixed offsets. Field lookup costs some serious time.

Those sizes may not look like a big deal, and they aren’t not in glue scripts. But if you’re instantiating 359K objects containing actual data the overhead starts to pile up fast.
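To make the per-object overhead concrete, here is a small follow-up experiment (mine, not from the original post) contrasting a normal instance, which carries a per-instance __dict__, with a __slots__ class that does not:

import sys

class Plain:
    def __init__(self):
        self.a, self.b = 1, 2

class Slotted:
    __slots__ = ('a', 'b')   # fixed layout, no per-instance dict
    def __init__(self):
        self.a, self.b = 1, 2

p, s = Plain(), Slotted()
print(sys.getsizeof(p) + sys.getsizeof(p.__dict__))  # object plus its dict
print(sys.getsizeof(s))                              # no __dict__ at all
print(hasattr(s, '__dict__'))                        # False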

Alas, I can’t emulate the scientific-Python strategy. If you try to push complex graph-theory computations into C your life will become a defect-riddled hell, for reasons I’ve previously described as greenspunity. This is not something you want to do, ever, in a language without automatic memory management.

Trying to break the GCC conversion problem into manageable smaller pieces won’t work either. This is a suggestion I’m used to hearing from smart people when I explain the problem. To understand why this won’t work, think of a Subversion repository as an annotated graph in which the nodes are (mainly) things like commit representations and the main link type is “is a parent of”. A git repository is a graph like that too, but with different annotations tied to a different model of revisioning.

The job of reposurgeon is to mutate a Subversion-style graph into a git-style graph in a way that preserves parent relationships, node metadata, and some other relations I won’t go into just now. The reason you can’t partition the problem is that the ancestor relationships in these graphs have terrible locality. Revisions can have parents arbitrarily far back in the history, arbitrarily close to the zero point. There aren’t any natural cut points where you can partition the problem. This is why the Great Beast has to deal with huge datasets in memory all at once.

My problem points at a larger Python issue: while there probably isn’t much work on large datasets using data structures quite as complex and poorly localized as reposurgeon’s, it’s probably less of an outlier in the direction of high overhead than scientific computation is in the direction of low. Or, to put it in a time-focused way, as data volumes scale up the kinds of headaches we’ll have will probably look more like reposurgeon’s than like a huge matrix-inversion or simulated-annealing problem. Python is poorly equipped to compete at this scale.

That’s a general problem in Python’s future. There are others, which I’ll get to. Before that, I want to note that settling on a new implementation language was not a quick or easy process. After the last siege of serious algorithmic tuning in 2013 I experimented with Common LISP, but that effort ran aground because it was missing enough crucial features that the gap from Python looked impractical to bridge. A few years later I looked even more briefly at OCaml; same problem, actually even worse.

I didn’t make a really serious effort to move sooner than 2018 because, until the GCC repository, I was always able to come up with some new tweak of reposurgeon or the toolchain underneath it that would make it just fast enough to cope with the current problem. But the problems kept getting larger and nastier (I’ve noted the adverse selection problem here). The GCC repo was the breaking point.

While this was going on, pre-GCC, I was also growing somewhat discontented with Python for other reasons. The most notable one at the time was the Python team’s failure to solve the notorious GIL (Global Interpreter Lock) problem. The GIL problem effectively blocks any use of concurrency on programs that aren’t interrupted by I/O waits. What it meant, functionally, was that I couldn’t use multithreading in Python to speed up operations like comment-text searches; those never hit the disk or network. Annoying…here I am with a 16-core hot-rod and reposurgeon can only use one (1) of those processors.

It turns out the GIL problem isn’t limited to non-I/O-bound workloads like mine, either, and it’s worse than most Python developers know. There’s a rather terrifying talk by David Beazley showing that the GIL introduces a huge amount of contention overhead when you try to thread across multiple processors so much so that you can actually speed up your multi-threaded programs by disabling all but one of your processors!

This of course isn’t just a reposurgeon problem. Who’s going to deploy Python for anything serious if it means that 15/16ths of your machine becomes nothing more than a space heater? And yet the Python devs have shown no sign of making a commitment to fix this. They seem to put a higher priority on not breaking their C extension API. This…is not a forward-looking choice.

Another issue is the Python 2 to 3 transition. Having done my bit to make it as smooth as possible by co-authoring Practical Python porting for systems programmers with reposurgeon collaborator Peter Donis, I think I have the standing to say that the language transition was fairly badly botched. A major symptom of the botchery is that the Python devs unnecessarily broke syntactic compatibility with 2.x in 3.0 and didn’t restore it until 3.2. That gap should never have opened at all, and the elaborateness of the kluges Peter and I had to develop to write polyglot Python even after 3.2 are an indictment as well.

It is even open to question whether Python 3 is a better language than Python 2. I could certainly point out a significant number of functional improvements, but they are all overshadowed by what is, in my opinion, the extremely ill-advised decision to turn strings into Unicode code-point sequences rather than byte sequences.

I felt like this was a bad idea when 3.0 shipped; my spider-sense said “wrong, wrong, wrong” at the time. It then caused no end of complications and backward-incompatibilities which Peter Donis and I later had to paper over. But lacking any demonstration of how to do better I didn’t criticize in public.

Now I know what “Do better” looks like. Strings are still bytes. A few well-defined parts of your toolchain construe them as UTF-8 notably, the compiler and your local equivalent of printf(3). In your programs, you choose whether you want to treat string payloads as uninterpreted bytes (implicitly ASCII in the low half) or as Unicode code points encoded in UTF-8 by using either the “strings” or “unicode” libraries. If you want any other character encoding, you use codecs that run to and from UTF-8.

This is how Go does it. It works, it’s dead simple, it confines encoding dependencies to the narrowest possible bounds and by doing so it demonstrates that Python 3 code-point sequences were a really, really bad idea.

The final entry in our trio of tribulations is the dumpster fire that is Python library paths. This has actually been a continuing problem since GPSD and has bitten NTPSec pretty hard; it’s a running sore on our issue tracker, so bad that we’re seriously considering moving our entire suite of Python client tools to Go just to get shut of it.

The problem is that where on your system you need to put a Python library module, so that a Python main program (or other library) can see it and load it, varies in only semi-predictable ways. By version, yes, but there’s also an obscure distinction between site-packages, dist-packages, and what for want of any better term I’ll call root-level modules (no subdirectory under the version directory) that different distributions and even different application packages seem to interpret in different and incompatible ways. The root of the problem seems to be that good practice is under-specified by the Python dev team.

This is particular hell on project packagers. You don’t know what version of Python your users will be running, and you don’t know the contents of their sys.path (library load path variable). You can’t know where your install procedure should put things so the Python pieces of your code will be able to see each other. About all you can do is shotgun multiple copies of your library to different plausible locations and hope one of them intersects with your user’s load path. And I shall draw a kindly veil over the even greater complications if you’re shipping C extension modules…

Paralysis around the GIL, the Python 3 strings botch, the library-path dumpster fire: these are signs of a language that is aging, grubby, and overgrown. It pains me to say this, because I was a happy Python fan and advocate for a long time. But the process of learning Go has shed a harsh light on these deficiencies.

I’ve already noted that Go’s Unicode handling implicitly throws a lot of shade. So does its brute-force practice of building a single self-contained binary from source every time. Library paths? What are those?

But the real reason that reposurgeon is moving to Go rather than some other language I might reasonably think I could extract high performance from is not either of these demonstrations. Go did not get this design win by being right about Unicode or build protocols.

Go got this win because (a) comparative benchmarks on non-I/O-limited code predict a speedup of around 40x, which is good enough and competitive with Rust or C++, and (b) the semantic gap between Python and Go seemed surprisingly narrow, reducing the expected translation time lower than I could reasonably expect from any other language on my radar.

Yes, static typing vs. Python’s dynamic typing seems like it ought to be a big deal. But there are several features that converge these languages enough to almost swamp that difference. One is garbage collection; the second is the presence of maps/dictionaries; and the third is strong similarities in low-level syntax.

In fact, the similarities are so strong that I was able to write a mechanical Python-to-Go translator’s assistant pytogo that produces what its second user described as “a good first draft” of a Go translation. I described this work in more detail in Rule-swarm attacks can outdo deep reasoning.

I wrote pytogo around roughly the 22% mark (just short of 4800 lines out of 14000 in the translation) and am now up to 37% out of 16000. The length of the Go plus commented-out untranslated Python has been creeping up because Go is less dense; all those explicit close brackets add up. I am now reasonably confident of success, though there is lots of translation left to do and one remaining serious technical challenge that I may discuss in a future post.

For now, though, I want to return to the question of what Python can do to right its ship. For this project the Python devs have certainly lost me; I can’t afford to wait on them getting their act together before finishing the GCC conversion. The question is what they can do to stanch more defections to Go, a particular threat because the translation gap is so narrow.

Python is never going to beat Go on performance. The fumbling of the 2/3 transition is water under the dam at this point, and I don’t think it’s realistically possible to reverse the Python 3 strings mistake.

But that GIL problem? That’s got to get solved. Soon. In a world where a single-core machine is a vanishing oddity outside of low-power firmware deployments, the GIL is a millstone around Python’s neck. Otherwise I fear the Python language will slide into shabby-genteel retirement the way Perl has, largely relegated to its original role of writing smallish glue scripts.

Smothering that dumpster fire would be a good thing, too. A tighter, more normative specification about library paths and which things go where might do a lot.

Of course there’s also a positioning issue. Having lost the performance-chasers to Go, Python needs to decide what constituency it wants to serve and can hold onto. That problem I can’t solve, just point out what technical problems are both seriously embarrassing and fixable. That’s what I’ve tried to do.

As I said at the beginning of this rant, I don’t think there’s a big window of time in which to act, either. I judge the Python devs do not have a year left to do something convincing about the GIL before Go completely eats their lunch, and I’m not sure they have even six months. They’d best get cracking.


Awesome Adafruit: Python, Lasers and Mu!

Limor ‘Ladyada’ Fried, founder of Adafruit and maker extraordinaire, has just released a video demonstrating LIDAR (laser-based distance measurement) with CircuitPython and Mu.

The source code and documentation for the library Limor demonstrates can be found on GitHub. Under the hood, it’s an I2C-based API which has been abstracted into something Pythonic. The code example included in the README (reproduced below) demonstrates how easy it is to use the LIDAR sensor with CircuitPython. In only a few lines of code it outputs data which Mu can use with its built-in plotter:

import time
import board
import busio
import adafruit_lidarlite

# Create library object using our Bus I2C port
i2c = busio.I2C(board.SCL, board.SDA)

# Default configuration, with only i2c wires
sensor = adafruit_lidarlite.LIDARLite(i2c)

while True:
    try:
        # We print tuples so you can plot with Mu Plotter
        print((sensor.distance,))
    except RuntimeError as e:
        # If we get a reading error, just print it and keep truckin'
        print(e)
    time.sleep(0.01)  # you can remove this for ultra-fast measurements!

Great stuff!

It’s at this point in geeky blog posts that it’s traditional to bring up sharks, lasers and Dr. Evil. Happily, I ironically understand apophasis. ;-)


Yet another introduction to golang interfaces

I was peacefully trying to finish a post about user namespaces when a friend came home and arrogantly told me I know nothing about Golang interfaces. So, here we are. Context: Ubuntu 18.04, Python 3.7 and go version go1.11 linux/amd64.

It's about defining & implementing behaviours

An interface is a description of the actions that an item can do. When you flip a light switch, the light goes on, or off; you don't care how things are implemented, you just care that it goes on, or off.

Python's repr function

You may have heard about the Python repr function, taking any valid Python object, or values of any type, and returning a string containing a printable representation of the object:

>>> type(1)
<class 'int'>
>>> repr(1)
'1'
>>>
>>> type(1.1)
<class 'float'>
>>> repr(1.1)
'1.1'
>>>
>>> type('foo')
<class 'str'>
>>> repr('foo')
"'foo'"
>>>

Another example:

>>> class A:
...     name = 'nsukami'
...
>>> p = A()
>>> # A doesn't implement the __repr__ method, default representation returned
>>> repr(p)
'<__main__.A object at 0x7f4f9dece898>'
>>>
>>> class B:
...     name = 'nsukami'
...     def __repr__(self):
...         return f"My name is {self.name}"
...
>>> # B overrides or implements the __repr__ method, custom representation returned
>>> repr(B())
'My name is nsukami'
>>>

The __repr__ magic method

Yes, the object.__repr__ method and all the other magic or special methods are Python's approach for allowing classes to define their own behavior.

When we call the repr function, what is happening behind the scenes is this:

- the repr method takes values of any type as argument
- thanks to the mro, the order in which __repr__ is overridden is known
- if an implementation of __repr__ is found, then it will be applied

Golang interfaces

Interface types express generalizations or abstractions about the behaviours of other types. Interfaces let us write functions that are more flexible and adaptable. Interfaces let us achieve polymorphism. Example:

package main

import "fmt"

// To satisfy I, you need to implement the Foo behaviour
type I interface {
    Foo()
}

type A struct{}

// A is implicitly satisfying I w/o changing the definition of A
func (p A) Foo() {
    fmt.Println("foo")
}

type B struct{}

// B is implicitly satisfying I, or we can say: B "is a" I
func (p B) Foo() {
    fmt.Println("bar")
}

// F will take any argument with Foo() behaviour
// or F will take any argument satisfying I
func F(i I) {
    i.Foo()
}

func main() {
    a := A{}
    b := B{}
    l := [...]I{a, b}
    for n, _ := range l {
        F(l[n]) // appears as I type, but behaviour changes depending on current instance
    }
}

Now, let's do with Go what we've done with Python:

package main

import (
    "fmt"
)

type A struct{ name string }
type B struct{ name string }

// B struct is now implementing the Stringer interface
func (b B) String() string {
    return fmt.Sprintf("My name is %s", b.name)
}

func main() {
    a := A{name: "Nsukami"}
    b := B{name: "Nsukami"}
    fmt.Println(a)
    fmt.Println(b)
}

The fmt.Println function

The fmt.Println function does not return a string like Python's repr function, but that's not the point. The output of the fmt.Println function can be customized if the passed value implements the Stringer interface. With Golang, when we call fmt.Println, what's happening behind the scenes is:

- fmt.Println takes an arbitrary number of empty interfaces as arguments.
- thanks to the way interface values are stored, all the implemented interfaces are known.
- if an implementation of the Stringer interface is found, then it will be applied.

The empty interface?

Yes. An empty interface may hold values of any type. Example:

package main

import (
    "fmt"
    //"reflect"
)

type A struct{ name string }
type B struct{ name string }

// B struct is now implementing the Stringer interface
func (b B) String() string {
    return fmt.Sprintf("My name is %s", b.name)
}

// f takes an empty interface as argument
// f can take, as argument, values of any type
func f(i interface{}) {
    // nevertheless, we perfectly know the type that was passed to us
    // and we can retrieve the right implementation of the Stringer interface
    // fmt.Print("Dynamic type: ", reflect.TypeOf(i).String(), ", Concrete value: ", i, "\n")
    fmt.Printf("Dynamic type: %T, Concrete value: %v\n", i, i)
}

func main() {
    f(B{name: "foo"})
    f(A{name: "foo"})
    f(1)
    f(1.1)
    f("nsukami")
}

The way interface values are stored?

The best way to understand how interface values are stored is to read the following awesome article, really.

Type assertion?

A type assertion is an operation applied to an interface value. A type assertion checks that the dynamic type of its operand matches the asserted type. Simply said: x.(T) asserts that x is not nil and that the concrete value stored in x is of type T. Example:

package main

import (
    "fmt"
)

type B struct{ name string }

func (b B) String() string {
    return fmt.Sprintf("My name is %s", b.name)
}

func f(i interface{}) {
    if _, ok := i.(B); ok {
        // if i is a B, do something
        fmt.Println("i is B, let's do something")
    } else {
        fmt.Println("i is not a B")
    }
}

func main() {
    var i interface{} = "baz"

    // is i a string?
    s := i.(string) // if i is a string, no panic will occur
    fmt.Println(s)

    // if you uncomment the 2 following lines, the program will panic
    // because i is not a float64
    // r := i.(float64)
    // fmt.Println(r)

    // to handle panic gracefully, retrieve the 2nd returned value of the type assertion
    r, ok := i.(float64)
    fmt.Println(r, ok)

    // type assertion inside if conditions
    f(B{name: "foo"})
    f("nsukami")
}

Recap?

- You achieve polymorphism in Go with interfaces; in Python, with inheritance, mixins, and ABCs (see the sketch below).
- Interfaces in Go are a little bit like Python magic methods: they help you implement behaviours.
- Type assertions in Go are a little bit like Python's built-in function isinstance.
- In Go, you can define your own interfaces. In Python, you cannot define your own magic methods.
- In Go, every type implements the empty interface. In Python 3, all objects are instances of object.
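Since the recap points at ABCs as Python's nearest analogue, here is a small sketch (mine, not from the original post) of the I interface from the first Go example, rewritten as a Python abstract base class:

from abc import ABC, abstractmethod

# Rough Python analogue of the Go interface I above
class I(ABC):
    @abstractmethod
    def foo(self):
        ...

class A(I):
    def foo(self):
        print("foo")

class B(I):
    def foo(self):
        print("bar")

def f(i: I):
    i.foo()

for obj in (A(), B()):
    f(obj)                 # polymorphic call, like F(l[n]) in Go
print(isinstance(A(), I))  # True: the isinstance / type-assertion analogy
# Note: instantiating I() itself raises TypeError, much like an unimplemented interface.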

More on the topic:

I hope I was at least able to bring you another perspective on this topic, really. May I please recommend the following links?

- Duck test
- Liskov substitution principle
- Python's special methods
- Difference between __str__ and __repr__
- Rejected PEP 245 -- Python Interface syntax
- A tour of Go - Interfaces
- Go by examples - Interfaces
- Non exhaustive list of all interfaces in Go's standard library
- Proposal: Default implementation for interface
- Go Data Structures: Interfaces

Hold my beer

** YaitGi: Yet another introduction to Golang interfaces.

Not so unexpected Quote:

"Behaviour is a mirror in which every one displays his own image." Johann Wolfgang von Goethe


Introduction to Redis streams with Python

October 08, 2018 15:54 / python redis walrus /



Redis 5.0 contains, among lots of fixes and improvements, a new data-type and set of commands for working with persistent, append-only streams .

Redis streams are a complex topic, so I won't be covering all aspects of the APIs, but hopefully after reading this post you'll have a feel for how they work and whether they might be useful in your own projects.

Streams share some superficial similarities with list operations and pub/sub, with some important differences. For instance, task queues are commonly implemented by having multiple workers issue blocking-pop operations on a list. The benefit of this approach is that messages are distributed evenly among the available workers. Downsides, however, are:

- Once a message is read it's effectively "gone forever". If the worker crashes there's no way to tell if the message was processed or needs to be rescheduled. This pushes the responsibility of retrying failed operations onto the consumer.
- Only one client can read a given message. There's no "fan-out".
- No visibility into message state after the message is read.

Similarly, Redis pub/sub can be used to publish a stream of messages to any number of interested consumers. Pub/sub is limited by the fact that it is "fire and forget". There is no history, nor is there any indication that a message has been read.

Streams allow the implementation of more robust message processing workflows, thanks to the following features:

- streams allow messages to be fanned-out to multiple consumers, or you can use stateful consumers ("consumer groups") to coordinate message processing among multiple workers.
- message history is preserved and visible to other clients.
- consumer groups support message acknowledgements, claiming stale unacknowledged messages, and introspecting pending messages, ensuring that messages are not lost in the event of an application crash.
- streams support blocking read operations.

The rest of the post will show some examples of working with streams using the walrus Redis library. If you prefer to just read the code, this post is also available as an IPython notebook.

Getting started with streams

I maintain a Redis utility library named walrus that builds on and extends the Redis client from the redis-py package. I've added support for the new streams APIs, since they aren't available in redis-py at the time of writing. To follow along, you can install walrus using pip :

$ pip install walrus

Or to install the very latest code from the master branch:

$ pip install -e git+git@github.com:coleifer/walrus.git#egg=walrus

walrus supports the low-level streams APIs, as well as offering high-level container types which are a bit easier to work with in Python.

Basic operations

Streams are append-only data-structures that store a unique identifier (typically a timestamp) along with arbitrary key/value data. When adding data to a stream, Redis can automatically provide you with a unique timestamp-based identifier, which is almost always what you want. When a new message is added, the message id is returned:

from walrus import Database  # A subclass of the redis-py Redis client.

db = Database()
stream = db.Stream('stream-a')

msgid = stream.add({'message': 'hello, streams'})
print(msgid)

# Prints something like:
# b'1539008591844-0'

Message ids generated by Redis consist of a timestamp, in milliseconds, along with a sequence number (for ordering messages that arrived at the same millisecond).

Let's add a couple more items so we have more data to work with:

msgid2 = stream.add({'message': 'message 2'})
msgid3 = stream.add({'message': 'message 3'})

Ranges of records can be read using slices. The message ids provided as the range endpoints are inclusive when using the range API:

# Get messages 2 and newer:
messages = stream[msgid2:]

# messages contains:
[(b'1539008914283-0', {b'message': b'message 2'}),
 (b'1539008918230-0', {b'message': b'message 3'})]

# We can use the "step" parameter to limit the number of records returned.
messages = stream[msgid::2]

# messages contains the first two messages:
[(b'1539008903588-0', {b'message': b'hello, stream'}),
 (b'1539008914283-0', {b'message': b'message 2'})]

# Get all messages in stream:
messages = list(stream)
[(b'1539008903588-0', {b'message': b'hello, stream'}),
 (b'1539008914283-0', {b'message': b'message 2'}),
 (b'1539008918230-0', {b'message': b'message 3'})]

The size of streams can be managed by deleting messages by id, or by "trimming" the stream, which removes the oldest messages. The desired size is specified when issuing a "trim" operation, though, due to the internal implementation of the stream data-structures, the size is considered approximate by default.

# Adding and deleting a message:
msgid4 = stream.xadd({'message': 'delete me'})
del stream[msgid4]

# How many items are in the stream?
print(len(stream))  # Prints 3.

To see how trimming works, let's create another stream and fill it with 1000 items, then request it to be trimmed to 10 items:

stream2 = db.Stream('stream-2')
for i in range(1000):
    stream2.add({'data': 'message-%s' % i})

# (approximate) trim to most recent 10 messages.
nremoved = stream2.trim(10)
print(nremoved)      # 909
print(len(stream2))  # 91

# To trim to an exact number, specify `approximate=False`:
stream2.trim(10, approximate=False)  # Returns 81.
print(len(stream2))  # 10

The previous examples show how to add, read and delete messages from streams. When processing a continuous stream of events, though, it may be desirable to block until messages are added. For this we can use the read() API, which supports blocking until messages become available.

# By default, calling `stream.read()` returns all messages in the stream:
stream.read()

# Returns:
[(b'1539008903588-0', {b'message': b'hello, stream'}),
 (b'1539008914283-0', {b'message': b'message 2'}),
 (b'1539008918230-0', {b'message': b'message 3'})]

We can pass a message id to read() , and unlike the slicing operations, this id is considered the "last-read message" and acts as an exclusive lower-bound:

stream.read(last_id=msgid2)

# Returns:
[(b'1539008918230-0', {b'message': b'message 3'})]

# This returns None since there are no messages newer than msgid3.
stream.read(last_id=msgid3)

We can make read() blocking by specifying a special id, "$", and a timeout in milliseconds. To block forever, you can use timeout=0 .

# This will block for 2 seconds, after which `None` is returned
# (provided no messages are added while waiting).
stream.read(timeout=2000, last_id='$')

While it's possible to build consumers using these APIs, the client is still responsible for keeping track of the last-read message ID and coming up with semantics for retrying failed messages, etc. In the next section, we'll see how consumer groups can greatly simplify building a stream processing pipeline.

Consumer groups

Consumer groups make it easy to implement robust message processing pipelines. Consumer groups allow applications to read from one or more streams, while keeping track of which messages were read, who read them, when they were last read, and whether they were successfully processed (acknowledged). Unacknowledged messages can be inspected and claimed, simplifying "retry" logic.

# Consumer groups require that a stream exist before the group can be
# created, so we have to add an empty message.
stream_keys = ['stream-a', 'stream-b', 'stream-c']
for stream in stream_keys:
    db.xadd(stream, {'data': ''})

# Create a consumer-group for streams a, b, and c. We will mark all
# messages as having been processed, so only messages added after the
# creation of the consumer-group will be read.
cg = db.consumer_group('cg-abc', stream_keys)
cg.create()     # Create the consumer group.
cg.set_id('$')

To read from all the streams in a consumer group, we can use the read() method. Since we marked all messages as read and have not added anything new since creating the consumer group, the return value is None :

resp = cg.read() # None

For convenience, walrus exposes the individual streams within a consumer group as attributes on the ConsumerGroup instance. Let's add some messages to streams a, b, and c:

cg.stream_a.add({'message': 'new a'})
cg.stream_b.add({'message': 'new for b'})
for i in range(10):
    cg.stream_c.add({'message': 'c-%s' % i})

Now let's try reading from the consumer group again. We'll pass count=1 so that we read no more than one message from each stream in the group:

# Read messages across all streams in the group.
cg.read(count=1)

# Returns:
{'stream-a': [(b'1539023088125-0', {b'message': b'new a'})],
 'stream-b': [(b'1539023088125-0', {b'message': b'new for b'})],
 'stream-c': [(b'1539023088126-0', {b'message': b'c-0'})]}

We've now read all the unread messages from streams a and b, but stream c still has messages. Calling read() again will give us the next unread message from stream c:

cg.read(count=1)

# Returns:
{'stream-c': [(b'1539023088126-1', {b'message': b'c-1'})]}

When using consumer groups, messages that are read need to be acknowledged. Let's look at the pending (read but unacknowledged) messages from stream a using the pending() method, which returns a list of metadata about each unacknowledged message:

# We read one message from stream a, so we should see one pending message.
cg.stream_a.pending()

# Returns a list of:
# [message id, consumer name, message age, delivery count]
[[b'1539023088125-0', b'cg-abc.c1', 22238, 1]]

To acknowledge receipt of a message and remove it from the pending list, use the ack() method on the consumer group stream:

# Read the pending message list for stream a.
pending_list = cg.stream_a.pending()
msg_id = pending_list[0][0]

# Acknowledge the message.
cg.stream_a.ack(msg_id)

# Returns number of pending messages successfully acknowledged: 1

Consumer groups have the concept of individual consumers. These might be workers in a process pool, for example. Note that the pending() call returned the consumer name as "cg-abc.c1". Walrus uses the consumer group name + ".c1" as the name for the default consumer name. To create another consumer within a given group, we can use the consumer() method:

# Create a second consumer within the consumer group.
cg2 = cg.consumer('cg-abc.c2')

Creating a new consumer within a consumer group does not affect the state of the group itself. Calling read() using our new consumer will pick up from the last-read message, as you would expect:

# Read from our consumer group using the new consumer. Recall
# that we read all the messages from streams a and b, and the
# first two messages in stream c.
cg2.read(count=1)

# Returns:
{'stream-c': [(b'1539023088126-2', {b'message': b'c-2'})]}

If we look at the pending message status for stream c, we will see that the first and second messages were read by the consumer "cg-abc.c1" and the third message was read by our new consumer, "cg-abc.c2":

# What messages have been read, but were not acknowledged, from stream c?
cg.stream_c.pending()

# Returns list of [message id, consumer, message age, delivery count]:
[[b'1539023088126-0', b'cg-abc.c1', 51329, 1],
 [b'1539023088126-1', b'cg-abc.c1', 43772, 1],
 [b'1539023088126-2', b'cg-abc.c2', 5966, 1]]

Consumers can claim pending messages, which transfers ownership of the message and returns a list of (message id, data) tuples to the caller:

# Unpack the pending messages into a few variables.
mc1, mc2, mc3 = cg.stream_c.pending()

# Claim the first message for consumer 2:
cg2.stream_c.claim(mc1[0])

# Returns a list of (message id, data) tuples for the claimed messages:
[(b'1539023088126-0', {b'message': b'c-0'})]

Re-inspecting the pending messages for stream c, we can see that the consumer for the first message has changed and the message age has been reset:

cg.stream_c.pending()

# Returns:
[[b'1539023088126-0', b'cg-abc.c2', 2168, 1],
 [b'1539023088126-1', b'cg-abc.c1', 47141, 1],
 [b'1539023088126-2', b'cg-abc.c2', 9335, 1]]

Consumer groups can be created and destroyed without affecting the underlying data stored in the streams:

# Destroy the consumer group.
cg.destroy()

# All the messages are still in "stream-c":
len(db.Stream('stream-c'))
# Returns 10.

The individual streams within the consumer group support a number of useful APIs:

consumer_group.stream.ack(*id_list) - acknowledge one or more messages read from the given stream.
consumer_group.stream.add(data, id='*', maxlen=None, approximate=True) - add a new message to the stream. The maxlen parameter can be used to keep the stream from growing without bounds. If given, the approximate flag indicates whether the stream maxlen should be approximate or exact.
consumer_group.stream.claim(*id_list) - claim one or more pending messages.
consumer_group.stream.delete(*id_list) - delete one or more messages by ID.
consumer_group.stream.pending(start='-', stop='+', count=-1) - get the list of unacknowledged messages in the stream. The start and stop parameters can be message ids, while the count parameter can be used to limit the number of results returned.
consumer_group.stream.read(count=None, timeout=None) - monitor the stream for new messages within the context of the consumer group. This method can be made to block by specifying a timeout (or 0 to block forever).
consumer_group.stream.set_id(id='$') - set the id of the last-read message for the consumer group. Use the special id "$" to indicate all messages have been read, or "0-0" to mark all messages as unread.
consumer_group.stream.trim(count, approximate=True) - trim the stream to the given size.
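
As a quick illustration of these methods, here is a minimal sketch reusing the db and cg objects from the examples above (the count, timeout, and trim values are arbitrary):

# Block for up to 2 seconds waiting for a new message on stream a.
cg.stream_a.read(count=1, timeout=2000)

# Rewind the group so every message in stream a appears unread again.
cg.stream_a.set_id('0-0')

# Trim stream a down to (approximately) its 5 most recent messages.
cg.stream_a.trim(5, approximate=True)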

TimeSeries API

Redis automatically uses the millisecond timestamp plus a sequence number to uniquely identify messages added to a stream. This makes streams a natural fit for time-series data. To simplify working with streams as time-series in Python, you can use the special TimeSeries helper class, which acts just like the ConsumerGroup from the previous section with the exception that it can translate between Python datetime objects and message ids automatically.

To get started, we'll create a TimeSeries instance, specifying the stream keys, just like we did with ConsumerGroup:

# Create a time-series consumer group named "demo-ts" for the
# streams s1 and s2.
ts = db.time_series('demo-ts', ['s1', 's2'])

# Add dummy data and create the consumer group.
db.xadd('s1', {'': ''}, id='0-1')
db.xadd('s2', {'': ''}, id='0-1')
ts.create()
ts.set_id('$')  # Do not read the dummy items.

Let's add some messages to the time-series, one for each day between January 1st and 10th, 2018:

from datetime import datetime, timedelta

date = datetime(2018, 1, 1)
for i in range(10):
    ts.s1.add({'message': 's1-%s' % date}, id=date)
    date += timedelta(days=1)

We can read messages from the stream using the familiar slicing API. For example, to read 3 messages starting at January 2nd, 2018:

ts.s1[datetime(2018, 1, 2)::3]

# Returns messages for Jan 2nd - 4th:
[<Message s1 1514872800000-0: {'message': 's1-2018-01-02 00:00:00'}>,
 <Message s1 1514959200000-0: {'message': 's1-2018-01-03 00:00:00'}>,
 <Message s1 1515045600000-0: {'message': 's1-2018-01-04 00:00:00'}>]

Note that the values returned are Message objects. Message objects provide some convenience functions, such as extracting timestamp and sequence values from stream message ids:

for message in ts.s1[datetime(2018, 1, 1)::3]:
    print(message.stream, message.timestamp, message.sequence, message.data)

# Prints:
s1 2018-01-01 00:00:00 0 {'message': 's1-2018-01-01 00:00:00'}
s1 2018-01-02 00:00:00 0 {'message': 's1-2018-01-02 00:00:00'}
s1 2018-01-03 00:00:00 0 {'message': 's1-2018-01-03 00:00:00'}

Let's add some messages to stream "s2" as well:

date = datetime(2018, 1, 1)
for i in range(5):
    ts.s2.add({'message': 's2-%s' % date}, id=date)
    date += timedelta(days=1)

One difference between TimeSeries and ConsumerGroup is what happens when reading from multiple streams. ConsumerGroup returns a dictionary keyed by stream, along with a corresponding list of messages read from each stream. TimeSeries, however, returns a flat list of Message objects:

# Read up to 2 messages from each stream (s1 and s2):
messages = ts.read(count=2)

# "messages" is a list of messages from both streams:
[<Message s1 1514786400000-0: {'message': 's1-2018-01-01 00:00:00'}>,
 <Message s2 1514786400000-0: {'message': 's2-2018-01-01 00:00:00'}>,
 <Message s1 1514872800000-0: {'message': 's1-2018-01-02 00:00:00'}>,
 <Message s2 1514872800000-0: {'message': 's2-2018-01-02 00:00:00'}>]

When inspecting pending messages within a TimeSeries, the message ids are unpacked into (datetime, seq) 2-tuples:

ts.s1.pending()

# Returns:
[((datetime.datetime(2018, 1, 1, 0, 0), 0), 'events-ts.c', 1578, 1),
 ((datetime.datetime(2018, 1, 2, 0, 0), 0), 'events-ts.c', 1578, 1)]

# Acknowledge the pending messages:
for msgts_seq, _, _, _ in ts.s1.pending():
    ts.s1.ack(msgts_seq)

We can set the last-read message id using a datetime:

ts.s1.set_id(datetime(2018, 1, 1))

# Next read will be 2018-01-02, ...
ts.s1.read(count=2)

# Returns:
[<Message s1 1514872800000-0: {'message': 's1-2018-01-02 00:00:00'}>,
 <Message s1 1514959200000-0: {'message': 's1-2018-01-03 00:00:00'}>]

As with ConsumerGroup, the TimeSeries helper provides stream-specific APIs for claiming unacknowledged messages, creating additional consumers, etc.
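
For example, claiming within a TimeSeries can be sketched as follows. This is an assumption-laden sketch, not code from the post: the consumer name 'demo-ts.c2' is hypothetical, and passing the (datetime, seq) tuple from pending() back to claim() is inferred from the ConsumerGroup examples above:

# Sketch only: create a second consumer and claim a pending message.
ts2 = ts.consumer('demo-ts.c2')  # hypothetical consumer name

# pending() yields ((datetime, seq), consumer, age, deliveries) tuples.
msg_id, _, _, _ = ts.s1.pending()[0]
ts2.s1.claim(msg_id)  # assumes the tuple id is accepted here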

Learning more

I haven't yet had time to write a stream-specific section in the walrus documentation, but the following APIs are documented:

Stream
ConsumerGroup, and the consumer group stream helper.
TimeSeries
Low-level stream methods are documented starting here.

You can also find the code from this post condensed nicely into an IPython notebook.

For more information on streams, I suggest reading the streams introduction on the Redis documentation site.


          『高级篇』docker之Python开发信息服务(11)      Cache   Translate Page      

The message service will be written in Python, so we add a Python module to the existing IDEA project. Source code: https://github.com/limingios/msA-docker

Installing the Python plugin in IDEA

Restart IDEA after installing the plugin.


[Screenshots: installing the Python plugin in IDEA]

Installing the Python module

[Screenshots: adding the Python module in IDEA]

Installing the Thrift Python plugin

[Screenshots: installing the Thrift plugin]

At first I tried writing the Python code in IDEA, but even downloading a plugin was a hassle, so I switched to PyCharm, which is much more pleasant to work with.

Editing the Python service code:

# coding: utf-8
from message.api import MessageService
from thrift.transport import TSocket
from thrift.transport import TTransport
from thrift.protocol import TBinaryProtocol
from thrift.server import TServer


class MessageServiceHandler:
    def sendMobileMessage(self, mobile, message):
        print("sendMobileMessage, mobile:" + mobile + ", message:" + message)
        return True

    def sendEmailMessage(self, email, message):
        print("sendEmailMessage, email:" + email + ", message:" + message)
        return True


if __name__ == '__main__':
    handler = MessageServiceHandler()
    processor = MessageService.Processor(handler)
    transport = TSocket.TServerSocket(None, "9090")
    tfactory = TTransport.TFramedTransportFactory()
    pfactory = TBinaryProtocol.TBinaryProtocolFactory()
    server = TServer.TSimpleServer(processor, transport, tfactory, pfactory)
    print("python thrift server start")
    server.serve()
    print("python thrift server exit")
Check that the port is now listening:

[Screenshot: the Thrift server listening on port 9090]

Commands for generating the corresponding Java and Python code

> Both generate into the parent directory, based on the thrift file.

thrift --gen py -out ../ message.thrift
thrift --gen java -out ../ message.thrift

PS: The Thrift development workflow is: first define the thrift file, then generate the corresponding Python code with the commands above, and finally implement the methods defined in the thrift file to complete the Thrift call.
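
To round out the workflow, a matching client might look roughly like this. This is a sketch only: it assumes the same generated message.api package, and mirrors the framed transport and binary protocol used by the server above:

# coding: utf-8
# Minimal client sketch; the phone number and message are dummy values.
from message.api import MessageService
from thrift.transport import TSocket
from thrift.transport import TTransport
from thrift.protocol import TBinaryProtocol

transport = TTransport.TFramedTransport(TSocket.TSocket('localhost', 9090))
protocol = TBinaryProtocol.TBinaryProtocol(transport)
client = MessageService.Client(protocol)

transport.open()
print(client.sendMobileMessage('13800000000', 'hello'))
transport.close()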


>> Original article; reposting is welcome. When reposting, please credit: reposted from IT人故事会. Thank you!

>> Original article link:


          Electrical Engineer - Apollo Technical LLC - Fort Worth, TX      Cache   Translate Page      
Computer languages, supporting several microcontroller languages including (machine code, Arduino, .NET, ATMEL, Python, PASCAL, C++, Ladder, Function Block)....
From Apollo Technical LLC - Thu, 02 Aug 2018 18:22:07 GMT - View all Fort Worth, TX jobs
          Coding Instructor - Python STEAM! - The Curiosity Lab - Newmarket, ON      Cache   Translate Page      
Do you enjoy working with kids? Are you familiar with JavaScript and Python? Are you looking to add to your portfolio/teaching experience? Then we are... $18 - $20 an hour
From Indeed - Thu, 20 Sep 2018 12:28:28 GMT - View all Newmarket, ON jobs
          Urgent! Kids Coding Instructor, JavaScript, Python, Robotics Teacher - The Curiosity Lab - Aurora, ON      Cache   Translate Page      
As one of our instructors you will deliver our programs and workshops to kids. Do you enjoy working with kids?... $18 - $20 an hour
From Indeed - Wed, 26 Sep 2018 11:17:50 GMT - View all Aurora, ON jobs
          LXer: Iptables tricks, Linux command-line tips, Python, agile, DevOps, and more top reads      Cache   Translate Page      
Published at LXer: Tricks for sysadmins, Linux command-line tips, and a Python programming article were our top 3 posts last week. Do you have tricks, tips, or programming wisdom to share with...
          Expert Q&A | Python Mastery: How Much Do You Know About Web Scraping?      Cache   Translate Page      

For this installment of Expert Q&A, OSCHINA has invited @梁睿坤 to answer questions about Python web crawlers and scraping.


          Open-source bastion host Jumpserver 1.4.2 released, adds web sftp support      Cache   Translate Page      

Jumpserver 1.4.2 has been released, adding support for web sftp.

Jumpserver is the world's first fully open-source bastion host. It is released under the GNU GPL v2.0 license and is a professional operations audit system that complies with the 4A standard.

Jumpserver is developed in Python / Django, follows Web 2.0 conventions, and ships with an industry-leading Web Terminal solution with a polished interface and a good user experience.

Jumpserver adopts a distributed architecture and supports cross-region, multi-datacenter deployment: the central node provides an API, login nodes are deployed in each datacenter, and the system can scale horizontally with no concurrency limit.

Download:


          Apache Tika 1.19.1 released, a toolkit for content extraction      Cache   Translate Page      

Apache Tika 1.19.1 has been released. Tika is a toolkit for content (text) extraction. It integrates POI and Pdfbox and provides a unified interface for text-extraction work. Tika also offers a convenient extension API for adding support for third-party file formats.

Apache Tika 1.19.1 mainly includes two critical bug fixes, for MP3Parser and for SAX parsing, as follows:

  • Update PDFBox to 2.0.12, jempbox to 1.8.16 and jbig2 to 3.0.2

  • Fix regression in parser for MP3 files 

  • Updated Python Dependency Check for TesseractOCR

  • Improve SAXParser robustness

  • Remove dependency on slf4j-log4j12 by upgrading jmatio

  • Replace com.sun.xml.bind:jaxb-impl and jaxb-core with org.glassfish.jaxb:jaxb-runtime and jaxb-core

Download:

http://tika.apache.org/download.html


          Apache Arrow 0.11.0 released, an in-memory data interchange format      Cache   Translate Page      

Apache Arrow 0.11.0 has been released. Apache Arrow, one of the Apache Foundation's top-level projects, aims to be a cross-platform data layer that speeds up big-data analytics. It includes a set of canonical in-memory flat and hierarchical data representations, along with bindings for multiple languages for structure manipulation. It also provides low-overhead streaming and batch messaging, zero-copy interprocess communication (IPC), and vectorized in-memory analytics libraries.

This release contains a large number of improvements and fixes. Some highlights:

  • Support for CUDA-based GPUs in Python

  • New MATLAB bindings

  • R Library in Development

  • C++ CSV Reader Project

  • Parquet C GLib Bindings Donation

  • Parquet and Arrow C++ communities joining forces

  • Arrow Flight RPC and Messaging Framework

For the full list of changes, see the changelog:

https://arrow.apache.org/release/0.11.0.html

Download:

https://arrow.apache.org/install/


          Apache Qpid Proton 0.26.0 released, a lightweight messaging library      Cache   Translate Page      

Apache Qpid Proton 0.26.0 has been released. Apache Qpid Proton is a messaging library for AMQP 1.0: high-performance, lightweight, and widely used.

New features and improvements

  • PROTON-1888 - [python] Allow configuration of connection details via a simple config file

  • PROTON-1935 - [cpp] Read a config file to get default connection parameters

  • PROTON-1940 - [c] normalize encoding of multiple="true" fields

Bug fixes

  • PROTON-1928 - install static libraries

  • PROTON-1929 - [c] library prints directly to stderr/stdout

  • PROTON-1934 - [Python] Backoff class is not exported by reactor

  • PROTON-1942 - [c] decoding a message does not set the inferred flag.

Download:


          Pinball Video Games (FX3/TPA/SPA)      Cache   Translate Page      
Replies: 53 Last poster: Robolokotobo at 10-10-2018 04:21 Topic is Open

Ngangatar wrote on Tuesday 9 October 2018 @ 20:06: [...] That table plays so well..... I already have 15 stars on all 3 challenges; I almost suspect something is broken. Let me check what my highest score is. (moves to Xbox) 132M. I believe it's my only table with a 100M+ score. Still a newbie, eh, so that's nice. Plenty left to improve. By the way, the voice lines often make me laugh; I thought I just heard "shrubbery", which must surely be a delightful Monty Python reference. The other tables are fun too, though they still take some getting used to. Fish Tales works now as well, but it is very steep. Oh, and if you see me online, add me right away, smarty.

Nice going! It's true that the tables differ completely in how many points you score. On some tables you quickly rack up a few billion. Yes, that Medieval table is a little masterpiece and very funny. From the same designer as Attack from Mars. That one is actually even better. Just check YouTube: Attack From Mars - Gameplay. It will probably come to FX3 as well. Fish Tales is a genuinely hard table. I think the hardest I have ever played.

ravw wrote on Tuesday 9 October 2018 @ 18:32: [...] yeah, that's why I often don't enjoy pinball on the PC ... I did play them a lot back in the day; I come from the 286 era and older, so Pinball World, or whatever it was called, I played to death .. YouTube: DOS Game: Pinball World. Psycho Pinball YouTube: Psycho Pinball (PC/DOS) 1995, Codemasters. Tilt YouTube: DOS Game: Tilt! And of course Balls of Steel YouTube: Balls of Steel - Duke Nukem Table Gameplay. 3D Ultra Pinball YouTube: LGR - 3D Ultra Pinball - PC Game Review. And there will be a few more from that era that I mentioned and played whose names I no longer remember ... and there will be plenty more I played back then, but in recent years I really haven't touched pinball at all.

Cool, man, those old games. I just gave Junkyard a proper first try. It plays wonderfully. It's also fun puzzling out how to score big and how to complete the table. I'm 6th on the Classic leaderboard. Classic is my favorite mode. The tables play the way they were meant to. I suspect the ball rolls faster on all tables than in TPA because they are set a bit steeper. I don't mind, but I'm still getting used to it. Sometimes I'm still looking at where I need to shoot, and the ball has already rolled past the flipper before my eyes are back on the ball.
          Simple pet project      Cache   Translate Page      
I need help with my pet python project. (Budget: $10 - $30 AUD, Jobs: Python)
          Develop and Query a Graph with SQL Server 2017 and R Part 1      Cache   Translate Page      

By: Siddharth Mehta || Related Tips: More > SQL Server 2017

Problem

Graph analysis can be divided into two parts: graph rendering and graph querying. Many ready-to-use visualizations are available in third-party tools as well as in frameworks like R and Python, which provide ready-made graph visualization controls where one can submit a dataset and the control renders the visualization. The limitation of such controls is often that customization options are limited. Such visuals are good for immediate or small-scale graph datasets. For large-scale graph applications, where the graph contains millions of nodes and edges, full control over each aspect of the graph is required in terms of its rendering as well as graph traversal.

In this tip we will see how to create a graph with different aesthetic customizations to represent the graphical nature of the data in a visually interpretable manner. Generating a graph is just the first part of the graph analysis process. The other part of the analysis is querying the graph and rendering the results of the query in the graph visualization. In this tip we will learn how to query graph data and render each step of the analysis in the graph visualization.

Solution

DiagrammeR is an R package that has all the necessary constructs to generate a graph, with fine-grained customization of every aesthetic element of the graph.

Steps to Create a Graph Using SQL Server and R

1) First we need to ensure that SQL Server 2017, SSMS, and R Server are installed on the development machine. If you need to refer to the installation steps, you can follow the installation section of this R tutorial.

2) We will need to install the R packages named DiagrammeR, magrittr, and DiagrammeRsvg. You can read the instructions from this link on how to install packages on an R server.

3) Before we start developing the actual code, there are some basic elements of a graph that we need to understand. At a very high level, a graph has at least two major elements: a node and an edge. A node is the basic entity, and an edge represents a relationship between entities. Using these two basic elements, any kind of graph can be created or described.

4) Let's start by creating nodes. We need to execute the sp_execute_external_script stored procedure, which allows external R scripts to be executed in SQL Server. In the below code, we are creating 9 nodes using the create_node_df function. We are assigning a type attribute to the nodes as well as labels to each node. Here nodes mean the actual entities in a dataset. We are using hard-coded entities, but you also have the option to read data from a SQL Server table and access it in the R script through the InputDataSet data frame. After creating the nodes, we use the create_graph function with the nodes as an input parameter to generate a graph. After that we export the graph to a png image.
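
As a side note, if the nodes lived in a SQL Server table rather than being hard-coded, the same pattern could read them through the @input_data_1 parameter; a minimal sketch only (the table dbo.GraphNodes is hypothetical):

EXECUTE sp_execute_external_script @language = N'R', @script = N'
library(DiagrammeR)
nodes <- create_node_df(n = nrow(InputDataSet), label = InputDataSet$label)
graph <- create_graph(nodes_df = nodes)
export_graph(graph, file_name = "C:\\temp\\GraphFromTable.png", file_type = "png", width=800, height=800)
', @input_data_1 = N'SELECT label FROM dbo.GraphNodes'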

EXECUTE sp_execute_external_script @language = N'R', @script = N'
library(DiagrammeR)
library(magrittr)
library(DiagrammeRsvg)
nodes <- create_node_df(n=9,
type=c("fruit", "fruit", "fruit", "veg", "veg", "veg", "nut", "nut", "nut"),
label=c("pineapple", "apple", "apricot", "cucumber", "celery", "endive", "hazelnut", "almond", "chestnut"),
style="filled",
shape="polygon")
graph <- create_graph(nodes_df = nodes )
export_graph(graph, file_name = "C:\\temp\\GraphH.png", file_type = "png", width=800, height=800)
'

5) Once the above code is executed successfully, the visual will look as shown below. This diagram does not exactly look like a graph yet. The missing elements are the edges, i.e., the relationships between the nodes. Before we start dealing with the aesthetics of the nodes and edges, we first need to make the visual look like an actual graph.



6) To add the edges to the nodes, we need to use the create_edge_df function, which takes two arguments, From and To. The From parameter specifies the start node and the To parameter specifies the end node, creating an edge from source to destination. After the edges are created, we pass them as a parameter to the graph.

EXECUTE sp_execute_external_script @language = N'R', @script = N'
library(DiagrammeR)
library(magrittr)
library(DiagrammeRsvg)
nodes <- create_node_df(n=9,
type=c("fruit", "fruit", "fruit", "veg", "veg", "veg", "nut", "nut", "nut"),
label=c("pineapple", "apple", "apricot", "cucumber", "celery", "endive", "hazelnut", "almond", "chestnut"),
style="filled",
shape="polygon")
edges <- create_edge_df(
from = c(9, 3, 6, 2, 4, 2, 8, 2, 5, 5),
to = c(1, 1, 4, 3, 7, 8, 1, 5, 3, 6))
graph <- create_graph(nodes_df = nodes, edges_df = edges )
export_graph(graph, file_name = "C:\\temp\\Graph.png", file_type = "png", width=800, height=800)
'

7) After you execute the above code successfully, your graph should look as shown below. If you compare it with the previous graph, you will be able to see the arrows linking the nodes, which are the edges that we created.



8) We are now ready to start modifying the different aesthetic elements of the nodes. These attributes fall into the following categories:

Shape
Style
Size
Color
Fonts
Position
Direction
Labels

9) Let’s try to change the shape of the nodes. Add a parameter in the create_node_df function named shape = “oval” to change the shape of the nodes. Execute the code and the graph should look as shown below.



10) The graph looks better and different now. We used the type attribute while creating the nodes, and we can color the nodes by type using the fillcolor attribute. Modify the above code and add one more parameter to the create_node_df function as mentioned below. Execute the modified code and the graph will look as shown below.

fillcolor = c("orange", "orange", "orange", "aqua", "aqua", "aqua", "lightgreen", "lightgreen", "lightgreen")

11) The text of the graph is an essential element from a usability perspective. The fonts of the nodes can be changed using the following parameters. After adding them and executing the code, the graph will look as shown below.

fontname="Helvetica", fontsize="16", fontcolor="black"

12) As you can see in the above graph, the fonts do not fit within the nodes. We can widen the nodes using the width parameter, assigning it a value of 1.5. The final code for the nodes, after all the modifications mentioned above, should look as shown below.

EX
          Data Mining Project      Cache   Translate Page      
Mining internet data by date on politics and entertainment awards. (Budget: $250 - $750 USD, Jobs: Data Mining, Data Processing, Java, Python, Web Scraping)
          QA Automation with python scripting - Evolution infosoft - Redwood City, CA      Cache   Translate Page      
*Job Summary* Job Tittle : QA Automation with python scripting Location : Redwood City, CA Duration : 6 Months job Description QA- Data automation and...
From Indeed - Mon, 01 Oct 2018 20:31:21 GMT - View all Redwood City, CA jobs
          Python Developer      Cache   Translate Page      
CA-San Jose, San Jose, California Skills : • SSL/PKI • Basic Linux administration • Basic Windows administration + IIS • Documentation • Python Description : • Experience in Python Scripting
          Java production support      Cache   Translate Page      
CA-Sunnyvale, Java production Support Sunnyvale,CA 12 Months Contract Telephonic/Skype Interview Mandatory Technical Skills Good hands-on experience on Java Technologies Good hands-on experience with Cassandra and Oracle. Good Linux/Unix hand-on experience. Shell/Python Scripting is a plus. Desirable Technical Skills Hands-on experience with splunk. Decent networking knowledge and understanding Mandatory Functi
          Offer - SAP S4 HANA FINANCE COURSE TRAINING - USA      Cache   Translate Page      
SAP S4 HANA FINANCE COURSE TRAININGSOFTNSOL is a Global Interactive Learning company started by proven industry experts with an aim to provide Quality Training in the latest IT Technologies. SOFTNSOL offers SAP S4HANA FINANCE your one stop & Best solution to learn SAP S4HANA FINANCE Online Training at your home with flexible Timings.We offer SAP S4HANA FINANCE Online trainings conducted on Normal training and fast track training classes.SAP S4HANA FINANCE TRAINING ONLINEwe offer you :1. Interactive Learning at Learners convenience time2. Industry Savvy Trainers3. Learn Right from Your Place4. Advanced Course Curriculum5. 24/7 system access6. Two Months Server Access along with the training7. Support after Training8. Certification GuidanceWe have a third coming online batch on SAP S4HANA FINANCE Online Training.We also provide online trainings on SAP ABAP,WebDynpro ABAP,SAP Workflow,SAP HR ABAP,SAP OO ABAP,SAP BOBI, SAP BW,SAP BODS,SAP HANA,SAP BW/4HANA,SAP S4HANA,SAP BW ON HANA, SAP S4 HANA,SAP S4 HANA Simple Finance,SAP S4 HANA Simple Logistics,SAP ABAP on HANA,SAP ABAP on S4HANA,SAP HR Renewal,SAP Success Factors,SAP Hybris,SAP FIORI,SAP UI5,SAP Basis,SAP BPC,SAP Security with GRC,SAP PI,SAP C4C,SAP CRM Technical,SAP FICO,SAP SD,SAP MM,SAP CRM Functional,SAP HR,SAP WM,SAP EWM,SAP EWM on HANA,SAP APO,SAP SNC,SAP TM,SAP GTS,SAP SRM,SAP Vistex,SAP MDG,SAP PP,SAP PM,SAP QM,SAP PS,SAP IS Utilities,SAP IS Oil and Gas,SAP EHS,SAP Ariba,SAP CPM,SAP Healthcare,SAP IBP,SAP CC,SAP Fashion Management,SAP PLM,SAP IDM,SAP PMR,SAP Hybris,SAP PPM,SAP RAR,SAP MDG,SAP Funds Management,SAP TRM,SAP MII,SAP ATTP,SAP GST,SAP TRM,SAP FSCM,Oracle,Oracle Apps SCM,Oracle DBA,Oracle RAC DBA,Oracle Exadata,Oracle HFM,Informatica,Testing Tools,MSBI,Hadoop,devops,Data Science,MS Dynamics Ax Trade & Logistics, Microsoft Dynamics AX Manufacturing,Robotic Process Automation RPA ,RPA blue prism,AWS Admin,Python, and Salesforce .Experience the Quality of our Online Training. For Free Demo Please ContactSOFTNSOL : India: +91 9573428933USA : +1 929-268-1172WhatsApp: +91 9573428933Skype id : softnsoltrainingsEmail id: info@softnsol.comhttp://softnsol.com//
          Python Developer - MJDP Resources, LLC - Radnor, PA      Cache   Translate Page      
Assemble large, complex data sets that meet business requirements and power machine learning algorithms. EC2, Lambda, ECS, S3.... $30 - $40 an hour
From Indeed - Tue, 18 Sep 2018 14:44:55 GMT - View all Radnor, PA jobs
           pythonabc.org was reported accessible in China       Cache   Translate Page      
URL: pythonabc.org
Title: pythonabc.org
Report Date: Oct 10, 2018 1:22:07 AM
Reporter Country: China
Reporter ISP:
Comments: Accessible in China according to https://en.greatfire.org/pythonabc.org-0

          [DesireCourse Com] Udemy - Introduction to Python and Hacking with Python      Cache   Translate Page      
          Cloud Application Developer (Local to TX preferred)      Cache   Translate Page      
TX-Plano, Must To Have Skills: - They need to be familiar with cloud APIs such as OpenStack, Puppet, Chef, etc. - They also need to have some real-world experience in deploying and supporting an infrastructure in a cloud based on AWS, Azure, Google, CloudFoundry etc. - Development experience in C#, Java and Python. Required Skills: - General programming best practices. - Specific knowledge of C#, Java and H
          5 Things You Have Never Done with a REST Specification      Cache   Translate Page      

What is a RESTful API?

It’s a myth.

If you think that your project has a RESTful API, you are most likely mistaken. The idea behind a RESTful API is to develop in a way that follows all the architectural rules and limitations that are described in the REST specification. Realistically, however, this is largely impossible in practice.

On the one hand, REST contains too many blurry and ambiguous definitions. For example, in practice, some terms from the HTTP method and status code dictionaries are used contrary to their intended purposes, or not used at all.

On the other hand, REST development creates too many limitations. For example, atomic resource use is suboptimal for real-world APIs that are used in mobile applications. Completely forbidding data storage between requests effectively bans the "user session" mechanism seen just about everywhere.

But wait, it’s not that bad!

What Do You Need A REST API Specification for?

Despite these drawbacks, with a sensible approach, REST is still an amazing concept for creating really great APIs. These APIs can be consistent and have a clear structure, good documentation, and high unit test coverage. You can achieve all of this with a high-quality API specification.

Usually a REST API specification is associated with its documentation. Unlike a specification (a formal description of your API), documentation is meant to be human-readable: for example, read by the developers of the mobile or web application that uses your API.

A correct API description isn’t just about writing API documentation well. In this article I want to share examples of how you can:

Make your unit tests simpler and more reliable;
Set up user input preprocessing and validation;
Automate serialization and ensure response consistency; and even
Enjoy the benefits of static typing.

But first, let’s start with an introduction to the API specification world.

OpenAPI

OpenAPI is currently the most widely accepted format for REST API specifications. The specification is written in a single file in JSON or YAML format consisting of three sections:

A header with the API name, description, and version, as well as any additional information.
Descriptions of all resources, including identifiers, HTTP methods, all input parameters, response codes, and body data types, with links to definitions.
All definitions that can be used for input or output, in JSON Schema format (which, yes, can also be represented in YAML.)

OpenAPI’s structure has two significant drawbacks: It’s too complex and sometimes redundant. A small project can have a JSON specification of thousands of lines. Maintaining this file manually becomes impossible. This is a significant threat to the idea of keeping the specification up-to-date while the API is being developed.

There are multiple editors that allow you to describe an API and produce OpenAPI output. Additional services and cloud solutions based on them include Swagger, Apiary, Stoplight, Restlet, and many others.

However, these services were inconvenient for me due to the complexity of quickly editing specifications and aligning them with code changes. Additionally, the list of features was dependent on a specific service. For example, creating full-fledged unit tests based on the tools of a cloud service is next to impossible. Code generation and endpoint mocking, while seeming practical, turn out to be mostly useless, because endpoint behavior usually depends on various things such as user permissions and input parameters, which may be obvious to an API architect but are not easy to generate automatically from an OpenAPI spec.

Tinyspec

In this article, I will use examples based on my own REST API definition format, tinyspec. Definitions consist of small files with an intuitive syntax. They describe endpoints and data models that are used in a project. Files are stored next to code, providing a quick reference and the ability to be edited during code writing. Tinyspec is automatically compiled into a full-fledged OpenAPI format that can be immediately used in your project.

I will also use Node.js (Koa, Express) and Ruby on Rails examples, but the practices I will demonstrate are applicable to most technologies, including Python, PHP, and Java.

Where API Specification Rocks

Now that we have some background, we can explore how to get the most out of a properly specified API.

1. Endpoint Unit Tests

Behavior-driven development (BDD) is ideal for developing REST APIs. It is best to write unit tests not for separate classes, models, or controllers, but for particular endpoints. In each test you emulate a real HTTP request and verify the server's response. For Node.js there are the supertest and chai-http packages for emulating requests, and for Ruby on Rails there is airborne.

Let’s say we have a User schema and a GET /users endpoint that returns all users. Here is some tinyspec syntax that describes this:

# user.models.tinyspec
User {name, isAdmin: b, age?: i}

# users.endpoints.tinyspec
GET /users => {users: User[]}
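
For reference, when tinyspec compiles this, the User model turns into a JSON Schema definition along these lines (an approximation for illustration, not verbatim tool output):

"User": {
  "type": "object",
  "properties": {
    "name": { "type": "string" },
    "isAdmin": { "type": "boolean" },
    "age": { "type": "integer" }
  },
  "required": ["name", "isAdmin"]
}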

And here is how we would write the corresponding test:

Node.js

describe('/users', () => {
  it('List all users', async () => {
    const { status, body: { users } } = await request.get('/users');

    expect(status).to.equal(200);
    expect(users[0].name).to.be.a('string');
    expect(users[0].isAdmin).to.be.a('boolean');
    expect(typeof users[0].age).to.be.oneOf(['number', 'object']); // integer or null
  });
});

Ruby on Rails

describe 'GET /users' do
  it 'List all users' do
    get '/users'

    expect_status(200)
    expect_json_types('users.*', {
      name: :string,
      isAdmin: :boolean,
      age: :integer_or_null,
    })
  end
end

Since we already have a specification that describes server responses, we can simplify the test and just check whether the response follows the specification. We can use tinyspec models, each of which can be transformed into an OpenAPI definition that follows the JSON Schema format.

Any literal object in JS (or Hash in Ruby, dict in Python, associative array in PHP, and even Map in Java) can be validated for JSON Schema compliance. There are even appropriate plugins for testing frameworks, for example jest-ajv (npm), chai-ajv-json-schema (npm), and json_matchers for RSpec (rubygem).

Before using schemas, let’s import them into the project. First, generate the openapi.json file based on the tinyspec specification (you can do this automatically before each test run):

tinyspec -j -o openapi.json

Node.js

Now you can use the generated JSON in the project and get the definitions key from it. This key contains all JSON schemas. Schemas may contain cross-references ($ref), so if you have any embedded schemas (for example, Blog {posts: Post[]}), you need to unwrap them for use in validation. For this, we will use json-schema-deref-sync (npm).

import deref from 'json-schema-deref-sync';

const spec = require('./openapi.json');
const schemas = deref(spec).definitions;

describe('/users', () => {
  it('List all users', async () => {
    const { status, body: { users } } = await request.get('/users');

    expect(status).to.equal(200);

    // Chai
    expect(users[0]).to.be.validWithSchema(schemas.User);

    // Jest
    expect(users[0]).toMatchSchema(schemas.User);
  });
});

Ruby on Rails

The json_matchers module knows how to handle $ref references, but requires separate schema files in the specified location, so you will need to split the swagger.json file into multiple smaller files first:

# ./spec/support/json_schemas.rb
require 'json'
require 'json_matchers/rspec'

JsonMatchers.schema_root = 'spec/schemas'

# Fix for json_matchers single-file restriction
file = File.read 'spec/schemas/openapi.json'
swagger = JSON.parse(file, symbolize_names: true)

swagger[:definitions].keys.each do |key|
  File.open("spec/schemas/#{key}.json", 'w') do |f|
    f.write(JSON.pretty_generate({
      '$ref': "swagger.json#/definitions/#{key}"
    }))
  end
end

Here is how the test will look:

describe 'GET /users' do
  it 'List all users' do
    get '/users'

    expect_status(200)
    expect(result[:users][0]).to match_json_schema('User')
  end
end

Writing tests this way is incredibly convenient. Especially so if your IDE supports running tests and debugging (for example, WebStorm, RubyMine, and Visual Studio). This way you can avoid using other software, and the entire API development cycle is limited to three steps:

Designing the specification in tinyspec files.
Writing a full set of tests for added/edited endpoints.
Implementing the code that satisfies the tests.

2. Validating Input Data

OpenAPI describes not only the response format, but also the input data. This allows you to validate user-sent data at runtime and ensure consistent and secure database updates.

Let’s say that we have the following specification, which describes the patching of a user record and all available fields that are allowed to be updated:

# user.models.tinyspec
UserUpdate !{name?, age?: i}

# users.endpoints.tinyspec
PATCH /users/:id {user: UserUpdate} => {success: b}

Previously, we explored the plugins for in-test validation, but for more general cases, there are the ajv (npm) and json-schema (rubygem) validation modules. Let’s use them to write a controller with validation:

Node.js (Koa)

This is an example for Koa, the successor to Express―but the equivalent Express code would look similar.

import Router from 'koa-router';
import Ajv from 'ajv';
import { schemas } from './schemas';

const router = new Router();

// Standard resource update action in Koa.
router.patch('/:id', async (ctx) => {
  const updateData = ctx.request.body.user;

  // Validation using JSON schema from API specification.
  await validate(schemas.UserUpdate, updateData);

  const user = await User.findById(ctx.params.id);
  await user.update(updateData);

  ctx.body = { success: true };
});

async function validate(schema, data) {
  const ajv = new Ajv();

  if (!ajv.validate(schema, data)) {
    const err = new Error();
    err.errors = ajv.errors;
    throw err;
  }
}

In this example, the server returns a 500 Internal Server Error response if the input does not match the specification. To avoid this, we can catch the validator error and form our own answer that contains more detailed information about the specific fields that failed validation, and that follows the specification.

Let’s add the definition for the FieldsValidationError :

# error.models.tinyspec
Error {error: b, message}
InvalidField {name, message}
FieldsValidationError < Error {fields: InvalidField[]}

And now let’s list it as one of the possible endpoint responses:

# users.endpoints.tinyspec
PATCH /users/:id {user: UserUpdate}
    => 200 {success: b}
    => 422 FieldsValidationError

This approach allows you to write unit tests that test the correctness of error scenarios when invalid data comes from the client.
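
One possible implementation, sketched here rather than taken from a library, is a small Koa middleware that catches the error thrown by the validate() helper above (which carries Ajv's errors array; the dataPath field is Ajv v6 naming) and forms the FieldsValidationError response:

// Convert Ajv validation errors into the FieldsValidationError shape.
async function handleValidationErrors(ctx, next) {
  try {
    await next();
  } catch (err) {
    if (!err.errors) throw err; // not a validation error; re-throw

    ctx.status = 422;
    ctx.body = {
      error: true,
      message: 'Validation failed',
      fields: err.errors.map(e => ({
        name: e.dataPath,   // e.g. '.name'
        message: e.message,
      })),
    };
  }
}

app.use(handleValidationErrors);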

3. Model Serialization

Almost all modern server frameworks use object-relational mapping (ORM) in one way or another. This means that the majority of resources that an API uses are represented by models and their instances and collections.

The process of forming the JSON representations for these entities to be sent in the response is called serialization.

There are a number of plugins for doing serialization: For example, sequelize-to-json (npm), acts_as_api (rubygem), and jsonapi-rails (rubygem). Basically, these plugins allow you to provide the list of fields for a specific model that must be included in the JSON object, as well as additional rules. For example, you can rename fields and calculate their values dynamically.

It gets harder when you need several different JSON representations for one model, or when the object contains nested entities―associations. Then you start needing features like inheritance, reuse, and serializer linking.

Different modules provide different solutions, but let's consider this: can the specification help out again? Basically, all the information about the requirements for JSON representations, all possible field combinations, including embedded entities, is already in it. And this means that we can write a single automated serializer.

Let me present the small sequelize-serialize (npm) module, which supports doing this for Sequelize models. It accepts a model instance or an array, and the required schema, and then iterates through it to build the serialized object. It also accounts for all the required fields and uses nested schemas for their associated entities.

So, let’s say we need to return all users with posts in the blog, including the comments to these posts, from the API. Let’s describe it with the following specification:

# models.tinyspec
Comment {authorId: i, message}
Post {topic, message, comments?: Comment[]}
User {name, isAdmin: b, age?: i}
UserWithPosts < User {posts: Post[]}

# blogUsers.endpoints.tinyspec
GET /blog/users => {users: UserWithPosts[]}

Now we can build the request with Sequelize and return a serialized object that corresponds exactly to the specification described above:

import Router from 'koa-router';
import serialize from 'sequelize-serialize';
import { schemas } from './schemas';

const router = new Router();

router.get('/blog/users', async (ctx) => {
  const users = await User.findAll({
    include: [{
      association: User.posts,
      required: true,
      include: [Post.comments]
    }]
  });

  ctx.body = serialize(users, schemas.UserWithPosts);
});

This is almost magical, isn’t it?

4. Static Typing

If you are cool enough to use TypeScript or Flow, you might have already asked, “What of my precious static types?!” With the sw2dts or swagger-to-flowtype modules you can generate all necessary static types based on JSON schemas and use them in tests, controllers, and serializers.

tinyspec -j
sw2dts ./swagger.json -o Api.d.ts --namespace Api
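
The generated Api.d.ts then contains interfaces mirroring the schemas; roughly like this (an approximation of the output, not verbatim):

declare namespace Api {
  export interface User {
    name: string;
    isAdmin: boolean;
    age?: number;
  }

  export interface UserUpdate {
    name?: string;
    age?: number;
  }
}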

Now we can use types in controllers:

router.patch('/users/:id', async (ctx) => {
  // Specify type for request data object
  const userData: Api.UserUpdate = ctx.request.body.user;

  // Run spec validation
  await validate(schemas.UserUpdate, userData);

  // Query the database
  const user = await User.findById(ctx.params.id);
  await user.update(userData);

  // Return serialized result
  const serialized: Api.User = serialize(user, schemas.User);
  ctx.body = { user: serialized };
});

And tests:

it('Update user', async () => {
  // Static check for test input data.
  const updateData: Api.UserUpdate = { name: MODIFIED };
  const res = await request.patch('/users/1', { user: updateData });

  // Type helper for request response:
  const user: Api.User = res.body.user;

  expect(user).to.be.validWithSchema(schemas.User);
  expect(user).to.containSubset(updateData);
});

Note that the generated type definitions can be used not only in the API project, but also in client application projects to describe types in functions that work with the API. (Angular developers will be especially happy about this.)

5. Casting Query String Types

If your API for some reason consumes requests with the application/x-www-form-urlencoded MIME type instead of application/json, the request body will look like this:

param1=value&param2=777&param3=false

The same goes for query parameters (for example, in GET requests). In this case, the web server will fail to automatically recognize types: all data will be in string format, so after parsing you will get this object:

{ param1: 'value', param2: '777', param3: 'false' }

In this case, the request will fail schema validation, so you need to verify the correct parameters’ formats manually and cast them to the correct types.

As you can guess, you can do it with our good old schemas from the specification. Let’s say we have this endpoint and the following schema:

# posts.endpoints.tinyspec
GET /posts?PostsQuery

# post.models.tinyspec
PostsQuery {
  search,
  limit: i,
  offset: i,
  filter: {
    isRead: b
  }
}

Here is how the request to this endpoint looks:

GET /posts?search=needle&offset=10&limit=1&filter[isRead]=true

Let’s write the castQuery function to cast all parameters to required types:

import _ from 'lodash';

function castQuery(query, schema) {
  return _.mapValues(query, (value, key) => {
    const { type } = schema.properties[key] || {};

    if (!value || !type) {
      return value;
    }

    switch (type) {
      case 'integer':
        return parseInt(value, 10);
      case 'number':
        return parseFloat(value);
      case 'boolean':
        return value !== 'false';
      default:
        return value;
    }
  });
}

A fuller implementation with support for nested schemas, arrays, and null types is available in the cast-with-schema (npm) module. Now let’s use it in our code:

router.get('/posts', async (ctx) => {
  // Cast parameters to expected types
  const query = castQuery(ctx.query, schemas.PostsQuery);

  // Run spec validation
  await validate(schemas.PostsQuery, query);

  // Query the database
  const posts = await Post.search(query);

  // Return serialized result
  ctx.body = { posts: serialize(posts, schemas.Post) };
});

Note that three of the four steps in this handler use specification schemas.

Best Practices

There are a number of best practices we can follow here.

Use Separate Create and Edit Schemas

Usually the schemas that describe server responses are different from those that describe inputs and are used to create and edit models. For example, the list of fields available in POST and PATCH requests must be strictly limited, and PATCH usually has all fields marked optional. The schemas that describe the response can be more freeform.

When you generate CRUDL endpoints automatically, tinyspec uses New and Update postfixes. User* schemas can be defined in the following way:

User {id, email, name, isAdmin: b}
UserNew !{email, name}
UserUpdate !{email?, name?}

Try not to use the same schemas for different action types, to avoid accidental security issues due to the reuse or inheritance of older schemas.

Follow Schema Naming Conventions

The content of the same models may vary for different endpoints. Use With* and For* postfixes in schema names to show the difference and purpose. In tinyspec, models can also inherit from each other. For example:

User {name, surname}
UserWithPhotos < User {photos: Photo[]}
UserForAdmin < User {id, email, lastLoginAt: d}

Postfixes can be varied and combined. Their names must still reflect the essence of the schema and keep the documentation simple to read.

Separating Endpoints Based on Client Type

Often the same endpoint returns different data based on the client type or the role of the user who sent the request. For example, the GET /users and GET /messages endpoints can be significantly different for mobile application users and back office managers. Changing the endpoint name for each case would be overhead.

To describe the same endpoint multiple times you can add its type in parentheses after the path. This also makes tag use easy: You split endpoint documentation into groups, each of which is intended for a specific API client group. For example:

Mobile app:
GET /users (mobile) => UserForMobile[]

CRM admin panel:
GET /users (admin) => UserForAdmin[]

REST API Documentation Tools

After you get the specification in tinyspec or OpenAPI format, you can generate nice-looking documentation in HTML format and publish it. This will make developers who use your API happy, and it sure beats filling in a REST API documentation template by hand.

Apart from the cloud services mentioned earlier, there are CLI tools that convert OpenAPI 2.0 to HTML and PDF, which can be deployed to any static hosting. Here are some examples:

bootprint-openapi (npm, used by default in tinyspec)
swagger2markup-cli (jar, there is a usage example, will be used in tinyspec Cloud)
redoc-cli (npm)
widdershins (npm)

Do you have more examples? Share them in the comments.

Sadly, despite being released a year ago, OpenAPI 3.0 is still poorly supported, and I failed to find proper examples of documentation based on it in either cloud solutions or CLI tools. For the same reason, tinyspec does not support OpenAPI 3.0 yet.

Publishing on GitHub

One of the simplest ways to publish the documentation is GitHub Pages . Just enable support for static pages for your /docs folder in the repository settings and store HTML documentation in this folder.



You can add the command to generate documentation through tinyspec or a different CLI tool in your scripts/package.json file to update the documentation automatically after each commit:

"scripts": { "docs": "tinyspec -h -o docs/", "precommit": "npm run docs" } Continuous Integration

You can add documentation generation to your CI cycle and publish it, for example, to Amazon S3 under different addresses depending on the environment or API version (like /docs/2.0 , /docs/stable , and /docs/staging .)
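
As a sketch, with Travis CI and its built-in S3 deploy provider this could look roughly as follows (the bucket name and upload paths are placeholders, and the AWS credential keys are omitted):

# .travis.yml (sketch)
script:
  - npm test
  - tinyspec -h -o docs/

deploy:
  provider: s3
  bucket: my-api-docs        # hypothetical bucket
  local_dir: docs
  upload-dir: docs/staging   # e.g. docs/2.0, docs/stable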

Tinyspec Cloud

If you like the tinyspec syntax, you can become an early adopter for tinyspec.cloud . We plan to build a cloud service based on it and a CLI for automated deployment of documentation with a wide choice of templates and the ability to develop personalized templates.

REST Specification: A Marvelous Myth

REST API development is probably one of the most pleasant processes in modern web and mobile services development. There are no browser, operating system, and screen-size zoos, and everything is fully under your control, at your fingertips.

This process is made even easier by the support for automation and up-to-date specifications. An API using the approaches I’ve described becomes well-structured, transparent, and reliable.

The bottom line is, if we are making a myth, why not make it a marvelous myth?

Understanding the Basics

What is REST?

REST is a web service architectural style defining a set of required constraints. It is based around resources with unique identifiers (URIs) and the operations with said resources. Additionally, a REST specification requires a client-server model, a uniform interface, and the absence of server-stored state.

What is the OpenAPI specification?

The OpenAPI Specification is a generally accepted format for describing REST APIs. The specification consists of a single JSON or YAML file with general API information, descriptions for all used resources, and data in JSON Schema format.

What is Swagger?

Swagger is the name of the OpenAPI specification prior to 2016. Currently Swagger is a separate project, with a number of open-source and commercial tools and cloud services for drafting and developing OpenAPI specifications. It also provides tools for server code generation and automated endpoint testing.

What is JSON Schema?

JSON Schema is a specification of a JSON object (document). It consists of a list of all available properties and their types, including the list of required properties.

What exactly is an API?

An application programming interface (API) is a way of communicating between units of software. Usually, it's a set of available methods, commands, and definitions that one software component provides to the others.

What is an API specification?

An API specification is a specially formatted document that provides clear definitions of an API's methods and features, as well as their possible configurations.

What is meant by API documentation?

API documentation is a human-readable document intended for third-party developers to learn API features and build their own software that makes use of the described API.

What makes a RESTful API?

It's an HTTP API where it's implied that it follows standards and constraints defined by the REST architectural style. In practice, however, hardly any APIs are 100% RESTful.

What is meant by behavior-driven development?

BDD is a software development technique implying that every small program change must be tested against pre-designated behavior. So developers first define an expected behavior, often in the form of an automated unit test, and then implement the code, making sure all behavior tests are passing.

What is tinyspec?

Tinyspec is a shorthand for REST API documentation, compilable into OpenAPI format. It's intended to make specification design and development easier. For example, it lets you split endpoint (resource) definitions and store data models in separate smaller files next to your source code.

About the author

Alexander Zinchuk, Spain

With more than 12 years of experience in JavaScript, Alexander has a deep awareness of how the language internally works. He's also worked for multiple years for Yandex, one of the largest IT companies in Europe, leading a development team. Alexander specializes in building fault-tolerant systems and also has much know-how in software design patterns, algorithms, development methods, refactoring, and testing software.
          Công Ty TNHH Framgia Việt Nam is hiring Python developers      Cache   Translate Page      
Hanoi - Công Ty TNHH Framgia Việt Nam is looking to hire Python developers. JOB DESCRIPTION: Take part in software projects with enterprises...
          ERP Developer (Junior)      Cache   Translate Page      
Ho Chi Minh City - ... the specialized business operations of different specialist departments. Work following the Scrum - Agile process. Ensure the highest product quality... experience in Odoo development (Python - Javascript, Jquery, Ajax). Able to develop/customize a complete module according to business requirements...
          Offer - ONLINE SAP SIMPLE LOGISTICS TRAINING COURSE - UK      Cache   Translate Page      
ONLINE SAP SIMPLE LOGISTICS TRAINING COURSESOFTNSOL is a Global Interactive Learning company started by proven industry experts with an aim to provide Quality Training in the latest IT Technologies. SOFTNSOL offers SAP S4HANA LOGISTICS online Training. Our trainers are highly talented and have Excellent Teaching skills. They are well experienced trainers in their relative field. Online training is your one stop & Best solution to learn SAP S4HANA LOGISTICS Online Training at your home with flexible Timings.We offer SAP S4HANA LOGISTICS Online Trainings conducted on Normal training and fast track training classes.SAP S4HANA LOGISTICS ONLINE TRAINING We offer you :1. Interactive Learning at Learners convenience time2. Industry Savvy Trainers3. Learn Right from Your Place4. Advanced Course Curriculum 5. 24/7 system access6. Two Months Server Access along with the training 7. Support after Training8. Certification Guidance We have a third coming online batch on SAP S4HANA LOGISTICS Online Training.We also provide online trainings on SAP ABAP,SAP WebDynpro ABAP,SAP ABAP ON HANA,SAP Workflow,SAP HR ABAP,SAP OO ABAP,SAP BOBI, SAP BW,SAP BODS,SAP HANA,SAP HANA Admin, SAP S4HANA, SAP BW ON HANA, SAP S4HANA,SAP S4HANA Simple Finance,SAP S4HANA Simple Logistics,SAP ABAP on S4HANA,SAP Success Factors,SAP Hybris,SAP FIORI,SAP UI5,SAP Basis,SAP BPC,SAP Security with GRC,SAP PI,SAP C4C,SAP CRM Technical,SAP FICO,SAP SD,SAP MM,SAP CRM Functional,SAP HR,SAP WM,SAP EWM,SAP EWM on HANA,SAP APO,SAP SNC,SAP TM,SAP GTS,SAP SRM,SAP Vistex,SAP MDG,SAP PP,SAP PM,SAP QM,SAP PS,SAP IS Utilities,SAP IS Oil and Gas,SAP EHS,SAP Ariba,SAP CPM,SAP IBP,SAP C4C,SAP PLM,SAP IDM,SAP PMR,SAP Hybris,SAP PPM,SAP RAR,SAP MDG,SAP Funds Management,SAP TRM,SAP MII,SAP ATTP,SAP GST,SAP TRM,SAP FSCM,Oracle,Oracle Apps SCM,Oracle DBA,Oracle RAC DBA,Oracle Exadata,Oracle HFM,Informatica,Testing Tools,MSBI,Hadoop,devops,Data Science,AWS Admin,Python, and Salesforce .Experience the Quality of our Online Training. For Free Demo Please ContactSOFTNSOL : India: +91 9573428933USA : +1 929-268-1172WhatsApp: +91 9573428933Skype id : softnsoltrainingsEmail id: info@softnsol.comWebsite : http://softnsol.com//
          Technical Business Analyst      Cache   Translate Page      
IL-Chicago, Chicago, Illinois. Skills: .NET, Business Analyst, Hadoop, Java, MS Excel, MS Office, MS Word, Python, Software Development, SQL. Description: The Technical Business Analyst will analyze and translate business needs and assess the feasibility of enhancing existing applications or building new ones. In this role, you are responsible for eliciting and documenting the business and functional requirements.
          Comment on Tessa Hadley: “Cecilia Awakened” by Larry Bone      Cache   Translate Page      
I think Tessa Hadley has written "Cecilia Awakened" to be taken either as a very usual, regular daughter coming-of-age type of story or as a cautionary tale about two people who probably would have remained single if biology hadn't intervened at the last possible moment. It is sort of gently horrific, as in "No one who knew them could quite imagine afterward how they had managed." The contrast to this is the grandmother, "who was elegant and drank and had lovers." Father worked at a university library; Mother was a feminist, a modestly successful historical novelist who was able to snag her nerd/dude even though not caring how she dresses. American parents are bad enough. British parents are totally incomprehensible, like a Monty Python skit on how rabbits populate. Could you imagine Ken and Angela on Facebook, or meeting on Facebook? I sort of take this as a surreal horror story disguised as an ordinary occurrence.

I think the most telling parts of this story are the unspokens. Let's be blunt: Ken is not a manly man; he's a mousy Socialist working at a library. Withdrawn, lonely, boring. Probably the bottom-of-the-barrel 56th pick in the cricket draft. And Angela, a feminist, whose mother noticed "her awkward daughter had succeeded in hooking a man after all." Here is a horribly mismatched couple making a "biological" go of it. The ordinariness of this is that "biology had produced Cecilia". No mention of sex, attraction, testosterone, or estrogen here, just biology. Kind of irony and understatement cloaked in familiarity? Am I missing something here? I shudder to imagine how Cecilia was conceived.

The wonder of most children is that Mom is Mom and Dad is Dad, and when they are young they instinctively understand what is good about Mom and Dad and don't necessarily find fault. Often they adopt the strengths of both parents as part of their preliminary pre-puberty identity, not all to the bad, and sometimes their awakening is not really physical or sexual; it is some sort of higher awareness of some of the truths of life as they relate to their parents. And the whole beautifulness thing, as though being beautiful will get you all of what not being beautiful won't get you, or being a manly enough man will get a man everything he desires from that beautiful woman he fancies. Go ahead, dude: objectify, objectify.

I think of Virginia Woolf. To become so acutely aware of the strengths of one's mother and the inadequacies of one's father. I think of when Cecilia returns to the hotel alone and "She would flop down on the bed -- their bed -- not hers -- and feel herself seeping gradually back into her own shape, belonging only to herself." And I think of Virginia Woolf's "A Room of One's Own." And as adults, behaving as ordinary adults, one could ask, "What difference does it make?" To me, the strength of Virginia Woolf is that she chronicles this hyper-sensitive awareness of her parents' total inadequacy for one another within the modernism of "To the Lighthouse." One could say that was not what Tessa Hadley was writing about, but how could one say it wasn't? Maybe the flaw of ordinariness is that one should never too seriously consider what can occur out of it.

I think Hadley is painting a true picture of the world she sees, but I think her restraint should not be mistaken for ordinariness. She is not telling the truth through a literary megaphone; rather she is relying on the carefulness of the details. You see what she refers to, or you might miss it entirely because it's not what you expect. Maybe she overbalances, or maybe not, if you are looking in the right direction. I especially like the ending, "such a harmless little pond." And I think of Virginia Woolf drowning herself in a "harmless little pond." Now I know someone will probably ask, "And just which short story were you talking about?"
          Senior Data Analyst - William E. Wecker Associates, Inc. - Jackson, WY      Cache   Translate Page      
Experience in data analysis and strong computer skills (we use SAS, Stata, R and S-Plus, Python, Perl, Mathematica, and other scientific packages, and standard...
From William E. Wecker Associates, Inc. - Sat, 23 Jun 2018 06:13:20 GMT - View all Jackson, WY jobs
          Learn Python - Full Course for Beginners      Cache   Translate Page      


          Investigators want to know who left gator in Lake Michigan      Cache   Translate Page      
WAUKEGAN – Authorities don't know who dumped a 4-foot-long reptile into Lake Michigan, but they now know what kind it is.

After initially believing the animal spotted Monday swimming near Waukegan by a startled kayaker was a caiman, officials have said it actually is an alligator.

Either way, it had no business paddling around the suburban Chicago shoreline. Waukegan spokesman David Motley said Tuesday that animal control officers are trying to determine who abandoned the creature, which was found with its mouth kept shut by rubber bands.

Motley said officials thought the animal was a caiman for much of Monday, but Rob Carmichael, curator of the Wildlife Discovery Center in nearby Lake Forest, later told him it was a female alligator.

The two species look similar, but an alligator's snout is more rounded and only its upper teeth can be seen when its mouth is closed, whereas a caiman's upper and lower teeth can be seen, said Andrew Biddle, the head of reptiles at Wild Florida Airboats & Gator Park in Kenansville, Florida.

Carmichael said an alligator would be more capable than a caiman of handling the cold water of Lake Michigan and that the one rescued Monday could have been swimming around for weeks. He said it could have done this with its mouth shut because alligators can go months without food.

Carmichael said that although the rescued gator is weak, she has a pretty good chance to survive if she can get through the next few days.

This isn't the first time someone dropped off a wild animal on or in Lake Michigan, Motley said, pointing to a 2012 incident in which someone abandoned a 14-foot python on the lakefront.


          Senior Software Engineer - Python - Tucows - Toronto, ON      Cache   Translate Page      
Flask, Tornado, Django. Tucows provides domain names, Internet services such as email hosting and other value-added services to customers around the world....
From Tucows - Sat, 11 Aug 2018 05:36:13 GMT - View all Toronto, ON jobs
          Senior Python Developer - Chisel - Toronto, ON      Cache   Translate Page      
Chisel.ai is a fast-growing, dynamic startup transforming the insurance industry using Artificial Intelligence. Our novel algorithms employ techniques from...
From Chisel - Mon, 23 Jul 2018 19:50:37 GMT - View all Toronto, ON jobs
          Senior Production Engineer - Industrial Light & Magic - Vancouver, BC      Cache   Translate Page      
Experience designing and developing asynchronous services using Tornado or other Python frameworks. The Senior Production Engineer is responsible for...
From Industrial Light & Magic - Mon, 17 Sep 2018 07:08:43 GMT - View all Vancouver, BC jobs
          Python Software Engineer - PageFreezer - British Columbia      Cache   Translate Page      
Experience using a web framework such as Tornado with Python. Python Software Engineer....
From PageFreezer - Sat, 07 Jul 2018 11:06:23 GMT - View all British Columbia jobs
          Python slackbot and shell      Cache   Translate Page      
Python slackbot and shell. Need enhancements to Python and shell scripts to respond to user requests from a Slack bot (Budget: $30 - $250 USD, Jobs: Python, Shell Script)
          Coding To Identify the Dna of two People      Cache   Translate Page      
Coding To Identify the DNA of two People (Budget: $30 - $250 USD, Jobs: C Programming, C++ Programming, Java, Python)
          DevOps Technical Lead - Python/ElasticSearch (5-8 yrs) Ahmedabad (Systems/Product Sof      Cache   Translate Page      
Fortex Consulting - Ahmedabad, GJ. DevOps Technical Lead - VONLY helps leading enterprise corporations with Internet intelligence using Big Data...
          Data Engineer/Developer (Java (JVM), Python, Scala, Hadoop, NoSQL and Spark)      Cache   Translate Page      
Anson McCade - The City, London - Data Engineer/Developer (Java (JVM), Python, Scala, Hadoop, NoSQL and Spark) We need someone who can work within an agile environment...
          Data Warehouse Developer SQL Python      Cache   Translate Page      
Client Server - The City, London - Data Warehouse Developer London to £550 p/day Data Warehouse Developer (SQL Python). Leading FinTech is seeking an experienced Data... Warehouse Developer to be responsible for the design, architecture and implementation of the company’s data warehouse environment; participating in...
          Data Developer      Cache   Translate Page      
CPS Group - Coventry - Data Developer Coventry £400 - £425 3 months rolling Essential skills & experience required: Technical Skills: *Experience... of developing data services with C# and Python services using frameworks like Django, Flask *Expert in designing and developing complex data models...
          Reactive Programming in Python      Cache   Translate Page      

Reactive Programming in Python

Reactive Programming in Python
MP4 | Video: AVC 1280x720 | Audio: AAC 44KHz 2ch | Duration: 3.5 Hours | 584 MB
Genre: eLearning | Language: English


          Python Theory for Network Engineers      Cache   Translate Page      

Python Theory for Network Engineers

Python Theory for Network Engineers
MP4 | Video: AVC 1280x720 | Audio: AAC 44KHz 2ch | Duration: 5 Hours | 833 MB
Genre: eLearning | Language: English


          Python Programming Bootcamp (2018)      Cache   Translate Page      

Python Programming Bootcamp (2018)

Python Programming Bootcamp (2018)
.MP4 | Video: 1280x720, 30 fps(r) | Audio: AAC, 44100 Hz, 2ch | 1.11 GB
Duration: 3 hours | Genre: eLearning | Language: English


          Python Data Science & Financial Analytics For Investing      Cache   Translate Page      

Python Data Science & Financial Analytics For Investing

Python Data Science & Financial Analytics For Investing
MP4 | Video: AVC 1280x720 | Audio: AAC 44KHz 2ch | Duration: 2 Hours | Lec: 32 | 282 MB
Genre: eLearning | Language: English


          Data Engineer (Lead) Spark Hadoop Scala      Cache   Translate Page      
Data Team - South East London - Data Engineer London to £65k Data Engineer (Scala Spark Hadoop Python). Fantastic opportunity for a talented Data Engineer... to join a successful FinTech that provides data science services and complex software solutions to combat financial fraud. As a Data Engineer...
          Data Warehouse Developer SQL Python      Cache   Translate Page      
Contract Data Team - The City, London - Data Warehouse Developer London to £550 p/day Data Warehouse Developer (SQL Python). Leading FinTech is seeking an experienced Data... Warehouse Developer to be responsible for the design, architecture and implementation of the company’s data warehouse environment; participating in...
          Python Developer Data SQL Hedge Fund      Cache   Translate Page      
Data Team - Central London - Python Developer / Data Engineer London to £75k Python Developer / Data Engineer (SQL Jupyter Parquet). Are you a skilled Python... Developer with a keen interest in data looking for your next challenge? Collaborating with Data Scientists you will build new iterations of the...
          Snowflake - Data Engineer - Axius Technologies - Renton, WA      Cache   Translate Page      
Must-have skills: Data Warehouse: Snowflake or another data warehouse; Data Integration: Alooma or other data-integration tools; Scripting: Python or other...
From Axius Technologies - Thu, 20 Sep 2018 17:26:32 GMT - View all Renton, WA jobs
          Python Developer      Cache   Translate Page      
NY-New York, Genesis10 is currently seeking a Python developer with our client in the financial industry in their New York, NY location. This is a 12 month + contract position. Description: Seeking a Python developer Responsibilities: Design and implementation of Credit Instrument and its associated lifecycle Design and implementation of Pricers for various credit products Develop relationships with the Quant
          Cyber Security Engineer - CCX Technologies - Ottawa, ON      Cache   Translate Page      
Preference will be given to candidates with experience writing applications in Python, and experience working with Avionics or Defense systems....
From Indeed - Tue, 11 Sep 2018 12:13:00 GMT - View all Ottawa, ON jobs
          Offer - 6 Months project based Industrial Training In Noida|NSOP - INDIA      Cache   Translate Page      
Noida School Of Programming is the best choice for 6 months of industrial training in Noida. We provide 6 months of industrial training in Noida on the Python & Django platform. Contact us today.
          cleverbot - A Python 3 binding for Cleverbot which supports special characters      Cache   Translate Page      
A Python 3 binding for Cleverbot which supports special characters.

          QA Automation with python scripting - Evolution infosoft - Redwood City, CA      Cache   Translate Page      
*Job Summary* Job Title: QA Automation with Python scripting. Location: Redwood City, CA. Duration: 6 months. Job description: QA - data automation and...
From Indeed - Mon, 01 Oct 2018 20:31:21 GMT - View all Redwood City, CA jobs
          [Repost] Alibaba's Fliggy accused of big-data price gouging; Jia Yueting "has not abandoned" Evergrande; Baidu sues Sogou browser for traffic hijacking | Geek Headlines...      Cache   Translate Page      


"CSDN Geek Headlines" is a special column extending from the CSDN website to the official WeChat public account, focused on reporting the industry's events of the day. Rain or shine, we broadcast the freshest, meatiest news to our friends every day, so that every tech person keeps pace with industry trends.

Quick Takes

  • FF responds: Jia Yueting did not "manipulate" the board

  • Haidian court issues a conduct-preservation injunction against Sogou: stop hijacking Baidu traffic

  • BAT's AI investments now exceed the Silicon Valley giants'; Beijing is the world's No. 1 city for venture-capital growth

  • Fliggy denies using big data to gouge repeat customers: never have, never will

  • ZTE: first shared-air-conditioner standard released together with industry partners

  • Google launch-event preview: a Pixel lineup with no secrets left

  • Tesla says the Model 3 is the safest car ever made

  • Facebook accused of illegally collecting children's data; 17 organizations demand an investigation

  • RuoYi back-office management system 3.0 released, now split into modules

  • Chinese administrative-division selector v-region 1.8.1 released


Domestic News

FF responds: Jia Yueting did not "manipulate" the board

Yesterday, Evergrande Health announced that Jia Yueting is ending the partnership with Evergrande.

Today, Faraday Future ("FF") issued a statement in response.

FF's statement said: "After the first US$800 million was paid, Evergrande in July 2018 proactively proposed signing a supplemental amendment to the original investment agreement and agreed to provide FF with further funding ahead of the dates set in the original contract, including paying US$500 million of the remaining US$1.2 billion within 2018," and "no one, including FF global CEO Jia Yueting, 'manipulated' the board in order to reach the supplemental agreement."

FF's statement also said: "The sole reason 'FF is rescinding all agreements' is that Evergrande failed to achieve its aims and then refused to pay the funds it had agreed to pay. This is the most basic, most common-sense question of fairness: Evergrande should not refuse to pay on the one hand while enjoying the rights that took effect under the supplemental agreement on the other, including taking over most of the operating and management rights of FF China."

Haidian court issues a conduct-preservation injunction against Sogou: stop hijacking Baidu traffic

In the case over Sogou's malicious hijacking of Baidu traffic, Sogou lost the final judgment at the end of May but has refused to pay the awarded compensation or apologize.

Baidu therefore applied to the Haidian District People's Court of Beijing for compulsory enforcement, which the court has accepted.

On the eve of the National Day holiday, the Haidian District People's Court, acting on Baidu's suit over the Sogou browser's malicious hijacking of Baidu hao123 traffic (which continued despite Baidu's repeated demands), issued a conduct-preservation injunction ordering Sogou to immediately stop the hijacking.

BAT's AI investments exceed the Silicon Valley giants'; Beijing is No. 1 for venture-capital growth

According to a report by the UK's Daily Telegraph, the eight largest US and Chinese tech companies have put roughly US$14 billion into AI so far this year.

Of that, investments involving Baidu, Alibaba, Ant Financial, and Tencent totaled about US$12.8 billion, while the AI deals involving Google parent Alphabet, Amazon, Apple, and Facebook totaled about US$1.7 billion.

By contribution to investment growth, Beijing ranks first among startup cities worldwide.

Fliggy denies using big data to gouge loyal customers: never have, never will

This evening, responding to well-known writer Wang Xiaoshan's broadside accusing the Fliggy travel app of using big data to overcharge repeat customers, Fliggy's official Weibo account replied: "Fliggy dares to promise: we have never used, and will never use, big data to harm consumers' interests."

Wang Xiaoshan had earlier posted on Weibo that for the same flight from Lima to Buenos Aires, other sites sold the ticket for 2,500 yuan while Fliggy charged him as much as 3,211 yuan.

He also said the Fliggy app smells of consumer fraud: "One ticket showed 1,104 on search, became 2,322 on the booking page, and a few hours later 2,796. I booked elsewhere for 1,300; back on Fliggy it was 2,322 again."

Wang has since set that Weibo post to "visible only to me," commenting: "I was just venting and somehow ended up trending; I'm really not someone who likes the hot-search list. Fliggy's staff have always been quite reliable; I feel a bit bad for them."

ZTE: first shared-air-conditioner standard released with industry partners

Recently, ZTE, together with shared-home-appliance ecosystem alliance members including Haier Air Conditioning, the China National Institute of Standardization, China Telecom's Shanghai Research Institute, the China Association of Circular Economy, and Zhongbiao Energy Efficiency Technology (Beijing) Co., Ltd., jointly released in Shanghai the IoT era's first shared-air-conditioner standard based on NB-IoT (cellular narrowband IoT).

ZTE says that once implemented, the standard will effectively regulate the market for shared air conditioners, purifiers, and other air devices, safeguard the user experience of shared air conditioners, fill a gap in the industry, and lay the groundwork for large-scale deployment of shared home appliances.


International News

Google launch-event preview: a Pixel lineup with no secrets left

Tomorrow (October 9), Google holds its "Made by Google" hardware event in New York, where a series of new products will be unveiled, including the Pixel 3 and Pixel 3 XL smartphones.

Android Pie, whose new features were previously announced, will also make its official debut.

Although Google has yet to formally announce any of the upcoming devices, online rumors and leaked images have already shown both new phones.

Just three days before the launch, some media even got hold of the devices and published reviews. Reportedly, the new XL model adopts a notched screen, and both the Pixel 3 and Pixel 3 XL use a two-tone metal-and-glass design on the back.

The Pixel 3 XL has a 6.3-inch notched full screen, Qualcomm's flagship Snapdragon 845 chip, 4 GB of RAM (a 6 GB high-end version is possible), a 2960x1440 resolution, and a 3,732 mAh battery: essentially this year's standard flagship configuration.

On pricing, a foreign netizen has shared a screenshot showing the 64 GB Google Pixel 3 at CA$1,099.99 (about RMB 5,840) in Canada and the 64 GB Pixel 3 XL at CA$1,269.99 (about RMB 6,745).

Pricing screenshot of the new phones shared by a foreign netizen

Tesla says the Model 3 is the safest car ever

Tesla said on Sunday evening local time that it has achieved its goal of making the Model 3 sedan the safest car in history.

In a blog post, Tesla said that in the US National Highway Traffic Safety Administration's tests, the Model 3 not only earned five-star safety ratings in every category and subcategory, but also had "the lowest probability of injury of all cars the agency has ever tested."

Facebook accused of illegally collecting children's data; 17 organizations demand an investigation

According to multiple US media reports, 17 American privacy-protection organizations have filed a complaint with the US Federal Trade Commission (FTC).

They demand a thorough investigation of Facebook, alleging that the company's "Messenger Kids" chat app fails to comply with COPPA, the US law protecting minors' privacy, even though Facebook claims compliance.


Programmer Zone

RuoYi back-office management system 3.0 released, now split into modules

RuoYi management system v3.0 has been released. Changelog:

  • Upgrade POI to the latest version 3.17;

  • Use an absolute path for the export temp directory;

  • Upgrade layDate to the latest version 5.0.9;

  • Upgrade Spring Boot to the latest version 2.0.5;

  • Improve start/end time validation;

  • Fetch default values from the reset-password parameter table;

  • Fix an avatar-update display issue;

  • Add a data-permission filtering annotation;

  • Add a collapse button for table search;

  • Add clearing of (login, operation, scheduling) logs;

  • Fix the position of the submit/close buttons;

  • Departments/menus now support expand/collapse;

  • Various small tweaks and optimizations;

  • The project is now split into modules.

Starting with 3.0, RuoYi is split into modules, turning the former single application into multiple modules; if you need a single application, see RuoYi-fast.

Download: https://gitee.com/y_project/RuoYi

Chinese administrative-division selector v-region 1.8.1 released

v-region, a simple, easy-to-use Chinese administrative-division selector based on Vue 2, has been updated to version 1.8.1. Changes:

  • In UI-selector mode, add a default button for opening the selector;

  • Fix incorrect city display when switching province/municipality;

  • In UI-selector mode, automatically close the selector once the last level is chosen;

  • The dropdown container now uses the v-dropdown plugin;

  • Rename the icon font to avoid conflicts with Iconfont content in the user's environment;

  • Improve the i18n content.

About the plugin:

v-region is a simple, easy-to-use Chinese administrative-division selector based on Vue 2. Selectable levels are province/municipality, city, district/county, and township/town/street.

Usage modes include a regular form dropdown-list mode, a UI dropdown-selector mode, and a plain-text display mode for the selected region.

Content in "Programmer Zone" comes from the OSChina community (https://www.oschina.net/); copyright belongs to the original owners.


Author: csdnnews. Posted 2018/10/08 22:17:44. Original link: https://blog.csdn.net/csdnnews/article/details/82975986
Reads: 199

          [Original] What should a programmer do after entering the wrong field?      Cache   Translate Page      


Which technical field should a programmer choose to earn the highest return?

This article takes a close look at the five hottest fields of 2018, analyzing the state of each industry, salary levels, and concrete skill requirements, in the hope of offering some guidance if you worry about being in the wrong line of work.


The seven-day National Day golden week is over in the blink of an eye, and along with the travel fever, the wave of buying and flipping homes is also receding.

Word has it that from October 15 China's central bank will cut the required reserve ratio for some financial institutions, releasing about 750 billion yuan of incremental funds on top of the cut. For developers, the most direct takeaway from this big move in finance is that the much-criticized housing prices should stabilize.

"To settle on the land and dread relocation is the people's blessing." Since ancient times, a home has been the foundation on which people settle down, and for developers buying one is an unavoidable topic; yet under sky-high prices, calls to "flee Beijing, Shanghai, and Guangzhou" and "flee the first-tier cities" never stop. Matching the high prices are the labels developers cannot shake off: "high pay," "996," "high pressure," "unkempt."

So what is the real state of developers?

Every year a large number of developer survey reports are released, on subjects ranging from holistic portraits of technical developers to analyses of specific domains. Below, through five concrete fields (big data, cloud computing, AI, blockchain, and IoT) and the latest technical developments, we present the most realistic picture of Chinese developers.

 

Rising demand and pay for big-data developers

 

In the big-data era, the value contained in data is beyond doubt; it shows up in government, enterprise, and research alike. In fact it has risen to the level of national strategy: China, the US, and the EU have all written big data into national plans, and tech giants such as Microsoft, Google, Baidu, and Amazon treat big-data technology as a major bet on the future.

Hackers' relentless data-theft campaigns tell the same story.

In this year alone there have been multiple large-scale leaks: information on as many as 50 million Facebook users was leaked and used to sway voters; WiFi Master Key was exposed for harvesting the privacy of 900 million users for marketing and profit; QQ Browser and Baidu Mobile Input Method were suspected of covertly activating cameras and recording audio; nearly ten million records from AcFun were leaked publicly; and 123 million records covering every hotel under Huazhu were leaked and put up for sale...

Data, clearly, is extremely valuable, and the importance of big-data technology speaks for itself.

CSDN's 2017 survey shows 78% of enterprises doing big-data-related development and applications. For about 57% of them the applications are still mostly statistics, reporting, and data visualization; the industry as a whole is not yet mature and enterprises' needs are not clearly defined, so it is understandable that deeper applications have yet to spread. Still, compared with 2015 and 2016, these proportions have improved a great deal.

In such conditions, demand and pay for big-data developers naturally keep climbing.

Statistics from the Data Analysis Committee of the China General Chamber of Commerce indicate that China will face a shortfall of 14 million entry-level data-analysis professionals, and more than 60% of positions advertised by companies like BAT are recruiting big-data talent. A LinkedIn report puts the supply index for data-analysis talent at just 0.05, which is severe scarcity, and data analysts also job-hop the fastest, every 19.8 months on average. Taking Beijing's 2017 pay for big-data developers as an example, more than half earn over 30K a month, averaging 30,230 yuan.

For developers who want to dive into big data, or who are already down in the trenches, the best advice is to find an entry point, say platform building, ETL, offline processing, or real-time analytics, and then expand your knowledge into the wider field. That may make the road of data development steadier.

 

44% say database management is the highest-paying cloud skill

 

In the 2017 Gartner hype cycle, cloud computing no longer appears among the "emerging technologies"; it has moved into the fast lane. When Amazon launched the first cloud service in March 2006, outsiders were skeptical, but as cloud computing enters its second decade, the global market has settled into steady growth, moving beyond mere "virtualization or network services" to become an independent, mature, and widely adopted IT infrastructure service.

Containers, microservices, and DevOps keep pushing the cloud's transformation, and the tech giants have raised cloud to the strategic level: Amazon, Google, Microsoft, Alibaba Cloud, and Tencent Cloud are building data centers at a frantic pace and "converging" around customers. Instagram migrated from AWS to Facebook's own platform; Zynga moved from its own platform to AWS; Apple spread part of its business from AWS to Google Cloud to hedge risk; Verizon dropped Microsoft Office and returned to Google's G Suite...

All of this shows the boundaries of cloud computing blurring; deep integration of businesses looks like the trend. Meanwhile, China's cloud market is growing fast. CSDN survey data show 83% of enterprises using cloud services and fewer than one in ten paying cloud little attention. In terms of applications, virtual machines, network storage, and load balancing are the most common, and stacks based on Docker or OpenStack are the two mainstream frameworks for cloud-platform deployment.

From the cloud developer's perspective, as enterprises migrate infrastructure to public clouds, demand for cloud specialists keeps rising. Rackspace last year published "the cost of cloud expertise" study, produced with academics from the London School of Economics and with Vanson Bourne.

The survey found that nearly three quarters of IT decision makers (71%) believe their organizations have lost revenue for lack of cloud expertise, about 5% of global cloud revenue. The report notes that the talent gap is so large that IT teams need five weeks to complete a hire.

So which cloud skills are most sought after? Rackspace's respondents named several urgently needed ones: database management, which 44% called the highest-paying cloud skill and 24% the hardest role to fill; cloud security, where the steady stream of breaches keeps raising demand for specialists; service management, covering provisioning, monitoring, and orchestrating an organization's use of cloud tools; migration project management, which 36% found extremely hard to hire for; and automation, since ever more organizations adopting DevOps use automation tools for day-to-day configuration and management of cloud and on-premises infrastructure. Talent in cloud-native application development, Microsoft Azure, testing, DevOps, and related areas is also increasingly courted.

Still, security remains the biggest reservation about cloud services. Among the internet-pedigree cloud providers, giants such as Alibaba and Tencent are investing heavily in security; whether the other players can keep up is an open question.

 

 

AI software engineers and algorithm engineers are the most-wanted positions

 

According to McKinsey's 2017 report "Artificial Intelligence, the Next Digital Frontier," robotics and speech recognition, the two most popular investment areas, have absorbed US$20-30 billion of the global tech giants' capital, and the past year's AI frenzy has burned especially hot.

Take the BAT-class internet companies: Baidu, the first to proclaim itself "All in AI," has focused on the DuerOS conversational AI system and the Apollo autonomous-driving platform; Alibaba is laying out an AI ecosystem on all fronts, investing furiously in AI startups while building up its intelligent cloud and AI chips; and late-starting Tencent is no pushover either, setting up AI Lab, recruiting AI experts in volume, and actively productizing speech recognition, face recognition, and more.

Google, which lately stirred up the domestic search market, likewise threaded AI through last month's Shanghai developer conference, from Android to smart wearables, from TensorFlow to AR applications, now building the underlying ecosystem, now leading the technical trend.

CSDN research shows that although AI adoption in China is still low, the growth potential is huge; only 25% of developers said nobody around them uses it at all.

A recent survey by Liepin's big-data research institute finds that AI core roles demand markedly higher degrees; AI talent is concentrated in the three first-tier cities of Beijing, Shanghai, and Shenzhen; and by industry it sits mainly in the internet sector while gradually seeping into others. Among the top 10 core roles, AI software engineers and algorithm engineers lead by a wide margin as the tightest positions.

In addition, the well-known US research firm CB Insights' latest "Top AI Trends To Watch In 2018," based on a deep analysis of the industry's development, shows AI pay clearly surpassing front-end/back-end and mobile development roles.

And a PwC report observes that as AI expands into more specific domains, it will require domain expertise and skills that data scientists and AI specialists usually lack. For AI developers, a more all-around technical arsenal will be indispensable.

 

Blockchain developers remain enthusiastic

 

The blockchain market's wild swings in recent years have also brought blockchain applications before the public eye.

According to Morgan Stanley research, "bitcoin's price has risen roughly 15 times as fast as the Nasdaq Composite." Bitcoin's 2017-2018 price trajectory looks much like the Nasdaq Composite before the 1998 dot-com bubble, only far faster; Morgan Stanley's analysts say it "suggests that Nasdaq history is repeating itself."

Yet amid all the talk of a "bubble," developers' enthusiasm for learning blockchain remains high.

CodeMentor's "state of the blockchain development ecosystem" survey found that although 46% of respondents said they had no plan to learn the technology in the short term (the next three months), as many as ninety percent of developers plan to start learning blockchain in the coming months.

On pay, BOSS Zhipin data show that in Q1 2018 the average advertised salary for blockchain positions grew 31%, beating every other role. "But the blockchain talent pool is too small, and poaching is hard. To poach one blockchain person, you must put in 200% effort." Companies have tried everything to hire, yet the vast majority of practitioners are unqualified: a blockchain technical elite must know computers and programming languages and also deeply understand economics and game theory. The severe talent shortage may be one big reason the blockchain market bubbled.

Blockchain applications, moreover, are still relatively few. CSDN's survey shows that only 10% of respondents are using or preparing to use blockchain to solve technical problems, while 20% do not understand it at all. Lack of development experience, technical documentation, and landed applications and scenarios are the main challenges, cited by 56%, 54%, and 50% respectively.

 

Supply of good IoT talent falls far short of demand

 

From smart homes to medical monitoring, wearables to energy supply, IoT has become an inseparable part of daily life, and tech giants at home and abroad are racing to stake out positions.

Early this year Alibaba declared IoT the group's new main track after e-commerce, finance, logistics, and cloud computing, with a goal of connecting 10 billion devices within five years; Baidu launched the Baidu Cloud Tiangong intelligent IoT platform; Huawei pushed NB-IoT standardization and released the LiteOS IoT operating system and end-to-end NB-IoT solutions; Tencent launched the "QQ IoT smart hardware open platform," opening its QQ account system, relationship chains, and messaging channels to partners in wearables, smart homes, smart vehicles, and traditional hardware, connecting users with devices and devices with one another...

Yet Eclipse IoT's 2018 IoT developer survey puts enterprise growth in building IoT solutions at only 5.8%. Slow as that is, it signals that IoT companies are leaving the realm of theory and increasingly putting theory into practice.

The sheer difficulty of building IoT deserves mention. Networking, human-machine interaction, data, and security technologies are badly fragmented, so IoT is not pure software development; it also demands embedded and other hardware skills. Against that backdrop, IoT developers are naturally hot: on one well-known domestic job site alone, IoT engineers average 15K/month, with more than 14,000 job ads across the site.

Moreover, as an emerging strategic industry championed by the state, IoT draws attention from every quarter and has become a hot field with broad job prospects. Since 2011, universities across the country have opened IoT majors, with courses such as Introduction to IoT Engineering, Embedded Systems and Microcontrollers, Wireless Sensor Networks and RFID, IoT Technology and Applications, Cloud Computing and IoT, IoT Security, IoT Architecture and Integrated Practice, Introduction to Signals and Systems, and Modern Sensor Technology, plus many electives.

For IoT developers themselves, the advice is to pick a good angle into IoT, study it in depth, and treat knowledge plus hands-on project skills as the top priority.

 

What do our real developers actually look like?

 

Code changes the world, and the technical world developers create is revolutionizing our lives. The portraits of developers in the five fields above are only snapshots of an era of technological change; amid such rapid development, what trends will the developer portrait show next?

Since 2004, CSDN has conducted in-depth research on developers, development technologies, tools, and platforms, providing important reference material on China's software developer community and the software-development services market. To date, tens of thousands of developers have taken part, together painting a true picture of Chinese developers.

And now the 2018 CSDN software developer survey has officially launched! As a member of the developer community, you are sincerely invited to join.

Scan the QR code below to take part:

We have also prepared gifts: Huawei nova3 smartphones, Xiao AI smart speakers, CSDN backpacks, CSDN custom T-shirts, and hundreds of technical books! Everyone who participates has a chance to win, so what are you waiting for?

Click "Read the original" below or copy the official link (https://www.csdn.net/2018dev/) into your browser to take part.

 


Author: csdnnews. Posted 2018/10/08 22:17:44. Original link: https://blog.csdn.net/csdnnews/article/details/82975989
Reads: 1227

          [Original] Three days later, Microsoft urgently halts the Windows 10 update!      Cache   Translate Page      


Over the National Day holiday you may have dodged the traffic jams and the crowds, but the techies who stayed home still couldn't escape the critical hit Microsoft delivered. So, how is that computer you rushed to upgrade to the latest Windows 10 doing?

On October 3 Beijing time, Microsoft held a launch event in New York, unveiling the Surface Pro 6, Surface Laptop 2, Surface Studio 2, and Surface Headphones, and at the same time officially releasing the Windows 10 October update, Build 17763, i.e. Windows 10 1809.


New features of the Windows 10 October update


With Windows 10 billed as the last version number of Windows, Microsoft announced last year that it would ship two major updates per year, around March and September. Every Windows update since then brings new features and characteristics; so what does this year's second update actually contain?

Connect an Android phone to a Windows 10 PC to transfer messages and photos

Over the past decade the mobile internet rose ferociously and has now matured, and mobile operating systems led by Android have seized the throne once held by the desktop's Windows. Forced to transform with the times, Microsoft dissolved the once-pillar Windows division and threw itself into AI and cloud research. But the end of an independent Windows division does not mean Windows will die: the newest Windows 10 reveals a strategy of binding mobile devices tightly to the PC.

To that end, Microsoft built the Your Phone app: users can connect an Android phone to a Windows 10 PC, send and receive text messages, and transfer photos back and forth.


View the Windows 10 Timeline on iPhone and Android devices

Microsoft is also bringing the Timeline feature to Android and iPhone.

Earlier this year, Microsoft introduced Timeline on Windows 10; it records the web pages you loaded, the apps you opened, and the documents you edited, and lets you click to resume and find content.

Now, with the October update, Android users can install the Microsoft Launcher app to access the same timeline and find the website or Office 365 file they need. As for iPhone users, Microsoft says the feature is coming soon.


Cloud Clipboard can copy text or images from one computer and paste them on another

The Clipboard, unchanged for 30 years, finally got an upgrade this year: Windows 10 now records everything you copy, text and images alike, and pressing Windows + V on the keyboard shows everything you have copied.

With the new Cloud Clipboard feature, copied content also syncs to your other Windows 10 devices, which should be handy for anyone switching between multiple Windows 10 PCs.


Windows 10 adds a "dark mode"


Shows which apps drain the most battery

A genuinely useful feature that gives better control over a Windows 10 laptop's battery life.


Better support for HDR monitors and NVIDIA's new "ray tracing" technology

Windows 10 previously had little success exploiting high-end HDR monitors, but this October update makes some progress. The release is also compatible with NVIDIA's latest ray-tracing technology, which is said to make video games more lifelike; for now, though, few games support ray tracing.


Makes text easier to read

This feature changes the size of text throughout Windows 10 without altering everything else, which is especially useful on high-resolution displays. It is not all that discoverable; the best way to find it is to search for "make text larger" in the Windows 10 search bar.



Upgrading feels great for a moment; recovery is hell


From all this it is clear that Microsoft, having missed the mobile-internet dividend, still hopes to use the PC as an entry point and at least build a bridge to mobile. Windows 10 1809 also works hard to boost user stickiness, but the more that is new, the more can go wrong: what followed the release was not cheering from users but deep resentment.

The system deletes files and photos for no reason

Quite a few users who updated manually at the first opportunity regretted it almost instantly: old files were deleted by the system without cause, specifically files under "C:/Users/[username]/Documents/". On Microsoft's support forum, a Windows 10 user named Robert Ziko complained bitterly that updating to version 1809 deleted as much as 220 GB of files he had kept for 23 years.


Other netizens wailed as well:

  • After updating I found my documents, photos, music, and videos all gone.

  • My D:\Document folder is gone. It was not configured into the Documents library and is not even on the system drive. I had no backup, because a system update should never touch non-system drives.

  • From what I have read in the news, anything stored in "Documents" that was not synced to OneDrive gets deleted. Also, the "Documents" folder actually aggregates content from different paths; I am not sure of the exact mechanism, but it may be related.

Microsoft said users who lose data should contact it promptly and it will provide tools to resolve the problem:

If you manually checked for updates and are sure you hit the missing-files issue after updating, please minimize use of the affected device and contact us directly at +1-800-MICROSOFT, or look up the local service number for your region.


According to Microsoft's official support information, the China service hotlines are:

400 820 3800

800 820 3800

Photos transferred by the Your Phone app are marked read-only

In the update's biggest highlight, the Your Phone app, photos transferred from the app to a PC have their attributes marked read-only. Microsoft's developers have acknowledged the bug.


Repeated installation prompts

Some users report that after installing the latest Windows 10, the system prompts them to install it again.

UWP apps and the Edge browser cannot reach the network

Apps downloaded from the Microsoft Store cannot connect to the network, and Edge fails to load websites, showing an unreachable-page message in every app, while Microsoft's own IE, Google Chrome, and other browsers connect fine. According to the outlet Neowin, Microsoft confirmed the problem, saying the version 1809 update blocks Microsoft Store apps from connecting to the internet on devices where IPv6 is disabled, and it is working on a fix; in the meantime, users can work around it by enabling IPv6.

Other bugs

  • The Intel Audio Display device driver is incompatible;

  • Task Manager does not display CPU usage correctly;

  • The update hurts battery life;

  • ......

Bugs this maddening truly live up to "upgrading feels great for a moment; recovery is hell."


Microsoft urgently halts Windows 10 1809


In the end, on October 6, three days after the update went live, Microsoft urgently halted the rollout, saying: "We have paused the rollout of the Windows 10 October 2018 Update (v1809) for all users as we investigate reports of users missing files after updating."


We can at least be thankful the problems were caught in time. Microsoft had planned a full push to all users on the 9th, and as we know, after it ended mainstream support for XP and Windows 7, the now-solitary Windows 10 is flagged for automatic updates. Had the problems surfaced after the 9th, the users' losses and disappointment would not have been something Microsoft could fix or soothe overnight.

Have you upgraded to Windows 10 1809? How has it been? Share your take in the comments below.

References:

https://www.businessinsider.com/windows-10-october-2018-update-best-new-features-and-updates-2018-10



Author: csdnnews. Posted 2018/10/08 16:27:05. Original link: https://blog.csdn.net/csdnnews/article/details/82975999
Reads: 110

          [Repost] Who dares trust Jia Yueting?      Cache   Translate Page      


By | Wang Xiange

Reprinted with permission from "Shen Xiang" (深响, ID: deep-echo)

Jia Yueting has a kind of magic. It may not work on ordinary people, but on successful entrepreneurs it always hits home, occupying their minds for quite some time.

After Evergrande Health's announcement yesterday that Jia Yueting wants to end the partnership with Evergrande, some media ran headlines like "first he conned Sun Hongbin, then he trapped Xu Jiayin," pointing out that Jia has been placed on the list of dishonest judgment debtors eight times and cannot be trusted.

At the same time, a question that ought to be common sense is back on the table: why do people keep believing him? And why are the believers all supremely shrewd "big shots" with huge influence in the business world?

One strand of idle talk, recalling the backdrop of Jia's first flight abroad, asks whether he is a "red-top businessman" with mysterious forces behind him.

Another guess is more down to earth. Whether it was the executives who abandoned their own careers to join LeEco, the celebrities who invested without worry, or the firefighters Sun Hongbin and Xu Jiayin later on, what Jia may really tap into is the urge to win big with small stakes and the refusal to accept defeat. Investing in Jia Yueting is less about trusting him than about the echo of one's own inner adventurism and gambler's instinct.

After all, before Jia Yueting there was Shi Yuzhu, whose Giant empire collapsed with a crash and then rose high again, a story that left people with hopes.

Evergrande Health's announcement


Do they really believe?


Last night's announcement came somewhat suddenly.

Evergrande Health said that Jia Yueting burned through Evergrande's US$800 million injection in half a year, then asked Evergrande to pay another US$700 million ahead of schedule; when that failed, he filed for arbitration, demanding that Evergrande be stripped of its consent rights over financing and that all cooperation agreements be torn up.

That came less than three months after Xu Jiayin personally inspected FF.

The scene of the two showing off their bond is still fresh. According to FF's official account: "Xu Jiayin highly praised FF's technical strength and said that 'seeing is believing': investing in FF was absolutely the right decision, and Evergrande will fully support FF with funding, production bases, and product sales. Jia Yueting thanked Xu Jiayin and Evergrande Group for their strong support."

Jia Yueting accompanies Xu Jiayin in a discussion with Dag Reckhorn, FF's global senior vice president of manufacturing (Dag previously led production at Tesla; second from right)

No one could have imagined the situation would turn so fast.

Evergrande states that under the agreed terms, Season Smart (时颖) was to pay US$800 million before the end of 2018, US$600 million in 2019, and US$600 million in 2020. Season Smart had already paid the full US$800 million due for 2018 by May 25, 2018. But in July 2018 the original FF shareholders, actually controlled by Jia Yueting, claimed the US$800 million was essentially spent and demanded that Season Smart pay another US$700 million ahead of schedule.

Which is awkward: the side spending the money seems to have more clout than the side paying it.

"The original FF shareholders used their majority of board seats in Smart King to control Smart King and, without the contractual payment conditions being met, demanded payment from Season Smart; on that pretext, on October 3, 2018 they filed for arbitration at the Hong Kong International Arbitration Centre, seeking to strip Season Smart of its shareholder consent rights over financing, to rescind all agreements, and to deprive Season Smart of its rights under them," the announcement says of the sudden rupture.

In June this year, Evergrande Group acquired 100% of Hong Kong's Season Smart for HK$6.746 billion, indirectly obtaining 45% of Smart King and becoming its largest shareholder; FF holds 33% of Smart King, and the remaining 22% is reserved for allotment to employees under the equity-incentive plan. With that, Evergrande formally moved into Jia Yueting's FF.

Besides the money, the cooperation between Evergrande and FF had two other key points:

First, a dual-class (AB share) structure under which Jia Yueting enjoys "1 share, 10 votes." Rough arithmetic: Evergrande Health, through Season Smart, holds only 12% of Smart King's voting power, while Jia and the other original FF shareholders hold as much as 88%. Through this same-shares-different-rights architecture, Jia kept the decisive vote at Smart King's shareholders' meetings. In other words, even as the "second shareholder" after Evergrande Health's entry, he would still actually control FF's operating decisions.

The premise of the AB-share arrangement is that if Jia and the original FF shareholders default, the voting power flips, and the special voting rights revert to Evergrande. Shares under the employee incentive plan, moreover, carry no voting rights.

Second, alongside the investment, Evergrande signed a performance bet with the original FF shareholders: if FF could not achieve volume production and delivery of its first electric cars by Q1 2019, Jia would lose control of the company.

That bet became the crux of the dispute.

On August 28 this year, the first pre-production FF91 rolled off the line; on September 19 the car was shipped from the Arizona test site back to the Los Angeles headquarters. That is a long way from the plan to mass-produce the FF91 by year-end. If the bet cannot be met, Jia is bound to lose control of FF, so tearing up the agreement now is his last resort.

At 02:34 a.m. Beijing time on August 28, Jia Yueting announced on Weibo that the FF91 pre-production car had rolled off the line

Seen from Evergrande's side, in turn, the deal is not as "dumb" as it looks.

First, Evergrande is in a phase of actively exploring high-tech industries. In early 2018 it announced plans to invest 100 billion yuan in high tech over the next decade. The money put into FF is, for Evergrande, not much.

"For now we are only exploring high-tech industries, but whatever we enter must be a big industry; something in the twenty or thirty millions would be a small one. Aerospace, artificial intelligence, life sciences and stem cells, the internet: wherever there is an opportunity and the conditions, we will explore," Xu Jiayin stressed. "At present it is merely exploration; where there is a chance, we explore."

Second, after moving into FF, Evergrande quickly took full charge of the senior structure, capacity planning, and R&D layout.

On August 14, the unveiling ceremony of Evergrande Faraday Future Intelligent Vehicle (China) Group was held at the Evergrande Center in Guangzhou, the first public event since the acquisition, at which the FF China leadership team made its debut. Among the executives, Evergrande's people already filled all the top seats. And the "ten-year strategic plan" for Faraday Future calls for five major R&D and production bases in East, West, South, North, and Central China.

Of course, although FF's reputation is not great right now, objectively FF does have more than 1,000 research experts and 380 patents worldwide, and that technology is of real value.

Just as Evergrande Group's investment in football was not only about sport but also raised the premium on its property business, Xu Jiayin's purchase of FF looked to the new-energy, AI, and other high-tech industries behind it. He not only firmly believes the vision that "in 10-20 years annual sales of new-energy vehicles could reach tens of millions, even 100 million," but also thinks the smart sensors, radar technology, and facial-recognition technology that come with smart cars can energize the sustained development of Evergrande's whole high-tech portfolio.

For now, "volume production in Q1 2019" looks hard to achieve, and when that moment comes, stripping Jia of control entirely will follow as a matter of course. Media reports say that to avoid any association with Jia, Evergrande has even "instructed" media "not to bring up Jia Yueting" in their coverage.

So how much does Evergrande really trust Jia Yueting? Mostly, one suspects, it is mutual calculation of interests.


Misjudging people


Compared with Evergrande, Sun Hongbin's losses were heavier. He invested real feelings.

In January 2017, Sun Hongbin and Jia Yueting held a joint press conference announcing Sunac's investment in LeEco. On stage the two professed mutual admiration, with Sun describing himself and Jia: "Some people you know for many years and they still feel like strangers; with some, after a short acquaintance, you feel as close as brothers."

Sun Hongbin tears up while speaking about LeEco (image from the internet)

As Sun recalled, his first conversation with Jia ran six or seven hours, and by the end he already felt the urge to invest. So it was that Sunac's 15-billion-yuan investment in LeEco took the two sides only 36 days to decide.

Everyone knows how the story ends. In July last year, on the very day Jia publicly declared he would "bear all responsibility and remain accountable to LeEco's employees, users, customers, and investors to the end," he resigned that evening from all posts at Leshi including the chairmanship, left the board, and retained no decision-making power. Media reports said Jia had by then already arrived in the United States.

The whole LeEco mess was left to Sun Hongbin, who began "de-Jia-ification" and a sweeping management reshuffle.

In March this year, Leshi announced that Sun Hongbin had applied to resign as chairman for reasons of work-arrangement adjustments. By then, 236 days had passed since he took office on July 21, 2017, and the LeEco miracle had not appeared.

"Sometimes you must dare to make the sun and moon shine in new skies; sometimes you must be a good loser."

"LeEco was a failed investment. This was not cutting off an arm to survive; it was losing the head."

Sun, famed as a "king of quotable lines," even shed tears at one point: "Before investing in LeEco, I felt my life had no regrets; after investing, I felt that if I could not run LeEco well, I really would have one."

Sun said in an interview that he had known about the related-party transactions but misjudged whether the related parties' debts to the listed company could actually be repaid. A Sunac insider reportedly sighed to the media that "Old Sun reads big directions very accurately; it is people he often misreads."

Jia Yueting and Sun Hongbin (image from the internet)

Many LeEco executives, too, once felt a kind of kinship with Jia.

Back when Jia was first stranded overseas and LeEco swayed in the storm, the business architectures and teams of LeEco Auto, LeEco Mobile, LeEco Sports, and other key parts of the LeEco ecosystem were all built in exactly that period.

Lv Zhengyu, then head of LeEco Auto, had worked at Daewoo in Korea, Ford, GM, Ferrari, and Infiniti. He recalled in an interview with Forbes: "To say I had no doubts or worries at all would be a lie." But in October 2014, after talking in Hong Kong for three hours with Jia, who had not yet returned to China, he finally decided to join LeEco. "Because I met Jia Yueting in person. He was sincere and candid. I chose to believe him."

Lv Zhengyu

Remarkably, it was not only Lv Zhengyu: many of the industry's best chose to believe Jia precisely during LeEco's trough from late 2014 to early 2015, among them Yu Hang, former cooperation director of Sina's sports channel; Feng Xing, former Lenovo Group vice president; Dong Zhisheng, former general manager of Lenovo's Unicom business; Cui Zhanliang, Lenovo operations management director; and Qi Bin, former senior manager of after-sales operations for Microsoft North Asia and Greater China.

What charisma does Jia have, that such top talent would listen and believe?

Look closely and you find that the equity-incentive scheme Jia designed was quite crafty: it gave executives a stake in their own business lines while also weaving them into the LeEco group as a whole, keeping the interests of individual projects and the group aligned. Meanwhile, the strategy of not raising outside money for new projects early on left more equity for the team. Under this dual shareholding structure, an executive held shares in his own business unit, and if other units grew fast he could share in that upside as well.

Nobody is a fool.

These executives might have struggled to break through in their old companies; LeEco, by contrast, became a grand stage, and Jia could hand them titles big enough. The overall risk at LeEco looked high, but Leshi stock, the value of the work, and the pay were all reasons to choose Jia Yueting.

Jia Yueting photographed with celebrities including Huo Siyan and Du Jiang

Still, Jia let them down, and for that gamble of a choice they paid with their careers.

According to Tencent Tech, from June 2016 Jia was already plotting his own way out. Only when Lei Zhenjian (CEO of LeEco Sports) asked the finance team to request payroll funds from LeEco Holdings did some LeEco Sports executives discover the company had no money left.

From the second half of 2017, as the LeEco empire crumbled, the executives jumped ship one after another: Abulikemu, Zhao Yicheng, Liang Jun, Yang Lijie, Ren Guanjun, Yang Yongqiang, Zheng Xiaoming, Gao Fei, Ao Ming, Yu Hang, Cheng Yizhong, Shen Wei, Qiang Wei, Qiu Zhiwei, Ding Lei, Zhang Hailiang, Yang Xinjun, Wu Yazhou, Wang Dayong, and more LeEco executives at VP level and above all left. Who knows whether they are still willing to talk about their LeEco days, or to put that stretch in a prominent place on their résumés.

Today, Jia has come to yet another crossroads where friends and followers may desert him. Facing a hard-nosed Evergrande, the odds of rescinding the agreements are close to zero. If FF, which has consumed all of Jia's strength, cannot reach volume production in 2019, he will lose the last straw by which he might turn the game around.

And he will lose still more of his reputation. With Sunac and Evergrande falling into the pit one after the other, who will dare trust Jia Yueting now?

END


Author: csdnnews. Posted 2018/10/08 12:28:06. Original link: https://blog.csdn.net/csdnnews/article/details/82975988
Reads: 66

          [Repost] After tens of millions of hours of machine-learning training, 16 tips I learned from the pits I fell into!      Cache   Translate Page      


After thousands upon thousands of hours of machine-learning training time, the computer is not the only one that learns a great deal; we, the developers and trainers, have also made plenty of mistakes and fixed plenty of bugs, accumulating a lot of experience along the way.

In this article the author offers some advice for training neural networks based on his own experience (mainly with TensorFlow), together with examples: practical tips from someone who has been there.

Produced by | AI 科技大本营 (AI Tech Base Camp)


General Tips


Some of these tips may seem obvious to you, but at times they may not be, may not apply, or may even be a bad idea for your particular task, so use them with care!

1. Use the ADAM optimizer

It really works. We prefer the ADAM optimizer over more traditional optimizers such as vanilla gradient descent. A TensorFlow note: if you save and restore model weights, remember to set up the Saver after setting up the AdamOptimizer, because ADAM also has state that must be restored (namely a learning rate per weight).
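
To make the ordering concrete, here is a minimal TF 1.x sketch (the toy placeholder, weights, and loss are stand-ins invented for illustration, not from the article): the Saver is created after the AdamOptimizer so that ADAM's slot variables end up in the checkpoint.

```python
import tensorflow as tf

# Toy graph: stand-in placeholder, weights, and loss.
x = tf.placeholder(tf.float32, shape=[None, 10])
w = tf.Variable(tf.zeros([10, 1]))
loss = tf.reduce_mean(tf.square(tf.matmul(x, w)))

# Creating the optimizer adds ADAM's slot variables (per-weight
# first/second moment estimates) to the graph...
train_op = tf.train.AdamOptimizer(learning_rate=1e-3).minimize(loss)

# ...so the Saver must be created afterwards, or those slots are
# missing from the checkpoint and cannot be restored.
saver = tf.train.Saver()
```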

2. ReLU is the best nonlinearity (activation function)

Much as Sublime is the best text editor. ReLU is fast and simple, and, surprisingly, it works without its gradients gradually vanishing. Although Sigmoid is one of the common activation functions, it does not propagate gradients well through deep networks.

3. Do not use an activation function at the output layer

This should be obvious, but it is an easy mistake to make if you build every layer with one shared function: make sure you switch the activation off at the output layer.

4. Add a bias to every layer

This is ML 101: a bias essentially shifts a plane to the best-fitting position. In y = mx + b, b is the bias, allowing the line to move up or down into the "best fit" position.
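
A short sketch tying tips 2-4 together (the layer sizes are invented for illustration): hidden layers use ReLU, every layer keeps its bias (use_bias=True is the TF 1.x default), and the output layer passes activation=None so no nonlinearity is applied to the logits.

```python
import tensorflow as tf

def build_model(x):
    # Hidden layers: ReLU nonlinearity, biases left on (the default).
    h = tf.layers.dense(x, units=128, activation=tf.nn.relu)
    h = tf.layers.dense(h, units=64, activation=tf.nn.relu)
    # Output layer: activation=None returns raw linear outputs, so a loss
    # such as softmax cross-entropy or MSE can be applied directly.
    return tf.layers.dense(h, units=10, activation=None)
```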

5. Use variance-scaled initialization

In TensorFlow this looks like tf.variance_scaling_initializer().

In our experience, this generalizes/scales better than plain Gaussian, truncated normal, and Xavier initialization.

Roughly speaking, the variance-scaling initializer adjusts the variance of the initial random weights according to the number of inputs or outputs at each layer (the default in TensorFlow is the number of inputs), which helps the signal propagate deeper into the network without extra clipping or batch normalization. Xavier is similar, except that the variance is nearly the same in all layers; but networks whose layer shapes vary widely (common in convolutional networks) may not cope well with the same variance in every layer.
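
As a concrete usage sketch (the scale=2.0 choice, often paired with ReLU, and the layer size are assumptions for illustration), the initializer is passed to a layer's kernel_initializer:

```python
import tensorflow as tf

# Draw initial weights with variance proportional to scale / fan_in
# ('fan_in' is the TensorFlow default mode mentioned above).
init = tf.variance_scaling_initializer(scale=2.0, mode='fan_in')

def hidden(x):
    # Each layer's weights then start with variance ~ 2 / n_inputs.
    return tf.layers.dense(x, units=256, activation=tf.nn.relu,
                           kernel_initializer=init)
```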

6. Normalize input data

For training, subtract the dataset's mean and then divide by its standard deviation. The less your weights are stretched in every direction, the easier it is for your network to learn. Keeping the input data centered on the mean with constant variance helps achieve this. You must also apply the same normalization to every test input, so make sure your training set resembles the real data.
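
A tiny NumPy sketch of the bookkeeping this implies (the random stand-in data and the epsilon guard against zero variance are assumptions, not from the article): the statistics come from the training set only, and the identical transform is reused on test inputs.

```python
import numpy as np

rng = np.random.RandomState(0)
train_x = rng.normal(5.0, 3.0, size=(1000, 10))  # stand-in training data
test_x = rng.normal(5.0, 3.0, size=(100, 10))    # stand-in test data

# Per-feature statistics computed on the *training* set only.
mean = train_x.mean(axis=0)
std = train_x.std(axis=0) + 1e-8  # epsilon avoids division by zero

train_x = (train_x - mean) / std
test_x = (test_x - mean) / std    # same transform; never refit on test data
```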

Scale input data in a way that reasonably preserves its dynamic range. This is related to normalization but should happen before normalizing.

For example, real-world data x with the range [0, 140000000] can usually be tamed with tanh(x) or tanh(x/C), where C is some constant that stretches the curve to fit