
          The Energy Investment Model With A Glaring Problem      Cache   Translate Page      
The first master limited partnership (MLP) was formed by Apache Oil Company in 1981. In 1987 Congress legislated the rules for publicly traded partnerships in Internal Revenue Code Section 7704. MLPs slowly gained in popularity during the 1980s and 1990s, with about two new MLP IPOs each year. Then, in the 2000s, the popularity of the MLP model began to soar. There were six new MLP IPOs in 2004, ten in 2005, and eighteen in 2006. The recession and oil price crash of 2008-2009 briefly derailed the momentum for new MLPs, but demand began to surge…
          Using Mongo and Map Reduce on Apache Access Logs      Cache   Translate Page      
Introduction: With more and more traffic pouring into websites, it has become necessary to come up with creative ways to parse and analyze large data sets. One of the popular ways to do that lately is using MapReduce, which is a framework used across distributed systems to help make sense of large data sets. There […]
          Apache's hunt tactics for developer skills      Cache   Translate Page      
none
          script to update ldap      Cache   Translate Page      
I have an olive oil society. The script (ideally in PowerShell or Python) should import a .ldif file into LDAP (Apache Directory Studio 2.0). The script should output a log of successes or errors... (Budget: €250 - €750 EUR, Jobs: Linux, Powershell, Python, Shell Script, Windows Server)
          Middleware/Container Administrator (FT) - CGI - Montréal, QC      Cache   Translate Page      
Manage container instances worldwide as part of the global container team (instances of Tomcat, WebLogic, IIS, Apache, PHP)....
From CGI - Tue, 06 Nov 2018 22:53:12 GMT - View all Montréal, QC jobs
          Hands on Apache Beam, building data pipelines in Python      Cache   Translate Page      

Hands on Apache Beam, building data pipelines in Python

Apache Beam is an open-source SDK which allows you to build multiple data pipelines from batch- or stream-based integrations and run them in a direct or distributed way. You can add various transformations in each pipeline. But the real power of Beam comes from the fact that it is not based on a specific compute engine and is therefore platform independent. You declare which “runner” you want to use to compute your transformations. It uses your local computing resources by default, but you can specify a Spark engine, for example, or Cloud Dataflow…

In this article, I will create a pipeline that ingests a CSV file and computes the mean of the Open and Close columns of a historical S&P 500 dataset. The goal here is not to give an extensive tutorial on Beam features, but rather to give you an overall idea of what you can do with it and whether it is worth going deeper into building custom pipelines with Beam. Though I only write about batch processing, streaming pipelines are a powerful feature of Beam!

Beam’s SDK can be used in various languages (Java, Python, …); however, in this article I will focus on Python.


Installation

As of this article, Apache Beam (2.8.1) is only compatible with Python 2.7; however, a Python 3 version should be available soon. If you have python-snappy installed, Beam may crash. This issue is known and will be fixed in Beam 2.9.

pip install apache-beam

Creating a basic pipeline ingesting CSV Data

For this example we will use a CSV file containing historical values of the S&P 500. The data looks like this:

Date,Open,High,Low,Close,Volume
03 01 00,1469.25,1478,1438.359985,1455.219971,931800000
04 01 00,1455.219971,1455.219971,1397.430054,1399.420044,1009000000

Basic pipeline

To create a pipeline, we need to instantiate the pipeline object, optionally pass some options, and declare the steps/transforms of the pipeline.

import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

options = PipelineOptions()
p = beam.Pipeline(options=options)

From the beam documentation:

Use the pipeline options to configure different aspects of your pipeline, such as the pipeline runner that will execute your pipeline and any runner-specific configuration required by the chosen runner. Your pipeline options will potentially include information such as your project ID or a location for storing files.

The PipelineOptions() constructor above is a command-line parser that will read any standard option passed in the following way:

--<option>=<value>

Custom options

You can also build your custom options. In this example I set an input and an output folder for my pipeline:

class MyOptions(PipelineOptions):
    @classmethod
    def _add_argparse_args(cls, parser):
        parser.add_argument('--input',
                            help='Input for the pipeline',
                            default='./data/')
        parser.add_argument('--output',
                            help='Output for the pipeline',
                            default='./output/')

Transforms principles

In Beam, data is represented as a PCollection object. So to start ingesting data, we need to read from the CSV file and store this as a PCollection to which we can then apply transformations. The read operation is considered a transform and follows the syntax of all transformations:

[Output PCollection] = [Input PCollection] | [Transform]

These transforms can then be chained like this:

[Final Output PCollection] = ([Initial Input PCollection] | [First Transform]
| [Second Transform]
| [Third Transform])

The pipe is the equivalent of an apply method.

The input and output PCollections, as well as each intermediate PCollection, are to be considered individual data containers. Because the initial PCollection is immutable, this allows you to apply multiple transformations to the same PCollection. For example:

[Output PCollection 1] = [Input PCollection] | [Transform 1]
[Output PCollection 2] = [Input PCollection] | [Transform 2]

Reading input data and writing output data

So let’s start by using one of the provided readers to read our CSV file, not forgetting to skip the header row:

csv_lines = (p | ReadFromText(input_filename, skip_header_lines=1) | ...

At the other end of our pipeline we want to output a text file. So let’s use the standard writer:

... | beam.io.WriteToText(output_filename)

Transforms

Now we want to apply some transformations to our PCollection created with the Reader function. Transforms are applied to each element of the PCollection individually.

Depending on the runner that you chose, your transforms can be distributed. Instances of your transformation are then executed on each node.

The user code running on each worker generates the output elements that are ultimately added to the final output PCollection that the transform produces.

Beam has core methods (ParDo, Combine) that allow you to apply a custom transform, but it also has pre-written transforms called composite transforms. In our example we will use the ParDo transform to apply our own functions.

We have read our CSV file into a PCollection, so let’s split each line so we can access the Open and Close items:

… beam.ParDo(Split()) …

And define our Split function so we only retain the Open and Close values and return them as a dictionary:

class Split(beam.DoFn):
    def process(self, element):
        Date, Open, High, Low, Close, Volume = element.split(',')
        return [{
            'Open': float(Open),
            'Close': float(Close),
        }]

Now that we have the data we need, we can use one of the standard combiners to calculate the mean over the entire PCollection.

The first thing to do is to represent the data as a tuple so we can group by a key and then feed CombineValues with what it expects. To do that we use a custom function “CollectOpen()” which returns a list of tuples containing (1, <open_value>).

class CollectOpen(beam.DoFn):
    def process(self, element):
        # Returns a list of tuples of the form (1, <open_value>)
        result = [(1, element['Open'])]
        return result

The first parameter of the tuple is fixed since we want to calculate the mean over the whole dataset, but you can make it dynamic to perform the next transform only on a sub-set defined by that key.
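Putting the pieces described so far together, a minimal end-to-end sketch could look like the following. The article is cut off before showing the final assembly, so the GroupByKey/CombineValues wiring and the use of beam.combiners.MeanCombineFn are my assumptions, and the file paths are placeholders; Split and CollectOpen are the DoFns defined above:

import apache_beam as beam
from apache_beam.io import ReadFromText, WriteToText
from apache_beam.options.pipeline_options import PipelineOptions

input_filename = './data/sp500.csv'      # hypothetical input location
output_filename = './output/result.txt'  # hypothetical output location

options = PipelineOptions()

with beam.Pipeline(options=options) as p:
    (p
     | ReadFromText(input_filename, skip_header_lines=1)
     | beam.ParDo(Split())                                  # keep Open and Close per row
     | beam.ParDo(CollectOpen())                            # emit (1, open_value) tuples
     | beam.GroupByKey()                                    # gather every value under the key 1
     | beam.CombineValues(beam.combiners.MeanCombineFn())   # mean of the Open values
     | WriteToText(output_filename))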

The GroupByKey function allows to create a PCollection of all
          ASP.NET, Apache e Mono      Cache   Translate Page      

Some advice on how to use applications developed with the .NET framework, possibly with Mono, taking advantage of the capabilities of the Apache web server.

Read ASP.NET, Apache e Mono


          Apache Struts Team Urges Users for Library Update to Plug Years-Old Bugs      Cache   Translate Page      
In an advisory yesterday, the Apache Software Foundation reiterates its recommendation for users of Struts to make sure their installations run a version of the Commons FileUpload library newer than 1.3.2, lest they expose their projects to possible remote code execution attacks. [...]
          Reply To: Unable to update WordPress (4.9.4) to 4.9.6      Cache   Translate Page      

Changing owner and the group to the www-data (apache ones) of all the files and folders appearing in the errors during wordpress upgrade solved the issue for me


          Vulnerability in an "Apache Struts 2" library; update immediately      Cache   Translate Page      
A vulnerability in Commons FileUpload was found in November 2016 and a patch was published, but Struts 2.3.x was still shipping an old version of Commons FileUpload.
          SENIOR SOFTWARE ENGINEER – WEB DEVELOPER - West Virginia Radio Corporation - Morgantown, WV      Cache   Translate Page      
PHP, Apache, MySQL, WordPress, JavaScript, jQuery, JSON, REST, XML, RSS, HTML5, CSS3, Objective-C, Java, HLS Streaming, CDNs, Load Balancing....
From West Virginia Radio Corporation - Tue, 18 Sep 2018 10:09:25 GMT - View all Morgantown, WV jobs
          Stop us if you've heard this one: Remote code hijacking flaw in Apache Struts, patch ASAP      Cache   Translate Page      

Advisory issued over yet another critical security vulnerability

The Apache Foundation is urging developers to update their Struts 2 installations and projects using the code – after a critical security flaw was found in a key component of the framework.…


          [Alert] Apache mod_jk access control bypass vulnerability CVE-2018-11759      Cache   Translate Page      
The Apache Tomcat project recently published a security advisory about an access control bypass vulnerability in mod_jk (CVE-2018-11759). A PoC is already public, so affected users should take note and apply mitigations promptly.
          Daily Kos Elections 2018 election night liveblog thread #13      Cache   Translate Page      

Follow: Daily Kos Elections on Twitter

Results: CNN, HuffPost, New York Times, Politico

Guides: Poll Closing Times, Hour-by-Hour Guide, Ballot Measures, Legislative Chambers, County Benchmarks

Cheat Sheet: Key Race Tracker

Wednesday, Nov 7, 2018 · 3:38:04 AM +00:00 · David Nir

The early vote has been tallied in much of Arizona, where it's likely that over 60% of all ballots were cast this way. In AZ-Sen, Dem Kyrsten Sinema has a 5,000-vote lead on Republican Martha McSally, with blue Apache County not reporting yet. In AZ-02, Dem Ann Kirkpatrick is up 55-45 on Republican Lea Marquez Peterson, which would be another pickup (this is McSally's seat).

Wednesday, Nov 7, 2018 · 3:39:55 AM +00:00 · David Nir

All of these would be Dem pickups if these results hold:

IA-03 (38% in): Axne (D) 56, Young (R-Inc): 41

IL-13: Dirksen-Londrigan up 52-48 on Rodney Davis with 65% in

IL-14: Dem Lauren Underwood still up, 51.5-48.5 on Randy Hultgren with 73% in

NY-19 (24% in): Delgado (D) 54, Faso (R) 44

Wednesday, Nov 7, 2018 · 3:43:36 AM +00:00 · David Nir

OH-Gov: This one’s not looking good. Though polls suggested it was a tossup, Republican Mike DeWine has a 52-45 lead on Dem Rich Cordray with 87% reporting.
WI-Gov: Democrat Tony Evers has a small 50-48 edge on Republican Gov. Scott Walker with 59% reporting. This would be a pickup.
MN-Gov: Democrat Tim Walz is crushing Republican Jeff Johnson 59-38 with 36% reporting.

Wednesday, Nov 7, 2018 · 3:46:07 AM +00:00 · David Nir

NM-Gov: Democrat Michelle Lujan Grisham is up 55-45 with 45% reporting. This would be a pickup.
GA-Gov: Republican Brian Kemp is up 55-44 on Democrat Stacey Abrams with 64% reporting.

Wednesday, Nov 7, 2018 · 3:48:58 AM +00:00 · David Nir

Good news: Voters in Missouri have passed an amendment that would replace the state’s partisan method of redrawing legislative maps with an independent redistricting commission.

Wednesday, Nov 7, 2018 · 3:51:54 AM +00:00 · David Nir

MN-Gov: The AP has called this one for Democrat Tim Walz. A good hold for Team Blue.

Wednesday, Nov 7, 2018 · 3:54:10 AM +00:00 · David Nir

NM-Gov: The AP calls it for Democrat Michelle Lujan Grisham, who picks up another governorship for Democrats.

Wednesday, Nov 7, 2018 · 3:55:28 AM +00:00 · David Nir

NC-13: Republican Rep. Ted Budd has hung on to defeat Democrat Kathy Manning in what was a tougher shot for Dems.


          Greek Apaches in the Eastern Mediterranean and Israel      Cache   Translate Page      
The Hellenic Army General Staff (GES) has released photographs from the high-risk exercise in which four Apache and two Chinook helicopters flew at low altitude from Stefanovikeio to Israel, from there to Cyprus, and back to Greece. As part of the exercise, which the newspaper Kathimerini had revealed, on the 24 […]
          How can I measure average waiting time as nodes increase efficiently?      Cache   Translate Page      

Hello Everyone,

 

I have implemented the dispatcher flush logic on the Apache server programmatically.

 

My dispatcher logic is as follows.

-Step01 : Delete old cache.

-Step02 : Create new cache.

 

But I found a serious issue when I tested the cache on the dispatcher.

 

The new cache was always deleted, because creating the new cache was faster than deleting the old cache.

 

So I had to add a 1-second wait before the new cache was created, like this:

 

boolean deleted = DispatcherUtil.invokeToDeleteCache(resolver, replicationAction.getPaths(), ReplicationActionType.DELETE);
if (deleted) {
    try {
        TimeUnit.SECONDS.sleep(1); // wait 1 second
    } catch (InterruptedException e) {
        log.error(e.getMessage(), e);
    }
    DispatcherUtil.invokeToCreateCache(resolver, replicationAction.getPaths());
}

 

After I added the 1-second wait, the new cache was created successfully.

 

So I wonder what the average waiting time is as the number of nodes increases (see the attached picture).

 

How can I measure average waiting time as nodes increase efficiently?
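For reference, one way to get an actual number instead of a hard-coded sleep is to poll after the delete and time how long it takes before the create can safely run. This is only a rough sketch built around the snippet above; DispatcherUtil and ReplicationActionType come from that snippet, while isCacheDeleted(...) is an assumed helper you would have to implement, not existing API:

long start = System.nanoTime();
boolean deleted = DispatcherUtil.invokeToDeleteCache(resolver, replicationAction.getPaths(), ReplicationActionType.DELETE);
// Poll until the old cache is really gone, instead of sleeping a fixed second.
while (deleted && !isCacheDeleted(replicationAction.getPaths())) {
    try {
        TimeUnit.MILLISECONDS.sleep(50);
    } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
        break;
    }
}
long waitedMs = TimeUnit.NANOSECONDS.toMillis(System.nanoTime() - start);
log.info("Waited {} ms before recreating the cache for {} paths", waitedMs, replicationAction.getPaths().length);
DispatcherUtil.invokeToCreateCache(resolver, replicationAction.getPaths());

Logging waitedMs for replications against repositories of different sizes would give the average waiting time as the number of nodes grows.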

 

Regards

Chung Yong.


          Evergreen ILS: OpenSRF 3.0.2 released      Cache   Translate Page      

We are pleased to announce the release of OpenSRF 3.0.2, a message routing network that offers scalability and failover support for individual services and entire servers with minimal development and deployment overhead.

OpenSRF 3.0.2 is a bugfix release, but all users of OpenSRF 3.0.1 are advised to upgrade as soon as possible. In particular, users of NGINX as a reverse proxy for Evergreen systems are encouraged to upgrade to take advantage of a more secure NGINX configuration.

The following bugs are fixed in OpenSRF 3.0.2:

  • LP#1684970: When running behind a proxy such as NGINX, the HTTP translator was not getting the IP address of the user agent. As a consequence, it was possible that two different HTTP translator clients could end up talking to the same OpenSRF worker process. This issue is resolved by using the remoteip Apache module to extract the user agent’s IP address from the X-Real-IP HTTP header (a sample Apache snippet follows this list).
  • LP#1702978: OpenSRF could fail to retrieve memcached values whose keys contain the % character. This resulted in breaking authentication in Evergreen when the username or barcode contained a %.
  • LP#1711145: The sample NGINX configuration file shipped with OpenSRF had weak SSL settings. As of this release, it now
    • Enables http2
    • Adds a commented section on enabling SSL everywhere.
    • Applies a 5-minute proxy read timeout to avoid too-short timeouts on long API calls.
    • Adds a commented section on sending NGINX logs to syslog.
    • Includes INSTALL notes on generating the dhparam file.
  • LP#1776510: The JavaScript client code was not detecting when the WebSockets gateway threw a transport error, e.g. when a request was made of a nonexistent service. This situation can now be caught by error-handling callbacks.
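For readers setting this up, a minimal Apache 2.4 mod_remoteip snippet along the lines described in the LP#1684970 item above might look like the following; the module path and proxy address are illustrative assumptions, not taken from the OpenSRF documentation:

LoadModule remoteip_module modules/mod_remoteip.so
# Take the real client address from the header set by the NGINX reverse proxy
RemoteIPHeader X-Real-IP
# Only trust that header when the request comes from the local proxy
RemoteIPInternalProxy 127.0.0.1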

To download OpenSRF, please visit the downloads page.

We would also like to thank the following people who contributed to the release:

  • Galen Charlton
  • Bill Erickson
  • Mike Rylander
  • Jason Stephenson
  • Cesar Velez

          Setting up a new VPS: complete a CentOS 6 + Nginx + PHP install quickly with 2 configuration steps; fixing blank PHP pages under Nginx      Cache   Translate Page      
Author: Wang Zhiyong (Ziyouyong), Heping Haidi blog
URL: http://www.auiou.com/relevant/00000948.jsp
Published: November 7, 2018, 07:11

Since the earlier Ubuntu 14 + Nginx + PHP installation went smoothly, this article follows almost the same method; it is ported over from the previous Ubuntu article with only a few differences. Readers who need this can take a CentOS 6 system (x86 or x64, full or minimal edition) and walk through the steps in this article to avoid the problem of blank PHP pages under Nginx. CentOS 6.5 is used as the example.

Preparation:
First make sure Apache and PHP are not already on the system, because they would cause conflicts. The command to check whether Apache is present is:
if which httpd; then echo "Yes"; else echo "No"; fi;

If it prints Yes, uninstall it with:
rpm -qa|grep httpd

This returns Apache's full package name, for example httpd-2.2.3-22.el5.centos
Uninstall it with a command like:
yum -y remove httpd-2.2.3-22.el5.centos

Checking whether PHP is present and removing it works the same way; the commands are:
if which php; then echo "Yes"; else echo "No"; fi;
If it prints Yes, uninstall it like this:
rpm -qa|grep php
yum -y remove ……

Now the actual Nginx + PHP installation:

CentOS 6 does not need an update first; this is different from Ubuntu.

Command 1: install the rpm repository package, which takes under 2 seconds:
rpm -ivh http://nginx.org/packages/centos/6/noarch/RPMS/nginx-release-centos-6-0.el6.ngx.noarch.rpm

Command 2: install nginx and php, which takes about 16 seconds:
yum -y install nginx php-fpm

At this point CentOS 6 + Nginx + PHP is fully installed. Next come 2 key configuration steps:

1. Use Xftp to connect to the VPS. Go to /etc/nginx/conf.d and download the default.conf file from that directory.
Open the file in a text editor and find the following lines:

#location ~ \.php$ {
#    root           html;
#    fastcgi_pass   127.0.0.1:9000;
#    fastcgi_index  index.php;
#    fastcgi_param  SCRIPT_FILENAME  /scripts$fastcgi_script_name;
#    include        fastcgi_params;
#}

Delete the lines above and replace them with:

location ~ \.php$ {
root /usr/share/nginx/html;
fastcgi_pass 127.0.0.1:9000;
fastcgi_index index.php;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
include fastcgi_params;
}

This step differs from the earlier Ubuntu article; on Ubuntu only 2 lines are needed.

2. /etc/php-fpm.d/www.conf
This file needs no changes, because the default value is already: listen = 127.0.0.1:9000
If it is not this value, change it to this value.

3. /etc/php.ini
Find the short_open_tag line; the default setting is short_open_tag = Off. Change it to:

short_open_tag=On

After that, restart Nginx and PHP with:
service nginx restart && service php-fpm restart

At this point PHP programs already run correctly, and no other configuration files need changes.
Now upload a ready-made PHP script, 1.php, to the default /usr/share/nginx/html directory and test it.
Write the following into 1.php:
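(The original snippet is not reproduced in this copy; a minimal test in the spirit of the article, my assumption using the short-tag syntax discussed in step 3, could be:)

<?
$e1 = 5;                        // short open tag, which is why short_open_tag=On matters
echo "PHP is running: " . $e1;
?>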

[Screenshot of the successful result]

Finally, step 3 deserves special emphasis; it is crucial. In fact PHP already works after step 1, and step 2 needs no configuration. What step 3 does is allow short tags such as <?$e1=5;?>; without it, the PHP part of a page renders blank (the PHP does not run).


          How to root Apache Q88      Cache   Translate Page      
So you bought a new smartphone or tablet running the Android operating system and do not know how to extend its functionality and gain root rights on the Apache Q88? The Guideroot site shows how to make this happen. About root rights: root rights are elevated privileges. They can significantly speed up the operation of the device, effectively adjust the … Continue reading
          Apache Guacamole - Remote Desktop Gateway      Cache   Translate Page      
Apache Guacamole is a clientless remote desktop gateway. It supports standard protocols like VNC, RDP, and SSH. The Guacamole client is an HTML5 web application, so use of your computers is not tied to any one device or location. As long as you have access to a web browser, you have access to your machines.

          PMD - An extensible cross-language static code analyzer      Cache   Translate Page      
PMD is a source code analyzer. It finds common programming flaws like unused variables, empty catch blocks, unnecessary object creation, and so forth. It supports Java, JavaScript, Salesforce.com Apex and Visualforce, PLSQL, Apache Velocity, XML, XSL.

           2010 TVS Apache RTR 51000 Kms       Cache   Translate Page      
Price: ₹ 11,000, Model: Apache RTR, Year: 2010 , KM Driven: 51,000 km,
It's in good condition, contact me: 78269977one three https://www.olx.in/item/2010-tvs-apache-rtr-51000-kms-ID1p5Ejj.html
           2010 TVS Apache RTR 62000 Kms..       Cache   Translate Page      
Price: ₹ 25,000, Model: Apache RTR, Year: 2010 , KM Driven: 62,000 km,
Single hand (one owner), new tubeless tyres with one year warranty, and battery. New disc brake. https://www.olx.in/item/2010-tvs-apache-rtr-62000-kms-ID1p5COh.html
          Kaspersky Q3 2018 global DDoS attack analysis report      Cache   Translate Page      
Q3 2018 was relatively calm in terms of DDoS attacks. "Relatively", because there were not many high-level or multi-day DDoS attacks on major resources. Nevertheless, criminals' attack capabilities keep growing, and the total number of attacks shows no sign of declining.

The early-July attack on Blizzard Entertainment made this summer's headlines. The Battle.net servers were knocked offline, and for nearly three days players could not log in or launch games. A group called PoodleCorp, which has appeared on Twitter, claimed responsibility and promised to leave the company alone if their message was retweeted more than 2,000 times. Shortly afterwards Blizzard reported that "the technical issues preventing players from logging in have been resolved."

Before the end of July there was also a series of attacks against another game publisher, Ubisoft. As a result, players could not log in to their accounts or use multiplayer modes. According to a company spokesperson, user data was not compromised. There were no reports on the purpose of the operation; the attackers may have been after financial gain, or simply protesting some recent game updates.

Another major, multi-day attack infuriated the three big English-language poker sites: America's Card Room, PokerStars and partypoker. The affected operators were forced to cancel some of their events, angering site members and costing them large sums of money.

As usual, some DDoS attacks were almost certainly driven by political tension. The six-minute outage of the Swedish Social Democrats' website at the end of August is a clear example of such an attack. Political motives were likewise blamed for the attack on the website of a Democratic congressional candidate in California. A month later, the "political" label, possibly pushed by activists, also applied to the attack on Germany's RWE: by hitting its website, activists tried to draw public attention.

In any case, the general public remains in the dark about what caused the suffering of the South African Department of Labour (the attack on its web resources took place in early September; according to a department spokesperson, no internal systems or data were affected). The same uncertainty surrounds the motives behind the attacks on the Dutch government service DigiD: in late July it was attacked three times in one week, leaving many citizens unable to access tax-related and other functions. Again, there were no reports of data leakage.

The DDoS attackers' toolset saw few updates, although some curious new techniques and some new vulnerabilities did come into experts' view. On July 20 they discovered a mass "recruitment campaign" targeting D-Link routers, which used more than 3,000 IPs and a single command server. The exploit was not very successful in corporate environments; it remains to be seen whether it can create a new botnet of home routers (and how big it will be).

As for Trojans, reports began circulating in late July about the newly designed Death Trojan, which builds a botnet by recruiting surveillance cameras. The notorious hacker Elit1Lands uses this malware to exploit an AVTech vulnerability made public in October 2016. Security researcher Ankit Anubhav managed to contact the cybercriminal and learned that so far the botnet has not been used for large-scale DDoS attacks.

In addition, in late August and early September, security experts first saw new versions of the Mirai and Gafgyt botnets exploiting vulnerabilities in SonicWall and Apache Struts (in the latter case, the same bug linked to the massive data breach at the credit reference agency Equifax).

Meanwhile, the three authors of the original Mirai, which they had released publicly, were finally sentenced. A federal court in Alaska ordered Paras Jha, Josiah White and Dalton Norman to pay substantial compensation and perform 2,500 hours of community service. By all appearances they will be working on behalf of the FBI, and the actual leniency of the sentence reflects the fact that the three formally cooperated with federal investigators during the process: according to court documents, the men have already accumulated more than 1,000 hours of community service by lending their expertise to at least a dozen investigations.

In addition, British police arrested one of the intruders behind the DDoS attacks on ProtonMail mentioned in our previous report. The 19-year-old novice hacker turned out to be a British citizen who was also involved in hoax bomb threats against schools, colleges and an airline. His parents insist he was "groomed" online by "serious people" through playing Minecraft. The story is unlikely to be the work of a young prodigy alone, and he does face possible extradition to the United States: according to the investigation, he was exposed mainly because he did not practice very good operational security.

Quarterly trends

Compared with Q3 last year, the number of DDoS attacks rose slightly, thanks to September, while over the summer months and the year as a whole the number of attacks fell noticeably.

[Chart: quarterly number of DDoS attacks defeated by Kaspersky DDoS Protection, 2017-2018 (2017 figures taken as 100%)]

The chart above shows that the slight year-on-year increase is attributable to September, which accounted for the largest share of attacks (about five times the 2017 figure). July and August, by contrast, were quieter than last year. No such disproportion was observed in 2017.

[Chart: DDoS attacks repelled by Kaspersky DDoS Protection in September, as a proportion of Q3 totals for 2017 and 2018]

September DDoS spikes are quite common: year after year the main target is the education system, with attacks on the web resources of schools, universities and testing centres. The attack on a top British institution, the University of Edinburgh, which began on September 12 and lasted nearly 24 hours, made the biggest headlines this year.

Such incidents are often blamed on nation-state adversaries, but statistically those accusations are unfounded. In the course of our own investigations we found that attacks mostly occur during term time and subside during holidays. The British non-profit Jisc reached almost the same conclusion: collecting statistics on attacks against universities, it found that students are attacked less while on holiday. The same applies to out-of-class hours: most DDoS interference hits schools between 9:00 and 16:00.

Of course, this could mean that the perpetrators simply synchronize their actions with the university timetable... but the simpler the explanation, the more likely it is: the attacks are probably devised by young people who have some "good" reason to annoy their teachers, other students, or school in general. Consistent with this hypothesis, our experts have found traces of DDoS attack preparation on social networks, while our British colleagues came across a rather amusing case: an attack on a dormitory server launched by a student trying to defeat his online gaming opponents.

By all accounts, these periodic outbreaks will recur in the future, either until all educational institutions put persistent defences in place, or until all students and their teachers gain a new appreciation of DDoS attacks and their consequences. It should be noted, however, that while most attacks are organized by students, that does not mean there are no "serious" ones.

For example, the DDoS campaign launched in September against the US vendor Infinite Campus, which provides the parent portal for many district schools, was so powerful and protracted that it attracted the attention of the US Department of Homeland Security; it is hard to explain as the effort of schoolchildren.

In any case, while the September upturn is most likely connected with the start of the new school year, the overall decline is a little harder to explain. Our experts believe that most botnet owners have reconfigured their capacity towards a more profitable and relatively safer source of income: cryptocurrency mining.

DDoS attacks have become much cheaper lately, but only for the customer. For the organizers, costs remain high: at minimum, processing power has to be bought (sometimes even data centres equipped), a Trojan written or an existing one modified (for example the popular Mirai), the Trojan used to assemble a botnet, a customer found, the attack launched, and so on. Not to mention that all of this is illegal and law enforcement can step in at any point: the takedown of Webstresser.org, followed by a series of arrests, is a good example.

Cryptocurrency mining, on the other hand, is now almost legal: the only illegal aspect is using someone else's hardware. With certain arrangements in place so that the mining is not too obvious to the hardware's owner, there is little chance of having to deal with the cyber-police. Cybercriminals can also repurpose hardware they already control for mining and thereby escape law enforcement's attention entirely. For example, there have been recent reports of a new botnet of MikroTik routers that was created from the start as a cryptocurrency mining tool. There is also indirect evidence that the owners of many botnets with well-earned reputations have now reconfigured them for mining: DDoS activity by the successful botnet yoyo has dropped to very low levels, even though there is no information about it being dismantled.

Logic has a formula for this: correlation does not imply causation. In other words, if two variables change in a similar way, the changes do not necessarily have anything in common. So although it seems logical to link the growth of cryptocurrency mining with this year's slack in DDoS attacks, this cannot be stated as fact; it is, rather, a working hypothesis.

Kaspersky Lab has a long history of combating cyberthreats, including DDoS attacks of all types and complexity. The company's experts monitor botnets using the Kaspersky DDoS Intelligence system.

As part of Kaspersky DDoS Protection, the DDoS Intelligence system intercepts and analyses the commands that bots receive from their command-and-control servers. Protection does not have to wait until user devices are infected or until the attackers' commands are executed.

This report contains DDoS Intelligence statistics for Q3 2018.
For the purposes of this report, an individual (single) DDoS attack is one in which the pauses between the botnet's busy periods do not exceed 24 hours. For example, if the same resource is attacked by the same botnet a second time after a pause of 24 hours or more, two attacks are recorded. Attacks are also counted separately if the same resource is queried by bots belonging to different botnets.

The geographic locations of DDoS victims and of command servers are registered by their IP addresses. The report counts the number of unique DDoS targets by the number of unique IP addresses in the quarterly statistics.

DDoS Intelligence statistics are limited to the botnets that Kaspersky Lab has detected and analysed to date. It should also be remembered that botnets are only one of the tools used for DDoS attacks, and this section does not cover every DDoS attack in the given period.

Quarterly summary

As before, China tops the list by number of attacks (78%), the US has regained second place (12.57%), and Australia is third (2.27%), higher than ever before. Although the entry threshold was much lower, South Korea dropped out of the top 10 for the first time.

A similar trend appeared in the distribution of unique targets: South Korea fell to the bottom of the ranking, while Australia climbed to third place.

By volume, botnet-based DDoS attacks peaked in August; the quietest day was observed in early July.

The number of sustained attacks fell; however, short attacks lasting less than 4 hours grew by 17.5 percentage points (to 86.94%). The number of unique targets rose by 63%.

The share of Linux botnets grew only slightly compared with the previous quarter. The distribution of DDoS attacks by type did not change much: SYN flood remains in first place (83.2%).

Over the past quarter, the list of countries hosting the most command servers changed dramatically. Countries such as Greece and Canada, which previously were not in the top ten, are now near the top of the list.

Attack geography

China still holds the top spot, its share soaring from 59.03% to 77.67%. The US regained second place, although it grew by only 0.11 percentage points, to 12.57%. And that is where the surprises begin.

First, South Korea fell out of the top ten for the first time since monitoring began: its share dropped from 3.21% last quarter to 0.30%, sliding from fourth place to eleventh. Meanwhile, Australia climbed from sixth to third place and now accounts for 2.27% of all outgoing DDoS attacks, confirming the upward trend on that continent seen over recent quarters. Hong Kong slipped from second place to fourth: its share fell from 17.13% to 1.72%.

Besides South Korea, Malaysia also left the top ten; the two were replaced by Singapore (0.44%) and Russia (0.37%), in seventh and tenth place respectively. Their shares grew little since Q2, but thanks to China's leap the entry threshold became less demanding. France illustrates this: in Q2 France was tenth with 0.43% of all DDoS attacks; this quarter its share fell to 0.39%, yet the country now ranks eighth.

Likewise, the combined share of all countries outside the top 10 fell from 3.56% to 2.83%.

[Chart: DDoS attacks by country, Q2 and Q3 2018]

A similar process took place in the ranking of unique targets by country: China's share grew by 18 percentage points to 70.58%. The top five positions by number of targets look much the same as by number of attacks, but the rest of the top ten differs a little: South Korea is still there, though its share shrank considerably (from 4.76% to 0.39%). The table also lost Malaysia and Vietnam, replaced by Russia (0.46%, eighth place) and Germany (0.38%, tenth place).

[Chart: unique DDoS targets by country, Q2 and Q3 2018]

Dynamics of the number of DDoS attacks

The beginning and end of the quarter were not especially busy, but August and early September were characterized by a jagged graph with many peaks and troughs. The biggest peaks fell on August 7 and 20, indirectly correlated with the dates when universities collect applicants' papers and publish admission scores. July 2 turned out to be the quietest day. Although not very busy, the end of the quarter still saw more attacks than the beginning.

[Chart: dynamics of the number of DDoS attacks, Q3 2018]

The distribution of attacks across the days of the week was fairly even this quarter. Saturday is now the most "dangerous" day of the week (15.58%), taking the palm from Tuesday (13.70%). Tuesday is now second from the bottom by number of attacks, just ahead of Wednesday, currently the quietest day of the week (12.23%).

[Chart: DDoS attacks by day of the week, Q2 and Q3 2018]

Duration and types of DDoS attacks

The longest attack in Q3 lasted 239 hours, just short of 10 days. As a reminder, the longest attack of the previous quarter ran for almost 11 days (258 hours).

The share of massive, long-lasting attacks fell sharply. This applies not only to the "champions" lasting more than 140 hours, but to all other categories down to 5 hours. The most noticeable drop was in the 5-9 hour category: these attacks fell from 14.01% to 5.49%.

Short attacks of less than 4 hours, however, grew by almost 17.5 percentage points to 86.94%. At the same time, the number of targets rose by 63% compared with the previous quarter.

[Chart: DDoS attacks by duration in hours, Q2 and Q3 2018]

The distribution by attack type is almost the same as last quarter. SYN flood keeps first place, its share even growing to 83.2% (80.2% in Q2, 57.3% in Q1). UDP flooding is second, also up slightly to 11.9% (10.6% last quarter). The other attack types lost a few percentage points but did not change in relative order: HTTP is still third, with TCP and ICMP fourth and fifth.

[Chart: DDoS attacks by type, Q2 and Q3 2018]

The ratio of Windows to Linux botnets is roughly the same as last quarter: Windows botnets rose (and Linux fell) by 1.4 percentage points, which correlates with the changes in attack types.

[Chart: Windows vs. Linux botnets, Q3 2018]

Botnet distribution geography

There was some reshuffling in the list of the top ten regions hosting the most botnet command servers. The US stays first, but its share fell from 44.75% last quarter to 37.31%. Russia's share rose from 2.76% to 8.96%, lifting it to second place. Greece came third, with 8.21% of command servers, up from 0.55% and outside the top ten last quarter.

China, with just 5.22%, is only fifth, beaten by Canada with 6.72% (several times its Q2 figure).

Meanwhile, the combined share of countries outside the top ten rose significantly, by almost 5 percentage points, to 16.42%.

[Chart: botnet command servers by country, Q3 2018]

Conclusion

The past three months saw no major high-profile attacks. Against the summer slowdown, the September attacks on schools stood out particularly; they have become part of a cyclical trend that Kaspersky Lab has observed for years.

Another notable development is the decline in long attacks together with the growing number of unique targets: botnet owners may be replacing large-scale attacks with small ones (sometimes called "crawling" attacks in the English-language press) that are often indistinguishable from "network noise". We have seen the prelude to this paradigm shift over the past few quarters.

As for the number of botnet C&Cs, the top-ten line-up has been abruptly reshuffled for the second quarter in a row. Attackers may be trying to expand into new regions or to arrange geographic redundancy for their resources. The reasons may be both economic (electricity prices, the robustness of the business against unforeseen events) and legal, in the form of anti-cybercrime actions.

The statistics of the past two quarters convince us that some transformation is currently under way in the DDoS community, which may seriously reconfigure cybercriminal activity in this field in the near future.

*Author: bingbingxiaohu. When reposting, please credit FreeBuf.COM
          Victor De Paz y su Conjunto del 900 - Recordings made on the Philips label (1959-1960)      Cache   Translate Page      

Genre: Tango, tango ensembles
MP3-CBR 192 kbps-Lame 3.99.5-Total Size: 76.6 Mb/Total time: 56:00

The file names and tags show: composers, genre, and approximate recording date
Courtesy of Eduardo Sibilin and Daniel Cavo
Tracks
1 El apache Argentino
2 Armenoville
3 Tinta verde
4 Ensueño
5 Marcando el paso
6 Copos de espuma
7 Pelele
8 Pabellon de las rosas
9 A su memoria
10 La cumparsita
11 Barriada se San Telmo
12 La chiflada
13 Bataraz
14 La última cita
15 El escoberito
16 La pulpera de Santa Lucia
17 Una lagrima
18 La briosa
19 Don Goyo
20 El flete
21 Noche Calurosa
22 Aprontes de bacana
23 Cordón de oro
24 Pura uva
Download

          Instrumental Tangos of the Old Guard - Tangos Instrumentales de la Guardia Vieja      Cache   Translate Page      


Genre: Tango, tango orchestras, Old Guard, instrumentals
MP3-CBR 320 kbps-Lame 3.98.4-Total Size: 134 Mb/Total time: 69:00

When the tracks are played, the tags show: composers, recording date, and genre
This CD comes to us courtesy of Cecilia from Argentina
Biography of Eduardo Arolas
Biography of Juan Maglio "Pacho"
Biography of Alberto Alonso
Biography of Enrique Di Cico (Minotto)
Biography of Augusto Pedro Berto

Tracks
01 Orquesta Tipica Eduardo Arolas********El gaucho Nestor
02 Orquesta Tipica Eduardo Arolas********Yo la sabia
03 Orquesta Tipica Eduardo Arolas********Adios Buenos Aires
04 Orquesta Tipica Eduardo Arolas********El gitanillo
05 Orquesta Tipica Eduardo Arolas********Cosa papa
06 Orquesta Tipica Augusto Pedro Berto********El Canchero
07 Orquesta Tipica Juan Felix Maglio (Pacho)********Risas Amargas
08 Orquesta Tipica Juan Felix Maglio (Pacho)********La Paisana
09 Orquesta Tipica Juan Felix Maglio (Pacho)********Siempre tuyo
10 Orquesta Tipica Augusto Pedro Berto********Cinco a dos
11 Orquesta Tipica Augusto Pedro Berto********Clarita
12 Orquesta Tipica Alonso- Minotto********Homero
13 Orquesta Tipica Augusto Pedro Berto********Recondita
14 Orquesta Tipica Juan Felix Maglio (Pacho)********Jeanne
15 Orquesta Tipica Juan Felix Maglio (Pacho)********El apache argentino
16 Quinteto Criollo (Berto)********Amores privados
17 Orquesta Tipica Juan Felix Maglio (Pacho)********Adelita
18 Orquesta Tipica Juan Felix Maglio (Pacho)********Emancipacion
19 Orquesta Tipica Juan Felix Maglio (Pacho)********Un copetin
20 Orquesta Tipica Juan Felix Maglio (Pacho)********El manton de manila
21 Orquesta Tipica Juan Felix Maglio (Pacho)********Jeannette
22 Orquesta Tipica Juan Felix Maglio (Pacho)********El tejado
Download

          Bypassing SSL certificate validation and handling NTLM authentication with HttpClient      Cache   Translate Page      

This article only covers situations I ran into while using HttpClient at work; it is not a detailed guide to using HttpClient.


Bypassing SSL certificate validation and handling NTLM authentication with HttpClient
1. Why use HttpClient?

At first I actually considered using RestTemplate, but the hard parts were, naturally, SSL certificate validation and NTLM authentication. RestTemplate as it stands cannot do NTLM authentication, and its SSL setup is also fairly involved. The complicated part: it still ends up relying on HttpClient.

@Bean
public RestTemplate buildRestTemplate(List<CustomHttpRequestInterceptor> interceptors)
        throws KeyStoreException, NoSuchAlgorithmException, KeyManagementException {
    HttpComponentsClientHttpRequestFactory factory = new HttpComponentsClientHttpRequestFactory();
    factory.setConnectionRequestTimeout(requestTimeout);
    factory.setConnectTimeout(connectTimeout);
    factory.setReadTimeout(readTimeout);
    // https
    SSLContextBuilder builder = new SSLContextBuilder();
    builder.loadTrustMaterial(null, (X509Certificate[] x509Certificates, String s) -> true);
    SSLConnectionSocketFactory socketFactory = new SSLConnectionSocketFactory(builder.build(),
            new String[]{"SSLv2Hello", "SSLv3", "TLSv1", "TLSv1.2"}, null, NoopHostnameVerifier.INSTANCE);
    Registry<ConnectionSocketFactory> registry = RegistryBuilder.<ConnectionSocketFactory>create()
            .register("http", new PlainConnectionSocketFactory())
            .register("https", socketFactory).build();
    PoolingHttpClientConnectionManager phccm = new PoolingHttpClientConnectionManager(registry);
    phccm.setMaxTotal(200);
    CloseableHttpClient httpClient = HttpClients.custom().setSSLSocketFactory(socketFactory)
            .setConnectionManager(phccm).setConnectionManagerShared(true).build();
    factory.setHttpClient(httpClient);
    RestTemplate restTemplate = new RestTemplate(factory);
    List<ClientHttpRequestInterceptor> clientInterceptorList = new ArrayList<>();
    for (CustomHttpRequestInterceptor i : interceptors) {
        ClientHttpRequestInterceptor interceptor = i;
        clientInterceptorList.add(interceptor);
    }
    restTemplate.setInterceptors(clientInterceptorList);
    return restTemplate;
}

2. Why bypass SSL certificate validation?

As for why I bypass SSL validation: I don't actually know how to install the certificates, and I also wanted to see whether the interface could be called while simply ignoring certificate validation.

To bypass the certificate checks, you must first create an X509TrustManager object and override its methods.

The X509TrustManager interface is a certificate trust manager for HTTPS. We can add our certificates to it so that the manager knows which certificates can be trusted.

The interface has three methods:

void checkClientTrusted(X509Certificate[] xcs, String str)
void checkServerTrusted(X509Certificate[] xcs, String str)
X509Certificate[] getAcceptedIssuers()

The first method, checkClientTrusted, checks the client's certificate and throws an exception if the certificate is not trusted. Since we do not need to authenticate the client, we only need to run the default trust manager's version of this method. In JSSE, the default trust manager class is TrustManager.

The second method, checkServerTrusted, checks the server's certificate and likewise throws an exception if the certificate is not trusted. By implementing this method ourselves, we can make it trust any certificate we choose. The implementation can also simply do nothing, i.e. an empty method body; since it never throws, it will trust any certificate.

The third method, getAcceptedIssuers, returns the array of trusted X509 certificates.

We just need to override these three methods without adding any real logic, then hand the result over to HttpClient, and the SSL validation is bypassed.

X509TrustManager trustManager = new X509TrustManager() {
    @Override
    public X509Certificate[] getAcceptedIssuers() {
        return null;
    }

    @Override
    public void checkClientTrusted(X509Certificate[] xcs, String str) {
    }

    @Override
    public void checkServerTrusted(X509Certificate[] xcs, String str) {
    }
};

SSLContext ctx = SSLContext.getInstance(SSLConnectionSocketFactory.SSL);
ctx.init(null, new TrustManager[]{trustManager}, null);
// Build the socket factory
SSLConnectionSocketFactory socketFactory = new SSLConnectionSocketFactory(ctx, NoopHostnameVerifier.INSTANCE);
// Register it with HttpClient
Registry<ConnectionSocketFactory> socketFactoryRegistry = RegistryBuilder.<ConnectionSocketFactory>create()
        .register("http", PlainConnectionSocketFactory.INSTANCE)
        .register("https", socketFactory).build();
PoolingHttpClientConnectionManager connectionManager = new PoolingHttpClientConnectionManager(socketFactoryRegistry);
HttpClientBuilder httpClientBuilder = HttpClients.custom().setConnectionManager(connectionManager);
CloseableHttpClient httpClient = httpClientBuilder.build();

To recap the steps:

- Create an X509TrustManager object and override its methods.
- Create an SSLContext instance and hand it to the socket factory.
- Register the factory with HttpClient; the httpClient is finally built through the ConnectionManager.

3. What is NTLM?

NTLM is short for NT LAN Manager, which also tells you where the protocol comes from. NTLM was the standard security protocol of early Windows NT versions, and Windows 2000 supports NTLM for backward compatibility. It is one of the three basic security protocols built into Windows 2000.

How NTLM works

A description of how NTLM works

Honestly, my understanding of this is not very deep; this situation does not seem to come up often, so there are not many resources online either. Here I only describe in detail the case where HttpClient has to deal with NTLM authentication. Interested readers can learn more through the links above.

4. How to do NTLM authentication with HttpClient?

I checked the official documentation, and it gives a solution.

hc.apache.org/httpcompone…

The following classes need to be written.

JCIFSEngine:

public final class JCIFSEngine implements NTLMEngine {

    private static final int TYPE_1_FLAGS =
            NtlmFlags.NTLMSSP_NEGOTIATE_56 |
            NtlmFlags.NTLMSSP_NEGOTIATE_128 |
            NtlmFlags.NTLMSSP_NEGOTIATE_NTLM2 |
            NtlmFlags.NTLMSSP_NEGOTIATE_ALWAYS_SIGN |
            NtlmFlags.NTLMSSP_REQUEST_TARGET;

    @Override
    public String generateType1Msg(final String domain, final String workstation) throws NTLMEngineException {
        final Type1Message type1Message = new Type1Message(TYPE_1_FLAGS, domain, workstation);
        return Base64.encode(type1Message.toByteArray());
    }

    @Override
    public String generateType3Msg(final String username, final String password, final String domain,
            final String workstation, final String challenge) throws NTLMEngineException {
        Type2Message type2Message;
        try {
            type2Message = new Type2Message(Base64.decode(challenge));
        } catch (final IOException exception) {
            throw new NTLMEngineException("Invalid NTLM type 2 message", exception);
        }
        final int type2Flags = type2Message.getFlags();
        final int type3Flags = type2Flags
                & (0xffffffff ^ (NtlmFlags.NTLMSSP_TARGET_TYPE_DOMAIN | NtlmFlags.NTLMSSP_TARGET_TYPE_SERVER));
        final Type3Message type3Message = new Type3Message(type2Message, password, domain, username, workstation, type3Flags);
        return Base64.encode(type3Message.toByteArray());
    }
}

JCIFSNTLMSchemeFactory:

public class JCIFSNTLMSchemeFactory implements AuthSchemeProvider {
    public AuthScheme create(final HttpContext context) {
        return new NTLMScheme(new JCIFSEngine());
    }
}

Finally, register it with HttpClient:

Registry<AuthSchemeProvider> authSchemeRegistry = RegistryBuilder.<AuthSchemeProvider>create()
        .register(AuthSchemes.NTLM, new JCIFSNTLMSchemeFactory())
        .register(AuthSchemes.BASIC, new BasicSchemeFactory())
        .register(AuthSchemes.DIGEST, new DigestSchemeFactory())
        .register(AuthSchemes.SPNEGO, new SPNegoSchemeFactory())
        .register(AuthSchemes.KERBEROS, new KerberosSchemeFactory())
        .build();
CloseableHttpClient httpClient = HttpClients.custom()
        .setDefaultAuthSchemeRegistry(authSchemeRegistry)
        .build();

Finally, here is the SSL bypass and NTLM authentication used together:

private static PoolingHttpClientConnectionManager connectionManager;
private static RequestConfig requestConfig;
private static Registry<AuthSchemeProvider> authSchemeRegistry;
private static Registry<ConnectionSocketFactory> socketFactoryRegistry;
private static CredentialsProvider credsProvider;

public void init() {
    try {
        X509TrustManager trustManager = new X509TrustManager() {
            @Override
            public X509Certificate[] getAcceptedIssuers() {
                return null;
            }

            @Override
            public void checkClientTrusted(X509Certificate[] xcs, String str) {
            }

            @Override
            public void checkServerTrusted(X509Certificate[] xcs, String str) {
            }
        };
        SSLContext ctx = SSLContext.getInstance(SSLConnectionSocketFactory.SSL);
        ctx.init(null, new TrustManager[]{trustManager}, null);
        SSLConnectionSocketFactory socketFactory = new SSLConnectionSocketFactory(ctx, NoopHostnameVerifier.INSTANCE);

        NTCredentials creds = new NTCredentials("username", "password", "workstation", "domain");
        credsProvider = new BasicCredentialsProvider();
        credsProvider.setCredentials(AuthScope.ANY, creds);

        socketFactoryRegistry = RegistryBuilder.<ConnectionSocketFactory>create()
                .register("http", PlainConnectionSocketFactory.INSTANCE)
                .register("https", socketFactory).build();
        connectionManager = new PoolingHttpClientConnectionManager(socketFactoryRegistry);
        connectionManager.setMaxTotal(18);
        connectionManager.setDefaultMaxPerRoute(6);

        requestConfig = RequestConfig.custom()
                .setSocketTimeout(30000)
                .setConnectTimeout(30000)
                .build();

        authSchemeRegistry = RegistryBuilder.<AuthSchemeProvider>create()
                .register(AuthSchemes.NTLM, new JCIFSNTLMSchemeFactory())
                .register(AuthSchemes.BASIC, new BasicSchemeFactory())
                .register(AuthSchemes.DIGEST, new DigestSchemeFactory())
                .register(AuthSchemes.SPNEGO, new SPNegoSchemeFactory())
                .register(AuthSchemes.KERBEROS, new KerberosSchemeFactory())
                .build();
    } catch (Exception e) {
        e.printStackTrace();
    }
}
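The article stops at initializing the building blocks. A sketch of how they might be wired together and used, under my own assumptions (the target URL is hypothetical and the exact wiring is not shown in the original post):

CloseableHttpClient client = HttpClients.custom()
        .setDefaultAuthSchemeRegistry(authSchemeRegistry)
        .setDefaultCredentialsProvider(credsProvider)
        .setConnectionManager(connectionManager)
        .setDefaultRequestConfig(requestConfig)
        .build();

HttpGet get = new HttpGet("https://internal.example.com/api/resource"); // hypothetical NTLM-protected endpoint
try (CloseableHttpResponse response = client.execute(get)) {
    System.out.println(response.getStatusLine());
}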
          Stop us if you've heard this one: Remote code hijacking flaw in Apache Struts, patch ASAP - The Register      Cache   Translate Page      

The Register

Stop us if you've heard this one: Remote code hijacking flaw in Apache Struts, patch ASAP
The Register
The Apache Foundation is urging developers to update their Struts 2 installations and projects using the code – after a critical security flaw was found in a key component of the framework. A warning this week from Apache reveals that devs should make ...


          Alchemia smaków - Apple cinnamon      Cache   Translate Page      


Ingredients: black tea, dried apples, cinnamon, apple-cinnamon flavouring.

Loose-leaf black tea with dried apple and pieces of crushed cinnamon, with an apple-cinnamon aroma. It pairs perfectly with gingerbread or apple cake. Some teas captivate with their appearance, others delight with their taste; few are both charming and delicious. This one undoubtedly belongs to that group. At first it enchants with an exceptionally sweet cinnamon scent, and a moment later it pleases the eye, revealing black tea leaves wrapped in yellow sunflower petals with pieces of apple between them. The tea tastes delicious, and served with apple cake, szarlotka, or gingerbread it makes a wonderfully rounded composition that delights all the senses.

It is a kind of apple-cinnamon infusion with cinnamon as the dominant flavour. You do not taste the tart apple-pie note found in Teapigs, nor much of the black tea. Its flavour resembles a velvety, mild masala chai. It lacks a little fruitiness. It is very sweet!


          Apache Struts Patches Remote Code Execution Vulnerability in FileUpload Library (CVE-2016-1000031)      Cache   Translate Page      

Apache Software Foundation announces a security update for Apache Struts to address a vulnerability in the Commons FileUpload library that could lead to remote code execution. We recommend updating now.

Background

On November 5, the Apache Software Foundation (ASF) published a security announcement to Apache Struts project administrators about CVE-2016-1000031, a vulnerability in the Commons FileUpload library originally reported by Tenable’s Research team in 2016. This library ships as part of Apache Struts 2 and is used as the default mechanism for file uploads. The ASF reports that Apache Struts 2.3.36 and prior are vulnerable. A remote attacker could use this vulnerability to gain remote code execution on publicly accessible websites running a vulnerable version of Apache Struts.

Vulnerability details

For details about this vulnerability, please review the Tenable Research Advisory for the Apache Commons FileUpload DiskFileItem File Manipulation Remote Code Execution (LOBSTER).

Urgently required actions

The ASF confirms that Apache Struts version 2.5.12 and above include the patched version of the commons-fileupload library, version 1.3.3. If possible, Apache Struts project administrators should upgrade to 2.5.12 and above. The ASF also notes that the patched version of the commons-fileupload library can be dropped into projects that have already been deployed by simply replacing the JAR file in the WEB-INF/lib path with the fixed version. Maven based Struts projects can address this vulnerability by adding in the following dependency:

<dependency>
  <groupId>commons-fileupload</groupId>
  <artifactId>commons-fileupload</artifactId>
  <version>1.3.3</version>
</dependency>

Identifying affected systems

A list of Nessus plugins to identify this vulnerability can be found here.

Get more information:

Learn more about Tenable.io, the first Cyber Exposure platform for holistic management of your modern attack surface. Get a free 60-day trial of Tenable.io Vulnerability Management.


          GSA accounts: RAD Studio, Delphi, C++Builder 10.2.3 Inline ISO      Cache   Translate Page      
This ISO is for GSA accounts only. If you are not a GSA customer, please find your ISO at http://cc.embarcadero.com/item/30842.

ISO for RAD Studio, Delphi and C++Builder 10.2 Release 3 Inline

RAD Studio 10.2.3 Tokyo, build 3231 is available for installation. It is an update of Delphi 10.2 Tokyo, C++Builder 10.2 Tokyo, and RAD Studio 10.2 Tokyo available for any active Update Subscription customer. If you have already installed 10.2.3 released in March 2018, installing build 3231 will require a full uninstall and reinstall.

This release includes the following 10.2 Tokyo patches and hotfixes:

  • RAD Server 10.2.3 Performance Patch (July 13th, 2018)
  • iOS 11.3 and CodeInsight Patch (June 26th, 2018)
  • Delphi - RAD Server Linux Apache Patch (May 17th, 2018)
  • iOS 11.3 Patch (May 8th, 2018)
  • C++Builder - C++ Compiler 4k Stack Allocation Patch (April 17th, 2018)
  • Context Help Patch (April 9th, 2018)
  • EMS Package Wizard Patch (March 27th, 2018)
  • Android Push Notification Patch (March 27th, 2018)

Note: If you have already installed the patches listed above on top of your existing RAD Studio 10.2.3 Tokyo installation (build 2631), you do not need to install the updated 10.2.3 build (build 3231).

See the release notes (http://docwiki.embarcadero.com/RADStudio/Tokyo/en/10.2_Tokyo_-_Release_3) for more details.

English, French, German and Japanese

A Double Layer (dual layer) high capacity DVD is required for burning a physical disc.

Note: Installing RAD Studio 10.2.3 using this ISO installation requires that you first uninstall your existing RAD Studio 10.2 Tokyo installation. To perform an uninstall, navigate to Add or Remove Programs in your Windows Control Panel. As part of the uninstall process, you will be able to preserve your configuration settings. Those settings can be imported as part of the new 10.2.3 installation.

MD5: ECE3015BEDBC4950C69D6AC5EDA6B9B0
          GSA accounts: RAD Studio, Delphi, C++Builder 10.2.3 Inline ISO      Cache   Translate Page      
This ISO is for GSA accounts only. If you are not a GSA customer, please find your ISO at http://cc.embarcadero.com/item/30840.

ISO for RAD Studio, Delphi and C++Builder 10.2 Release 3 Inline

RAD Studio 10.2.3 Tokyo, build 3231 is available for installation. It is an update of Delphi 10.2 Tokyo, C++Builder 10.2 Tokyo, and RAD Studio 10.2 Tokyo available for any active Update Subscription customer. If you have already installed 10.2.3 released in March 2018, installing build 3231 will require a full uninstall and reinstall.

This release includes the following 10.2 Tokyo patches and hotfixes:

  • RAD Server 10.2.3 Performance Patch (July 13th, 2018)
  • iOS 11.3 and CodeInsight Patch (June 26th, 2018)
  • Delphi - RAD Server Linux Apache Patch (May 17th, 2018)
  • iOS 11.3 Patch (May 8th, 2018)
  • C++Builder - C++ Compiler 4k Stack Allocation Patch (April 17th, 2018)
  • Context Help Patch (April 9th, 2018)
  • EMS Package Wizard Patch (March 27th, 2018)
  • Android Push Notification Patch (March 27th, 2018)

Note: If you have already installed the patches listed above on top of your existing RAD Studio 10.2.3 Tokyo installation (build 2631), you do not need to install the updated 10.2.3 build (build 3231).

See the release notes (http://docwiki.embarcadero.com/RADStudio/Tokyo/en/10.2_Tokyo_-_Release_3) for more details.

If this is your first time installing 10.2 and you ran into problems, you can also get the ISO at http://cc.embarcadero.com/item/30844.

English, French, German and Japanese

A Double Layer (dual layer) high capacity DVD is required for burning a physical disc.

Note: Installing RAD Studio 10.2.3 using this ISO installation requires that you first uninstall your existing RAD Studio 10.2 Tokyo installation. To perform an uninstall, navigate to Add or Remove Programs in your Windows Control Panel. As part of the uninstall process, you will be able to preserve your configuration settings. Those settings can be imported as part of the new 10.2.3 installation.

MD5: ECE3015BEDBC4950C69D6AC5EDA6B9B0
          RAD Studio, Delphi, C++Builder 10.2 Release 3 Inline ISO      Cache   Translate Page      
ISO for RAD Studio, Delphi, C++Builder 10.2 Release 3 Inline

RAD Studio 10.2.3 Tokyo, build 3231 is available for installation. It is an update of Delphi 10.2 Tokyo, C++Builder 10.2 Tokyo, and RAD Studio 10.2 Tokyo available for any active Update Subscription customer. If you have already installed 10.2.3 released in March 2018, installing build 3231 will require a full uninstall and reinstall.

This release includes the following 10.2 Tokyo patches and hotfixes:

  • RAD Server 10.2.3 Performance Patch (July 13th, 2018)
  • iOS 11.3 and CodeInsight Patch (June 26th, 2018)
  • Delphi - RAD Server Linux Apache Patch (May 17th, 2018)
  • iOS 11.3 Patch (May 8th, 2018)
  • C++Builder - C++ Compiler 4k Stack Allocation Patch (April 17th, 2018)
  • Context Help Patch (April 9th, 2018)
  • EMS Package Wizard Patch (March 27th, 2018)
  • Android Push Notification Patch (March 27th, 2018)

Note: If you have already installed the patches listed above on top of your existing RAD Studio 10.2.3 Tokyo installation (build 2631), you do not need to install the updated 10.2.3 build (build 3231).

A Double Layer (dual layer) high capacity DVD is required for burning a physical disc.

Available only to registered users of Delphi, C++Builder, RAD Studio 10.2, and Embarcadero All-Access XE

English, French, German and Japanese

Note: Installing RAD Studio 10.2.3 using this ISO installation requires that you first uninstall your existing RAD Studio 10.2 Tokyo installation. To perform an uninstall, navigate to Add or Remove Programs in your Windows Control Panel. As part of the uninstall process, you will be able to preserve your configuration settings. Those settings can be imported as part of the new 10.2.3 installation.

MD5: 40D693B9989F7CCDF07C07EA676D1AB2
          RAD Studio, Delphi, C++Builder 10.2 Release 3 Inline ISO      Cache   Translate Page      
ISO for RAD Studio, Delphi, C++Builder 10.2 Release 3 Inline

RAD Studio 10.2.3 Tokyo, build 3231 is available for installation. It is an update of Delphi 10.2 Tokyo, C++Builder 10.2 Tokyo, and RAD Studio 10.2 Tokyo available for any active Update Subscription customer. If you have already installed 10.2.3 released in March 2018, installing build 3231 will require a full uninstall and reinstall.

This release includes the following 10.2 Tokyo patches and hotfixes:

  • RAD Server 10.2.3 Performance Patch (July 13th, 2018)
  • iOS 11.3 and CodeInsight Patch (June 26th, 2018)
  • Delphi - RAD Server Linux Apache Patch (May 17th, 2018)
  • iOS 11.3 Patch (May 8th, 2018)
  • C++Builder - C++ Compiler 4k Stack Allocation Patch (April 17th, 2018)
  • Context Help Patch (April 9th, 2018)
  • EMS Package Wizard Patch (March 27th, 2018)
  • Android Push Notification Patch (March 27th, 2018)

Note: If you have already installed the patches listed above on top of your existing RAD Studio 10.2.3 Tokyo installation (build 2631), you do not need to install the updated 10.2.3 build (build 3231).

See the release notes (http://docwiki.embarcadero.com/RADStudio/Tokyo/en/10.2_Tokyo_-_Release_3) for more details.

If this is your first time installing 10.2 and you ran into problems, you can also get the ISO at http://cc.embarcadero.com/item/30842.

A Double Layer (dual layer) high capacity DVD is required for burning a physical disc.

Available only to registered users of Delphi, C++Builder, RAD Studio 10.2, and Embarcadero All-Access XE

English, French, German and Japanese

RAD Studio 10.2 also includes HTML5 Builder. That software is not included on this ISO. The HTML5 Builder .zip that can be burned to disc is available at http://cc.embarcadero.com/item/29545.

Note: Installing RAD Studio 10.2.3 using this ISO installation requires that you first uninstall your existing RAD Studio 10.2 Tokyo installation. To perform an uninstall, navigate to Add or Remove Programs in your Windows Control Panel. As part of the uninstall process, you will be able to preserve your configuration settings. Those settings can be imported as part of the new 10.2.3 installation.

MD5: 40D693B9989F7CCDF07C07EA676D1AB2
          RAD Studio, Delphi, C++Builder 10.2 Release 3 Inline Web Install      Cache   Translate Page      
Web Installer for RAD Studio, Delphi, C++Builder 10.2 Release 3 Inline

RAD Studio 10.2.3 Tokyo, build 3231 is available for installation. It is an update of Delphi 10.2 Tokyo, C++Builder 10.2 Tokyo, and RAD Studio 10.2 Tokyo available for any active Update Subscription customer. If you have already installed 10.2.3 released in March 2018, installing build 3231 will require a full uninstall and reinstall.

This release includes the following 10.2 Tokyo patches and hotfixes:

  • RAD Server 10.2.3 Performance Patch (July 13th, 2018)
  • iOS 11.3 and CodeInsight Patch (June 26th, 2018)
  • Delphi - RAD Server Linux Apache Patch (May 17th, 2018)
  • iOS 11.3 Patch (May 8th, 2018)
  • C++Builder - C++ Compiler 4k Stack Allocation Patch (April 17th, 2018)
  • Context Help Patch (April 9th, 2018)
  • EMS Package Wizard Patch (March 27th, 2018)
  • Android Push Notification Patch (March 27th, 2018)

Note: If you have already installed the patches listed above on top of your existing RAD Studio 10.2.3 Tokyo installation (build 2631), you do not need to install the updated 10.2.3 build (build 3231).

English, French, German and Japanese

Available only to registered users of RAD Studio, Delphi, C++Builder 10.2 and All-Access

MD5: E10A2E9CCD3AC98EF9869109E3C82329
          Delphi 10.2.3 RAD Server Linux Apache Patch      Cache   Translate Page      
This patch resolves a number of issues pertaining to deploying RAD Server EMS packages on Linux using Delphi 10.2.3 Tokyo. In some circumstances, when deploying an EMS package to Linux, libmod_emsserver would fail to start due to an exception related to the TEndpointContext declaration.

The patch also resolves the following issue reported on Quality Portal:
RSP-17907 (https://quality.embarcadero.com/browse/RSP-17907) - FloatToStr does not work on Linux Apache Module

English, French, German and Japanese

Available only to registered users of RAD Studio, Delphi 10.2 (Enterprise or higher) and All-Access
          Apache Struts 2.3.x vulnerable to two year old RCE flaw      Cache   Translate Page      

The Apache Software Foundation is urging users that run Apache Struts 2.3.x to update the Commons FileUpload library to close a serious vulnerability that could be exploited for remote code execution attacks. The problem: Apache Struts 2 is a widely-used open source web application framework for developing Java EE web applications. The Commons FileUpload library is used to add file upload capabilities to servlets and web applications. The vulnerability (CVE-2016-1000031) is present in Commons FileUpload … More

The post Apache Struts 2.3.x vulnerable to two year old RCE flaw appeared first on Help Net Security.


          SQL Server Database Basics for Beginners      Cache   Translate Page      
Key knowledge points about SQL Server databases

1. Why use a database?

Database technology is one of the core technologies of computer science. Using a database lets you store data efficiently and in an organized way, and lets people manage data more quickly and conveniently. Databases have the following characteristics:

They can store large amounts of data in a structured way, making effective retrieval and access convenient for users

They can effectively maintain data consistency and integrity and reduce data redundancy

They can satisfy application requirements for sharing and security

2. Basic database concepts

(1) What is data?

Data is the symbolic record that describes things; it includes numbers, text, graphics, sound, images, and so on. Data is stored in a database in the form of "records", and data of the same format and type is stored together; in a database, each row of data is one "record".

(2) What are databases and database tables?

Different records organized together form a database "table"; in other words, a table is where data is stored, and a database is a collection of tables.

(3) What is a database management system?

A database management system (DBMS) is the system software that effectively organizes, manages, and accesses database resources. Running on top of the operating system, it supports the user's various operations on the database. A DBMS mainly provides the following functions:

Database creation and maintenance: building the database structure, entering and converting data, dumping and restoring the database, reorganizing the database, and monitoring its performance

Data definition: defining the global data structure, local logical data structures, storage structures, security schemes, and data formats, to guarantee that the data stored in the database is correct, valid, and compatible, and to prevent semantically invalid data from being entered or output

Data manipulation: both data query/statistics and data updates

Database operation management: the core part of a DBMS, including concurrency control, access control, and internal database maintenance

Communication: communication between the DBMS and other software

(4) What is a database system?

A database system is a human-machine system composed of hardware, the operating system, the database, the DBMS, application software, and database users.

(5) Database administrator (DBA)

Generally responsible for database updates and backups, maintenance of the database system, and user management, ensuring that the database system runs normally.

3. How databases evolved

Early stage - first-generation databases: in this period IBM developed IMS, a database management system based on the hierarchical model

Middle stage - the emergence of relational databases: DB2 appeared and the SQL language was created

Advanced stage - advanced databases: various new kinds of databases appeared, such as engineering databases, multimedia databases, graph databases, and intelligent databases

4. The three database models

Network model: many-to-many and many-to-one data relationships; relatively complex

Hierarchical model: similar to the superior-subordinate relationships in a company

Relational model: entities (real-world things such as bank accounts) and the relationships between them

5. Mainstream databases today

SQL Server: Microsoft's database product, runs on Windows.

Oracle: Oracle Corporation's product; the representative large-scale database, supports Linux and Unix.

DB2: Edgar Codd of IBM proposed the relational model theory, and IBM's DB2 appeared 13 years later

MySQL: now owned by Oracle. It runs on Linux and forms the "LAMP" stack together with Apache or Nginx as the web server, MySQL as the back-end database, and PHP/Perl/Python as the scripting language

6. Relational databases

(1) Basic structure

The storage structure used by a relational database is a set of two-dimensional tables; that is, the data describing things and their relationships is represented as flat tables. In each two-dimensional table, every row is called a record and describes one object, and every column is called a field and describes one attribute of the object. Tables are associated with one another, and those associations are used to query related data. A relational database is made up of the associations between its tables. Specifically:

A table is a two-dimensional grid of rows and columns; each table describes the objects and attributes of one particular aspect or part of the database

A row in a table is usually called a record or tuple; it represents one of many objects that share the same attributes

A column in a table is usually called a field or attribute; it represents an attribute shared by the stored objects

(2) Primary keys and foreign keys

Primary key: uniquely identifies a row in the table; one primary key value corresponds to one row. A primary key can consist of one or more fields; its values must be unique and must not be null; each table may have only one primary key.

Foreign key: one or more columns used to establish and enforce a link between the data in two tables. A relational database usually contains multiple tables, and foreign keys are what relate these tables to one another.

(3) Data integrity rules

Entity integrity rule: the tuples of a relation may not contain null in the primary key attributes

Domain integrity rule: specifies whether a data set is valid for a given column and whether null is allowed

Referential integrity rule: if two tables are related, referential integrity does not allow references to tuples that do not exist

User-defined integrity rules

7. SQL Server system databases

master database: records system-level information, including all user information, system configuration, the locations of database files, and information about the other databases. If this database is damaged, the whole server becomes unusable.

model database: the template used when creating new databases

msdb database: used by SQL Server Agent for schedules, alerts, and jobs

tempdb database: where temporary objects are stored

SQL Server database file types

On disk, a database is stored as files, consisting of data files and transaction log files; a database must contain at least one data file and one transaction log file.

A database is created on one or more files in an NTFS or FAT partition of the physical medium (such as a hard disk), pre-allocating the physical storage space that will be used by the data and the transaction log. Files that store data are called data files; they contain data and objects such as tables and indexes. Files that store the transaction log are called transaction log files (log files). Creating a new database only creates an "empty shell"; you must create objects (such as tables) inside this shell before the database can be used.

A SQL Server 2008 database has the following four file types:

Primary data file: contains the database startup information and points to the other files in the database. Every database has exactly one primary data file, whose file extension is .mdf.

Secondary data files: all data files other than the primary data file. Some databases may have no secondary data files, while others have several; the file extension is .ndf.

Transaction log file: holds all the transaction log information needed to recover the database. Every database must have at least one transaction log file, and may have several; the recommended file extension is .ldf.

Filestream: lets SQL Server-based applications store unstructured data, such as documents, pictures, and audio, in the file system. Filestream mainly integrates the SQL Server database engine with the NTFS file system and stores data as the varbinary(max) data type.



          SENIOR SOFTWARE ENGINEER – WEB DEVELOPER - West Virginia Radio Corporation - Morgantown, WV      Cache   Translate Page      
PHP, Apache, MySQL, WordPress, JavaScript, jQuery, JSON, REST, XML, RSS, HTML5, CSS3, Objective-C, Java, HLS Streaming, CDNs, Load Balancing....
From West Virginia Radio Corporation - Tue, 18 Sep 2018 10:09:25 GMT - View all Morgantown, WV jobs
          Request for help disinfecting a Mac      Cache   Translate Page      
Disinfection and analysis requests
Statistics: 6 replies || 43 views. Last post by lenapache
          rpcx      Cache   Translate Page      
Faster multi-language bidirectional RPC framework in Go, like Alibaba Dubbo and Weibo Motan in Java, but with more features; scales easily. 


Cross-Languages

you can use other programming languages besides Go to access rpcx services.
  • rpcx-gateway: You can write clients in any programming languages to call rpcx services via rpcx-gateway
  • http invoke: you can use the same http requests to access rpcx gateway
  • Java Client: You can use rpcx-java to access rpcx servies via raw protocol.
If you can write Go methods, you can also write rpc services. It is so easy to write rpc applications with rpcx.
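For illustration only (this sketch is not part of the original README): a minimal rpcx service written as plain Go methods might look like the following. The Args/Reply types and the listen address are invented for the example; the server calls follow the project's documented API, but check the official rpcx examples for authoritative usage.

package main

import (
	"context"

	"github.com/smallnest/rpcx/server"
)

// Args and Reply are example request/response types for this sketch.
type Args struct {
	A, B int
}

type Reply struct {
	C int
}

// Arith is an ordinary Go type; its exported methods become rpc services.
type Arith struct{}

// Mul follows the rpcx method convention: func(ctx, *args, *reply) error.
func (a *Arith) Mul(ctx context.Context, args *Args, reply *Reply) error {
	reply.C = args.A * args.B
	return nil
}

func main() {
	s := server.NewServer()
	// Register the receiver under the service name "Arith"; the last argument is optional metadata.
	s.RegisterName("Arith", new(Arith), "")
	// Listen on TCP port 8972 and serve requests (blocks).
	s.Serve("tcp", ":8972")
}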

Installation

install the basic features:
go get -u -v github.com/smallnest/rpcx/...
If you want to use reuseport, quic, kcp, or the zookeeper, etcd, consul registries, use those tags with go get, go build or go run. For example, if you want to use all features, you can:
go get -u -v -tags "reuseport quic kcp zookeeper etcd consul ping rudp utp" github.com/smallnest/rpcx/...
tags:
  • quic: support quic transport
  • kcp: support kcp transport
  • zookeeper: support zookeeper register
  • etcd: support etcd register
  • consul: support consul register
  • ping: support network quality load balancing
  • reuseport: support reuseport

Features

rpcx is an RPC framework like Alibaba Dubbo and Weibo Motan.
rpcx 3.0 has been refactored for these targets:
  1. Simple: easy to learn, easy to develop, easy to integrate and easy to deploy
  2. Performance: high performance (>= grpc-go)
  3. Cross-platform: support raw slice of bytes, JSON, Protobuf and MessagePack. Theoretically it can be used with java, php, python, c/c++, node.js, c# and other platforms
  4. Service discovery and service governance: support zookeeper, etcd and consul.
It contains the following features:
  • Support raw Go functions. There's no need to define proto files.
  • Pluggable. Features can be extended such as service discovery, tracing.
  • Support TCP, HTTP, QUIC and KCP
  • Support multiple codecs such as JSON, Protobuf, MessagePack and raw bytes.
  • Service discovery. Support peer2peer, configured peers, zookeeper, etcd, consul and mDNS.
  • Fault tolerance: Failover, Failfast, Failtry.
  • Load balancing: support Random, RoundRobin, Consistent hashing, Weighted, network quality and Geography.
  • Support Compression.
  • Support passing metadata.
  • Support Authorization.
  • Support heartbeat and one-way request.
  • Other features: metrics, log, timeout, alias, circuit breaker.
  • Support bidirectional communication.
  • Support access via HTTP so you can write clients in any programming languages.
  • Support API gateway.
  • Support backup request, forking and broadcast.
rpcx uses a binary protocol and platform-independent, which means you can develop services in other languages such as Java, python, nodejs, and you can use other prorgramming languages to invoke services developed in Go.
There is a UI manager: rpcx-ui.

Performance

Test results show rpcx has better performance than other rpc frameworks except the standard rpc lib.
Test Environment
  • CPU: Intel(R) Xeon(R) CPU E5-2630 v3 @ 2.40GHz, 32 cores
  • Memory: 32G
  • Go: 1.9.0
  • OS: CentOS 7 / 3.10.0-229.el7.x86_64
from https://github.com/smallnest/rpcx

          Google releases gVisor, a sandboxed container runtime      Cache   Translate Page      

This is a summary and translation of the official blog post "Open-sourcing gVisor, a sandboxed container runtime".

Since Docker became popular, the way we develop, package, and deploy applications has changed fundamentally. But because of the limits of container isolation technology, not everyone embraces containers: with the shared-kernel model the system still exposes a large attack surface, so there is a real threat of malicious applications breaking into the host.
To run untrusted and potentially malicious containers, people have started to pay more attention to sandboxed containers: a mechanism that provides more secure isolation between the host machine and the application.
gVisor, released by Google, is exactly this kind of new sandboxed container technology. It gives containers stronger isolation while staying lighter weight than a virtual machine (VM). In addition, gVisor integrates with Docker and Kubernetes, which makes running sandboxed containers in production simpler.

Traditional Linux containers are not sandboxes

Applications running in traditional Linux containers access system resources the same way ordinary (non-containerized) applications do: by making system calls directly to the host kernel. The kernel runs in privileged mode, which lets it interact with the necessary hardware and return results to the application.
With traditional container technology, the kernel imposes some limits on the resources an application can access. These limits are implemented with Linux cgroups and namespaces, but not all resources can be controlled through these mechanisms. Moreover, even with these limits in place, the kernel still exposes far too large an attack surface to malicious programs.
Technologies such as seccomp can provide better isolation between the application and the host kernel, but they require the user to create a predefined whitelist of system calls. In practice it is hard to enumerate in advance all the system calls an application will need, and such filters are of little help when a system call you must allow has a vulnerability.

Existing VM-based container technology

One way to improve container isolation is to run each container inside its own virtual machine (VM). This gives every container its own dedicated "machine", including a kernel and virtualized devices, completely separated from the host. Even if the guest VM has vulnerabilities, the hypervisor still isolates the host and the other applications/containers running on it.
Running containers in separate VMs provides very good isolation, compatibility, and performance, but it may also require a much larger resource footprint.
Kata Containers is an open source project that uses stripped-down virtual machines to keep the resource footprint as small as possible while maximizing the performance of isolated containers. Like gVisor, Kata includes an OCI (Open Container Initiative) runtime that is compatible with Docker and Kubernetes.

Sandboxed containers based on gVisor

gVisor is lighter weight than a VM while providing the same level of isolation. The core of gVisor is a kernel that runs as a normal, unprivileged process and supports most Linux system calls. This kernel is written in Go, chosen for its small memory footprint and type safety. Just like in a VM, an application running in a gVisor sandbox gets its own kernel and a set of virtual devices, independent of the host and of other sandboxes.
gVisor provides very strong isolation by intercepting the application's system calls and acting as the guest kernel, and all of this runs in user space. Unlike a virtual machine, which needs a fixed amount of resources at creation time, gVisor can adjust its resource usage at any time, just like an ordinary Linux process. You can think of gVisor as a fully virtualized operating system, but with a flexible resource footprint and a lower fixed cost than a complete virtual machine.
However, the price of this flexibility is the overhead of individual system calls, application compatibility (tied to system-call coverage), and other issues.
"Secure workloads are a top priority for the industry. We are very pleased to see an innovation like gVisor and look forward to collaborating on specifications and improving the related technical components to bring greater security to the ecosystem."
  • Samuel Ortiz, member of the Kata Technical Steering Committee and Principal Engineer at Intel Corporation
"Hyper is very pleased to see gVisor's brand-new approach to improving container isolation. The industry needs a strong ecosystem of secure container technologies, and we look forward to working with gVisor to bring secure containers into the mainstream."
  • Xu Wang, member of the Kata Technical Steering Committee and CTO of Hyper.sh

Integration with Docker and Kubernetes

The gVisor runtime integrates seamlessly with Docker and Kubernetes through runsc (short for "run Sandboxed Container").
The runsc runtime is interchangeable with runc, Docker's default container runtime. Installing runsc is simple; once it is installed, you only need one extra flag when running Docker to use a sandboxed container:
$ docker run --runtime=runsc hello-world
$ docker run --runtime=runsc -p 3306:3306 mysql
In Kubernetes, most resource isolation is done at the Pod level, so the Pod is also the natural boundary for a gVisor sandbox. The Kubernetes community is currently working on a sandboxed Pod API, but experimental support is already available in gVisor today.
The runsc runtime can be used for sandboxing in a Kubernetes cluster through projects such as cri-o or cri-containerd, which translate messages from the Kubelet into OCI runtime commands.
gVisor implements a large part of the Linux system API (200 system calls and counting), but not all of it. Some system calls and arguments are not yet supported, nor are some parts of the /proc and /sys file systems. As a result, not every application can run inside gVisor, but most applications should run just fine, including Node.js, Java 8, MySQL, Jenkins, Apache, Redis, MongoDB, and more.
gVisor is open source and available at https://github.com/google/gvisor, which should be the best place to start learning more about it.

          GitHub - shanghai-edu/ldap-test-tool      Cache   Translate Page      

ldap-test-tool

A lightweight LDAP testing tool

Features:

  • LDAP authentication
  • LDAP search (by user, by default)
  • LDAP search with a custom filter
  • Batch LDAP authentication for multiple users
  • Batch LDAP search for multiple users
  • Batch search results can be exported to CSV
  • REST API

Build

go get ./...
go build

release

Pre-built release binaries can be downloaded directly.

Executables are provided for the win64 and linux64 platforms.

https://github.com/shanghai-edu/ldap-test-tool/releases/

Configuration file

The default configuration file is cfg.json in the working directory; a custom configuration file can be loaded with -c or --config.

OpenLDAP configuration example

{
    "ldap": {
        "addr": "ldap.example.org:389",
        "baseDn": "dc=example,dc=org",
        "bindDn": "cn=manager,dc=example,dc=org",
        "bindPass": "password",
        "authFilter": "(&(uid=%s))",
        "attributes": ["uid", "cn", "mail"],
        "tls":        false,
        "startTLS":   false
    },
    "http": {
        "listen": "0.0.0.0:8888"
    }
}

AD configuration example

{
    "ldap": {
        "addr": "ad.example.org:389",
        "baseDn": "dc=example,dc=org",
        "bindDn": "manager@example.org",
        "bindPass": "password",
        "authFilter": "(&(sAMAccountName=%s))",
        "attributes": ["sAMAccountName", "displayName", "mail"],
        "tls":        false,
        "startTLS":   false
    },
    "http": {
        "listen": "0.0.0.0:8888"
    }
}

Command structure

The command-line part is built on the cobra framework; use the help command to see how each command is used

# ./ldap-test-tool help
ldap-test-tool is a simple tool for ldap test
build by shanghai-edu.
Complete documentation is available at github.com/shanghai-edu/ldap-test-tool

Usage:
  ldap-test-tool [flags]
  ldap-test-tool [command]

Available Commands:
  auth        Auth Test
  help        Help about any command
  http        Enable a http server for ldap-test-tool
  search      Search Test
  version     Print the version number of ldap-test-tool

Flags:
  -c, --config string   load config file. default cfg.json (default "cfg.json")
  -h, --help            help for ldap-test-tool

Use "ldap-test-tool [command] --help" for more information about a command.

Authentication

./ldap-test-tool auth -h
Auth Test

Usage:
  ldap-test-tool auth [flags]
  ldap-test-tool auth [command]

Available Commands:
  multi       Multi Auth Test
  single      Single Auth Test

Flags:
  -h, --help   help for auth

Global Flags:
  -c, --config string   load config file. default cfg.json (default "cfg.json")

Use "ldap-test-tool auth [command] --help" for more information about a command.
Single-user test

Command-line usage

Single Auth Test

Usage:
  ldap-test-tool auth single [username] [password] [flags]

Flags:
  -h, --help   help for single

Global Flags:
  -c, --config string   load config file. default cfg.json (default "cfg.json")

Example

./ldap-test-tool auth single qfeng 123456
LDAP Auth Start 
==================================

qfeng auth test successed 

==================================
LDAP Auth Finished, Time Usage 47.821884ms 
Batch test

Command-line usage

# ./ldap-test-tool auth multi -h
Multi Auth Test

Usage:
  ldap-test-tool auth multi [filename] [flags]

Flags:
  -h, --help   help for multi

Global Flags:
  -c, --config string   load config file. default cfg.json (default "cfg.json")

Example

# cat authusers.txt 
qfeng,123456
qfengtest,111111

User names and passwords are separated by commas (CSV style). authusers.txt contains two users: qfeng with the correct password and qfengtest with a wrong password

# ./ldap-test-tool auth multi authusers.txt 
LDAP Multi Auth Start 
==================================

Successed count 1 
Failed count 1 
Failed users:
 -- User: qfengtest , Msg: Cannot find such user 

==================================
LDAP Multi Auth Finished, Time Usage 49.582994ms 

Search

# ./ldap-test-tool search -h
Search Test

Usage:
  ldap-test-tool search [flags]
  ldap-test-tool search [command]

Available Commands:
  filter      Search By Filter
  multi       Search Multi Users
  user        Search Single User

Flags:
  -h, --help   help for search

Global Flags:
  -c, --config string   load config file. default cfg.json (default "cfg.json")

Use "ldap-test-tool search [command] --help" for more information about a command.
[root@wiki-qfeng ldap-test-tool]# 
Single-user search

Command-line usage

# ./ldap-test-tool search user -h
Search Single User

Usage:
  ldap-test-tool search user [username] [flags]

Flags:
  -h, --help   help for user

Global Flags:
  -c, --config string   load config file. default cfg.json (default "cfg.json")
[root@wiki-qfeng ldap-test-tool]# 

Example

# ./ldap-test-tool search user qfeng
LDAP Search Start 
==================================


DN: uid=qfeng,ou=people,dc=example,dc=org
Attributes:
 -- uid  : qfeng 
 -- cn   : 冯骐测试 
 -- mail : qfeng@example.org


==================================
LDAP Search Finished, Time Usage 44.711268ms 

Note: if an attribute has multiple values, they are separated by ";"

LDAP filter search
# ./ldap-test-tool search filter -h
Search By Filter

Usage:
  ldap-test-tool search filter [searchFilter] [flags]

Flags:
  -h, --help   help for filter

Global Flags:
  -c, --config string   load config file. default cfg.json (default "cfg.json")

Example

# ./ldap-test-tool search filter "(cn=*测试)"
LDAP Search By Filter Start 
==================================


DN: uid=test1,ou=people,dc=example,dc=org
Attributes:
 -- uid  : test1 
 -- cn   : 一号测试 
 -- mail : test1@example.org 


DN: uid=test2,ou=people,dc=example,dc=org
Attributes:
 -- uid  : test2 
 -- cn   : 二号测试 
 -- mail : test2@example.org 


DN: uid=test3,ou=people,dc=example,dc=org
Attributes:
 -- uid  : test3
 -- cn   : 三号测试 
 -- mail : test3@example.org 

results count  3

==================================
LDAP Search By Filter Finished, Time Usage 46.071833ms 
Batch search test

Command-line usage

# ./ldap-test-tool search multi -h
Search Multi Users

Usage:
  ldap-test-tool search multi [filename] [flags]

Flags:
  -f, --file   output search to users.csv, failed search to failed.csv
  -h, --help   help for multi

Global Flags:
  -c, --config string   load config file. default cfg.json (default "cfg.json")

Example

# cat searchusers.txt 
qfeng
qfengtest
nofounduser

searchusers.txt contains three users, of which nofounduser does not exist

# ldap-test-tool.exe search multi .\searchusers.txt
LDAP Multi Search Start
==================================

Successed users:

DN: uid=qfeng,ou=people,dc=example,dc=org
Attributes:
 -- uid  : qfeng
 -- cn   : 冯骐
 -- mail : qfeng@example.org


DN: uid=qfengtest,ou=people,dc=example,dc=org
Attributes:
 -- uid  : qfengtest
 -- cn   : 冯骐测试
 -- mail : qfeng@example.org

nofounduser : Cannot find such user

Successed count 2
Failed count 1

==================================
LDAP Multi Search Finished, Time Usage 134.744ms

When the -f option is used, the search results are written to CSV files. The CSV uses the attributes from the configuration file as its column titles, so attributes must not be empty when -f is used.

# ./ldap-test-tool search multi searchusers.txt -f
LDAP Multi Search Start 
==================================

OutPut to csv successed

==================================
LDAP Multi Search Finished, Time Usage 88.756956ms 

# ls | grep csv
failed.csv
users.csv

HTTP API

The HTTP API part is built on the beego framework. Start the HTTP API with the following command:

# ldap-test-tool.exe http
2018/03/12 14:30:25 [I] http server Running on http://0.0.0.0:8888
Health status

Check the LDAP health status

# curl http://127.0.0.1:8888/api/v1/ldap/health   
{
  "msg": "ok",
  "success": true
}
Search a user

Search a single user's information

# curl  http://127.0.0.1:8888/api/v1/ldap/search/user/qfeng
{
  "user": {
    "dn": "uid=qfeng,ou=people,dc=example,dc=org",
    "attributes": {
      "cn": [
        "冯骐"
      ],
      "mail": [
        "qfeng"
      ],
      "uid": [
        "qfeng"
      ]
    }
  },
  "success": true
}
Filter search

Search by LDAP filter

# curl  http://127.0.0.1:8888/api/v1/ldap/search/filter/\(cn=*测试\)
{
  "results": [
    {
      "dn": "uid=test1,ou=people,dc=example,dc=org",
      "attributes": {
        "cn": [
          "一号测试"
        ],
        "mail": [
          "test1@example.org"
        ],
        "uid": [
          "test1"
        ]
      }
    },
    {
      "dn": "uid=test2,ou=people,dc=example,dc=org",
      "attributes": {
        "cn": [
          "二号测试"
        ],
        "mail": [
          "test2@example.org"
        ],
        "uid": [
          "test2"
        ]
      }
    },
    {
      "dn": "uid=test3,ou=people,dc=example,dc=org",
      "attributes": {
        "cn": [
          "三号测试"
        ],
        "mail": [
          "test3@example.org"
        ],
        "uid": [
          "test3"
        ]
      }
    },
  ],
  "success": true
}
Multi-user search

Search multiple users at once; the request data is sent as application/json. Example request data:

["qfeng","qfengtest","nofounduser"]

curl example

# curl -X POST  -H 'Content-Type:application/json' -d '["qfeng","qfengtest","nofounduser"]' http://127.0.0.1:8888/api/v1/ldap/search/multi
{
  "success": true,
  "result": {
    "successed": 2,
    "failed": 1,
    "users": [
      {
        "dn": "uid=qfeng,ou=people,dc=example,dc=org",
        "attributes": {
          "cn": [
            "冯骐"
          ],
          "mail": [
            "qfeng@example.org"
          ],
          "uid": [
            "qfeng"
          ]
        }
      },
      {
        "dn": "uid=qfengtest,ou=people,dc=example,dc=org",
        "attributes": {
          "cn": [
            "冯骐测试"
          ],
          "mail": [
            "qfeng@example.org"
          ],
          "uid": [
            "qfengtest"
          ]
        }
      }
    ],
    "failed_messages": [
      {
        "username": "nofounduser",
        "message": "Cannot find such user"
      }
    ]
  }
}

Authentication

Single-user authentication

Authentication test for a single user; the request data is sent as application/json. Example request data:

{
	"username": "qfeng",
	"password": "123456"
}

curl example

# curl -X POST  -H 'Content-Type:application/json' -d '{"username":"qfeng","password":"123456"}' http://127.0.0.1:8888/api/v1/ldap/auth/single
{
  "msg": "user 20150073 Auth Successed",
  "success": true
}
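As an illustration (not part of the original README), the same single-user auth endpoint can also be called from Go using only the standard library; the address and credentials below simply reuse the values from the curl example above.

package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"log"
	"net/http"
)

// authRequest mirrors the JSON body expected by /api/v1/ldap/auth/single.
type authRequest struct {
	Username string `json:"username"`
	Password string `json:"password"`
}

// authResponse mirrors the JSON reply shown in the example above.
type authResponse struct {
	Msg     string `json:"msg"`
	Success bool   `json:"success"`
}

func main() {
	body, err := json.Marshal(authRequest{Username: "qfeng", Password: "123456"})
	if err != nil {
		log.Fatal(err)
	}

	resp, err := http.Post("http://127.0.0.1:8888/api/v1/ldap/auth/single",
		"application/json", bytes.NewReader(body))
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	var result authResponse
	if err := json.NewDecoder(resp.Body).Decode(&result); err != nil {
		log.Fatal(err)
	}
	fmt.Printf("success=%v msg=%q\n", result.Success, result.Msg)
}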
Multi-user authentication

Launch authentication tests for several users at once; the request data is sent as application/json. Example request data:

[{
	"username": "qfeng",
	"password": "123456"
}, {
	"username": "qfengtest",
	"password": "1111111"
}]

curl example

# curl -X POST  -H 'Content-Type:application/json' -d '[{"username":"qfeng","password":"123456"},{"username":"qfengtest","password":"1111111"}]' http://127.0.0.1:8888/api/v1/ldap/auth/multi
{
  "success": true,
  "result": {
    "successed": 1,
    "failed": 1,
    "failed_messages": [
      {
        "username": "qfengtest",
        "message": "LDAP Result Code 49 \"Invalid Credentials\": "
      }
    ]
  }
}

LICENSE

Apache License 2.0


          Apache Struts Warns Users of Two-Year-Old Vulnerability      Cache   Translate Page      
Users must update their vulnerable libraries manually.
          (USA-VA-Herndon) Scrum Master: Startup | Small Business | $100K - $135K Salary      Cache   Translate Page      
Scrum Master: Startup | Small Business | $100K - $135K Salary Scrum Master: Startup | Small Business | $100K - $135K Salary - Skills Required - Scrum Master, Project Management, CSM | PMP-ACP | CSP Certification, Agile Principles | Practices | Theory, Online Products, Software as a Service (SaaS) Products, Agile Approaches (Scrum | Lean | XP | Kanban), Atlassian JIRA, PSM I | PSM II | SAFe, DevOps Tools and CI/CD Processes If you're a Certified Scrum Master with SMB and/or Startup experience, please read on! We apply artificial intelligence to solve complex, real-world problems at scale. Our Human+AI operating system, blends capabilities ranging from data handling, analytics, and reporting to advanced algorithms, simulations, and machine learning, enabling decisions that are just-in-time, just-in-place, and just-in-context. If this type of environment sounds exciting, please read on! **Top Reasons to Work with Us** - Benefits start on day 1 - Free onsite gym - Unlimited snacks and drinks - Located 1 mile from Wiehle-Reston East Station on the Silver line **What You Will Be Doing** RESPONSIBILITIES: As Scrum Master / Agile Lead, you will manage Agile-focused engineering development of our proprietary technology. You'll be working closely with the Product Manager and Engineer Lead, facilitating the Sprint process from beginning to end, ensuring the accurate and timely completion of product requirements. In addition, you'll drive the team's education in best practices of Agile development through a variety of methods meant to provide positive impact across the team. You should have a strong Project Management and/or Scrum Master background and want to be an active member of the team, not just someone who enforces Agile methodology. A background in working with online products and Software as a Service (SaaS) is strongly preferred, and the ability to handle the challenges faced by startups is key. Along with the Product and Engineering leads, you'll be part of the 3-person leadership team who will bring cutting edge and sometimes complex products to market, then identifying ways to evolve these products over time. You'll work closely with your leadership colleagues to ensure road-maps are built using reality as the driving force, not smoke and mirrors or unjustified optimism. - Lead diverse software development teams to on-time fulfillment of applications and services in a hybrid public-private cloud/computing environment - Facilitate the development and delivery of our proprietary products between Product and Engineering teams that drive market leading features into our customers hands - Act as a liaison for your team throughout the organization - Provide day-to-day oversight of the product road-map to ensure that timelines are met, tasks are prioritized appropriately, there is clear definition of scope and the team isn't over-committing - Ensure teams have appropriate direction and priority at all times to remain on track to reach internal and customer commitments and deadlines - Work with Product and Engineering to ensure the right level of understanding and the right culture/process exists to take high level concepts and feature ideas and help break them down into stories and sub-tasks - Track and report status of all projects you own during our weekly updates. 
In identifying areas where any risk may exist, proactively build, communicate and lead recovery actions - Participate in “scrum-of-scrum” meetings and assist in ongoing coordination of corporate wide road-maps to ensure alignment between teams across the organization - Manage the delivery of software enhancements and fixes per release schedules using Agile development methodologies - Help to shape and improve the product road-map by consolidating and communicating feedback captured during the execution and retrospective analysis of sprints and strategy meetings - Work with Product and Sales team to understand customer engagement needs and ensure product readiness is in alignment with road-map commitments and the teams ability to deliver - Work with Software Development leadership and teams to ensure technical requirements for engagement can be met on schedule; if necessary, identify and plan for any customer specific engineering requirements with Product - Follow customer integration needs, update implementation plans and report customer satisfaction outcomes internally to cross-functional teams - Ensure that all product milestones are tracking to schedule and communicate effectively to Product and or cross-functional teams whenever obstacles to meeting customer expectations surface **What You Need for this Position** QUALIFICATIONS: - Bachelor's Degree - 2+ years of experience as a full-time Scrum Master - Scrum Master certification (CSM) PMP-ACP or CSP certification - Expertise in diligently applying Agile principles, practices, and theory (User Stories, Story Pointing, Velocity, Burndown Metrics, Agile Games, Tasking, Retrospectives, Cycle Time, Throughput, Work in Progress levels, Product Demos) - Experience working with online products and Software as a Service (SaaS) - Experience working with different Agile approaches (Scrum, Lean, XP, Kanban, etc.) - Hands on experience with Agile ALM tools like Atlassian Jira with knowledge of other tools such as Aha!, Wrike, or ProductPlan - Knowledge of servant leadership, facilitation, situational awareness, conflict resolution, continual improvement, empowerment, influence and transparency - Agile or Project Management certification (PSM I, PSM II, SAFe) - Extensive experience with Atlassian products - Understanding of DevOps tools and CI/CD processes - Experience and interest in large-scale distributed systems - Interest in with Apache Spark, Apache Pig, AWS Pipelines, Google Dataflow, or MapReduce **What's In It for You** - Competitive Salary ($100,000 - $135,000) - Incentive Stock Options - Medical, Dental & Vision Coverage - 401(K) Plan - Flexible “Personal Time Off (PTO) Plan - 10+ Paid Holiday Days Per Year So, if you're a Certified Scrum Master with SMB and/or Startup experience, please apply today! Applicants must be authorized to work in the U.S. **CyberCoders, Inc is proud to be an Equal Opportunity Employer** All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, disability, protected veteran status, or any other characteristic protected by law. **Your Right to Work** – In compliance with federal law, all persons hired will be required to verify identity and eligibility to work in the United States and to complete the required employment eligibility verification document form upon hire. *Scrum Master: Startup | Small Business | $100K - $135K Salary* *VA-Herndon* *WT1-1492800*
          Java swing log4j in Textarea      Cache   Translate Page      
Question: Hello, I am currently working on displaying my log4j output in a text area. Writing to a file and displaying in the console already work. My question is how I have to adjust the appender or the properties so that the output is shown in the text area. The output from System.out already appears in the text area. My project looks like this: src: package1 (ApplicationLogger.java), package2 (MainWindow.java), log4j.properties. ApplicationLogger.java: package package1; import java.io.FileInputStream; import java.io.IOException; import java.util.Properties; import org.apache.log4j.FileAppender; import org.apache.log4j.Logger; import org.apache.log4j.PatternLayout; import org.apache.log4j.PropertyConfigurator; public class ApplicationLogger { private static Logger logger = null; public static Logger getInstance() { if (logger == null) { initLogger(); } return logger; } private static void ... 0 comments, read 49 times.
          Apache Struts Users Told to Update Vulnerable Component      Cache   Translate Page      

Apache Struts developers are urging users to update a file upload library due to the existence of two vulnerabilities that can be exploited for remote code execution and denial-of-service (DoS) attacks.



          (USA-CA-Sunnyvale) ECL Data Scientist - Remote      Cache   Translate Page      
ECL Data Scientist - Remote ECL Data Scientist - Remote - Skills Required - ECL, Lexis Nexis, Apache, Pig If you are an ECL Data Scientist with experience, please read on! We are a cutting edge technology company dominating the IoT (Internet of Things) market where we solve real world problems in real-time for our clients. We connect entire ecosystems creating digital enterprises. Due to growth and demand for our product and services, we are in need of hiring a Data Scientist who has hands-on experience with ECL and a product focused mind set. If you are interested in joining a leading technology company that cares about its employees and their environment, then apply immediately. **Top Reasons to Work with Us** Competitive Salary 100% Paid Medical Benefits Bonus Generous Equity **What You Will Be Doing** In this role, you will bring your passion for technology and apply your skills to our platform. You will be a dynamic hands-on leader playing a key role in additions to our data science team. You will be responsible for both research and technical aspects of projects reporting directly to the CEO. **What You Need for this Position** More Than 5 Years of experience and knowledge of: - ECL - Lexis Nexis - Apache - Pig **What's In It for You** - $$200k-$300k (DOE) - Vacation/PTO - Medical - Dental - Vision - Relocation - Bonus - 401k So, if you are an ECL Data Scientist with experience, please apply today! Applicants must be authorized to work in the U.S. **CyberCoders, Inc is proud to be an Equal Opportunity Employer** All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, disability, protected veteran status, or any other characteristic protected by law. **Your Right to Work** – In compliance with federal law, all persons hired will be required to verify identity and eligibility to work in the United States and to complete the required employment eligibility verification document form upon hire. *ECL Data Scientist - Remote* *CA-Sunnyvale* *PZ1-1493037*
          (USA-CA-Costa Mesa) Software Engineer-React      Cache   Translate Page      
Software Engineer-React Software Engineer-React - Skills Required - JavaScript, RUBY, Clojure, Scala, HTML/CSS, Full Software Development Lifecycle, Agile If you are a Software Engineer with experience, please read on! WE ARE a fast growing, exciting Data Analytics Company located in sunny SoCal! We offer a fun and flexible environment with freedom for you to do what you do best! This role will be crucial to our growth moving forward and we're looking for someone who loves solving problems and getting the job done! **What You Will Be Doing** - Develop and build the most customer-friendly products - Add new functionality to and iterate on existing products - Participate in all phases of the software development life cycle, including deployment - Collaborate with team to architect, design, and build products that solve customer problems - Maximize software agility, maintainability, and extensibility. - Minimize the cost of change, feedback time, and time to recover from problems. **What You Need for this Position** - Solid demonstrable skill in two or more languages: Javascript, Ruby, Java, Clojure, Scala -Strong experience with React / React Native - Strong hands-on experience with HTML and CSS - Experience writing and integrating with API's or microservices - Strong hands-on experience with relational and non- relational databases. - 5+years experience with full lifecycle software development - Track record of using test-driven/behavior-driven development (TDD/BDD) - Preference for Agile methodologies and rapid prototyping over detailed specs BONUS (not required): - Development with consumer facing high-traffic, mission-critical sites, systems, or processes - Experience with financial systems or Loan Origination Systems - Hands-on experience with mobile app development - Hands-on experience with Apache Spark, Kafka or Hadoop Stack - Responsive web and mobile web development - Strong hands-on experience with at least one NoSQL variant, i.e. Redis, Mongo, Cassandra - Current with HTML5 and CSS3 and front-end JavaScript frameworks, such as React.js, Angular.js, or Ember.js So, if you are a Software Engineer with experience, please apply today! Applicants must be authorized to work in the U.S. **CyberCoders, Inc is proud to be an Equal Opportunity Employer** All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, disability, protected veteran status, or any other characteristic protected by law. **Your Right to Work** – In compliance with federal law, all persons hired will be required to verify identity and eligibility to work in the United States and to complete the required employment eligibility verification document form upon hire. *Software Engineer-React* *CA-Costa Mesa* *PM6-1492883*
          (USA-MN-Cottage Grove) Senior Software Engineer - PHP, JavaScript, LAMP      Cache   Translate Page      
Senior Software Engineer (PHP) - PHP, JavaScript, LAMP Senior Software Engineer (PHP) - PHP, JavaScript, LAMP - Skills Required - PHP, JavaScript, LAMP, Linux, Apache, MySQL, JQuery, HTML, CSS, API If you are a Senior Software Engineer with PHP and JavaScript experience, please read on! We are located just south of the beautiful Saint Paul, MN area and we are a cutting edge software company in the hospitality space. We were just named one of the fastest growing tech companies in the area and are rapidly expanding. We have a very fast paced and collaborative environment that really enjoys the culture we have created. We need a skilled full stack software engineer who is well-versed with PHP and also has experience working with JavaScript. This person will help create cutting edge web applications and improve the efficiency and scalability of our business applications. This developer will also help design/develop front-end interfaces, underlying APIS, and backend systems. **Top Reasons to Work with Us** -Work with cutting edge technology -Significant room for growth -Outstanding work environment **What You Will Be Doing** -Full stack development in PHP and JavaScript -Design/develop front-end interfaces, underlying APIS, and backend systems -Enhance existing business applications -Help lead/mentor more junior developers **What You Need for this Position** At Least 3-5 Years of experience and knowledge of: - PHP - JavaScript - API Design - HTML/CSS Strong nice to haves: -SQL - LAMP stack - SQL - JQuery **What's In It for You** - Vacation/PTO - Medical - Dental - Vision - 401k So, if you are a Senior Software Engineer with PHP and JavaScript experience, please apply today! Applicants must be authorized to work in the U.S. **CyberCoders, Inc is proud to be an Equal Opportunity Employer** All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, disability, protected veteran status, or any other characteristic protected by law. **Your Right to Work** – In compliance with federal law, all persons hired will be required to verify identity and eligibility to work in the United States and to complete the required employment eligibility verification document form upon hire. *Senior Software Engineer - PHP, JavaScript, LAMP* *MN-Cottage Grove* *MB6-1492934*
          Moscow Apache Ignite Meetup #5      Cache   Translate Page      
Hi everyone!

On November 14 we invite you to the next Apache Ignite meetup in Moscow. It will be of interest to architects and developers interested in Apache Ignite, the open source platform for distributed applications.

Agenda

18:30 - 19:00 - Guests arrive, welcome coffee

Talks:

  • Measuring Apache Ignite performance: how we do benchmarks (Ilya Suntsov, GridGain)
  • Apache Ignite TeamCity Bot: fighting flaky tests in an open source community (Dmitry Pavlov, GridGain, and Nikolay Kulagin, Sberbank Technology)
  • Transparent Data Encryption: the story of developing a major feature in a large open source project (Nikolay Izhikov, Apache Ignite committer)

22:00 - 22:30 - Raffle of useful books and informal networking

The event is free, but registration is required
          (USA-NC-Cary) Software Engineer - PHP, Java, JavaScript      Cache   Translate Page      
Software Engineer - PHP, Java, JavaScript Software Engineer - PHP, Java, JavaScript - Skills Required - PHP, Java, JavaScript, MySQL, MVC, Linux, Apache, GIT, Subversion If you are a Software Engineer with PHP and Java experience, please read on! Based in Cary, NC - we provide accurate contact information and sales leads related to the e-commerce industry. Our company is looking for the right candidate to join our in-house engineering team. So, if you are interested in joining our growing team, please apply today! **Top Reasons to Work with Us** 1. Amazing Reputation. 2. Work with and learn from the best in the business. 3. Opportunity for career and income growth. **What You Will Be Doing** - Improve our next generation web technology tracking systems using machine learning and AI - Design and develop algorithms and techniques for high-volume data analysis - Build upon some of our most advanced platforms to maximize performance. **What You Need for this Position** At Least 3 Years of experience and knowledge of: - PHP - Java - JavaScript - MySQL - MVC - Linux - Apache - GIT - Subversion **What's In It for You** - Vacation/PTO - Medical - Dental - Vision - Relocation - Bonus - 401k So, if you are a Java Engineer with experience, please apply today! Applicants must be authorized to work in the U.S. **CyberCoders, Inc is proud to be an Equal Opportunity Employer** All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, disability, protected veteran status, or any other characteristic protected by law. **Your Right to Work** – In compliance with federal law, all persons hired will be required to verify identity and eligibility to work in the United States and to complete the required employment eligibility verification document form upon hire. *Software Engineer - PHP, Java, JavaScript* *NC-Cary* *JW8-1492814*
          COSCon Bridges East & West, Open Source Powers Now & Future      Cache   Translate Page      

The OSI was honored to participate in the 2018 China Open Source Conference (COSCon'18) hosted by OSI Affiliate Member KAIYUANSHE in Shenzhen, China. Over 1,600 people attended the exciting two-day event, with almost another 10,000 watching via live-stream online. The conference boasted sixty-two speakers from twelve countries, with 11 keynotes (including OSI Board alum Tony Wasserman), 67 breakout sessions, 5 lightning talks (led by university students), 3 hands-on camps, and 2 specialty forums on Open Source Education and Open Source Hardware.

COSCon'18 also served as an opportunity to make several announcements, including the publication of "The 2018 China Open Source Annual Report", the launch of "KCoin Open Source Contribution Incentivization Platform", and the unveiling of KAIYUANSHE's "Open Hackathon Cloud Platform".

Since its foundation in October of 2014, KAIYUANSHE has continuously helped open source projects and communities thrive in China, while also contributing back to the world by "bringing in and reaching out". COSCon'18 is one more way KAIYUANSHE serves to: raise awareness of, and gain experience with, global open source projects; build and incentivize domestic markets for open source adoption; study and improve open source governance across industry sectors; promote and serve the needs of local developers; and identify and incubate top-notch local open source projects.

In addition to all of the speakers and attendees, KAIYUANSHE would like to thank their generous sponsors for all of their support in making COSCon'18 a great success.

2018 China Open Source Annual Report - Created by KAIYUANSHE volunteers over the past six months, the 2018 Open Source Annual Report describes the current status, and unique dynamics, of Open Source Software in China. The report provides a global perspective with contributions from multiple communities, and is now available on GitHub: contributions welcome.

KCoin - Open Source Contribution Incentivization Platform - KCoin, an open source, blockchain-based contribution incentivization mechanism, was launched at COSCon'18. KCoin is currently used by three projects: KFCoding--a next generation interactive developer learning community, ATN--an AI+Blockchain-based open source platform, and Dao Planet--a contribution-based community incentive infrastructure.

Open Hackathon Platform Donation Ceremony - Open Hackathon Platform is a one-stop cloud platform for hosting or participating online in hackathons. Originally developed by and run internally for Microsoft development, the platform was officially donated to KAIYUANSHE by Microsoft during the conference. Since May of 2015 the open source platform has hosted more than 10 hackathons and other collaborative development efforts including hands-on camps and workshops, and is the first project to be contributed by a leading international corporation to a Chinese open source community. Ulrich Homann, Distinguished Architect at Microsoft, who presided over the dedication, offered: "We are looking forward to contributions from the KAIYUANSHE community which will make the Open Hackathon Cloud Platform an even better platform for your needs. May the source be with you!"

Open Source 20-Year Anniversary Celebration Party - Speakers, sponsors, community and media partners, and KAIYUANSHE directors and officers came together to celebrate the 20-year anniversary of Open Source Software and the Open Source Initiative. The evening was hosted by OSI Board Director Tony Wasserman and Ross Gardler of the Apache Software Foundation, who both shared a few thoughts about the long journey and success of Open Source Software. Other activities included a "20 Years of Open Source Timeline", where attendees added their own memories and milestones, and "Open-Source-Awakened Jedi" cosplay with Kaiyuanshe directors and officers serving OSI 20th Anniversary cake as Jedi warriors (including cutting the cake with light sabers!).

The celebration also provided an opportunity to recognize the outstanding contributions to KAIYUANSHE and open source by two exceptional individuals. Cynthia Xin and Junbo Wang were both awarded the "Open Source Star" trophy. Cynthia was recognized for her work as both the Event Team Lead and Community Partnership Team Lead, while Junbo Wang was recognized for contributions as the Open Hackathon Cloud Platform Infrastructure Team Lead and KCoin Project Lead.

"May the source be with you!" Fun for all at the 20th Anniversary of Open Source party during COSCon'18.

 

Other highlights included:

  • A "Fireside Chat" with Nat Friedman, GitHub CEO, and Ted Liu, Kaiyuanshe Chairman
  • Apache Project Incubation
  • Implementing Open Source Governance at Scale
  • Executive Roundtable: "Collision of Cultures"
  • 20 years of open source: Where can we do better?
  • How to grow the next generation of university talent with open source.
  • Open at GitLab: discussions and engagement.
  • Three communities--Open Source Software (OSS), Open Source Hardware (OSHW) and Creative Commons (CC)--on stage, sharing and brainstorming.
  • Made in China, "Xu Gu Hao": open source hardware and education for the fun of creating!
Former OSI Board Director Tony Wasserman presents at COSCon'18

 

COSCon'18 organizers would like to recognize and thank the international and domestic communities for their support: the Apache Software Foundation (ASF), Open Source Initiative (OSI), GNOME, Mozilla, FreeBSD and another 20+ domestic communities. As of Oct. 23rd, the articles published for COSCon'18 and reposted by the domestic communities had drawn more than 120,000 views, with more reposts to come from the international communities. We are grateful for these lovely community partners. The board of the GNOME Foundation also sent a greeting video for the conference.

Many attendees also offered their thoughts on the event...

COSCon was a great opportunity to meet developers and learn how GitHub can better serve the open source community in China. It is exciting to see how much creativity and passion there is for open source in China.
---- Nat Friedman, CEO, GitHub

COSCon is the meetup place for open source communities. No matter where you are, on stage or in the audience crowd, the spirits of openness, freedom, autonomy and collaboration run through the entire conference. Technologies rises and falls, only the ecosystem sustains over the community.
---- Tao Jiang, Founder of CSDN

When I visited China in 2015, I said "let's build the bridge together", in 2018 China Open Source Conference, I say "let's cross the bridge together!"
---- Ross Gardler, Executive Vice President, Apache Software Foundation

The conference was an excellent opportunity to learn about "adoption and use of FOSS from industry leaders in China and around the world."
---- Tony Wasserman, OSI Board Member Alumni, Professor of Carnegie Mellon University

I'm very glad to see the increasing influence power of KAIYUANSHE and wish it gets better and better.
---- Jerry Tan, Baidu Open Source Lead & Deep Learning Evangelist

It is a great opportunity to share Microsoft’s Open source evolution with the OSS community in China through the 2018 ConsCon conference. I am honored to officially donate the Microsoft Open Hackathon platform to the Kayuanshe community. Contributing over boundaries of space and time is getting more important than ever – an open platform like the Microsoft Open Hackathon environment can bring us together wherever we are, provide a safe online environment enabling us to solve problems, add unique value and finally have lots of fun together.
---- Ulrich Homann,Distinguished Architect, Microsoft

I was impressed by the vibrant interest in the community for OSS and The Apache Software Foundation, particularly by young developers.
---- Dave Fisher, Apache Incubator PMC member & mentor

Having the China Open Source Conference is a gift for the 20-year anniversary of the birth of open source from the vast number of Chinese open source fans. In 2016, OSI officially announced that Kaiyuanshe becomes an OSI affiliate member in recognizing Kaiyuanshe's contribution in promoting open source in China. Over the years, the influence of Kaiyuanshe has been flourishing, and many developers have participated & contributed to its community activities. In the future, Huawei Cloud is willing to cooperate with Kaiyuanshe further to contribute to software industry growth together.
---- Feng Xu, founder & general manager of DevCloud, Huawei Cloud


          (USA-NJ-Mount Laurel) Lead Database Software Engineer      Cache   Translate Page      
Lead Database Software Engineer Lead Database Software Engineer - Skills Required - "Apache Avro", PostgreSQL, Java, Thrift, "Protocol Buffer" If you are a Software Engineer with database development experience please read on! **Top Reasons to Work with Us** - This is the chance to be part of the team on the front-lines of the evolving cyber threat landscape - Be a part of a "got your back" team in an innovative cybersecurity R&D environment - Develop the next generation of tools and capabilities to make a real difference in the world of cyber security! **What You Will Be Doing** - Lead development of core database technologies - Ensure data platform can support program data scale and latency requirements - Evolve program data model, based on Apache Avro - Support formulation and optimization of cyber battlespace queries **What You Need for this Position** - Hands on experience developing and scaling data-centric applications - Expert developing with rational database technologies in production environments, specifically PostgreSQL - Expertise developing in Java - Working knowledge of data sterilization technologies - Experience with Agile methodologies We are actively interviewing so please apply today! Applicants must be authorized to work in the U.S. **CyberCoders, Inc is proud to be an Equal Opportunity Employer** All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, disability, protected veteran status, or any other characteristic protected by law. **Your Right to Work** – In compliance with federal law, all persons hired will be required to verify identity and eligibility to work in the United States and to complete the required employment eligibility verification document form upon hire. *Lead Database Software Engineer* *NJ-Mount Laurel* *JG9-1492765*
          (USA-VA-Arlington) Lead Database Software Engineer      Cache   Translate Page      
Lead Database Software Engineer Lead Database Software Engineer - Skills Required - APACHE AVRO, PostgreSQL, Java, Thrift, "Protocol Buffers" If you are a Software Engineer with database development experience please read on! **Top Reasons to Work with Us** - This is the chance to be part of the team on the front-lines of the evolving cyber threat landscape - Be a part of a "got your back" team in an innovative cybersecurity R&D environment - Develop the next generation of tools and capabilities to make a real difference in the world of cyber security! **What You Will Be Doing** - Lead development of core database technologies - Ensure data platform can support program data scale and latency requirements - Evolve program data model, based on Apache Avro - Support formulation and optimization of cyber battlespace queries **What You Need for this Position** - Hands on experience developing and scaling data-centric applications - Expert developing with rational database technologies in production environments, specifically PostgreSQL - Expertise developing in Java - Working knowledge of data sterilization technologies - Experience with Agile methodologies We are actively interviewing so please apply today! Applicants must be authorized to work in the U.S. **CyberCoders, Inc is proud to be an Equal Opportunity Employer** All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, disability, protected veteran status, or any other characteristic protected by law. **Your Right to Work** – In compliance with federal law, all persons hired will be required to verify identity and eligibility to work in the United States and to complete the required employment eligibility verification document form upon hire. *Lead Database Software Engineer* *VA-Arlington* *JG9-1492747*
          (USA-VA-Reston) Java Engineer      Cache   Translate Page      
Java Engineer (Mid-level to Senior) Java Engineer (Mid-level to Senior) - Skills Required - Java Servlets, Java Server Pages, Apache Tomcat, MySQL, Java 8, Hashmap, Lock, JDBC, GIT, Agile If you are a Java Engineer (Mid-level to Senior) with experience, please read on! **Top Reasons to Work with Us** Competitive salaries and benefits packages offered. As part of our team, you'll spend time solving challenging problems in a customer-centric environment with a team that always make tasks fun. This client has a 7-year history of innovation, stability, and profitability. **What You Will Be Doing** -Participate in product design reviews to provide input on functional requirements, product designs, and development schedules -Analyze implementation choices for selected algorithms -Implement real-time distributed services for SaaS solution -Integrate third-party interactions via real-time API calls **What You Need for this Position** -MS in Computer Science or Software Engineering (GPA 3.5+/4), plus three (3) years of software development experience; OR BS Degree in Computer Science or Software Engineering (GPA 3.5+/4) plus five (5) years of software development experience; -At least four (4) years of programming experience in Java, Java Servlets, and Java Server Pages with application deployment on Apache Tomcat and MySQL as the most recent development experience; at least 100,000 lines of written Java code in the past three (3) years; and at least four (4) years of real-time enterprise development experience, including the support for multiple tenants, distributed transactions, persistent queues, and thread-safe service delivery. -Must also have thorough understanding of advanced abstract data structures such as lists, queues, trees including the operations (e.g., insert, remove) with their corresponding computational complexities (e.g., big-O (1)); -Knowledge of analysis of algorithms with emphasis on worst-case complexity and time/space tradeoffs covering general-purpose sorting (e.g., selection sort), traversal (e.g., breadth-first), search (e.g., binary search), and storage; -Excellent knowledge of core Java 8, including available abstract data structures (e.g., HashMap), concurrency control mechanisms (e.g., Lock), memory management, security, object serialization and persistence via JDBC; -Source control management experience with emphasis on Git; advanced understanding of different software development models with emphasis on iterative approaches such as Agile; and experience with Linux user, basic shell programming **What's In It for You** - Vacation/PTO - Medical - Dental - Vision - Relocation - Bonus - 401k So, if you are a Java Engineer (Mid-level to Senior) with experience, please apply today! Applicants must be authorized to work in the U.S. **CyberCoders, Inc is proud to be an Equal Opportunity Employer** All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, disability, protected veteran status, or any other characteristic protected by law. **Your Right to Work** – In compliance with federal law, all persons hired will be required to verify identity and eligibility to work in the United States and to complete the required employment eligibility verification document form upon hire. *Java Engineer* *VA-Reston* *JG2-1493046*
          VirtualHostX 8.4.1      Cache   Translate Page      
VirtualHostX 8.4.1
VirtualHostX 8.4.1 | macOS | 15 mb

VirtualHostX is the easiest way to build and test multiple websites on your Mac. It's the perfect solution for web designers working on more than one project at a time. With VirtualHostX you can easily create and manage unlimited Apache websites with just a few clicks.


          SENIOR SOFTWARE ENGINEER – WEB DEVELOPER - West Virginia Radio Corporation - Morgantown, WV      Cache   Translate Page      
PHP, Apache, MySQL, WordPress, JavaScript, jQuery, JSON, REST, XML, RSS, HTML5, CSS3, Objective-C, Java, HLS Streaming, CDNs, Load Balancing....
From West Virginia Radio Corporation - Tue, 18 Sep 2018 10:09:25 GMT - View all Morgantown, WV jobs
          Urgent Opening for Application / IIS, Apache, Jboss, wildfly, Qlik View - Ace Computer Services - Noida, Uttar Pradesh      Cache   Translate Page      
*Job Summary* Dear Candidate, Greetings from ACE Computer Services!! ACE Computer services is a leading brand in HR Outsourcing industry, having PAN India...
From Indeed - Fri, 26 Oct 2018 09:57:30 GMT - View all Noida, Uttar Pradesh jobs
          SOA Application Developer II - Zantech - Kearneysville, WV      Cache   Translate Page      
Experience with messaging middleware products such as Red Hat JBoss A-MQ, Apache ActiveMQ, Apache Camel is strongly preferred....
From Zantech - Fri, 28 Sep 2018 05:35:22 GMT - View all Kearneysville, WV jobs
          Intermediate Full Stack Software Developer - LotLinx - Winnipeg, MB      Cache   Translate Page      
LAMP (Linux, Apache, MySQL, PHP), 4J’s (jQuery, JavaScript, Java, JSP), AWS Cloud (EC2, S3, RDS, Route53, ElastiCache), MVC architecture, Agile Development, SVN...
From LotLinx - Fri, 02 Nov 2018 20:10:50 GMT - View all Winnipeg, MB jobs
          BWW Review: Zao Theatre Presents THE HUNCHBACK OF NOTRE DAME      Cache   Translate Page      

Following his inspired direction of THE ELEPHANT MAN, Zao Theatre's artistic director, Mickey Bryce, returns to themes that address the essence of humanity and expose societal hypocrisy in an equally moving and uplifting production of THE HUNCHBACK OF NOTRE DAME.

The musical adaptation of Victor Hugo's 1831 classic about life, love, and justice in medieval Paris is a far and predictable cry from the depth and drama of the original. After all, this current iteration is a descendant of a 1996 Disney animated feature, reworked by the prolific Peter Parnell, and embellished with additional songs by composers Alan Menken and Stephen Schwartz. The story line and areas of emphasis have gone through the Hollywood/Broadway mill, but the result is blessedly still a stirring allegory about human kindness and the potential for redemptive action.

Bryce has managed to transform the atmosphere of his spacious Church hall into a cathedral-like setting, accentuated by the harmonic chants of a 47-member cast and the glorious music of C. J. O'Hara's orchestra. There is a discernible rise from one's seat as The Bells of Notre Dame echo throughout the great room and the cast congregates onstage to open the show.

The story is bookended by two incisive and complementary riddles: A front end teaser, "Who is the monster and who is the man?" A final penetrating invitation to reflection, "What makes a monster and what makes a man?" The answers to both questions will become apparent, although there will likely be room for ample contemplation.

The figures around whom these probing questions revolve are Quasimodo (Nicholas Hambruch), the hunchbacked bell ringer of Notre-Dame de Paris, and Claude Frollo (Andrew McKee), the Cathedral's self-righteous and sanctimonious archdeacon, whose guardianship of Quasimodo is more akin to that of a jailer than a protector.

Frollo, having adopted the wretched offspring of his late brother Jehan (Robert Andrews), has condemned Quasimodo to a life of toil, hiding his deformity from the rowdy crowds, and relegating him to a bell tower that is more cruel confinement than sanctuary.

If this be a story of hope and salvation, then Esmeralda (Taryn Cantrell) is the saving Grace ~ the gypsy girl, herself and her tribe objects of derision, who sees, beyond Quasimodo's deformity, a gentle man deserving of human respect. She comes to the hunchback's aid after he's been abused by the crowds that have assembled for the Feast of Fools. (He will return the favor.) Noticed amorously by Captain Phoebus (Zac Bushman), a member of the Cathedral Guard, and lustfully by Frollo, Esmeralda is caught in a web that she will struggle unsuccessfully to survive.

Supplementing the fine performances of the central characters are vigorous turns by Bryan Stewart as Clopin, the bold and flashy lord of misrule and the master of the musical's ceremonies, and the octet of agile actors portraying the animated gargoyles with whom Quasimodo communes and who propel him to a final and dramatic act of justice.

Hunchback is an ideal vehicle for addressing issues of the heart and soul and of faith and honor that are as relevant today as they have ever been. (Is it merely coincidence that lately local theatre has been presenting plays about John Merrick, Frankenstein or Jekyll and Hyde. With all the demonization of the other that contaminates public discourse these days, theatre has thankfully entered the fray, as it always does, to challenge these toxic impulses and remind us of our humanity.)

Director/Pastor Bryce has delivered another wholesome and spiritually elevating thought piece for audiences to digest. Kudos!

THE HUNCHBACK OF NOTRE DAME runs through November 17th at Centerstage Church in Apache Junction, AZ.

Poster credit to Zao Theatre


          Apache Struts Vulnerability Would Allow System Takeover      Cache   Translate Page      
none
          Reverse proxying with Nginx: [What I want to do] (1) https://example.com:3...      Cache   Translate Page      
About setting up a reverse proxy with Nginx. [What I want to do] (1) Have Nginx receive requests to https://example.com:3030/ (2) Have Nginx convert them to HTTP and pass them on to Apache. The idea is roughly: (client) → https://example.com:3030/ → (converted by Nginx) → http://example.com:3030/ → (Apache). How should I configure things to achieve the above?
          spark scala nlp specialities to optimize my code      Cache   Translate Page      
Hi, I wrote an Apache Spark Scala program to find tf-idf over a corpus. It hangs at a point near the group-by statement, and I want someone who can fix that issue. I have a list of articles stored in S3 as Parquet, so first I'm reading it as a dataframe and creating n-grams and keeping it in one hand... (Budget: $2 - $8 USD, Jobs: Natural Language, Scala, Spark)
          American Apaches land in Croatia (PHOTO)      Cache   Translate Page      
A group of seven AC 64 Apache combat helicopters landed in Croatia on Wednesday. They arrived at the runway of the Pleso air force base as part of bilateral military cooperation. These are AH-64 Apache combat helicopters that use modern technology and a wide range of weapons in various…
          Senior Big Data Architect – PSJH - Providence Health & Services - Renton, WA      Cache   Translate Page      
Experience Architecting Big Data platforms using Apache Hadoop, Cloudera, Hortonworks and MapR distributions. Providence is calling a Senior Big Data Architect ...
From Providence Health & Services - Sat, 25 Aug 2018 20:01:08 GMT - View all Renton, WA jobs
          She Could Become The First Native American Congresswoman | #CrashTheParty      Cache   Translate Page      


HuffPost: She Could Become The First Native American Congresswoman | #CrashTheParty

Meet Deb Haaland, Democratic candidate for New Mexico’s first district and, if elected, the nation’s first Native American woman in Congress. Subscribe to HuffPost today: http://goo.gl/xW6HG

POP QUIZ

1. USCIS 100:21. The House of Representatives has how many voting members?
▪ four hundred thirty-five (435)

2. USCIS 100:22. We elect a U.S. Representative for how many years?
▪ two (2)

3. USCIS 100:23. Name your U.S. Representative.
▪ Answers will vary. (house.gov)

4. USCIS 100:25. Why do some states have more Representatives than other states?
▪ (because of) the state’s population
▪ (because) they have more people
▪ (because) some states have more people
* Deb Haaland is running for the NM-1 congressional seat

5. USCIS 100:48. There are four amendments to the Constitution about who can vote. Describe one of them.
▪ Citizens eighteen (18) and older (can vote).
▪ You don’t have to pay (a poll tax) to vote.
▪ Any citizen can vote. (Women and men can vote.)
▪ A male citizen of any race (can vote).

6. USCIS 100:55. What are two ways that Americans can participate in their democracy?
▪ vote
▪ join a political party
▪ help with a campaign
▪ join a civic group
▪ join a community group
▪ give an elected official your opinion on an issue
▪ call Senators and Representatives
▪ publicly support or oppose an issue or policy
▪ run for office
▪ write to a newspaper

7. USCIS 100:59. Who lived in America before the Europeans arrived?
▪ American Indians
▪ Native Americans

8. USCIS 100:77. What did Susan B. Anthony do?
▪ fought for women’s rights

▪ fought for civil rights

9. USCIS 100:87. Name one American Indian tribe in the United States.
[USCIS Officers will be supplied with a list of federally recognized American Indian tribes. Note: Deb Haaland is a member of the Laguna tribe]
▪ Cherokee
▪ Navajo
▪ Sioux
▪ Chippewa
▪ Choctaw
▪ Pueblo
▪ Apache
▪ Iroquois
▪ Creek
▪ Blackfeet
▪ Seminole
▪ Cheyenne
▪ Arawak
▪ Shawnee
▪ Mohegan
▪ Huron
▪ Oneida
▪ Lakota
▪ Crow
▪ Teton
▪ Hopi
▪ Inuit

10. USCIS 100:93. Name one state that borders Mexico.
▪ California
▪ Arizona
▪ New Mexico
▪ Texas

Also watch:
NowThis: Voting While Native American in Montana
'For 500 years they've tried to kill us off, and for 500 years we keep coming back.' — These activists are trying to make 2018 the year Native Americans change an election.
          Personal Care Aide (PCA) - HealthCare Innovations - Apache, OK      Cache   Translate Page      
If you like caring for people and helping with their everyday needs, have dependable transportation and a great work ethic call us today for an interview!...
From Indeed - Wed, 31 Oct 2018 14:48:29 GMT - View all Apache, OK jobs
          STORE MANAGER CANDIDATE in APACHE, OK - Dollar General - Apache, OK      Cache   Translate Page      
Occasional driving/providing own transportation to make bank deposits, attend management meetings and to other Dollar General stores....
From Dollar General - Thu, 25 Oct 2018 04:32:03 GMT - View all Apache, OK jobs
          LEAD SALES ASSOCIATE-FT in APACHE, OK - Dollar General - Apache, OK      Cache   Translate Page      
Occasional or regular driving/providing own transportation to make bank deposits, attend management meetings and travel to other Dollar General stores....
From Dollar General - Mon, 27 Aug 2018 10:49:40 GMT - View all Apache, OK jobs
          What Are Apache Handler?      Cache   Translate Page      

Introduction
This feature is for cPanel and WHM version 64. One can navigate to (Home >> Advanced >> Apache Handlers) to work on this feature.

It controls how your site’s Apache web server software manages certain file types and file extensions. Apache is able to handle CGI scripts and server-parsed files. The file extensions for these files include .cgi, .pl, .plx, .ppl, .perl, and .shtml.

Apache can be configured to use an existing handler to handle a new file type. To do this, manually add the handler and extension in this interface.

Create An Apache Handler
Steps:

In the Handler text box, enter the handler name. cPanel includes the built-in handlers listed below:
default-handler — It will send the file and use Apache’s default handler for static content.
send-as-is — It will send the file with HTTP headers intact.
cgi-script — It will handle the file as a CGI…

The post What Are Apache Handler? appeared first on BuycPanel.


          Nuisance: Kills some automatic plugin updates      Cache   Translate Page      
Before I get into it, I just want to state that the issues I describe below may be server side rather than with this plugin. I say this because I decided to switch from wpfastest cache to Litespeed cache, but *only* because my hosting provider migrated me from Apache to Litespeed. I figured I might […]
          Certain Apache Struts 2 versions contain a years-old flaw, exposing a remote code execution risk      Cache   Translate Page      
On Monday (11/5) the Apache Software Foundation warned that certain versions of Struts 2 ship with a Commons FileUpload library that contains a remote code execution vulnerability, and urged users to update as soon as possible. Struts is an open-source Java web application framework, and Commons FileUpload is the file-upload mechanism built into that framework. The flaw, tracked as CVE-2016-1000031, exists in Commons FileUpload 1.3.2 and was discovered and patched back in 2016.
          Update phpmyadmin on ubuntu 18.x      Cache   Translate Page      
I need help updating phpMyAdmin from 4.6.6 to the latest version on Ubuntu 18.x. I can grant SSH access. This work is not urgent (Budget: $10 - $30 USD, Jobs: Apache, Linux, MySQL, System Admin, Ubuntu)
          American AH-64 Apache combat helicopters at Pleso      Cache   Translate Page      

A group of seven AC 64 Apache combat helicopters landed on Wednesday at the Croatian Air Force's 91st Air Base at Pleso as part of bilateral military cooperation, and they were inspected by Defence Minister Damir Krstičević and the US Ambassador to Croatia, Robert Kohorst. After touring the helicopters, Krstičević said that the US is a key partner and ally […]

The article American AH-64 Apache combat helicopters at Pleso was published on Kamenjar.


          Why you should use Gandiva for Apache Arrow      Cache   Translate Page      

Over the past three years Apache Arrow has exploded in popularity across a range of different open source communities. In the Python community alone, Arrow is being downloaded more than 500,000 times a month. The Arrow project is both a specification for how to represent data in a highly efficient way for in-memory analytics, as well as a series of libraries in a dozen languages for operating on the Arrow columnar format.

In the same way that most automobile manufacturers OEM their transmissions instead of designing and building their own, Arrow provides an optimal way for projects to manage and operate on data in-memory for diverse analytical workloads, including machine learning, artificial intelligence, data frames, and SQL engines.

          [ZT] President Trump and American politics, part 2      Cache   Translate Page      
Replies: 3745 Last poster: n3othebest at 07-11-2018 14:19 Topic is Open Apache4u wrote on Wednesday 7 November 2018 @ 13:40: [...] And the fact that the economy is doing well also makes these Democratic victories fairly special. That depends a bit on which statistics you look at to decide whether the economy is doing well.
          Daily Kos Elections 2018 election night liveblog thread #13      Cache   Translate Page      

Follow: Daily Kos Elections on Twitter

Results: CNN, HuffPost, New York Times, Politico

Guides: Poll Closing Times, Hour-by-Hour Guide, Ballot Measures, Legislative Chambers, County Benchmarks

Cheat Sheet: Key Race Tracker

Wednesday, Nov 7, 2018 · 3:38:04 AM +00:00 · David Nir

The early vote has been tallied in much of Arizona, where it's likely that over 60% of all ballots were cast this way. In AZ-Sen, Dem Kyrsten Sinema has a 5,000-vote lead on Republican Martha McSally, with blue Apache County not reporting yet. In AZ-02, Dem Ann Kirkpatrick is up 55-45 on Republican Lea Marquez Peterson, which would be another pickup (this is McSally's seat).

Wednesday, Nov 7, 2018 · 3:39:55 AM +00:00 · David Nir

All of these would be Dem pickups if these results hold:

IA-03 (38% in): Axne (D) 56, Young (R-Inc): 41

IL-13: Dirksen-Londrigan up 52-48 on Rodney Davis with 65% in

IL-14: Dem Lauren Underwood still up, 51.5-48.5 on Randy Hultgren with 73% in

NY-19 (24% in): Delgado (D) 54, Faso (R) 44

Wednesday, Nov 7, 2018 · 3:43:36 AM +00:00 · David Nir

OH-Gov: This one’s not looking good. Though polls suggested it was a tossup, Republican Mike DeWine has a 52-45 lead on Dem Rich Cordray with 87% reporting.
WI-Gov: Democrat Tony Evers has a small 50-48 edge on Republican Gov. Scott Walker with 59% reporting. This would be a pickup.
MN-Gov: Democrat Tim Walz is crushing Republican Jeff Johnson 59-38 with 36% reporting.

Wednesday, Nov 7, 2018 · 3:46:07 AM +00:00 · David Nir

NM-Gov: Democrat Michelle Lujan Grisham is up 55-45 with 45% reporting. This would be a pickup.
GA-Gov: Republican Brian Kemp is up 55-44 on Democrat Stacey Abrams with 64% reporting.

Wednesday, Nov 7, 2018 · 3:48:58 AM +00:00 · David Nir

Good news: Voters in Missouri have passed an amendment that would replace the state’s partisan method of redrawing legislative maps with an independent redistricting commission.

Wednesday, Nov 7, 2018 · 3:51:54 AM +00:00 · David Nir

MN-Gov: The AP has called this one for Democrat Tim Walz. A good hold for Team Blue.

Wednesday, Nov 7, 2018 · 3:54:10 AM +00:00 · David Nir

NM-Gov: The AP calls it for Democrat Michelle Lujan Grisham, who picks up another governorship for Democrats.

Wednesday, Nov 7, 2018 · 3:55:28 AM +00:00 · David Nir

NC-13: Republican Rep. Ted Budd has hung on to defeat Democrat Kathy Manning in what was a tougher shot for Dems.


          Shifting DevOps Models and Their Impact on Application Security Tools and Strate ...      Cache   Translate Page      

While application security has never been more advanced, one could argue that it has also never been more difficult. Keeping pace with the growth and evolution of applications, evaluating the endless number of available solutions, and recruiting the expertise to manage the solutions and evaluate the data are just a few of the challenges modern security teams face. The team at Threat X is comprised of engineers, developers, and security practitioners that have faced one of more of these challenges in their careers. That’s what fuels our passion every day.

On this note, I am writing a multi-article series that addresses some of the key trends and challenges facing application security today and how security teams can adapt. In the first article , I highlight the shift in application development and integration, and the impact on security teams. In this article, I will dive into how new DevOps models are affecting security strategies and ushering in a new age of security tools.

The Age of DevOps and Continuous Delivery

Modern applications are developed and updated faster than ever before. Highly agile DevOps and Continuous Integration / Continuous Delivery (CICD) models are quickly becoming the norm, with many teams releasing an update or more every day. While these processes are highly beneficial to the organization, the speed of change certainly introduces new security challenges.

The good news is that many DevOps teams are integrating security into the development process itself, helping to deliver more secure code. However, even the most secure software needs protection from threats, and this protection phase is where the constantly evolving nature of DevOps can make things tricky. If tuning signatures and rules was painful in the old model, it becomes nearly impossible when the application itself can be updated on a daily basis.

In this constantly changing environment, security tools need to keep up with changes in real-time without impacting the development team or slowing the process down. To support the frequent, real-time code deployments, an effective WAF is one that can automatically keep up with these changes and protect against any newly introduced security vulnerabilities without having to update signatures or have any manual intervention. In this manner, security and dev are working cohesively. Combine this approach with automated, dynamic testing of new code that includes forced rollback if issues are detected, and this enables a truly secure CI/CD process.

Securing the Microservice Architecture

The internal structure of applications has also changed from monolithic architectures to containerized microservices that make applications far more modular and easier to update. These microservices are often connected via a service mesh, and securing the east-west RESTful API calls between microservices can be a challenge for WAFs.

If a WAF is not containerized, then it will be virtually impossible to provide protection down to the microservice level. On the other hand, if the WAF is built-in to the service itself, such as via a plugin within NGINX or Apache, then simple rules and intelligence updates to the WAF can bring down the application in order to support the update. As a result, security teams need to ensure that security makes it to the level of the microservices without getting in the way of the application itself.

As containerization becomes more the standard, traditional approaches to the WAF and AppSec are becoming obsolete. Even legacy applications are shifting to be deployed as containers. And this is the reality that security teams must face. Security must be delivered to microservices, APIs, or any other way that application functionality can be accessed. If you are interested in this topic, check out our recent blog that dives into how to scale security in microservice architectures .

These are just some of the important ways that changes in the application landscape are affecting security. And as always, as applications and technology evolve, security will likewise need to adapt. In the next segment of this series, we will shift our focus to changes in the threat landscape and what it means for our defenses. We’ll take a look at the many types of threats facing modern applications from the OWASP Top 10 to malicious automation and how to use threat-facing techniques to find, verify, and stop these threats before they do damage.



          Dead Men - Das Gold der Apachen      Cache   Translate Page      
Gold and power are all that count in the merciless Wild West. That, and the ability to survive. When gold-mine owner Roy Struther is brutally murdered by Cole Roberts and his gang, his son Jesse is also taken prisoner. Only by a lucky twist of fate does Jesse manage to escape, finding shelter with the Apache tribe. Now, many years later, Jesse is still driven by the desire to avenge his father's death. This is where his campaign of revenge begins. He knows that he is the ...
          Apache Struts users have to update FileUpload library to fix years-old flaws      Cache   Translate Page      

Apache Struts users have to update the Commons FileUpload library in Struts 2, which is affected by two vulnerabilities. Apache Struts developers have addressed two vulnerabilities in the Commons FileUpload library in Struts 2; the flaws can be exploited for remote code execution and denial-of-service (DoS) attacks. “Apache today released an advisory, urging users who run Apache Struts 2.3.x to […]

The post Apache Struts users have to update FileUpload library to fix years-old flaws appeared first on Security Affairs.


          ASP.NET, Apache e Mono      Cache   Translate Page      

Some advice on how to run applications developed with the .NET framework, possibly using Mono, while taking advantage of the capabilities of the Apache web server.

Read ASP.NET, Apache e Mono


          LXer: How to install Hadoop on Ubuntu 18.04 Bionic Beaver Linux      Cache   Translate Page      
Apache Hadoop is an open source framework used for distributed storage as well as distributed processing of big data on clusters of computers that run on commodity hardware. Hadoop stores data in the Hadoop Distributed File System (HDFS) and the processing of these data is done using MapReduce. YARN provides an API for requesting and allocating resources in the Hadoop cluster.
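To give a feel for the storage half of that stack, here is a hedged Java sketch using the HDFS FileSystem API (the NameNode address and the paths are made-up placeholders, and the hadoop-client library is assumed to be on the classpath):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsQuickStart {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        conf.set("fs.defaultFS", "hdfs://localhost:9000");   // placeholder NameNode address
        try (FileSystem fs = FileSystem.get(conf)) {
            Path dir = new Path("/user/demo");                // placeholder HDFS directory
            if (!fs.exists(dir)) {
                fs.mkdirs(dir);                               // create the directory in HDFS
            }
            // Upload a local file into the distributed file system.
            fs.copyFromLocalFile(new Path("/tmp/sample.txt"), new Path("/user/demo/sample.txt"));
        }
    }
}

A MapReduce job or a YARN application would then read its input from HDFS paths like these rather than from the local disk.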
          AH-1 Viper Cobra Ops - helicopter flight simulator v1.0.2      Cache   Translate Page      
Realistic flight simulator of UH-60 BlackHawk and Carrier operations
Realistic flight simulator of the AH-1Z Cobra combat helicopter. You are to complete combat missions with real scenarios. Viper Cobra Operations offers excellent console-like graphics and is fully optimised for Samsung and many other mobile devices.
No more looking for a good helicopter flight game... become a pilot right now.

The AH-1 Cobra is an attack helicopter used in many countries. It was developed using the engine, transmission and rotor system of the Bell's UH-1 Iroquois. The AH-1 is also referred to as the HueyCobra or Snake.

The AH-1 has been replaced by the AH-64 Apache in Army service. Upgraded versions continue to fly with the militaries of several other nations. The AH-1 twin engine versions remain in service with United States Marine Corps (USMC) as the service's primary attack helicopter.

Don't forget to play our other games like: Chinook Ops, Carrier Ops, Mi-24 Hind, V-22 Osprey Operations and more helicopter simulators.

FEATURES:

- fully 3D detailed virtual cockpit (pilot and gunner)
- helmet mount HUD
- guided Hell-fire missiles
- Hydra rockets
- gunner FLIR view control with thermal BHOT/WHOT modes and DTV
- advanced flight physics and controls
- extremely detailed environment with destroyable vehicles and buildings
- perfect performance
AH-1 Viper Cobra Ops - helicopter flight simulator | PROMA CB s.r.o. | Action | v1.0.2 | November 6, 2018 | 4.3 and up | 86.24M | Rated for 18+ | 100,000 - 200,000 installs | What's new: black screen fix on some devices / better performance / new devices support | https://play.google.com/store/apps/details?id=com.vipercobraops | DOWNLOAD APK
          Exercise Wallaby 2018 - Volga Dnepr Antonov AN-124 RA-82044 at Rockhampton Airport to take RSAF Apache Helicopters Home      Cache   Translate Page      

On Sunday 4 November, Volga-Dnepr Airlines Antonov AN-124-100 RA-82044 made a spectacular arrival onto Runway 33 at Rockhampton Airport.  It arrived as "VDA1119" direct from Dhaka, Bangladesh. 





...

          (USA-CA-Sunnyvale) Staff Systems Engineer, Linux Server Platform      Cache   Translate Page      
LinkedIn was built to help professionals achieve more in their careers, and every day millions of people use our products to make connections, discover opportunities and gain insights. Our global reach means we get to make a direct impact on the world’s workforce in ways no other company can. We’re much more than a digital resume – we transform lives through innovative products and technology. Searching for your dream job? At LinkedIn, we strive to help our employees find passion and purpose. Join us in changing the way the world works. Responsibilities Participation in systems engineering duties for the Cloud Platforms Engineering Team, including management of existing and deploying new Linux (RHEL) Server systems, provide overall support and automation of the Linux server platform and related LinuxBased platform Services, including troubleshooting issues, defining disaster recovery plans, establish procedures and documentation. Build and maintain Red Hat Satellite infrastructure as well as automation around Linux host configuration and software package creation for various applications. Develop automation, mostly in Python, for systems administration, deployment and configuration of Linux servers and developer desktops. Act as a Staff resource to lead the configuration and lifecycle of the Linux Server OS environment including automation and customization. Key contributor on a DevOps-oriented team to facilitate the provisioning of Customer Application Servers and continuous lifecycle ofLinux Server-based applications running in a Hybrid Cloud infrastructure. Responsible for designing, implementing and automating our build, release, deploy, monitoring and configuration processes. Work collaboratively with peers in the team, and perform cloud and cross-platform interoperability tasks. Participate in a 12x7 on-call engineer rotation supporting our core services and handle escalations from Operations Team. Will handle escalations from the US and India Operations Teams. Participate in Tier 3 escalation issues and on-call rotation. Work with business units to translate needs into technical requirements to design, implement, and support applications utilizing recommended best practices. Basic Qualifications -B. . /B. . in a technical field, or equivalent practical experience. -5+ years in IT with Linux experience , specifically related to management of Linux Server systems hosted in a virtual environment -3+ years maintaining Red Hat Satellite or other package/systems management infrastructure -3+ years developing scripts in Python and other automation technologies -Experience supporting Linux server systems in a heterogenous environment with Windows, Mac and Linux clients -Experience creating and maintaining gold OS images for deployment in a virtual environment -Experience securing Linux OS, Satellite Servers, and Linux Applications/Services -Experience working with global teams across multiple time zones -Experience in mentoring and guiding peers in technical skills and development -Experience participating in code review Preferred Qualifications -3+ Years in a Devops role with experience in configuration management tools like Ansible, Puppet, Chef or Salt , experience in automation software like Jenkins and building various stage of the CI/CD pipeline including test automation. -8+ years in IT with Linux experience, 5+ years specifically in management of Linux Server systems hosted on Azure -Experience configuring and troubleshooting n-tier applications utilizing a least privilege model. 
-Experience with cloud and hybrid compute architecture including VMWare ESXi, Microsoft Azure. -Experience participating in IT compliance audits (PCI, SOX, GDPR, etc) -Experience in scripting with Bash, Python and use/creation of APIs -Familiarity with Apache-based services such as Apache ATS, Tomcat as well as creating services using frameworks like Flask.
          PHP Developer      Cache   Translate Page      
/Apache administration. Nice to have: CakePHP or other modern PHP frameworks; Google Webmaster Tools. No C2C. For immediate
          Baptist Churches - The Church at Queen Valley - Queen Valley, AZ      Cache   Translate Page      
Queen Valley is located 40 miles east of Phoenix. It was developed as a retirement community, centered around a golf course, in 1975. The Church at Queen Valley began as a mission of the First Baptist Church of Apache Junction. The first meeting was held January 14, 1975. Construction of the church building took place in 1978. The church continues to support an active congregation.
http://queenvalleybaptistchurch.com/scrapbook---a-peak-into-the.html



          Example: httpClient      Cache   Translate Page      
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import org.apache.http.HttpEntity;
import org.apache.http.HttpResponse;
import org.apache.http.NameValuePair;
import org.apache.http.client.HttpClient;
import org.apache....
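The snippet above is cut off after its imports. As a hedged completion (the target URL and form fields below are placeholders; it sticks to the Apache HttpClient 4.x API that the imports point at), a minimal POST request could look like this:

import java.util.ArrayList;
import java.util.List;
import org.apache.http.HttpEntity;
import org.apache.http.HttpResponse;
import org.apache.http.NameValuePair;
import org.apache.http.client.HttpClient;
import org.apache.http.client.entity.UrlEncodedFormEntity;
import org.apache.http.client.methods.HttpPost;
import org.apache.http.impl.client.HttpClients;
import org.apache.http.message.BasicNameValuePair;
import org.apache.http.util.EntityUtils;

public class HttpClientPostExample {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClients.createDefault();
        HttpPost post = new HttpPost("https://example.com/login");   // placeholder URL
        List<NameValuePair> params = new ArrayList<>();
        params.add(new BasicNameValuePair("user", "demo"));          // placeholder form fields
        params.add(new BasicNameValuePair("password", "secret"));
        post.setEntity(new UrlEncodedFormEntity(params, "UTF-8"));
        HttpResponse response = client.execute(post);                // send the request
        HttpEntity entity = response.getEntity();
        System.out.println(response.getStatusLine());                // e.g. HTTP/1.1 200 OK
        System.out.println(EntityUtils.toString(entity, "UTF-8"));   // response body as a String
    }
}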
          Kafka docs: configuration options, translated      Cache   Translate Page      
Guiding questions: 1. What does broker.id do? 2. What does max.message.bytes mean? 3. What is group.id used to identify? Source: http://kafka.apache.org/documentation.html#configuration 3. Configuration options Kafka configures properties as key-value pairs in its configuration files. These values can be supplied from a file or programmatically. 3.1 Broker Configs The essential configs are as follows: -b...
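As a small illustration of the "from a file or programmatically" point above (a hedged sketch: the broker address and group name are placeholders, and the standard Apache Kafka Java client is assumed to be on the classpath), broker-side keys such as broker.id and max.message.bytes normally live in config/server.properties, while a client sets its own keys, such as the consumer's group.id, in code:

import java.util.Properties;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class ConsumerConfigExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");   // placeholder broker address
        props.put("group.id", "demo-group");                 // identifies this consumer's group
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
        // broker.id and max.message.bytes are broker-side settings configured in
        // config/server.properties (e.g. broker.id=0), not on the client.
        consumer.close();
    }
}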
          LF Commerce, an open source ecommerce dashboard. ReactJS + ExpressJS      Cache   Translate Page      
LF Commerce

An ecommerce dashboard written in ReactJS + ExpressJS.

Test account

test@test.com

123

Installation
Yarn: yarn install
NPM: npm install
How to run this?
Yarn: yarn client
NPM: npm run client
Unit Test

For every main directory (components, containers etc.), there should be a __tests__ directory for all unit test cases.

yarn test [test_directory]
How to contribute to this project?

Your contribution is appreciated. For the purpose of having good project management, I encourage you to understand the project structure and way of working before you start to contribute to this project.

├── client                  # The web frontend written in ReactJS
│   ├── public              # Static public assets and uploads
│   ├── src                 # ReactJS source code
│   │   ├── actions         # Actions and Action creators of Redux
│   │   ├── apis            # Files for REST APIs
│   │   │   ├── mocks       # Mocked API response
│   │   ├── components      # React components
│   │   |   ├── __tests__   # Unit test for components
│   │   ├── containers      # React containers
│   │   |   ├── __tests__   # Unit test for containers
│   │   ├── reducers        # React reducers
│   │   |   ├── __tests__   # Unit test for reducers
│   │   ├── sagas           # Redux saga files
│   │   |   ├── __tests__   # Unit test for sagas
│   │   ├── translations    # All language translation .json files
│   │   └── App.css         # Your customized styles should be added here
│   │   └── App.js          # ** Where React webapp routes configured.
│   │   └── index.js        # React webapp start point
└── .travis.yml             # Travis CI config file
└── .eslintrc.json          # **Don't change settings here.
└── package.json            # All project dependencies
└── app.js                  # Restful APIs written in ExpressJS
└── README.md               # **Don't change contents here.

1. Always work on your own feature or bugfix branch.

You will need to follow the naming convention if it's a new feature: feature/xxx-xxx-xx

or fix/xxx-xxx-xx if it's a bug or other type of fixing branch.

2. Always run eslint

Before creating a PR, you should run:

yarn lint:client

to make sure all formatting or other issues have been properly fixed.

... TBC

License

LF Commerce is Apache-2.0 licensed.


          problems with the new intel last driver Version: 25.20.100.6373      Cache   Translate Page      

Hi Intel, I was trying to install the new Intel graphics driver, but it won't let me install it; I get an error.

I also noticed that on the website you have 2 drivers that say they are 64-bit but are actually 32-bit; you need to fix that. I think that is the problem, and that is why it won't let me install the latest driver on my Windows 10 1809 64-bit.

My gaming laptop is an MSI GE72MVR 7RG Apache Pro.

CPU: i7-7700HQ

I leave everything here so you can see.

Another bug is that your Intel support assistant doesn't scan the computer's drivers correctly; most of the time it fails to install the driver using the Intel driver support assistant, or you get an error.

Try to make a new Intel support assistant that scans the whole computer to find the correct model of each CPU, and scans CPU performance, CPU drops, resolutions and 4K performance, so you can improve your drivers for better graphics, engine stability, the best performance, and display/monitor etc.

Just helping; I hope you can fix this.


          Dropsolid: Dropsolid at Drupal Europe      Cache   Translate Page      
Dropsolid-booth at Drupal Europe

Drupal Europe

dropsolid8
Drupalcon
Drupal conferenties
general Drupal
Drupal

Last September Dropsolid sponsored and attended Drupal Europe. Compared to the North American conferences, getting Europeans to move to another location is challenging, certainly when there are many competing conferences of such high quality, such as Drupalcamps, Drupal Dev Days, Frontend United, Drupalaton, Drupaljam, and Drupal Business Days. I'm happy for the team that they succeeded in making Drupal Europe profitable; this is a huge accomplishment and it also sends a strong signal to the market!

Knowing these tendencies, it was amazing to see that there is a huge market fit for the conference that Drupal Europe filled in. It is also a great sign for Drupal as a base technology and for the growth of Drupal. Hence, for Dropsolid it was a must to attend, help and sponsor such an event. Not only because it helps us get visibility in the developer community, but also to connect with the latest technologies surrounding the Drupal ecosystem.

The shift to decoupled projects is a noticeable one for Dropsolid, and even the Dropsolid platform is a decoupled Drupal project using Angular as our frontend. Next to that, we had a demo at our booth that showed a web VR environment in our Oculus Rift where content came from a Drupal 8 application.

 

People trying our VR-demo at Drupal Europe

 

On top of that, Drupal Europe was so important to us that our CTO helped the content team by being a volunteer and selecting the sessions that were related to Devops & Infrastructure. Nick has been closely involved in this area and we're glad to donate his time to help curate and select qualitative sessions for Drupal Europe.

None of this would have been possible without the support of our own Government who supports companies like Dropsolid to be present at these international conferences. Even though Drupal Europe is a new concept, it was seen and accepted as a niche conference that allows companies like Dropsolid to get brand awareness and knowledge outside of Belgium. We thank them for this support!

 

(Flanders Investment and Trade logo)

 

From Nick: “One of the most interesting sessions for me was the keynote about the “Future of the open web and open source”. The panel included, next to Dries, Barb Palser from Google, DB Hurley from Mautic and Heather Burns. From what we gathered Matt Mullenweg was also supposed to be there but he wasn’t present. Too bad, as I was hoping to see such a collaboration and discussion. The discussion that got me the most is the “creepifying” of our personal data and how this could be reversed. How can one gain control over access to your own data, and how can one revoke such access. Just imagine how many companies have your personal name and email, and how technology could disrupt such a world where an individual controls what is theirs. I recommend watching the keynote in any case!”

 

 

We’ve also seen what Drupal.org could look like with the announced integration with GitLab. I can’t recall being more excited when it comes to personal maintenance pain, with in-line editing of code being one of the most amazing improvements. More explanation can be found at https://dri.es/state-of-drupal-presentation-september-2018.

 

 

From Nick: 
“Another session that really caught our eye and is worthy of a completely separate blogpost is the session of Markus Kalkbrenner about Advanced Solr. Perhaps to give you some context, I’ve been working with Solr for more than 9 years. I can prove it with a commit even!  https://cgit.drupalcode.org/apachesolr_ubercart/commit/?id=b950e78. This session was mind blowing. Markus used very advanced concepts from which I hardly knew the existence of, let alone found an application for it. 

One of the use cases is a per-user sort based on the favorites of a user. The example Markus used was a recipe site where you can rate recipes. Obviously you could sort on the average rating, but what if you want to sort the recipes by “your” rating? This might seem trivial, but it is a very hard problem to solve, as you have to normalize a dataset in Solr, which is by default a denormalized dataset.

Now, what if you want to use this data to get personalized recommendations. This means we have to learn about the user and use this data on the fly to get these recommendations based on the votes the user applied to recipes. Watch how this work in the recording of Markus and be prepared to have your mind blown.”

 

 

There were a lot of other interesting sessions and most of them had recordings and their details can be found and viewed at https://www.drupaleurope.org/program/schedule. If you are interested in the future of the web and how Drupal plays an important role in this we suggest you take a look. If you are more into meeting people in real-time and being an active listener there is Drupalcamp Ghent (http://drupalcamp.be) at the 23rd and the 24th of November. Dropsolid is also a proud sponsor of this event.

And an additional tip: Markus’s session will also be presented there ;-)


          Un bolsillo lleno de besos by Penn, Audrey, 1947- author.      Cache   Translate Page      
"Este tierno cuento, continuacio´n del cla´sico Un beso en mi mano de Audrey Penn, libro de gran venta segu´n el New York Times, brinda a los padres otra historia de amor y seguridad para compartir con sus hijos. Chester Mapache tiene un hermanito, y ese hermanito parece querer aduen~arse de su territorio. Cuando Chester ve que su mama´ le da un beso en la mano, el beso de su mano, se siente muy triste, pero la sen~ora Mapache calma sus miedos con su especial "gotas de sabiduri´a", y le deja sab
          Offer - Big data and Hadoop training in USA with Job Assistance - USA      Cache   Translate Page      
QA Training in USA based in Atlanta, GA provides online training on the fastest growing technology. Our Hadoop training includes 40+ hours of detailed explanation on the core concepts of Big data/Hadoop, real-time projects on Pig, Apache Hive, Apache HBase and practical real-time examples. Our instructors are highly qualified and are experts in the field of analytics. Our Big data/ Hadoop Online training includes course material, video recordings, and lifetime access to big data/ Hadoop course. Enrolled students get guidance on resume, interview, and job preparation. Enroll today for our Big Data/Hadoop course and kickstart your career in the most promising field of Analytics! https://www.qatraininginusa.com/courses/hadoop-big-data-online-training/ | USA +1(678)919-1990 | training@h2kinfosys.com
          Web System Administrator      Cache   Translate Page      
VA-Richmond, job summary: The Web Systems Administrator position is responsible for administration of the company's web server environment. The current environment includes Windows, Apache, IIS, Tomcat, PHP, Java, Drupal, WordPress, and MySql. This position is a member of the Web Technologies team, which supports SharePoint, SQL Reporting Services, Project Server, and the company's websites and web portals. This p
          Ukraine to put its oil and gas fields up for international auction      Cache   Translate Page      
In January 2019, Ukraine will announce its first international auction for the sale of special permits for hydrocarbon extraction.
          Connessione a Oracle con Java e Apache DbUtils      Cache   Translate Page      

Apache Commons DbUtils is a Java library for managing database operations through JDBC.

It was created to improve on Java's standard tools, and it seems to me that it succeeds.

Today we will see how to use it to connect to an Oracle database; but since it uses JDBC, we can use it with any compatible database.
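A minimal sketch of what that might look like (hedged: the JDBC URL, credentials and table below are invented for illustration, and commons-dbutils plus the Oracle JDBC driver are assumed to be on the classpath):

import java.sql.Connection;
import java.sql.DriverManager;
import java.util.List;
import java.util.Map;
import org.apache.commons.dbutils.QueryRunner;
import org.apache.commons.dbutils.handlers.MapListHandler;

public class DbUtilsOracleExample {
    public static void main(String[] args) throws Exception {
        // Placeholder connection details for an Oracle instance.
        String url = "jdbc:oracle:thin:@localhost:1521:XE";
        try (Connection conn = DriverManager.getConnection(url, "scott", "tiger")) {
            QueryRunner runner = new QueryRunner();
            // MapListHandler turns each row of the result set into a Map<String, Object>.
            List<Map<String, Object>> rows = runner.query(
                conn, "SELECT id, name FROM demo_table WHERE id > ?", new MapListHandler(), 10);
            rows.forEach(System.out::println);
        }
    }
}

The appeal of DbUtils is that the QueryRunner/ResultSetHandler pair removes the usual boilerplate of iterating a ResultSet and closing resources by hand.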


          Azure Event Hubs for Apache Kafka | Azure Friday      Cache   Translate Page      

Shubha Vijayasarathy joins Scott Hanselman to discuss Azure Event Hubs, which makes data ingestion simple, secure, and scalable. As a distributed streaming platform, Event Hubs enables you to stream your data from any source—storing and processing millions of events per second— so you can build dynamic data pipelines and respond to business challenges in real time.

With Azure Event Hubs for Apache Kafka, we're bringing together two powerful distributed streaming platforms, so you can access the breadth of Kafka ecosystem applications without having to manage servers or networks. Event Hubs for Kafka provides a Kafka endpoint so that any Kafka client running Kafka 1.0 or newer protocols can publish/subscribe events to/from Event Hubs with a simple configuration change.
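As a sketch of what that configuration change can look like (hedged: the namespace, topic name and connection string below are placeholders, and the exact property values should be checked against the Event Hubs for Kafka documentation), a standard Kafka Java producer only needs its bootstrap server and SASL settings pointed at the Event Hubs namespace:

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class EventHubsKafkaProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Placeholder namespace; Event Hubs exposes its Kafka endpoint on port 9093.
        props.put("bootstrap.servers", "mynamespace.servicebus.windows.net:9093");
        props.put("security.protocol", "SASL_SSL");
        props.put("sasl.mechanism", "PLAIN");
        // With Event Hubs, the username is the literal string "$ConnectionString"
        // and the password is the namespace connection string (placeholder here).
        props.put("sasl.jaas.config",
            "org.apache.kafka.common.security.plain.PlainLoginModule required "
            + "username=\"$ConnectionString\" "
            + "password=\"Endpoint=sb://mynamespace.servicebus.windows.net/;...\";");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("my-event-hub", "key", "hello from a Kafka client"));
        }
    }
}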


          VirtualHostX 8.4.1 – Host multiple websites on your Mac.      Cache   Translate Page      
VirtualHostX is the easiest way to host and share multiple websites on your Mac. It’s the perfect solution for web designers working on more than one project at a time. With VirtualHostX you can easily create and manage unlimited Apache websites with just a few clicks. New in Version 8.0 Built on-top of the power […]
          Server Monitoring      Cache   Translate Page      

Editor's note: this article comes from CSDN. It describes how to monitor some commonly used performance metrics when a Linux machine is used as a server and services are deployed on it.

Server Monitoring

When setting up a server, besides deploying the webapp itself, you also need to monitor the service's error information and the server's performance metrics, and notify the administrator as soon as something abnormal happens.

The server is built with Linux + Nginx-1.9.15 + Tomcat7 + Java.

Scripts check the error logs and the server performance metrics; as soon as a new error log entry appears or performance drops below a configured threshold, cloud monitoring is used to push an alarm to the cloud account.
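Because the stack above is Java-based, a tiny Java sketch of such a threshold check is shown below (a hedged illustration only: the thresholds mirror the rules of thumb given later in this article, and the step that pushes the alarm to the cloud account is left as a comment, since no specific cloud-monitoring API is named here):

import java.io.File;
import java.lang.management.ManagementFactory;
import java.lang.management.OperatingSystemMXBean;

public class BasicServerCheck {
    public static void main(String[] args) {
        OperatingSystemMXBean os = ManagementFactory.getOperatingSystemMXBean();
        int cores = Runtime.getRuntime().availableProcessors();
        double load = os.getSystemLoadAverage();      // 1-minute load average, -1 if unavailable
        double loadPerCore = load / cores;

        File root = new File("/");
        double diskUsedPct = 100.0 * (root.getTotalSpace() - root.getUsableSpace()) / root.getTotalSpace();

        // Thresholds follow the article's rules of thumb: load/cores > 5, disk use > 90%.
        if (loadPerCore > 5) {
            System.out.println("ALARM: load per core is " + loadPerCore);
        }
        if (diskUsedPct > 90) {
            System.out.println("ALARM: disk usage is " + diskUsedPct + "%");
        }
        // A real script would push these alarms to the cloud-monitoring account instead of printing.
    }
}

In practice a check like this would be scheduled (for example from cron) alongside the log checks described in the next section.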

Monitoring the running services

The error logs cover the following three areas:

Nginx error monitoring (configured in nginx.conf)
${NGINX_HOME}/logs/error.log

Tomcat error monitoring (configured in server.xml)
${TOMCAT_HOME}/logs/catalina.out

Webapp error monitoring (log4j)
${WEBAPP_HOME}/log/error

Machine performance metrics

Linux machines are generally used as servers, so when services are deployed on them, some commonly used performance metrics need to be monitored. Which metrics does that usually include? A summary follows; additions are welcome…

Metrics

1. CPU (Load): CPU utilization / load
2. Memory
3. Disk: disk space
4. Disk I/O
5. Network I/O
6. Connect Num: number of connections
7. File Handle Num: number of file handles

CPU

1.说明

机器的CPU占有率越高,说明机器处理越忙,运算型任务越多。一个任务可能不仅会有运算部分,还会有I/O(磁盘I/O与网络I/O)部分,当在处理I/O时,时间片未完其CPU也会释放,因此某个时间点的CPU占有率没有太大的意义,因此需要计算一段时间内的平均值,那么平均负载(Load Average)这个指标便能很好得对其进行表征。平均负载:它是根据一段时间内占有CPU的进程数目和等待CPU的进程数目计算出来的,其中等待CPU的进程不包括处于wait状态的进程,比如在等待I/O的进程,即指那些就绪状态的进程,运行只缺CPU这个资源。具体如何计算可以参见Linux内核代码,计算出一个数之后,然后除以CPU核数,结果:

<=3 则系统性能较好。

<=4 则系统性能可以,可以接收。

>5 则系统性能负载过重,可能会发生严重的问题,那么就需要扩容了,要么增加核,要么分布式集群。

2.查看命令

vmstat

vmstat n m

n表示每隔n秒采集一次,m表示一共采集多少次,如果m没有,那么会一直采集下去. 在终端键入 vmstat 5


服务器监控

结果各字段解释如下(这里只解释与CPU相关的):

r:表示运行队列(就是说多少个进程真的分配到CPU),当这个值超过了CPU数目,就会出现CPU瓶颈了。这个也和top的负载有关系,一般负载超过了3就比较高,超过了5就高,超过了10就不正常了,服务器的状态很危险。top的负载类似每秒的运行队列。如果运行队列过大,表示你的CPU很繁忙,一般会造成CPU使用率很高。

b:表示阻塞的进程,如在等待I/O请求。

in:每秒CPU的中断次数,包括时间中断。

cs:每秒上下文切换次数,例如我们调用系统函数,就要进行上下文切换,线程的切换,也要进程上下文切换,这个值要越小越好,太大了,要考虑调低线程或者进程的数目,例如在apache和nginx这种web服务器中,我们一般做性能测试时会进行几千并发甚至几万并发的测试,选择web服务器的进程可以由进程或者线程的峰值一直下调,压测,直到cs到一个比较小的值,这个进程和线程数就是比较合适的值了。系统调用也是,每次调用系统函数,我们的代码就会进入内核空间,导致上下文切换,这个是很耗资源,也要尽量避免频繁调用系统函数。上下文切换次数过多表示你的CPU大部分浪费在上下文切换,导致CPU干正经事的时间少了,CPU没有充分利用,是不可取的。

us:用户CPU时间占比(%),例如在做高运算的任务时,如加密解密,那么会导致us很大,这样,r也会变大,造成系统瓶颈。

sy:系统CPU时间占比(%),如果太高,表示系统调用时间长,如IO频繁操作。

id :空闲 CPU时间占比(%),一般来说,id + us + sy = 100,一般认为id是空闲CPU使用率,us是用户CPU使用率,sy是系统CPU使用率。

wt:等待IO的CPU时间。

uptime


服务器监控

17:53:46为当前时间

up 158 days, 6:23机器运行时间,时间越大说明你的机器越稳定

2 users用户连接数,而不是总用户数

oad average: 0.00, 0.00, 0.00 最近1分钟,5分钟,15分钟的系统平均负载。

将平均负载值除以核数,如果结果不大于3,那么系统性能较好,如果不大于4那么系统性能可以接受,如果大于5,那么系统性能较差。

top


服务器监控

top命令用于显示进程信息,top详细见http://www.cnblogs.com/peida/archive/2012/12/24/2831353.html

这里主要关注Cpu(s)统计那一行:

us:用户空间占用CPU的百分比

sy:内核空间占用CPU的百分比

ni:改变过优先级的进程占用CPU的百分比

id: 空闲CPU百分比

wa: IO等待占用CPU的百分比

hi:硬中断(Hardware IRQ)占用CPU的百分比

si:软中断(Software Interrupts)占用CPU的百分比

从top的结果看CPU负载情况,主要看us和sy,其中us<=70,sy<=35,us+sy<=70说明状态良好,同时可以结合idle值来看,如果id<=70 则表示IO的压力较大。也可以和uptime一样,看第一行。引用[1]

3.分析

表示系统CPU正常,主要有以下规则:

CPU利用率:us <= 70,sy <= 35,us + sy <= 70。引用[1] 上下文切换:与CPU利用率相关联,如果CPU利用率状态良好,大量的上下文切换也是可以接受的。引用[1]

可运行队列:每个处理器的可运行队列<=3个线程。
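
As a rough illustration of the load-average rule above, here is a minimal shell sketch; the per-core threshold of 3 and the mail recipient are assumptions rather than values from this article.

#!/bin/sh
# Alert when the 5-minute load average per core exceeds a threshold (assumed: 3).
THRESHOLD=3
ADMIN=admin@example.com                     # hypothetical recipient

cores=$(nproc)
load5=$(awk '{print $2}' /proc/loadavg)     # 5-minute load average
per_core=$(awk -v l="$load5" -v c="$cores" 'BEGIN {printf "%.2f", l / c}')

if [ "$(awk -v p="$per_core" -v t="$THRESHOLD" 'BEGIN {print (p > t)}')" -eq 1 ]; then
    echo "Load per core is $per_core (limit $THRESHOLD)" \
        | mail -s "CPU load alarm: $(hostname)" "$ADMIN"
fi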

Memory

1. Description

Memory is another very important indicator of how well the system runs. If a machine runs out of memory, processes will misbehave and exit. If a process leaks memory, large amounts of memory are wasted and too little remains available. Memory monitoring generally covers total (the machine's total memory), free (available memory), swap (swap size), cache (cache size) and so on.

2. Commands

vmstat

[screenshot: vmstat output]

The memory-related fields are explained below:

swpd: how much virtual memory (swap) is in use. If it is greater than 0, the machine is short of physical memory; if the cause is not a memory leak in a program, it is time to add RAM or move memory-hungry tasks to another machine. Unit: KB.

free: the amount of free physical memory. This machine has 8 GB in total and 4457612 KB free. Unit: KB.

buff: the cache Linux/Unix uses to remember what is in directories, their permissions and so on; here it takes roughly 280 MB. Unit: KB.

cache: the cache used directly to remember the files we have opened and to buffer them (this is one of the clever things about Linux/Unix: part of the free physical memory is used as a cache for files, directories and process address spaces to improve performance, and when programs need the memory, buffer/cached is reclaimed very quickly); here it takes roughly 280 MB. Unit: KB.

si: the amount read from disk into virtual memory per second. If this value is greater than 0, physical memory is insufficient or leaking; find the memory-hungry process and deal with it. This machine has plenty of memory and everything is normal. Unit: KB.

so: the amount of virtual memory written to disk per second; if this value is greater than 0, same as above. Unit: KB.

free

[screenshot: free output]

The second line is the memory information: total is the machine's total memory, used how much is in use, free how much is idle, shared the total memory shared by several processes, and buffers and cache are the disk cache sizes, corresponding to buff and cache in vmstat. All values are in MB.

The third line gives used and free for buffers and cache combined, in MB.

The fourth line gives the swap total, used and free, in MB.

The difference between used/free on the second line (mem) and on the third line (-/+ buffers/cache) is one of viewpoint. The second line is the OS view: for the OS, buffers/cached count as used, so available memory is 4353M and used memory is 3519M, which includes the kernel (OS), applications (X, oracle, etc.), buffers and cached.

The third line is the application view: for applications, buffers/cached count as available, because buffer/cached exists to speed up file reads and is reclaimed quickly when applications need the memory.

So from the application's point of view: available memory = system free memory + buffers + cached.

top

[screenshot: top output]

Only the memory-related lines matter here, i.e. the Mem and Swap lines: the total, used, free, buffers and cache for each. This confirms that buffers caches directory contents, permissions and similar information, while cache is used as the cache for swap.

cat /proc/meminfo

[screenshot: /proc/meminfo output]

The main fields:

MemTotal: total memory

MemFree: free memory

Buffers: same as buffers in the top command

Cached: same as cache in the top command

SwapTotal: total swap size

SwapFree: free swap size

3. Analysis

Memory is considered healthy when the following rules hold:

swap in (si) == 0, swap out (so) == 0

available memory / physical memory >= 30%
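
A minimal sketch of the "available memory / physical memory >= 30%" rule, computed from /proc/meminfo; the mail recipient is a placeholder.

#!/bin/sh
# Alert when (MemFree + Buffers + Cached) falls below 30% of MemTotal.
pct=$(awk '/^MemTotal:/{t=$2} /^MemFree:/{f=$2} /^Buffers:/{b=$2} /^Cached:/{c=$2}
           END {printf "%d", (f + b + c) * 100 / t}' /proc/meminfo)

if [ "$pct" -lt 30 ]; then
    echo "Available memory is only ${pct}% of physical RAM" \
        | mail -s "Memory alarm: $(hostname)" admin@example.com   # hypothetical recipient
fi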

Disk

Description

A machine's disk space is also an important metric; once usage crosses the threshold and not enough space is available, you either have to add capacity or delete some useless files.

Command

df

[screenshot: df output]

Filesystem: the name of the file system

1K-blocks: the file system size in 1K blocks

Used: space used, in KB

Available: space free, in KB

Use%: percentage used

Mounted on: the mount point

Analysis

Disk space is considered healthy when the following rule holds:

Use% <= 90%
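
A one-line sketch of that rule: print any mounted filesystem whose usage exceeds the 90% threshold.

df -P | awk 'NR > 1 && $5+0 > 90 {print $6, $5}'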

Disk I/O

Description

Disk I/O is an equally important metric; once disk I/O becomes too heavy, the running processes are doing a lot of file reading and writing with a low cache hit rate. A simple remedy is to enlarge the file cache to raise the hit rate and thereby lower I/O.

In Linux, the kernel tries to generate as many minor page faults as possible (reads from the file cache) and to avoid major page faults (reads from disk); as minor page faults accumulate the file cache gradually grows, and only when the system has little physical memory left does Linux start releasing unused pages. [1]

Commands

vmstat

[screenshot: vmstat output]

bi: blocks received per second from block devices (block devices being all the disks and other block devices on the system; the default block size is 1024 bytes).

bo: blocks sent per second to block devices; reading files, for example, makes bo greater than 0. bi and bo should normally be close to 0; otherwise I/O is too frequent and needs tuning.

iostat

[screenshot: iostat output]

The Linux line gives the machine's system information: system name, hostname, current time, system version.

The avg-cpu block gives the CPU statistics (averages):

%user: percentage of CPU used at the user level.

%nice: percentage of CPU used by nice'd operations.

%sys: percentage of CPU used at the system (kernel) level.

%iowait: percentage of CPU time spent waiting for hardware I/O.

%idle: percentage of idle CPU time.

The Device block gives the device information (two disks in the screenshot, vda and vdb):

tps: I/O requests sent to the device per second.

Blk_read/s: blocks read per second.

Blk_wrtn/s: blocks written per second.

Blk_read: total blocks read.

Blk_wrtn: total blocks written.

sar -d 1 1

[screenshot: sar -d output]

sar -d shows the disk report; 1 1 means a 1-second interval, run once.

In fact CPU, buffers, file reads and writes, the system swap area and more can all be inspected with this command using different options; see http://blog.chinaunix.net/uid-23177306-id-2531032.html for details.

The first block is the machine's system information, as with iostat.

The second block is the device I/O information for each run; here there is only one run, with two devices, dev252-0 and dev252-16:

tps: physical disk I/O operations per second. Several logical requests may be merged into one disk I/O request, and the size of a single transfer is not fixed.

rd_sec/s: sectors read per second

wr_sec/s: sectors written per second

avgrq-sz: the average size (in sectors) of the device's I/O operations

avgqu-sz: the average I/O queue length

await: the average time (in ms) per device I/O operation, including the time spent waiting in the queue and the service time

svctm: the average service time (in ms) per device I/O operation

%util: the percentage of each second spent doing I/O

If svctm is close to await, there is almost no I/O wait and disk performance is good; if await is far higher than svctm, the I/O queue wait is too long and the applications running on the system will slow down.

If %util approaches 100%, the disk is generating too many I/O requests, the I/O system is already working at full load, and the disk is saturated; there may be a bottleneck. An idle below 70% already means considerable I/O pressure, i.e. quite a lot of I/O. [1] You can also combine this with vmstat, looking at the b column (processes waiting for resources) and the wa column (percentage of CPU time taken by I/O wait; above 30% the I/O pressure is high). [1]

Analysis

Disk I/O is considered healthy when the following rule holds:

the proportion of requests waiting for I/O is <= 20%

A simple way to raise the hit rate is to enlarge the file cache: the bigger the cache, the more pages it holds in advance and the higher the hit rate.

The Linux kernel tries to generate as many minor page faults as possible (reads from the file cache) and to avoid major page faults (reads from disk); as minor page faults accumulate the file cache gradually grows, and only when the system has little physical memory left does Linux start releasing unused pages. [1]
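
A small sketch of turning the %util rule into a check. The column layout of iostat -dx differs between sysstat versions, so reading %util from the last column is an assumption; verify it against your own output.

# Print devices whose %util exceeded 90 in the second (current) iostat sample.
iostat -dx 1 2 | awk '/^Device/ {n++} n == 2 && NF > 2 && $NF+0 > 90 {print $1, $NF}'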

Network I/O

Description

If a server holds too many network connections, large numbers of packets sit in the buffers for a long time without being processed; once the buffers run short, packets are dropped. For TCP, a lost packet is retransmitted, which in turn causes a lot of retransmission; for UDP, a lost packet is not retransmitted, so the data is simply lost. A server therefore should not hold too many network connections, and this needs to be monitored.

Servers generally receive UDP and TCP requests, both as stateless connections. TCP (Transmission Control Protocol) is a protocol that provides reliable data transfer; UDP (User Datagram Protocol) is a connectionless protocol, i.e. its transfers are simple but unreliable. For the differences between the two, consult the relevant material.

Commands

netstat

UDP

(1) netstat -ludp | grep udp

[screenshot: netstat UDP output]

Proto: protocol name

Recv-Q: requests received

Send-Q: requests sent

Local Address: local address and port

Foreign Address: remote address and port

State: state

PID/Program name: process ID and process name

(2) To look further into the UDP packets received, run netstat -su

[screenshot: netstat -su output]

The circled item is the UDP packet-loss counter; if this value has increased, UDP packets are being lost, meaning the network card received them but the application layer did not process them in time.

TCP

(1) netstat

[screenshot: netstat output]

The fields have the same meaning as for UDP.

(2) Checking the retransmission rate

TCP is a reliable transfer protocol: lost packets are retransmitted, so for TCP the retransmission rate needs to be watched.

cat /proc/net/snmp | grep Tcp

[screenshot: the Tcp lines of /proc/net/snmp]

The retransmission rate is then RetransSegs/OutSegs.

Analysis

How high the UDP packet-loss rate or the TCP retransmission rate may go is for the system's developers to define; here, as a rule of thumb, neither the UDP packet-loss rate nor the TCP retransmission rate should exceed 1%/s.
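
A minimal sketch of computing RetransSegs/OutSegs over a short sampling interval. The 1% threshold is the rule above; the field positions are read from the Tcp header line of /proc/net/snmp rather than hard-coded.

#!/bin/sh
# TCP retransmission rate = delta(RetransSegs) / delta(OutSegs) over INTERVAL seconds.
INTERVAL=1

snap() {
    awk '/^Tcp:/ {
        if (!h) { for (i = 1; i <= NF; i++) col[$i] = i; h = 1 }
        else    { print $col["OutSegs"], $col["RetransSegs"] }
    }' /proc/net/snmp
}

s1=$(snap); sleep "$INTERVAL"; s2=$(snap)
out1=${s1% *}; retrans1=${s1#* }
out2=${s2% *}; retrans2=${s2#* }

rate=$(awk -v o=$((out2 - out1)) -v r=$((retrans2 - retrans1)) \
           'BEGIN { printf "%.2f", (o > 0) ? r * 100 / o : 0 }')
echo "TCP retransmission rate over ${INTERVAL}s: ${rate}%"   # alert if it exceeds 1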

Connections

Description

Every server should limit its number of simultaneous connections, but the right threshold is hard to pin down; so when monitoring shows the system under heavy load, record the connection count at that moment and use it as a reference value.

Command

netstat

netstat -na | sed -n '3,$p' |awk '{print $5}' | grep -v 127\.0\.0\.1 | grep -v 0\.0\.0\.0 | wc -l

[screenshot: command output]

Analysis

When the system is overloaded, this value serves as the server's peak reference value.

Alert if it exceeds 1024.
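
Wrapping that one-liner into an alert is straightforward; a minimal sketch, with the 1024 limit taken from the rule above.

#!/bin/sh
# Alert when the number of remote connections exceeds 1024.
count=$(netstat -na | sed -n '3,$p' | awk '{print $5}' \
        | grep -v '127\.0\.0\.1' | grep -v '0\.0\.0\.0' | wc -l)
if [ "$count" -gt 1024 ]; then
    echo "Connection count is $count (limit 1024) on $(hostname)"
fi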

File handles

Description

The file handle count is the number of files currently open. On Linux the default maximum number of handles supported by the system is 1024; it can differ between systems and can be changed, up to at most the maximum value of an unsigned integer (65535), and it can be checked with the ulimit -n command. If the number of files open at the same time exceeds this limit, errors occur, so this metric also needs to be monitored.

Command

lsof

lsof -n | awk '{print $1,$2}' | sort | uniq -c | sort -nr

[screenshot: lsof output]

The three columns are the number of open file handles, the process name and the process ID.

Analysis

Adding up the first column of every row gives num, the number of file handles the system currently has open; alert if num >= max_num*85%.
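
A minimal sketch of that comparison. The article does not say whether max_num is the per-process ulimit -n or a system-wide limit, so the system-wide /proc/sys/fs/file-max is assumed here.

#!/bin/sh
# Compare the total number of open file handles with 85% of the system-wide limit.
num=$(lsof -n | awk '{print $1, $2}' | sort | uniq -c | awk '{s += $1} END {print s}')
max_num=$(cat /proc/sys/fs/file-max)
threshold=$(awk -v m="$max_num" 'BEGIN {printf "%d", m * 0.85}')

if [ "$num" -ge "$threshold" ]; then
    echo "Open file handles: $num of $max_num on $(hostname)"
fi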

Summary of performance metrics

CPU

CPU utilisation: us <= 70, sy <= 35, us + sy <= 70.

Context switches: tied to CPU utilisation; if CPU utilisation is in a good state, a large number of context switches is acceptable.

Run queue: no more than 3 runnable threads per processor.

Memory

swap in (si) == 0, swap out (so) == 0

available memory / physical memory >= 30%

Disk

Use% <= 90%

Disk I/O

proportion of requests waiting for I/O <= 20%

Network I/O

UDP packet-loss rate and TCP retransmission rate no more than 1%/s.

Connect Num

<= 1024

File Handle Num

num/max_num <= 90%

Conclusion

Scripts check the nginx, tomcat and webapp error logs (including whether nginx and tomcat are still running) and the seven server performance metrics above; as soon as an error or a threshold violation is found, an email is sent to the administrator immediately, or the alarm is pushed to the administrator's cloud account via cloud monitoring.
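
For the "are nginx and tomcat still running" part of that check, a minimal sketch; the process names are assumptions and the alert action is left as a plain echo.

#!/bin/sh
# Alert when nginx or the Tomcat JVM is no longer running.
for proc in nginx org.apache.catalina.startup.Bootstrap; do
    if ! pgrep -f "$proc" > /dev/null; then
        echo "$proc is not running on $(hostname)"   # hand off to mail / cloud monitoring
    fi
done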


          13 Linux and open source conferences worth watching in 2019      Cache   Translate Page      

Sometimes a how-to talk can save you a week of work. A panel discussion can help you find a missing piece of your company's open source strategy. You can of course learn from books or from GitHub, but nothing beats hearing someone who has already done the work explain how they solved the very problem you are facing. The way open source projects operate means that people regularly meet and exchange ideas to build great projects (cloud-native computing, for example), and a technology you have not even heard of today may help you tomorrow.

With so many conferences in 2019, how do you choose? Some cover broad open source topics; others may be specific to your technology stack.

Here, in chronological order, are 13 of the best open source conferences of 2019 to help your career, your skills and your business.

Southern California Linux Expo (SCALE)

URL: http://www.socallinuxexpo.org/scale/16x

Dates: March 7-10, 2019

SCALE is the largest community-run open source and free software conference in North America. It offers sessions and workshops for everyone from beginners to experts.

On the more advanced side, for example, the 2018 edition had talks on microservice architecture and on getting Debian/Ubuntu applications packaged quickly. At the same time, people less familiar with Linux could attend sessions on container and virtual machine basics and on keeping Ubuntu secure. SCALE also offers sessions on less common but crucial topics; last year, for example, the well-known open source lawyer Karen Sandler ran a session on employment contracts for open source programmers.

Linux Foundation Open Source Leadership Summit

URL: https://events.linuxfoundation.org/events/open-source-leadership-summit-2019/register/

Dates: March 12-14, 2019

The Linux Foundation's Open Source Leadership Summit is an invitation-only event.

It is not a conference for developers or system administrators; it is for open source community managers and project and company leaders. The content includes high-level panels and presentations on topics such as how to assess the viability of an open source project, best practices for open source contributions, and how to handle patents, licensing and other open source intellectual property issues.

SUSECon

URL: https://www.susecon.com/

Dates: April 1-5, 2019

For those who build their IT stack around SUSE Linux Enterprise Server (SLES), SUSECon is a must. Like Red Hat, SUSE is building its own cloud stack around OpenStack, so if OpenStack interests you, keep an eye on this conference too.

You can get the news about the latest SUSE releases, and you will also find sessions on getting the most out of SUSE's features and products, such as the Ceph-based SUSE Storage 5.5, managing servers with YaST, and managing high availability on SLES.

Open Networking Summit

URL: https://events.linuxfoundation.org/events/open-networking-summit-north-america-2019/attend/

Dates: April 2-5, 2019

Does your job require you to understand 21st-century networking technologies such as software-defined networking (SDN), network functions virtualization (NFV) and related technologies? If so, do not miss this conference.

With so many SDN/NFV projects (such as OpenDaylight, Open Network Operating System, Open Platform for Network Virtualization and Tungsten Fabric), it is hard to keep track of them all. The LF Networking Fund appeared in 2018; how is it developing now? This conference will give you the answer.

If you want a deeper understanding of SDN/NFV, do not miss Open Networking Summit North America. Besides the sessions and conversations on SDN, there will also be training on NFV and on OpenFlow, the grandfather of SDN and NFV technologies.

Cloud Foundry Summit

URL: https://www.cloudfoundry.org/event/nasummit2019/

Dates: April 2-4, 2019

As IT companies move from servers to containers, from data centers to the cloud, and from legacy programs to cloud-native ones, understanding platform as a service (PaaS) is a must. Cloud Foundry is an open source PaaS cloud platform that bridges the gap between traditional software and cloud-native programs.

If your company is building its infrastructure with these tools, Cloud Foundry Summit will be a great conference. It gives you access to the movers and shakers of the project and a deep look at how Cloud Foundry works. At next year's event, look for deep dives into containers, IoT, machine learning, Node.js and serverless computing.

LinuxFest Northwest

URL: https://linuxfestnorthwest.org/conferences/2019

Dates: April 28-29, 2019

LinuxFest Northwest is the longest-running community open source conference and turns 20 in 2019. Like SCALE, it suits everyone. Last year's sessions included "An introduction to Git (even for non-developers)", "Lessons learned about hacking" and "Constraints and trade-offs in technology design".

OpenStack Summit

URL: https://www.openstack.org/summit/denver-2019/

Dates: April 29 - May 2

The author is very optimistic about OpenStack's future as an infrastructure-as-a-service cloud. But that also means OpenStack is very complex. The Summit covers not only the hot topics but also how to make the most of the many components that make up OpenStack.

Some highlights from the last event: "What we learned building a Zuul CI/CD cloud", "Kubernetes administration 101: from zero to (junior) hero" and "Google, please create a VM for me" were all humorous yet highly technical talks.

Red Hat Summit

URL: https://www.redhat.com/en/summit/2019

Dates: May 7-9, 2019

Does your company use RHEL? Or Fedora, or CentOS? If you use any of them (most enterprise IT departments answer yes), then the Red Hat Summit is a must. Besides getting the latest news about Red Hat products and services, the conference is a convenient place to get training for many Red Hat certifications, such as performance tuning, implementing microservice architectures with Java EE, and OpenStack administration. There are also many hands-on labs and expert talks.

You can also expect best practices, tools and frameworks for container and Kubernetes developers, using Ansible for DevOps in the enterprise, and tuning Red Hat Gluster Storage. In short, it is a gathering for programmers, system administrators, and the people who bridge the gap between the two.

O'Reilly's Open Source Convention (OSCON)

URL: https://conferences.oreilly.com/oscon/oscon-or

Dates: July 15-18, 2019

OSCON focuses on open source as a catalyst for business and social change. So while OSCON does explore and explain the hot languages, tools and development practices, it also places open source in its social context.

At a conference this large you can find plenty of sessions from subject-matter experts covering the ins and outs of the code behind today's hottest open source technologies. You can also expect cutting-edge topics such as blockchain, emerging languages such as Kotlin, Go and Elm, and the Spark, Mesos, Akka, Cassandra and Kafka (SMACK) stack for big data.

This conference suits everyone, from developers to CxOs to hackers and geeks.

Open Source Summit North America

URL: https://events.linuxfoundation.org/events/open-source-summit-north-america-2019/

Dates: August 21-23, 2019

The Linux Foundation's Open Source Summit is the showcase of open source. Besides keynotes from industry leaders, it includes a lot of other business and technical content.

Beyond Linux, container and cloud fundamentals, the program also covers networking, serverless, edge computing and AI, as well as training courses and hands-on workshops on technologies such as Docker and rkt containers, and Kubernetes and Prometheus container monitoring. As long as it is open source, it is covered.

In the sessions you can learn what is happening in Linux development by listening to panel discussions with top Linux kernel developers, and learn how experienced practitioners and the programmers who write the code use Linux. Whether you are new to open source or a seasoned veteran, you will find something useful.

ApacheCon

URL: https://www.apachecon.com/frontpage.html

Dates: September 10-12, 2019 (tentative)

Does your company depend on Apache software? If so, you need to attend ApacheCon. It is a small conference, perhaps 500 attendees, but if you rely heavily on Tomcat, CloudStack, Struts or almost any of the big data open source software, it is your best choice.

To get a feel for the content, look at last year's event in Montreal, which presented the upcoming Apache Tomcat releases, the state of the HTTP/2 protocol and TLS/SSL technology on the Apache web server, and how to deploy CloudStack in major migrations. For people who use Apache software, it is a very pragmatic and practical conference.

Open Source Summit Europe

URL: https://events.linuxfoundation.org/upcoming-events/

Dates: October 28-30, 2019

This conference covers every open source topic. Last year in Edinburgh, for example, there was a session on how log messages travel through a distributed data-stream pipeline, another on building a fault-tolerant custom resource controller on Kubernetes, and one on best practices for using GitHub with enterprise open source software.

KubeCon and CloudNativeCon

URL: https://events.linuxfoundation.org/events/kubecon-cloudnativecon-north-america-2019/

Dates: November 18-21, 2019

Kubernetes has become the key cloud container orchestration project. With AWS adopting Kubernetes, all the major clouds now support it. If you are using containers in the cloud, you must know Kubernetes.

Cloud-native computing technology is becoming more and more popular. As with containers and Kubernetes, cloud-native programming skills matter more and more in today's cloud-based IT world.

This conference is sharply focused on what is happening right now and on how to use the latest tools. You can expect sessions on running production-ready Kubernetes, handling container build manifests, and scaling AI workloads with GPUs and Kubernetes. The conference is aimed at people who already have some cloud-native and Kubernetes knowledge.

Original article:

https://www.hpe.com/us/en/insights/articles/the-top-linux-and-open-source-conferences-in-2019-1810.html


          DBA and data analytics      Cache   Translate Page      
Required skills: a sound understanding of database technologies; proficiency in MySQL, NoSQL and Hadoop; strong data understanding; the ability to write robust, scalable, clean, maintainable and standards-compliant code; application performance; a background in database design; Hadoop, Apache Spark and Python development tools; defining a consistent data architecture and data model. Responsibilities: can work unaided; strong...
          Senior PHP developer      Cache   Translate Page      
A highly skilled tech expert with exposure to web technologies and a product-oriented thought process. - Ideally, the person should have 3-7 years of experience. - A technical lead who can morph into the tech co-founder / CTO of the company. Responsibilities and duties: hands-on with Apache, including troubleshooting. - Able to debug a problem end-to-end if stuck. - A self-starter with strong and proven...
          PHP developer      Cache   Translate Page      
Skills we are looking for: 1-2 years of experience. - Proficiency in PHP, JavaScript, jQuery, JS libraries, MySQL, HTML, CSS, Ajax, Bootstrap. - Knowledge of MVC architecture. - Experience with RESTful APIs. - Current knowledge of Android / the Ionic framework is a plus. - Prior experience in BFSI is a definite plus. Responsibilities and duties: - hands-on with Apache including...
          Announcing the general availability of Azure Event Hubs for Apache Kafka®      Cache   Translate Page      

In today’s business environment, with the rapidly increasing volume of data and the growing pressure to respond to events in real-time, organizations need data-driven strategies to gain valuable insights faster and increase their competitive advantage. To meet these big data challenges, you need a massively scalable distributed streaming platform that supports multiple producers and consumers, connecting data streams across your organization. Apache Kafka and Azure Event Hubs provide such distributed platforms.

How is Azure Event Hubs different from Apache Kafka?

Apache Kafka and Azure Event Hubs are both designed to handle large-scale, real-time stream ingestion. Conceptually, both are distributed, partitioned, and replicated commit log services. Both use partitioned consumer models with a client-side cursor concept that provides horizontal scalability for demanding workloads.

Apache Kafka is an open-source streaming platform which is installed and run as software. Event Hubs is a fully managed service in the cloud. While Kafka has a rapidly growing, broad ecosystem and has a strong presence both on-premises and in the cloud, Event Hubs is a cloud-native, serverless solution that gives you the freedom of not having to manage servers or networks, or worry about configuring brokers.

Announcing Azure Event Hubs for Apache Kafka

We are excited to announce the general availability of Azure Event Hubs for Apache Kafka. With Azure Event Hubs for Apache Kafka, you get the best of both worlds—the ecosystem and tools of Kafka, along with Azure’s security and global scale.

This powerful new capability enables you to start streaming events from applications using the Kafka protocol directly into Event Hubs, simply by changing a connection string. Enable your existing Kafka applications, frameworks, and tools to talk to Event Hubs and benefit from the ease of a platform-as-a-service solution; you don’t need to run Zookeeper, manage, or configure your clusters.
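
To make the "change a connection string" point concrete, here is a minimal sketch of the client-side configuration involved. The namespace, event hub name and connection string are placeholders; check the official quickstarts for the exact settings for your client version.

# Assumed placeholders: <NAMESPACE>, <CONNECTION STRING>, my-event-hub.
cat > eventhubs.properties <<'EOF'
bootstrap.servers=<NAMESPACE>.servicebus.windows.net:9093
security.protocol=SASL_SSL
sasl.mechanism=PLAIN
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="$ConnectionString" password="<CONNECTION STRING>";
EOF

# Any Kafka 1.0+ client or tool can then talk to the event hub as if it were a Kafka topic:
bin/kafka-console-producer.sh --broker-list <NAMESPACE>.servicebus.windows.net:9093 \
    --topic my-event-hub --producer.config eventhubs.properties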

Event Hubs for Kafka also allows you to easily unlock the capabilities of the Kafka ecosystem. Use Kafka Connect or MirrorMaker to talk to Event Hubs without changing a line of code. Find the sample tutorials on our GitHub.

This integration not only allows you to talk to Azure Event Hubs without changing your Kafka applications, you can also leverage the powerful and unique features of Event Hubs. For example, seamlessly send data to Blob storage or Data Lake Storage for long-term retention or micro-batch processing with Event Hubs Capture. Easily scale from streaming megabytes of data to terabytes while keeping control over when and how much to scale with Auto-Inflate. Event Hubs also supports Geo Disaster-Recovery. Event Hubs is deeply-integrated with other Azure services like Azure Databricks, Azure Stream Analytics, and Azure Functions so you can unlock further analytics and processing.

Event Hubs for Kafka supports Apache Kafka 1.0 and later through the Apache Kafka Protocol which we have mapped to our native AMQP 1.0 protocol. In addition to providing compatibility with Apache Kafka, this protocol translation allows other AMQP 1.0 based applications to communicate with Kafka applications. JMS based applications can use Apache Qpid™ to send data to Kafka based consumers.

Open, interoperable, and fully managed: Azure Event Hubs for Apache Kafka.

Next steps

Get up and running in just a few clicks and integrate Event Hubs with other Azure services to unlock further analytics.

Enjoyed this blog? Follow us as we update the features list. Leave us your feedback, questions, or comments below.

Happy streaming!


          (IT) Application Support Analyst - Security      Cache   Translate Page      

Location: Irving, TX   

Software Guidance & Assistance, Inc., (SGA), is searching for an Application Support Analyst - Security for a 12+ month contract assignment with one of our premier financial services clients in Irving, TX or New Castle, DE . This role is not open to 3rd Party C2C Responsibilities : Provide Technical Application SME support for the Security Infrastructure and Applications Production and UAT Environment setup Provide L3 Technical Support during off hours/weekends as needed Be the point of contact for the Level 2 Team escalation Support implementations on weekend coverage as Needed Capacity, Performance and Stability reporting and Management Ability to read and analyze logs on UNIX, Linux systems from multiple Servers Excellent Troubleshooting skills Lead initiatives to develop/improve scripts and other programs for system monitoring and maintenance Provide Technical Mentoring Lead Infrastructure projects and Security assessment for the infrastructure associated the applications in scope Leadership Skills to Document and do knowledge transfer to other team members Be responsible to assess the risk and associated impact of all operational issues and change events and react quickly to escalate to technology management in a timely manner when required Work closely with the development team to ensure the operational requirements are met during the project transition Work with vendor support, Engineering teams, System administrators/architects on a regular basis and have the ability to track issues to closure as Level 3 SME Continuously keep the Knowledge Share repository up to date Required Skills : Solid experience working with Red Hat Enterprise Linux 6 & 7 Proficiency in Shell, Python and Perl Scripting Experience with Oracle, Good knowledge on how to interface with these databases, analyze & write complex queries, stored procedures & SQL scripts Web Server experience with such as Apache HTTP Server Application Servers (WebLogic, IBM WebSphere & Apache Tomcat) Solid understanding of security and network concepts and protocols Good understanding of PKI concepts and standards Familiarity of Compliance and risk management frameworks and methodologies (ISO27002, SDLC) OpenSSH and Tectia SSH - solid experience with how this is setup and public key authentication Good understanding of virtualization technologies and various Hypervisors including but not limited to VMware ESXi and guest VMs, LDOMs, LPARs Experience on utilizing monitoring tools like: IBM Tivoli, HP SiteScope, ESM Understanding of Programming languages such as Core Java, C/C++ Solid understanding of security and network concepts and protocols Good understanding of PKI concepts and standards Good Understanding of storage types & technologies like NAS, SAN Strong Customer and quality-focus is a must Proficiency in managing and working on Large and Complex projects Preferred Skills : Certifications (CISSP, ITIL, CCNA/CCNP, CEH, RHEL) is a plus Knowledge of security tools and hands-on experience
 
Type: Contract
Location: Irving, TX
Country: United States of America
Contact: SGA
Advertiser: Software Guidance & Assistance
Reference: NT18-03848

          (IT) Java Apache Spark Developer - Investment Bank - Contract      Cache   Translate Page      

Rate: £600 - £700 per Day   Location: London   

Java Apache Spark Developer required by my Investment Bank client in London, on a long-term contract basis. The role sits within the company's Enterprise Data Team, on a brand new Greenfield Programme. The developer will design and deliver data solutions that underpin the data services fabric of the company. The work represents a paradigm shift in the company's approach to data provisioning using Big Data technologies in a service orientated architecture. The following skills/experience is required: Strong Apache Spark skills Java Development background Impala skills Working knowledge at an enterprise level This requires involvement in the full project life cycle from the initial project technical proposal through to analysis, design, implementation and subsequent production rollout and support. This role will expose the individual to all business areas, including Front Office, settlements, custody and accounting. Rate: Up to £700/day Duration: 12 months + Location: London If you are interested in this Java Apache Spark Developer position and meet the above requirements please apply immediately.
 
Rate: £600 - £700 per Day
Type: Contract
Location: London
Country: UK
Contact: Stephen Perkins
Advertiser: Hunter Bond
Start Date: ASAP - 1 month
Reference: JS-123

          Comment on "Greek Apaches in the Eastern Mediterranean and Israel" by KonTim      Cache   Translate Page      
Realistically speaking, flying over water, and especially in an area adjacent to enemy bases, imposes such constraints on the helicopter that it essentially turns the flight into a suicide mission. The success of this particular mission should not be misread and should not become the basis for planning and executing similar missions in wartime. Penetrating an area controlled by an opponent who is at the usual peacetime readiness is one thing; doing so when he is operating at the tempo of an armed conflict is quite another. The readiness levels are obviously different in the two cases and set different standards for anyone trying to exploit possible dead spots in the enemy's air defences. Nor should we rule out the possibility that the opponent detected the helicopters' flights and chose not to react, so as not to reveal its ability to do so systematically.
          От гранатомётов до «Триумфа»: как Россия укрепляет свои позиции на оружейном рынке Юго-Восточной Азии.      Cache   Translate Page      

Россия обсуждает со странами Юго-Восточной Азии контракты на поставку вооружений. В ближайшее время может быть подписано соглашение о продаже Бангладешу ударных вертолётов Ми-35М. Филиппины не исключают покупки винтокрылых машин и подводных лодок. Ранее крупные контракты на поставку средств ПВО Москва подписала с Пекином и Нью-Дели. Только с КНР портфель заказов превышает $7 млрд. По мнению экспертов, спрос на оружие в Юго-Восточной Азии продолжит расти. Сможет ли Москва укрепить свои позиции на этом рынке.

Россия ведёт переговоры с государствами Юго-Восточной Азии по вопросу поставок военной техники. В частности, Бангладеш проявляет интерес к отечественным ударным вертолётам Ми-35М (Ми-24ВМ), Филиппины — к гранатомётам, винтокрылым машинам и подводным лодкам.

По данным Стокгольмского института проблем мира (SIPRI), именно страны Азиатско-Тихоокеанского региона (АТР) являются главными покупателями российского оружия. Их доля в структуре оружейного экспорта составляет почти 70%. Главными покупателями являются Индия (35%), Китай (12%) и Вьетнам (10%).

«Рост портфеля заказов»

Последние годы между Москвой и государствами Юго-Восточной Азии были заключены ряд крупных сделок по линии военно-технического сотрудничества (ВТС). Прежде всего, речь и идёт о продаже российских средств ПВО, самолётов, вертолётов, танков и кораблей.

По данным Федеральной службы по военно-техническому сотрудничеству (ФСВТС), общий портфель заказов на военную продукцию РФ составляет $55 млрд. При этом объём поставок в Китай превышает $7 млрд. Как заявил в интервью глава ФСВТС Дмитрий Шугаев, с 2013 года доля КНР в портфеле заказов выросла с 5% до 14–15%.

В 2014 году РФ договорилась с Китаем о поставке шести дивизионов зенитных ракетных комплексов (ЗРК) «Триумф» за $3 млрд. В 2015 году был заключён контракт на 24 многоцелевых сверхманёвренных истребителя Су-35 стоимостью $2,5 млрд. Свои обязательства Россия исполнит в 2019 и 2020 годах соответственно.

В ходе начавшегося 6 ноября в Чжухае авиасалона Airshow China 2018 Москва и Пекин заключили ещё три контракта по линии ВТС, однако их детали широкой публике пока не разглашаются. В настоящее время оба государства реализуют совместный проект по созданию дизель-электрических подводных лодок «Амур-1650» (экспортная версия проекта 677 «Лада»). Сумма сделки оценивается в $2 млрд.

Ожидается, что в ближайшее время холдинг «Вертолёты России» и китайская компания Avicopter заключат твёрдый контракт на совместное создание тяжёлого вертолёта AHL (Advanced Heavy Lifter). По словам директора по международному сотрудничеству и региональной политике «Ростеха» Виктора Кладова, машина будет «больше Ми-17, но меньше Ми-26». Взлётный вес AHL составит 38,5 т, грузоподъёмность — 14-15 т.

Ведущим покупателем российского оружия в АТР остаётся Индия, являясь и крупнейшим в мире эксплуатантом российских истребителей (МиГ-29К/КУБ, Су-27 и Су-30МКИ) и танков (Т-90). В 2016 году Москва получила от Нью-Дели заказ на строительство фрегатов проекта 11356 «Буревестник», сборку более 200 многоцелевых вертолётов Ка-226Т и модернизацию 10 противолодочных вертолётов Ка-28. Общая сумма сделок составляет несколько миллиардов долларов.

В 2017 году Объединённая авиастроительная корпорация (ОАК) заключила с индийской компанией Hindustan Aeronautics Limited договор о сервисной поддержке Су-30 МКИ. Стоимость контракта не уточняется. Однако известно, что с начала 2000-х годов Индия потратила $12 млрд на покупку комплектов для сборки 272 машин.

5 октября 2018 года во время визита в Нью-Дели президента РФ Владимира Путина был подписан контракт на поставку Индии десяти дивизионов С-400 стоимостью более $5 млрд. Первые два дивизиона будут переданы в 2020 году.

Примечательно, что расчёты в рамках договора будут производиться в рублях. Контракт с Индией стал самой крупной недолларовой сделкой в истории постсоветской России.

В ноябре 2016 Комиссия по военным закупкам Минобороны Индии одобрила приобретение 464 танка Т-90МС. Договор пока не подписан, однако индийские СМИ ожидают, что это случится в ближайшее время. Кроме того, в завершающей стадии находится согласование контракта на продажу 48 военно-транспортных вертолётов Ми-17В-5.

Немаловажную роль Москва играет в процессе перевооружения армии Вьетнама. Ранее были заключены оружейные контракты сумму свыше $4,5 млрд. Москва поставляет сторожевые корабли проекта 11661 (фрегаты «Гепард-3.9»), истребители Су-30МК, танки Т-90С и Т-90СК и различные виды бронетехники.

«Новые горизонты»

Положительная динамика по линии ВТС отмечается в отношениях России с другими юго-восточными государствами. Как сообщил в интервью замдиректора ФСВТС Михаил Петухов, для борьбы с террористическими формированиями Россия поставила Филиппинам крупную партию автоматов Калашникова и автомобилей «Урал».

«В этом году подписан контракт на поставку противотанковых гранатомётов. В перспективе могут быть достигнуты договорённости о поставке другой техники», — сказал Петухов на проходящей в Джакарте международной оружейной выставке Indo Defence — 2018.

В январе 2018 года во время турне министра обороны Сергея Шойгу по странам АТР была достигнута договорённость о продаже шести Су-30 Мьянме. В военном ведомстве рассчитывают на заключение более крупного контракта в ближайшем будущем. Замминистра обороны Алексей Фомин полагает, что «Су-30 станет основным боевым истребителем ВВС Мьянмы».

В феврале 2018 года РФ и Индонезия договорились о поставке 11 истребителей Су-35 поколения 4++. Стоимость сделки составила $1,14 млрд. При этом она отчасти носит бартерный характер. Половину суммы Москва получит валютой, а остальную Джакарта отдаст поставками пальмового масла, кофе, чая, каучука и других сельхозпродуктов.

На стадии переговоров находится поставка морской пехоте Индонезии бронетранспортёров БТ-3Ф и бронемашин пехоты БМП-3Ф. В 2010 году Россия завершила передачу Джакарте БМП-3Ф в рамках ранее заключённого контракта. По словам официального представителя ФСВТС Марии Воробьёвой, отечественная техника зарекомендовала себя отлично.

Помимо бронемашин Джакарта проявляет интерес к дизель-электрическим подводным лодкам проекта 636, самолётам-амфибиям Бе-200 и тактическим противокорабельным ракетам «Яхонт» (П-800 «Оникс»).

«Угрозы, давление, санкции»

Острую конкуренцию России на рынке стран АТР составляют Соединённые Штаты. За последние годы Вашингтон подписал с Нью-Дели контракты на поставку 10 военно-транспортных самолётов С-17 Globemaster III, 8 патрульных противолодочных самолётов P-8 Poseidon, 22 ударных вертолётов AH-64 Apache и 15 тяжёлых военно-транспортных вертолётов Chinook.

В беседе заместитель директора Центра анализа стратегий и технологий (ЦАСТ) Константин Макиенко назвал крупные сделки, которые удалось заключить США с Нью-Дели, «тактическим успехом» Вашингтона. По его словам, контракты на поставку авиационной техники не предполагают передачу Нью-Дели технологий и лицензионную сборку, а, значит, противоречат государственному курсу «сделано в Индии». В связи с этим сотрудничество с Соединёнными Штатами обернётся для страны «большими разочарованиями».

Помимо здоровых форм конкуренции США прибегают к дипломатическим играм, политическому давлению и угрозам ввести санкции за подписание крупных оружейных контрактов с Россией. В частности, Вашингтон пытался помешать заключению сделок на поставку С-400 Индии и Су-35 Индонезии. В сентябре 2018 года Минфин США ввёл санкции против Департамента разработки вооружений Минобороны Китая и его главы Ли Шанфу за приобретение «Триумфа» и Су-35.

«Американцы пускают в ход самые разные инструменты, чтобы не допустить заключения новых контрактов с РФ. Вполне вероятно, что Штаты будут препятствовать осуществлению транзакций при расчётах в долларах. Однако страны Юго-Восточной Азии в сфере закупки вооружений проводят диверсифицированную политику, спрос на оружие будет расти, и у Москвы очень хорошие перспективы на рынке этого региона. Маловероятно, что угрозы и давление США дадут какой-либо результат», — подытожил Макиенко.


          Empty export      Cache   Translate Page      

Replies: 0

Hi, I tried to export the orders but I got an empty CSV file. I tried it two times. The file is 0 KB and cannot be imported.

What could it be?

Status:
WooCommerce version: 3.4.5
WordPress version: 4.9.8
WordPress memory limit: 256 MB

Server
Server info: Apache/2
PHP version: 7.1.21
PHP post max size: 8 MB
PHP time limit: 30
PHP max input vars: 1000
cURL version: 7.60.0, OpenSSL/1.0.2k
MySQL version: 10.1.34-MariaDB
Max upload size: 8 MB


          Intermediate Full Stack Software Developer - LotLinx - Winnipeg, MB      Cache   Translate Page      
LAMP (Linux, Apache My SQL, PHP), 4J’s (jQuery, JavaScript, Java, JSP), AWS Cloud (EC2, S3, RDS, Route53, ElastiCache), MVC architecture, Agile Development, SVN...
From LotLinx - Fri, 02 Nov 2018 20:10:50 GMT - View all Winnipeg, MB jobs
          Java/JEE Developer (Développeur Java/JEE) - Voonyx - Lac-beauport, QC      Cache   Translate Page      
Java Enterprise Edition (JEE), Eclipse/IntelliJ/Netbeans, Spring, Apache Tomcat, JBoss, WebSphere, Camel, SOAP, REST, JMS, JPA, Hibernate, JDBC, OSGI, Servlet,...
From Voonyx - Thu, 26 Jul 2018 05:13:45 GMT - View all Lac-beauport, QC jobs
          Apache Beam      Cache   Translate Page      
A universal data processing model - Apache Beam

          How to build a WordPress site on a Raspberry Pi      Cache   Translate Page      

This simple tutorial gets your WordPress website running on a Raspberry Pi.

WordPress is a very popular open source blogging platform and content management system (CMS). It is easy to set up, and it has an active community of developers building websites and creating themes and plugins for other people to use.

Although it is easy to get a hosting package with one-click WordPress setup, it is also simple to set up your own hosting on a Linux server using just the command line, and the Raspberry Pi is a rather good way to try it out and learn something along the way.

The four parts of a commonly used web stack are Linux, Apache, MySQL and PHP. Here is what you need to know about each of them.

Linux

The Raspberry Pi runs Raspbian, a nice Linux distribution based on Debian and optimized for the Raspberry Pi hardware. You have two choices: the Desktop version or the Lite version. The Desktop version has a familiar desktop plus plenty of educational software and programming tools, such as the LibreOffice suite, Minecraft and a web browser. The Lite version has no desktop environment, so it only has the command line and the essential software.

This tutorial works with either version, but if you use the Lite version you will need another computer to visit your site.

Apache

Apache is a popular web server application you can install on your Raspberry Pi to serve your web pages. On its own, Apache can serve static HTML files over HTTP. With additional modules it can also serve dynamic web pages using scripting languages such as PHP.

Installing Apache is very simple. Open a terminal window and type the following command:

sudo apt install apache2 -y

By default Apache puts a test file in a web directory that you can visit from your Pi or from another computer on your network. Just open a web browser and enter the address <http://localhost>, or (especially if you are using Raspbian Lite) enter your Pi's IP address instead of localhost. You should see something like this in your browser window:

This means Apache is working!

This default web page is just a file on your filesystem. It lives locally at /var/www/html/index.html. You can use the Leafpad text editor to write some HTML to replace this file's contents:

cd /var/www/html/
sudo leafpad index.html

Save and close Leafpad, then refresh the web page to see your changes.

MySQL

MySQL (pronounced "my S-Q-L" or "my sequel") is a popular database engine. Like PHP, it is used very widely on web servers, which is why projects like WordPress chose it, and why those projects are so popular.

Install the MySQL server by typing the following command into a terminal window (translator's note: what is actually installed is the MySQL fork MariaDB):

sudo apt-get install mysql-server -y

WordPress uses MySQL to store posts, pages, user data and much more.

PHP

PHP is a preprocessor: it is code that runs when the server receives a request for a web page from a web browser. It works out what needs to be shown on the page and then sends that page to the browser. Unlike static HTML, PHP can show different content in different circumstances. PHP is a very popular language on the web; large projects such as Facebook and Wikipedia are written in PHP.

Install PHP and the MySQL plugin:

sudo apt-get install php php-mysql -y

Delete index.html and create index.php:

sudo rm index.html
sudo leafpad index.php

Add the following content to it:

<?php phpinfo(); ?>

Save, exit and refresh your web page. You will see the PHP status page:

WordPress

You can use the wget command to download WordPress from wordpress.org. The latest WordPress is always available at wordpress.org/latest.tar.gz, so you can grab it directly without browsing the website; the current version is 4.9.8.

Make sure you are in the /var/www/html directory, then delete everything in it:

cd /var/www/html/
sudo rm *

Download WordPress with wget, then extract it and move the contents of the extracted wordpress directory into the html directory:

sudo wget http://wordpress.org/latest.tar.gz
sudo tar xzf latest.tar.gz
sudo mv wordpress/* .

Now you can delete the tarball and the empty wordpress directory:

sudo rm -rf wordpress latest.tar.gz

Run the ls or tree -L 1 command to show what a WordPress project contains:

.
├── index.php
├── license.txt
├── readme.html
├── wp-activate.php
├── wp-admin
├── wp-blog-header.php
├── wp-comments-post.php
├── wp-config-sample.php
├── wp-content
├── wp-cron.php
├── wp-includes
├── wp-links-opml.php
├── wp-load.php
├── wp-login.php
├── wp-mail.php
├── wp-settings.php
├── wp-signup.php
├── wp-trackback.php
└── xmlrpc.php

3 directories, 16 files

This is the default WordPress installation source. The files you edit to customize your installation live in the wp-content directory.

You should now change the ownership of all the files to the Apache user, www-data:

sudo chown -R www-data: .

The WordPress database

To set up your WordPress site you need a database. Here that is MySQL.

Run the MySQL secure installation command in a terminal window:

sudo mysql_secure_installation

You will be asked a series of questions. No password is set up initially, but you should set one in the next step. Make sure you remember the password you enter; you will need it later to connect WordPress. Press Enter to accept all of the following questions.

When it finishes, you will see the messages "All done!" and "Thanks for using MariaDB!".

Run the mysql command in a terminal window:

sudo mysql -uroot -p

Enter the root password you created (translator's note: this is the MySQL root password, not the Linux system root password). You will see the greeting "Welcome to the MariaDB monitor.". At the "MariaDB [(none)] >" prompt, create a database for your WordPress installation with the following command:

create database wordpress;

Note the semicolon at the end of the statement. If the command succeeds you will see this:

Query OK, 1 row affected (0.00 sec)

Grant database privileges to the root user, entering your password at the end of the statement:

GRANT ALL PRIVILEGES ON wordpress.* TO 'root'@'localhost' IDENTIFIED BY 'YOURPASSWORD';

For the changes to take effect you need to flush the database privileges:

FLUSH PRIVILEGES;

Press Ctrl+D to exit the MariaDB prompt and return to the Bash shell.

WordPress configuration

Open a web browser on your Raspberry Pi and enter http://localhost in the address bar. Choose the language you want WordPress to use and click "Continue". You will see the WordPress welcome screen. Click the "Let's go!" button.

Fill in the basic site information like this:

Database Name:      wordpress
User Name:          root
Password:           <YOUR PASSWORD>
Database Host:      localhost
Table Prefix:       wp_

Click "Submit" to continue, then click "Run the install".

Fill in the form: give your site a title, create a username and password, and enter your email address. Click the "Install WordPress" button and then log in with the account you just created. You are now logged in and your site is set up; you can view your website by entering http://localhost/wp-admin in the browser's address bar.

Permalinks

It is a good idea to change your permalink settings so that your URLs are more friendly.

To do this, first log in to WordPress and go to the dashboard. Go to "Settings", "Permalinks". Select the "Post name" option and click "Save Changes". Then you need to enable Apache's rewrite module:

sudo a2enmod rewrite

You also need to tell the virtual host serving the site to allow rewrite requests. Edit the Apache configuration file for your virtual host:

sudo leafpad /etc/apache2/sites-available/000-default.conf

Add the following after the first line:

<Directory "/var/www/html">
    AllowOverride All
</Directory>

Make sure it is inside the section that looks like <VirtualHost *:80>:

<VirtualHost *:80>
    <Directory "/var/www/html">
        AllowOverride All
    </Directory>
    ...

Save the file and exit, then restart Apache:

sudo systemctl restart apache2

What's next?

WordPress is highly customizable. Click your site name in the banner at the top of the page to go to the dashboard. From there you can change the theme, add pages and posts, edit menus, add plugins and much more.

Here are some interesting things you can try on the Raspberry Pi's web server:

  • Add pages and posts to your website
  • Install different themes from the Appearance menu
  • Customize your website's theme or create your own
  • Use your web server to display useful information to other people on your network

Don't forget, the Raspberry Pi is a Linux computer. You can also follow the same steps to install WordPress on a server running Debian or Ubuntu.

via: https://opensource.com/article/18/10/setting-wordpress-raspberry-pi

Author: Ben Nuttall. Topic selected by: lujun9972. Translated by: dianbanjiu. Proofread by: wxy

This article was originally translated by LCTT and proudly presented by Linux China


          An in-depth look at Alibaba's optimizations and improvements to Apache Flink      Cache   Translate Page      

Apache Flink overview

Apache Flink (hereafter Flink) grew out of a European big data research project originally called StratoSphere, a research project at the Technical University of Berlin that in its early days focused on batch processing. In 2014, core members of the StratoSphere project spun Flink out and donated it to Apache in the same year; it later became a top-level Apache big data project. At the same time, Flink's mainstream direction was positioned as stream processing, i.e. using streaming computation for all big data workloads. That is the background against which Flink was born.

In 2014 Flink began to make a name for itself in the open source big data industry as an engine focused on stream processing. What sets it apart from Storm, Spark Streaming and other streaming engines is that it is not only a high-throughput, low-latency engine but also provides many advanced features: stateful computation with state management, strongly consistent data semantics, and event time and watermarks for handling out-of-order messages.

Flink's popularity also comes from its many selling points, including excellent performance (especially for stream processing), high scalability, fault tolerance, a purely in-memory engine with extensive memory-management optimizations, event-time processing, support for jobs with very large state (at Alibaba, jobs whose state exceeds a terabyte are very common) and exactly-once processing.

Alibaba and Flink

With the arrival of the AI era and the explosion of data volumes, the usual approach in typical big data scenarios is to process the full historical data with batch technology and the real-time incremental data with stream processing. In the vast majority of scenarios the user's business logic is the same for batch and for streaming, yet the two compute engines used for them are different.

As a result, users usually have to write two sets of code, which undoubtedly adds extra burden and cost. Alibaba's commodity data processing constantly faces the problem of separate incremental and full-data pipelines, so Alibaba asked: could there be one unified big data engine, so that users develop just one codebase from their business logic, and one solution covers every scenario, whether full data, incremental data or real-time processing? That is the background and motivation for Alibaba choosing Flink.

The Flink-based platform built at Alibaba officially went live in 2016, starting with the two big scenarios of search and recommendation. Today all of Alibaba's businesses, including all its subsidiaries, use the real-time computing platform built on Flink. The Flink platform runs on open source Hadoop clusters, using Hadoop's YARN for resource management and scheduling and HDFS for data storage, so Flink integrates seamlessly with the open source Hadoop software.

Today this Flink-based real-time computing platform serves not only the Alibaba Group internally, but also offers Flink-based cloud products to the whole developer ecosystem through Alibaba Cloud's product APIs.

At that time, however, Flink had not yet been tested in practice for either scale or stability, and its maturity was open to question. Alibaba's real-time computing team decided to create an internal Flink branch, Blink, and to modify and polish Flink extensively so it could handle Alibaba's extremely large business scenarios. In the process, the team not only improved and optimized Flink's performance and stability, but also innovated and improved heavily on core architecture and features, gradually contributing them back to the community, for example Flink's new distributed architecture, the incremental checkpoint mechanism, the credit-based network flow control mechanism and Streaming SQL. Next, we dissect on two levels exactly what Alibaba optimized in Flink.

From open source, back to open source

The SQL layer

To let users truly develop one codebase from their business logic and run it in many different scenarios, Flink first needs to give users a unified API. After some investigation, Alibaba's real-time computing team concluded that SQL is a very suitable choice. In batch processing, SQL has been tested for decades and is a recognised classic. In stream processing, theory such as the stream/table duality and "a stream is the changelog of a table" has emerged in recent years. On this basis, Alibaba proposed the concept of dynamic tables, so that stream computation can also be described with SQL, just like batch processing, and with equivalent logic. Users can then describe their business logic in SQL, and the same query can be executed as a batch job or as a high-throughput, low-latency streaming job, or can even first compute the historical data with batch technology and then automatically turn into a streaming job that processes the latest real-time data. Under this declarative API, the engine has far more room for choice and optimization. Below we introduce several of the more important optimizations.

The first was upgrading and replacing the technical architecture of the SQL layer. Anyone who has investigated or used Flink knows that Flink has two basic APIs: DataStream, offered to stream processing users, and DataSet, offered to batch users; but the two have completely different execution paths and even generate different Tasks to execute. After a series of optimizations, Flink's native SQL layer calls either the DataSet or the DataStream API depending on whether the user wants batch or stream processing. This means that in day-to-day development and tuning, users constantly face two almost completely independent technology stacks, many things have to be done twice, and an optimization made on one side does not benefit the other. Alibaba therefore proposed a brand-new Query Processor in the SQL layer: an optimization layer (Query Optimizer) that is reused as much as possible for stream and batch, and an operator layer (Query Executor) based on the same interfaces. More than 80% of the work can be shared, such as common optimization rules and basic data structures, while stream and batch each keep some of their own specific optimizations and operators to match their different job behaviour.

Once the architecture of the SQL layer was unified, Alibaba looked for a more efficient basic data structure to make execution in Blink's SQL layer more efficient. In native Flink SQL, a single data structure called Row is used throughout; it is composed entirely of Java objects and represents one row of a relational database. If a row consists of an integer, a floating-point number and a string, the Row holds a Java Integer, Double and String. As is well known, these Java objects carry considerable extra overhead on the heap, and accessing the data introduces unnecessary boxing and unboxing. Alibaba therefore proposed a new data structure, BinaryRow, which likewise represents one row of a relational table but stores the data entirely in binary form. In the example above, the three fields of different types are all represented by a Java byte[]. This brings many benefits:

  • first, the storage is much more compact, with the pointless extra overhead removed;
  • second, when talking to the network or to state storage, a lot of unnecessary serialization and deserialization can be skipped;
  • finally, with the boxing and unboxing gone, the execution code is much friendlier to the GC.

By introducing such an efficient basic data structure, the execution efficiency of the whole SQL layer more than doubled.

At the operator implementation level, Alibaba applied code generation much more broadly. Thanks to the unified architecture and basic data structures, much of the code generation can be reused across a wider scope. And because SQL is strongly typed, the types of data an operator will process are known in advance, so more targeted and more efficient execution code can be generated. In native Flink SQL, only simple expressions such as a > 2 or c + d used code generation; after Alibaba's optimization, some operators, such as sorting and aggregation, are generated as a whole. This lets users control operator logic more flexibly and embed the final runtime code directly into the class, removing expensive function-call overhead. Basic data structures and algorithms that use code generation, such as the sorting algorithm and the HashMap over binary data, can also be shared and reused between stream and batch operators, so users genuinely enjoy the benefits of a unified technology and architecture; when data structures or algorithms are optimized for certain batch scenarios, streaming performance improves as well. Next, let's look at the sweeping improvements Alibaba made to Flink at the Runtime layer.

The Runtime layer

To let Flink take root in Alibaba's large-scale production environment, the real-time computing team duly ran into all kinds of challenges, the first being how to integrate Flink with other cluster management systems. Flink's native cluster management was not yet mature, and Flink could not natively use other, relatively mature cluster managers either. A series of thorny questions followed: how do multiple tenants coordinate resources? How are resources requested and released dynamically? How are different resource types specified?

To solve this, the team did a great deal of research and analysis, and in the end chose to rework Flink's resource scheduling so that Flink could run natively on YARN clusters, and to refactor the Master architecture so that each Job has its own Master, after which the Master is no longer a cluster bottleneck. Building on this, Alibaba and the community jointly introduced the brand-new FLIP-6 architecture, which makes Flink's resource management pluggable and lays a solid foundation for Flink's sustainable development. The fact that Flink now runs seamlessly on YARN, Mesos and Kubernetes is strong evidence of the importance of this architecture.

With large-scale deployment of Flink clusters solved, the next concern was reliability and stability. To keep Flink highly available in production, Alibaba focused on improving Flink's failover mechanisms. First, Master failover: native Flink restarted all Jobs on a Master failover; after the improvements, no Master failover affects the normal running of Jobs. Second, region-based Task failover was introduced to minimise the impact of any Task failover on users. With these improvements in place, a large number of Alibaba's business teams began migrating their real-time computation to Flink.

Stateful streaming is Flink's biggest highlight. Its checkpoint mechanism, based on the Chandy-Lamport algorithm, gives Flink exactly-once consistency, but in early Flink versions checkpoint performance was a bottleneck at large data volumes. Alibaba made many improvements to checkpoints as well, for example:

  • Incremental checkpoints: in Alibaba's production environment, big jobs with tens of terabytes of state are commonplace, and a full checkpoint of them is earth-shaking and very costly, so Alibaba developed an incremental checkpoint mechanism; since then checkpoints have gone from a violent storm to a steady trickle;
  • Checkpoint small-file merging: scale strikes again; as the number of Flink Jobs in the cluster grew, so did the number of checkpoint files, eventually putting unbearable pressure on the HDFS NameNode; by merging many small checkpoint files into one large file, Alibaba reduced the NameNode pressure by a factor of several dozen.

Although all data can be kept in state, for historical reasons users still keep some data in external KV stores such as HBase, and Flink Jobs need to access that external data. But Flink has always used a single-threaded processing model, so the latency of accessing external data became the bottleneck of the whole system. Asynchronous access is the obvious remedy, yet letting users write multi-threaded code in a UDF while still guaranteeing exactly-once semantics is far from easy. Alibaba proposed the AsyncOperator in Flink, which makes writing asynchronous calls in a Flink Job as easy as writing "Hello World", and this gave Flink Job throughput a huge leap.

Flink was designed as a unified batch and stream engine, and after experiencing its lightning-fast stream processing, batch users also became interested in moving in. But batch computation brought new challenges. For task scheduling, Alibaba introduced a more flexible scheduling mechanism that schedules more efficiently based on the dependencies between tasks. For data shuffle, Flink's native Shuffle Service is bound to the TaskManager, so the TM cannot release its resources after the tasks finish; moreover, the original batch shuffle did not merge files, making it basically unusable in production. Alibaba developed a YARN Shuffle Service that solves both problems at once. While developing it, Alibaba found that building a new shuffle service was very inconvenient because it required intruding into many parts of the Flink code, so to make it easy for other developers to extend different shuffles, Alibaba also reworked the Flink shuffle architecture to make shuffle pluggable. Alibaba's search business is already running Flink Batch Jobs and they are already serving production.

After more than three years of polishing, Blink is thriving inside Alibaba, but optimization and improvement of the Runtime never ends, and a wave of further improvements is on its way.

Flink's future direction

Flink is already a mainstream stream processing engine. The community's next important task is a breakthrough for Flink in batch computation, landing it in more scenarios and making it a mainstream batch engine as well, and then switching seamlessly between stream and batch so that the boundary between them blurs further. With Flink, a single computation can contain both streaming and batch parts.

Next, Alibaba will work to bring support for more languages to the Flink ecosystem, not just Java and Scala, but also Python and Go as used in machine learning.

AI also deserves a mention, because much of the demand for big data computation, and much of the data volume, comes from today's red-hot AI scenarios. So, on top of a complete stream and batch ecosystem, Flink will keep improving its upper-layer Machine Learning algorithm library and will also integrate with more mature machine learning and deep learning systems, for example TensorFlow on Flink, combining big data ETL, machine learning feature computation and training so that developers can enjoy the benefits of multiple ecosystems at once.

Finally, on the ecosystem and community side, Alibaba is currently preparing the first Flink Forward China summit (a thousand-person event), to be held on December 20-21, 2018 at the China National Convention Center, where participants can learn why companies such as Alibaba, Tencent, Huawei, Didi, Meituan and ByteDance have made Flink their stream processing engine of choice.

Registration link: https://dwz.cn/a4lHAVgW


          SDKMAN:轻松管理多个软件开发套件 (SDK) 的命令行工具      Cache   Translate Page      

你是否是一个经常在不同的 SDK 下安装和测试应用的开发者?我有一个好消息要告诉你!给你介绍一下 SDKMAN,一个可以帮你轻松管理多个 SDK 的命令行工具。它为安装、切换、列出和移除 SDK 提供了一个简便的方式。有了 SDKMAN,你可以在任何类 Unix 的操作系统上轻松地并行管理多个 SDK 的多个版本。它允许开发者为 JVM 安装不同的 SDK,例如 Java、Groovy、Scala、Kotlin 和 Ceylon、Ant、Gradle、Grails、Maven、SBT、Spark、Spring Boot、Vert.x,以及许多其他支持的 SDK。SDKMAN 是免费、轻量、开源、使用 Bash 编写的程序。

安装 SDKMAN

安装 SDKMAN 很简单。首先,确保你已经安装了 zipunzip 这两个应用。它们在大多数的 Linux 发行版的默认仓库中。
例如,在基于 Debian 的系统上安装 unzip,只需要运行:

$ sudo apt-get install zip unzip

然后使用下面的命令安装 SDKMAN:

$ curl -s "https://get.sdkman.io" | bash

在安装完成之后,运行以下命令:

$ source "$HOME/.sdkman/bin/sdkman-init.sh"

如果你希望自定义安装到其他位置,例如 /usr/local/,你可以这样做:

$ export SDKMAN_DIR="/usr/local/sdkman" && curl -s "https://get.sdkman.io" | bash

确保你的用户有足够的权限访问这个目录。

最后,在安装完成后使用下面的命令检查一下:

$ sdk version
==== BROADCAST =================================================================
* 01/08/18: Kotlin 1.2.60 released on SDKMAN! #kotlin
* 31/07/18: Sbt 1.2.0 released on SDKMAN! #sbt
* 31/07/18: Infrastructor 0.2.1 released on SDKMAN! #infrastructor
================================================================================

SDKMAN 5.7.2+323

恭喜你!SDKMAN 已经安装完成了。让我们接下来看如何安装和管理 SDKs 吧。

管理多个 SDK

查看可用的 SDK 清单,运行:

$ sdk list

将会输出:

================================================================================
Available Candidates
================================================================================
q-quit /-search down
j-down ?-search up
k-up h-help

--------------------------------------------------------------------------------
Ant (1.10.1) https://ant.apache.org/

Apache Ant is a Java library and command-line tool whose mission is to drive
processes described in build files as targets and extension points dependent
upon each other. The main known usage of Ant is the build of Java applications.
Ant supplies a number of built-in tasks allowing to compile, assemble, test and
run Java applications. Ant can also be used effectively to build non Java
applications, for instance C or C++ applications. More generally, Ant can be
used to pilot any type of process which can be described in terms of targets and
tasks.

: $ sdk install ant

As you can see, SDK lists one SDK at a time, together with that SDK's description, official website and installation command. Press Enter to continue to the next one.

To install a new SDK, for example the Java JDK, run:

$ sdk install java

The output will be:

Downloading: java 8.0.172-zulu

In progress...

######################################################################################## 100.0%

Repackaging Java 8.0.172-zulu...

Done repackaging...

Installing: java 8.0.172-zulu
Done installing!

Setting java 8.0.172-zulu as default.

If you have multiple SDKs installed, it will prompt you whether you want to set the currently installed version as the default version. Answering Yes sets the current version as the default.

To install another version of an SDK, run:

$ sdk install ant 1.10.1

If you previously installed an SDK locally yourself, you can register it as a local version like this:

$ sdk install groovy 3.0.0-SNAPSHOT /path/to/groovy-3.0.0-SNAPSHOT

To list the versions of a particular SDK:

$ sdk list ant

The output will be:

================================================================================
Available Ant Versions
================================================================================
> * 1.10.1
1.10.0
1.9.9
1.9.8
1.9.7

================================================================================
+ - local version
* - installed
> - currently in use
================================================================================

As I said before, if you have installed multiple versions, SDKMAN will prompt you whether you want to set the currently installed version as the default. You can answer Yes to set it as the default. Of course, you can also set it later with the following command:

$ sdk default ant 1.9.9

The command above sets Apache Ant 1.9.9 as the default version.

You can use any installed SDK version whenever you need it by simply running:

$ sdk use ant 1.9.9

To check the current version of a specific SDK, for example Java, run:

$ sdk current java
Using java version 8.0.172-zulu

To check the current versions of all SDKs in use, run:

$ sdk current

Using:

ant: 1.10.1
java: 8.0.172-zulu

To upgrade an outdated SDK, run:

$ sdk upgrade scala

You can also check which of your SDKs are outdated:

$ sdk upgrade

SDKMAN has an offline mode that lets it keep working when you are offline. You can enable or disable offline mode at any time with the following commands:

$ sdk offline enable
$ sdk offline disable

To remove an installed SDK, run:

$ sdk uninstall ant 1.9.9

For more details, see the help section:

$ sdk help

Usage: sdk <command> [candidate] [version]
sdk offline <enable|disable>

commands:
install or i <candidate> [version]
uninstall or rm <candidate> <version>
list or ls [candidate]
use or u <candidate> [version]
default or d <candidate> [version]
current or c [candidate]
upgrade or ug [candidate]
version or v
broadcast or b
help or h
offline [enable|disable]
selfupdate [force]
update
flush <broadcast|archives|temp>

candidate : the SDK to install: groovy, scala, grails, gradle, kotlin, etc.
                 use list command for comprehensive list of candidates
                 eg: $ sdk list

version : where optional, defaults to latest stable if not provided
             eg: $ sdk install groovy

更新 SDKMAN

如果有可用的新版本,可以使用下面的命令安装:

$ sdk selfupdate

SDKMAN 会定期检查更新,并给出让你了解如何更新的指令。

WARNING: SDKMAN is out-of-date and requires an update.

$ sdk update
Adding new candidates(s): scala

清除缓存

建议时不时的清理缓存(包括那些下载的 SDK 的二进制文件)。仅需运行下面的命令就可以了:

$ sdk flush archives

它也可以用于清理空的文件夹,节省一点空间:

$ sdk flush temp

卸载 SDKMAN

如果你觉得不需要或者不喜欢 SDKMAN,可以使用下面的命令删除。

$ tar zcvf ~/sdkman-backup_$(date +%F-%kh%M).tar.gz -C ~/ .sdkman
$ rm -rf ~/.sdkman

最后打开你的 .bashrc.bash_profile 和/或者 .profile,找到并删除下面这几行。

#THIS MUST BE AT THE END OF THE FILE FOR SDKMAN TO WORK!!!
export SDKMAN_DIR="/home/sk/.sdkman"
[[ -s "/home/sk/.sdkman/bin/sdkman-init.sh" ]] && source "/home/sk/.sdkman/bin/sdkman-init.sh"

如果你使用的是 ZSH,就从 .zshrc 中删除上面这一行。

这就是所有的内容了。我希望 SDKMAN 可以帮到你。还有更多的干货即将到来。敬请期待!

祝近祺!

:)


via: https://www.ostechnix.com/sdkman-a-cli-tool-to-easily-manage-multiple-software-development-kits/

作者:SK 选题:lujun9972 译者:dianbanjiu 校对:wxy

本文由 LCTT 原创编译,Linux中国 荣誉推出


          Серверное применение Linux      Cache   Translate Page      
Серверное применение Linux

Серверное применение Linux (Server-Side Linux) - the book describes how to set up various types of servers: web, FTP, DNS, DHCP and mail servers and a database server. It covers in detail the installation and basic configuration of the operating system, the configuration of the Apache + MySQL + PHP stack, the overall structure of Linux, and the basic principles of working with this operating system.
          SOA Application Developer II - Zantech - Kearneysville, WV      Cache   Translate Page      
Experience with messaging middleware products such as Red Hat JBoss A-MQ, Apache ActiveMQ, Apache Camelis strongly preferred....
From Zantech - Fri, 28 Sep 2018 05:35:22 GMT - View all Kearneysville, WV jobs
          Jr Software Engineer - Leidos - Morgantown, WV      Cache   Translate Page      
Familiarity with NoSql databases (Apache Accumulo, MongoDB, etc.). Put your Java/C++ skills in action!...
From Leidos - Mon, 05 Nov 2018 16:38:00 GMT - View all Morgantown, WV jobs
          Software Engineer (careC2 Developers) - Leidos - Morgantown, WV      Cache   Translate Page      
Familiarity with NoSql databases (Apache Accumulo, MongoDB, etc.). The Leidos Health Products &amp; Service Group has an opening for a Software Developers with...
From Leidos - Mon, 05 Nov 2018 16:38:00 GMT - View all Morgantown, WV jobs
          Software Engineering Tech - Leidos - Morgantown, WV      Cache   Translate Page      
Familiarity with NoSql databases (Apache Accumulo, MongoDB, etc.). Leidos has job opening for a Software Engineering Tech in Morgantown, WV....
From Leidos - Wed, 24 Oct 2018 21:40:14 GMT - View all Morgantown, WV jobs
          'Indeh', a western written by Ethan Hawke      Cache   Translate Page      

Western cinema popularised the image of the Native American as a beast the white man had to subdue in order to make the United States habitable. They were barbarism set against progress. In reality the first silent films had shown a more complex, nuanced picture of the Indians, but over time the Hollywood offices realised that films in which the Indians appeared as enemies sold more tickets. John Ford's Stagecoach (1939) was one of those early films that set that course.

This does not mean that films with other points of view were not released, but it was not until the civil rights movements of the 1960s that a more complex and intelligent treatment of the Indians became widespread in commercial cinema. Although it came much later, one of the greatest exponents of this change of mentality was Dances with Wolves (1990), in which, even so, the protagonist was still a white man. Now the western is dead as a commercial genre, and perhaps one of the reasons has been precisely this: leaving behind the stereotypes about the Indian peoples.

All of this is an example of how value systems change as societies evolve. What is normalised in one era provokes rejection in a later one. As he explains in the epilogue, this is what pushed the actor Ethan Hawke to research the history of the Native American peoples and try to get a film made that would do them justice. However, no studio has so far backed the project, so at a certain point he decided to give it the form of a comic with the help of the artist Greg Ruth. Personally, I imagine he also did it as another way of pitching this film to the producers, just as Aronofsky did with Noah (2014).

Indeh, then, is not entirely an exception within the western genre. With this comic Ethan Hawke wants to go deeper into the characteristics of Apache culture, but he does so from a complicated starting point: justifying Geronimo's cruel massacres as a reaction to the extermination and hatred his people suffered. The script and the illustrations dwell on the violence and cruelty of both sides in order to implicate the reader, to keep them from taking sides. The two camps differ in only one detail: the whites started it. Geronimo's men were simply defending themselves (brutally) against an ignoble, racist and ruthless invader.

Given these interesting aspects, why has this comic failed to attract attention among Spanish readers? I say this because, for example, it was hard for me to find it in bookshops. My explanation has to do with not knowing how this comic was put together. Did Ethan Hawke and Greg Ruth collaborate to bring the original story to this medium? Or, as I fear, did Greg Ruth adapt a film script? That could explain some of the comic's mistakes. The lack of context and character development leaves the reader feeling lost almost all the time, compounded by dialogue so disjointed that I came to think the translation was to blame. The conversations are also hard to follow because the characters' faces are barely distinguishable, or worse, because there are conversations in which some panels do not show us the characters at all. In film and television it is not necessary to keep the camera on the characters while they speak, because we can tell their voices apart, but in comics we need more information.

I cannot say that this "bestseller" (so says the cover) is even an interesting item among Ethan Hawke's works. Both within the western genre and in his professional career there are far more recommendable works.
Effect of Shenshuai enema fluid as an adjunct to HV-CVVH therapy on organ function, APACHE II score, and serum ALB and TP levels in patients with sepsis-induced early acute kidney injury
Authors: Xuan Xingwei; Liu Jianxin; Tang Minggui; Wang Baohua
Abstract: Objective: To observe the effect of Shenshuai enema fluid as an adjunct to high-volume continuous veno-venous hemofiltration (HV-CVVH) therapy on organ function, the Acute Physiology and Chronic Health Evaluation II (APACHE II) score, and serum albumin (ALB) and total protein (…
Aid is running out and Álamo Temapache has still not recovered from the flooding
Municipal president Jorge Vera Hernández spoke...


