
          I need a GPT/bux script for my PTC site      
I need a GPT/bux/PTC script for my paid-to-click website. I need this urgently. If you already have such a script, you can bid. This is the demo: http://aurorabrushes.com/index.php?route=product/product&product_id=119 (Budget: $10 - $30 USD, Jobs: Graphic Design, HTML, MySQL, PHP, Website Design)
          Drupal 8 expert developer and travel API integration      
As part of an OTA website project on Drupal 8 (database: MariaDB 10.3, PHP 7.2, Symfony 3 + Angular), we are seeking a service provider for the development and integration of a flight API and others, with the characteristics specified below... (Budget: €1500 - €3000 EUR, Jobs: Drupal, HTML, MySQL, PHP, Software Architecture)
          Debugger required for WordPress site and server file clean-up      
Job requirements: debugging and securing the site, and cleaning up infected files. (Budget: ₹600 - ₹1500 INR, Jobs: HTML, MySQL, PHP, Software Architecture, WordPress)
          Frontend engineers wanted to build our own web service in a small team! by EXEC Inc. (株式会社エグゼック)      
Adoption of our service by photo studios nationwide, our customers, has grown steadily, and with further expansion expected we are recruiting new teammates. ◆Responsibilities: frontend work for version upgrades of our own "Photo Store" service, development of new features, development of adjacent new services, new business development, and more. ◆Development environment: Languages: HTML5 + CSS3, JavaScript. Frameworks: in-house framework, .NET Framework, jQuery, prototype.js. Environment: Linux, CentOS, Apache, memcached, Eclipse. Databases: MySQL, PostgreSQL, Microsoft SQL Server. Project management: Redmine. Photo Store's clients are photo studios. Based on needs gathered from their feedback and from an annual survey of roughly 10,000 respondents, you will work continuously on new features and improvements. Requests from the sales side (customer-voice items to prioritize) are registered as tickets in Redmine; the development team and sales side discuss whether to build each one and which feature best solves the problem, decide which tickets to develop, and assign them to developers. Each developer confirms the spec, design and effort estimate, writes the code, opens a pull request to the maintainer on GitHub, and the change is released once review passes. ■Development team: roles are loosely divided, but you can work broadly across infrastructure, frontend and server code, which makes this a great fit for anyone aiming to become a full-stack engineer. ■Our environment: our approach to technology is to keep up with the latest developments, consider what is best, and adopt the most suitable technology, regardless of whether it is old or new. That said, we are currently swamped with development work and there is much we want to do but have not yet touched, so ideally you share this mindset and can work on it with us. Also, since our CEO Furuta is an engineer himself, you can occasionally program together with him, debate technical topics, and discuss how to build the service as an engineer from a business perspective, which may be another of our distinctive traits. ■Immediate challenges: as we add development members, we want to build a team structure that iterates on improvements in fast, short cycles. We are still shaping our engineering team, so this is ideal for someone who will enjoy that stage. ■Who we are looking for: teammates at a growing venture who share our passion for the company's vision and can grow together while having fun. The conditions: work hard, want to grow, joke around but be serious at heart, care for your teammates, get excited about technology, and stay with us for the long haul. First, come by for a coffee and see the atmosphere for yourself. A homey yet not slack, slightly demanding, quirky and ultimately serious group of people: that, we think, is EXEC!
          Software Engineer who will be a core member of a fintech SME lender by Uprise Credit      
Software engineers will be the core of our online financing solution. We're looking for someone passionate about building new applications from the ground up to develop an innovative financing solution for new-economy entrepreneurs in Asia who are underserved by banks. Here at Uprise, you'll work closely with your peers in a flat structure to share responsibilities. The technology team should move fast, celebrate great ideas, inspire testing and learning, and stretch for new solutions. This pivotal role is responsible for building the online SME lending platform from top to bottom, collaborating directly and frequently with the internal team, outsourced developers, and third-party strategic partners to co-create solutions for merchant customers. The product currently targets online merchants who use certain types of online payment gateways; we expect to expand the user base to offline merchants as well through partnerships with other e-payment solution providers. Requirements - 3+ years of solid software development experience - Expert knowledge of one or more languages such as JavaScript, Python, Ruby, Java, C#, Golang - Knowledge of one or more popular frameworks such as Rails, React, Play, NodeJS - Familiarity with databases, e.g. MySQL / MongoDB / PostgreSQL / Redis - Knowledge of and experience in building online/offline payment gateways or financing solutions - Demonstrated interest and passion for bridging the financing gap faced by SMEs and entrepreneurs in Asia - Proficiency in English and Chinese (either Mandarin or Cantonese) - Willingness to travel occasionally to Hong Kong, Singapore, Taiwan and other Southeast Asian countries.
          No Unity experience required! Hiring Unity engineers interested in UI/UX by Rams Design Inc. (ラムズデザイン株式会社)      
At Rams we develop applications for show cars exhibited at motor shows and build prototypes of in-vehicle systems, such as instrument clusters and navigation, for major car makers' upcoming models. Our main work is implementing new GUI and interaction features and developing frontend software that takes advantage of newly released electronic devices. This often involves integration with hardware such as LEDs and motors, which is unusual in ordinary app development, so it calls for broad knowledge and curiosity beyond apps alone. We are often involved from the planning stage, so you can work from upstream software design through to implementation, and expand your role as far as you care to take it. We frequently handle everything from design to development in-house with several people per project, so we value teamwork. We'd love to hear from you if you want to work with national clients, try many programming languages, build products yet to hit the market, or become a serious engineer. Age and gender don't matter. Languages and environment: Languages: C#, Java, Swift, PHP, MySQL. Environment: Unity, Visual Studio, Linux. (Languages and environments vary, since we adopt the architecture appropriate to each project.) Required skills: programming experience (backend languages also welcome). Nice to have: smartphone-app development experience; 3D development experience with Unity or similar. The office is five minutes on foot from Harajuku Station. Shall we build the future of the car together? If you're at all interested, come visit us. We look forward to your application!
          Web engineers wanted to join the launch of a sales-promotion service platform! by SPinno Inc. (株式会社SPinno)      
You will develop "SPinno", a cloud service that gathers sales-promotion know-how. Initially you will handle technical research, prototyping, design, and testing for the features on our roadmap. Longer term, broaden or deepen your scope according to your career direction: development-process optimization, DevOps, R&D on new technologies, and more. ■Required experience and skills: 5+ years of frontend or server-side development.  ・Frontend: JavaScript (jQuery, Vue.js)  ・Server side: PHP (Laravel)  ・DB: MySQL (AWS RDS)  ・Infrastructure: AWS, Docker ■Preferred experience and skills:  ・Agile development projects (e.g. Scrum)  ・Cloud-service development projects  ・Application development in printing, sales promotion, or marketing  ・Building infrastructure on AWS  ・Personal service development, or talks at meetups ■Environment and tools:  ・OS: Windows / Mac / Linux (Amazon Linux)  ・DB: RDS (MySQL, PostgreSQL), Redshift  ・DevOps: CircleCI / SideCI / VAddy  ・Other: GitHub / Redmine / Trac / Subversion / Re:dash
          ArcSight Delivery Quality Assurance Resource Engineer, Network Security at Ecscorp Resources      
Ecscorp Resources is a solution engineering firm established in 2001, with over 100 cumulative years of experience. Our business is driven by passion and a spirit of friendliness; we harness the power of creativity and technology to drive innovation and deliver cutting-edge solutions that increase productivity. Our passion, experience, expertise and shared knowledge have forged us into a formidable catalyst for desirable, sustainable change and continual growth. We strive to provide achievable solutions that efficiently and measurably support goal-focused business priorities and objectives. Duration: 3 months. Detailed description: The ArcSight division is a leading global provider of compliance and security management solutions that protect enterprises, educational institutions and government agencies. ArcSight helps customers comply with corporate and regulatory policy, safeguard their assets and processes, and control risk. The ArcSight platform collects and correlates user activity and event data across the enterprise so that businesses can rapidly identify, prioritize and respond to compliance violations, policy breaches, cybersecurity attacks, and insider threats. The successful candidate will work on the ArcSight R&D team. This is a hands-on position: the candidate will work with data collected from various network devices, in combination with the various ArcSight product lines, to deliver content that addresses the needs of all ArcSight customers. The ideal candidate will have a good understanding of enterprise security, hands-on networking and security skills, and the ability to write and understand scripting languages such as Perl and Python.
Duties: research, analyze and understand log sources, particularly from various devices in an enterprise network; categorize the security messages generated by various sources into the multi-dimensional ArcSight normalization schema; write and modify scripts to parse messages and interface with the ArcSight categorization database; work on content and vulnerability update releases; write scripts and automation to optimize the processes involved; understand content for ArcSight ESM, including correlation rules, dashboards, reports, visualizations, etc.; understand requirements and write content to address use cases based on customer requests and feedback; help deliver comprehensive, correct and useful ArcSight Connector and ESM content to ArcSight customers on schedule. Requirements: excellent knowledge of IT operations, administration and security; hands-on experience with a variety of networking and security devices, such as firewalls, routers and IDS/IPS; the ability to examine operational and security logs generated by networking and security devices and identify their meaning and severity; understanding of different logging mechanisms, standards and formats; very strong practical Linux-based and Windows-based system administration skills; strong scripting skills (Shell, Perl, Python, etc.) and regex; hands-on experience with databases such as MySQL; knowledge of a Security Information Management solution such as ArcSight ESM; experience with a version control system (Perforce, GitHub); advanced experience with Microsoft Excel; excellent written and verbal communication skills; the ability and desire to learn new technologies quickly while remaining detail-oriented; strong analytical and problem-solving skills; multi-tasking. Pluses: a network device or security certification (CISSP, CEH, etc.); experience with an application server such as Apache Tomcat; work experience in a security operations center (SOC).
          Bitcoin / blockchain expert with previous experience required, URGENT      
We need someone who can sell us Bitcoin; we are ready to pay upfront here. (Budget: $250 - $750 USD, Jobs: Bitcoin, HTML, MySQL, PHP, Website Design)
          Add function to database      
Hello, I need to add a function to the database: if last_date is earlier than today, set the ishidden value to "2" and show the project under closed projects. Also add media upload to the edit-profile page. My budget is $2. Thanks. (Budget: $2 - $8 USD, Jobs: Database Programming, HTML, MySQL, PHP, SQL)
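The requested rule (hide a project once its last_date has passed) can be sketched against a local SQLite table; the `projects` table and its columns here are hypothetical stand-ins for the poster's actual schema.

```python
import sqlite3
from datetime import date, timedelta

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE projects (id INTEGER, last_date TEXT, ishidden TEXT)")

# Two sample rows: one already past its last_date, one still open.
yesterday = (date.today() - timedelta(days=1)).isoformat()
tomorrow = (date.today() + timedelta(days=1)).isoformat()
conn.executemany("INSERT INTO projects VALUES (?, ?, ?)",
                 [(1, yesterday, "0"), (2, tomorrow, "0")])

# The requested rule: if last_date is earlier than today, set ishidden = "2".
conn.execute("UPDATE projects SET ishidden = '2' WHERE last_date < ?",
             (date.today().isoformat(),))

# Projects with ishidden = "2" would then appear under "closed projects".
closed = [row[0] for row in conn.execute(
    "SELECT id FROM projects WHERE ishidden = '2' ORDER BY id")]
```

In production this would typically run as a scheduled job or a MySQL event rather than on each page load.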
          SQLPro Studio 1.0.181 – Powerful database manager      
SQLPro Studio is a premium database management tool for Postgres, MySQL, Microsoft SQL Server and Oracle databases. Features: Intellisense/SQL auto-completion; syntax highlighting with customizable themes (including dark); tab-based interface for an optimal user experience; context-aware database tree navigation, including quick access to tables, views, columns, indices, and much more; SQL beautifier/formatter; database-wide searching; NTLMv2 […]
          Software Developer (m/f) Python/MySQL - LIMETEC Biotechnologies GmbH - Hennigsdorf      
Successfully completed university degree in computer science or a related field of study, or alternatively successfully completed vocational training as a...
Found at LIMETEC Biotechnologies GmbH - Thu, 31 May 2018 10:03:30 GMT - Show all Hennigsdorf jobs
          Software Developer (m/f) PHP/MySQL - LIMETEC Biotechnologies GmbH - Hennigsdorf      
Successfully completed university degree in computer science or a related field of study, or alternatively successfully completed vocational training as a...
Found at LIMETEC Biotechnologies GmbH - Fri, 07 Sep 2018 10:02:41 GMT - Show all Hennigsdorf jobs
          Magento 2 speed optimization 58748      
Hi there, I am looking for Magento 2 site speed optimization; the site should load in 2-3 seconds once the speed issues are solved. This is not only about improving scores on speed-test websites; it must meet the speed/performance requirements necessary to improve overall loading time... (Budget: $250 - $750 NZD, Jobs: HTML, Javascript, Magento, MySQL, PHP)
          Incomedia WebSite X5 Professional 13.1.8.23 Multilingual 180913      

[center]
Incomedia WebSite X5 Professional 13.1.8.23 Multilingual | 175.6 Mb

WebSite X5 is the most versatile and complete software that lets you create attractive, professional and functional websites, blogs and online stores. You don't need any programming skills to create a website, all you need is a mouse! The software is easy to use, flexible and open to your customization. You work with a fully-visual intuitive interface, with plenty of previews of your work that are constantly updated in real time.
[/center]

[center]
Incomedia WebSite X5 guarantees simplicity of use, flexibility and maximum customization so that you can create exactly the website you want.

Browse through more than 400,000 exclusive and royalty-free photos, buttons and graphic libraries, a gallery of ready-to-use widgets, and much more.

WebSite X5 provides a gallery of 1,500 templates. With such a vast choice available, you're sure to find the right solution for your website.

Top Features:
Sites with Mobile App to share news you publish
Online store with credit card management, product availability, promotions and coupons
Dynamic content that can be updated directly online
Integration with database and data management using an online Control Panel
Advanced Project Analysis and SEO optimization functions

Working with Incomedia WebSite X5 Evolution 13 is easy. Just follow the tutorial to create and publish your very own website online. The tutorial shows the basic steps: setting up the project, laying out the website map, creating pages, defining advanced features and, finally, publishing your website online.

Incomedia WebSite X5 Professional 13 is unique software for Web experts, an incredible combination of power and simplicity.

The secret of WebSite X5's success is that you don't have to spend time learning to use complicated software. All you have to do is follow the 5 easy steps to create top quality websites. Each step has been designed to help you obtain professional results with the minimum effort.

There's a specific tool for every job. From editing images and photos, to creating buttons, to automatically generating menus, right up to going online with the built-in FTP engine. You don't need any other software - this has it all. Save time and effort, because this software includes everything you need to create eye-catching and fully-comprehensive websites.

Tech Specs
Perfect for Windows 7 SP1, 8, 10 | 2 GB RAM | Min. Screen Resolution: 1024 x 600
Compatible with Windows, Linux and Unix servers running PHP 5.x and MySQL (required only for certain advanced features)
Internet connection and e-mail account required for activation

Home Page -
http://www.websitex5.com/en/professional.html

[/center]


          Incomedia WebSite X5 Professional 14.0.4.1 Multilingual 180913      

[center]
Incomedia WebSite X5 Professional 14.0.4.1 Multilingual | 164.2 Mb

WebSite X5 is the most versatile and complete software that lets you create attractive, professional and functional websites, blogs and online stores. You don't need any programming skills to create a website, all you need is a mouse! The software is easy to use, flexible and open to your customization. You work with a fully-visual intuitive interface, with plenty of previews of your work that are constantly updated in real time. Incomedia WebSite X5 guarantees simplicity of use, flexibility and maximum customization so that you can create exactly the website you want.
[/center]

[center]
Browse through more than 400,000 exclusive and royalty-free photos, buttons and graphic libraries, a gallery of ready-to-use widgets, and much more.

WebSite X5 provides a gallery of 1,500 templates. With such a vast choice available, you're sure to find the right solution for your website.

Top Features:
Sites with Mobile App to share news you publish
Online store with credit card management, product availability, promotions and coupons
Dynamic content that can be updated directly online
Integration with database and data management using an online Control Panel
Advanced Project Analysis and SEO optimization functions

Main Features WebSite X5 Professional 14:
Includes All Features of WebSite X5 Evolution 14 and much more:
Enhanced e-commerce tools: integration with payment processing gateways, Coupon & Discount codes, online inventory and order management, store optimized for search engines
Apps included to monitor and manage all your sites from your iOS or Android devices in real time: receive stats, process store orders, check inventory and comments on your blog.
Dynamic Content to edit your site directly online.
The secret of WebSite X5's success is that you don't have to spend time learning to use complicated software. All you have to do is follow the 5 easy steps to create top quality websites. Each step has been designed to help you obtain professional results with the minimum effort.

There's a specific tool for every job. From editing images and photos, to creating buttons, to automatically generating menus, right up to going online with the built-in FTP engine. You don't need any other software - this has it all. Save time and effort, because this software includes everything you need to create eye-catching and fully-comprehensive websites.

Tech Specs
Perfect for Windows 7 SP1, 8, 10 | 2 GB RAM | Min. Screen Resolution: 1024 x 600
Compatible with Windows, Linux and Unix servers running PHP 5.x and MySQL (required only for certain advanced features)
Internet connection and e-mail account required for activation

Home Page -
http://www.websitex5.com/en/professional.html

[/center]


          SQLPro Studio 1.0.181      

SQLPro Studio 1.0.181 | macOS | 70 MB

SQLPro Studio is a premium database management tool for Postgres, MySQL, Microsoft SQL Server and Oracle databases.

Some of its top features include:
+ Intellisense / SQL autocompletion.
+ Syntax highlighting with customizable themes (including dark).
+ Tabbed interface for an optimal user experience.
+ Context-aware database tree navigation, including quick access to tables, views, columns, indexes, and much more.
+ SQL beautifier/formatter.
+ Database-wide search.
+ NTLMv2 support (optional).
+ NetBIOS support.
+ Master-password support for additional security.

SQLPro Studio supports the following database servers:
+ MySQL and MariaDB
+ PostgreSQL
+ Microsoft SQL Server (2005 and above)
+ Oracle (8i and above)

Requirements: OS X 10.11 or later, 64-bit.
          Navicat Premium 12.1.7 Full Version      

Navicat Premium is an advanced multi-connection database administration tool that allows you to simultaneously connect to all kinds of databases easily. Navicat enables you to connect to MySQL, MariaDB, Oracle, PostgreSQL, SQLite, and SQL Server databases from a single application, making database administration easy. You can easily and quickly build, manage and maintain your […]

          Worth a read | 30 Redis interview questions: everything an interviewer might ask      

1. What is Redis? Briefly describe its advantages and disadvantages.

Redis is essentially an in-memory key-value database, much like memcached: the entire dataset is held and operated on in memory, and the data is periodically flushed to disk by an asynchronous operation.

Because it operates purely in memory, Redis performs extremely well, handling more than 100,000 reads and writes per second; it is among the fastest known key-value stores.

Redis's appeal is not just performance: its biggest draw is support for multiple data structures. A single value can be up to 1 GB (memcached is limited to 1 MB per item), which makes many useful patterns possible.

For example, its List type can serve as a FIFO doubly linked list, giving you a lightweight, high-performance message queue service, and its Set type can power a high-performance tag system, and so on.

Redis can also set an expire time on stored key-value pairs, so it can be used as an enhanced memcached. Its main drawback is that database capacity is bounded by physical memory, so it cannot serve as a high-performance store for truly massive datasets; Redis is best suited to high-performance operations and computation on smaller working sets.

2. What advantages does Redis have over memcached?

(1) All memcached values are plain strings; Redis, as its successor, supports richer data types.

(2) Redis is much faster than memcached.

(3) Redis can persist its data.

3. Which data types does Redis support?

String, List, Set, Sorted Set, and Hash.

4. What physical resource does Redis mainly consume?

Memory.

5. What does the name Redis stand for?

Remote Dictionary Server.

6. What data-eviction policies does Redis offer?

noeviction: return an error when the memory limit is reached and a client tries to run a command that would use more memory (most write commands, with DEL and a few other exceptions).

allkeys-lru: evict the least recently used (LRU) keys to make room for newly added data.

volatile-lru: evict the least recently used keys, but only among keys with an expire set, to make room for newly added data.

allkeys-random: evict random keys to make room for newly added data.

volatile-random: evict random keys to make room for newly added data, but only among keys with an expire set.

volatile-ttl: evict keys with an expire set, preferring those with the shortest time to live (TTL), to make room for newly added data.
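As a miniature illustration of allkeys-lru, here is a toy cache that evicts the least recently used key on overflow. Redis's real LRU is approximate and sample-based; this sketch only shows the policy's intent.

```python
from collections import OrderedDict

class LRUCache:
    """Toy model of an allkeys-lru eviction policy."""

    def __init__(self, maxkeys):
        self.maxkeys = maxkeys
        self.data = OrderedDict()  # oldest (least recently used) key first

    def set(self, key, value):
        if key in self.data:
            self.data.move_to_end(key)  # mark as most recently used
        self.data[key] = value
        if len(self.data) > self.maxkeys:
            self.data.popitem(last=False)  # evict the least recently used key

    def get(self, key):
        if key not in self.data:
            return None
        self.data.move_to_end(key)  # a read also refreshes recency
        return self.data[key]

cache = LRUCache(2)
cache.set("a", 1)
cache.set("b", 2)
cache.get("a")     # touch "a", so "b" becomes the least recently used key
cache.set("c", 3)  # over capacity: evicts "b"
```

The volatile-* variants differ only in restricting the candidate set to keys that have an expire time.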

7. Why doesn't Redis officially provide a Windows version?

The Linux version is already very stable and has a large user base; a Windows version is unnecessary and would instead bring compatibility and other problems.

8. What is the maximum size a String value can store?

512 MB.

9. Why does Redis keep all its data in memory?

Redis reads all data from memory to achieve the fastest possible read/write speed, and writes data to disk asynchronously, so it is both fast and persistent. If the data were not kept in memory, disk I/O speed would severely hurt Redis's performance.

With memory getting ever cheaper, Redis will only grow in popularity. If a maximum memory limit is set, no new values can be inserted once the stored data reaches that limit.

10. How should a Redis cluster be built? What are the options?

(1) Codis. Currently the most widely used clustering solution, with essentially the same effect as twemproxy, but it supports migrating old nodes' data to the new hash nodes when the number of nodes changes.

(2) Redis Cluster, built in since 3.0. Its distinguishing feature is that the distribution algorithm is not consistent hashing but the concept of hash slots, and it natively supports assigning replica nodes. See the official documentation for details.

(3) At the application layer: run several unrelated Redis instances, hash the key in code, and route the operation to the corresponding instance. This puts high demands on the hashing layer, which must handle, among other things, fallback algorithms when a node fails, automatic recovery scripts after data reshuffling, and instance monitoring.

11. Under what circumstances does a Redis cluster become entirely unavailable?

With a cluster of three nodes A, B and C and no replication, if node B fails, the whole cluster considers itself unavailable because the slot range 5501-11000 is missing.

12. MySQL holds 20 million rows but Redis stores only 200,000. How do you ensure the data in Redis is all hot data?

When the Redis in-memory dataset grows to a configured size, the data-eviction policy is applied.

13. Which scenarios suit Redis?

(1) Session cache

The most common use of Redis is session caching. Redis's advantage over other stores (such as Memcached) here is persistence. When maintaining a cache that is not strictly required to be consistent, most users would still be unhappy if their entire shopping cart were lost; with Redis, it need not be.

Fortunately, as Redis has improved over the years, it is easy to find documentation on using it properly for session caching; even the well-known commercial platform Magento offers a Redis plugin.

(2) Full-page cache (FPC)

Beyond basic session tokens, Redis also makes a very convenient FPC platform. Returning to the consistency question: even after a Redis instance restarts, disk persistence means users see no drop in page-load speed, a big improvement over something like PHP's local FPC.

Again taking Magento as an example, it provides a plugin for using Redis as the full-page cache backend. For WordPress users, Pantheon's excellent wp-redis plugin helps pages you have browsed load as fast as possible.

(3) Queues

One strength of Redis among in-memory storage engines is its list and set operations, which make it an excellent message-queue platform. Using Redis as a queue feels much like a native programming language's (e.g. Python's) push/pop operations on a list.

A quick Google search for "Redis queues" turns up plenty of open-source projects built on Redis as backend tooling for all kinds of queueing needs; Celery, for example, can use Redis as its broker.
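The list-as-queue pattern described above can be modeled with a plain double-ended queue. LPUSH and RPOP are Redis's command names; the local deque here only mimics their FIFO behavior and is not a Redis client.

```python
from collections import deque

queue = deque()

# Producer side, LPUSH-like: push new jobs onto the left end.
for job in ("job-1", "job-2", "job-3"):
    queue.appendleft(job)

# Consumer side, RPOP-like: pop the oldest job from the right end.
first = queue.pop()   # "job-1", the first job pushed
second = queue.pop()  # "job-2"
```

Libraries such as Celery layer delivery guarantees, retries and worker management on top of exactly this push/pop primitive.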

(4) Leaderboards / counters

Redis implements in-memory increment and decrement of numbers very well, and its Set and Sorted Set types make these operations especially simple; Redis just happens to provide exactly these two data structures.

So, to fetch the top-ranked users from a sorted set, let's call it "user_scores", we only need to execute a single command.

This assumes, of course, that you rank users by increasing score. If you want to return the users together with their scores, you need to execute:

ZRANGE user_scores 0 10 WITHSCORES
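What `ZRANGE ... WITHSCORES` returns can be modeled locally with a dict of member-to-score pairs. This is a sketch of the sorted-set semantics (ascending score, inclusive indices) with invented users, not a Redis client.

```python
# Toy model of a Redis sorted set: members with scores.
user_scores = {"alice": 120, "bob": 95, "carol": 300, "dave": 150}

def zrange_withscores(scores, start, stop):
    """ZRANGE key start stop WITHSCORES: ascending by score, inclusive indices."""
    ordered = sorted(scores.items(), key=lambda kv: kv[1])
    return ordered[start:stop + 1]

top = zrange_withscores(user_scores, 0, 10)  # lowest scores first
```

For a "top N" leaderboard you would normally use ZREVRANGE (descending) instead; the index arithmetic is the same.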

Agora Games is a good example: written in Ruby, its leaderboards use Redis to store the data.

(5) Publish/subscribe

Last (but certainly not least) is Redis's publish/subscribe facility. The use cases for pub/sub really are numerous. I have seen people use it for social-network connections, as a trigger for pub/sub-driven scripts, and even to build chat systems with Redis's pub/sub feature!

14. Which Java clients does Redis support, and which is officially recommended?

Redisson, Jedis, lettuce and others; Redisson is the officially recommended one.

15. What is the relationship between Redis and Redisson?

Redisson is a high-level distributed-coordination Redis client that helps users easily implement Java objects in a distributed environment (Bloom filter, BitSet, Set, SetMultimap, ScoredSortedSet, SortedSet, Map, ConcurrentMap, List, ListMultimap, Queue, BlockingQueue, Deque, BlockingDeque, Semaphore, Lock, ReadWriteLock, AtomicLong, CountDownLatch, Publish/Subscribe, HyperLogLog).

16. What are the pros and cons of Jedis versus Redisson?

Jedis is the Java client for Redis, and its API offers fairly comprehensive support for Redis commands.

Redisson implements distributed and scalable Java data structures. Compared with Jedis its feature set is simpler: it does not support string operations, nor Redis features such as sorting, transactions, pipelines and partitioning. Redisson's aim is separation of concerns, letting users focus their energy on business logic rather than on Redis itself.

17. How do you set and verify a password in Redis?

Set the password: config set requirepass 123456

Authenticate: auth 123456

18. Explain the concept of Redis hash slots.

Redis Cluster does not use consistent hashing; instead it introduces the concept of hash slots. The cluster has 16384 hash slots, each key is assigned to a slot by taking its CRC16 checksum mod 16384, and each node in the cluster is responsible for a portion of the slots.
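The slot computation has a compact sketch in Python: the standard library's `binascii.crc_hqx` computes CRC-16/XMODEM, the CRC16 variant used by the Redis Cluster specification (whose reference value is CRC16("123456789") = 0x31C3). Real clients additionally honor "hash tags" ({...} substrings), which this sketch ignores.

```python
import binascii

def hash_slot(key: str) -> int:
    """Map a key to one of the 16384 Redis Cluster hash slots.

    binascii.crc_hqx implements CRC-16/XMODEM, matching the CRC16 in the
    Redis Cluster spec. Hash tags ({...}) are not handled here.
    """
    return binascii.crc_hqx(key.encode(), 0) % 16384

slot = hash_slot("123456789")  # the spec's reference key: CRC16 = 0x31C3
```

Because keys map to slots rather than directly to nodes, resharding only moves slots between nodes; clients re-route on a MOVED reply.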

19. What is the Redis cluster's master-replica replication model?

So that the cluster remains available when some nodes fail or most nodes cannot communicate, it uses a master-replica model, in which each node has N-1 replicas.

20. Can a Redis cluster lose writes? Why?

Redis does not guarantee strong consistency, which means that in practice the cluster may lose write operations under certain conditions.

21. How do Redis cluster nodes replicate to each other?

Asynchronous replication.

22. What is the maximum number of nodes in a Redis cluster?

16384.

23. How do you select a database in a Redis cluster?

Redis Cluster currently cannot select databases; it always uses database 0.

24. How do you test connectivity to Redis?

ping

25. What are pipelines in Redis good for?

A request/response server can process new requests even while old ones have not yet been answered, so multiple commands can be sent to the server without waiting for the replies, which are then read back in a single step.

This is pipelining, a technique in wide use for decades. Many POP3 implementations, for example, support it, greatly speeding up the download of new mail from the server.
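Pipelining's benefit can be modeled without a network: many commands share one round trip. This is a toy sketch in which the FakeServer class and command tuples are invented for illustration; real clients such as redis-py expose the same idea through a pipeline object.

```python
class FakeServer:
    """Counts round trips; each execute() call is one network round trip."""

    def __init__(self):
        self.round_trips = 0
        self.store = {}

    def execute(self, commands):
        """One round trip carrying any number of commands; returns all replies."""
        self.round_trips += 1
        replies = []
        for op, key, *val in commands:
            if op == "SET":
                self.store[key] = val[0]
                replies.append("OK")
            elif op == "GET":
                replies.append(self.store.get(key))
        return replies

server = FakeServer()

# Without pipelining: one round trip per command.
for i in range(3):
    server.execute([("SET", f"k{i}", i)])
unpipelined_trips = server.round_trips  # 3 round trips for 3 commands

# With pipelining: all three GETs travel in a single round trip.
pipelined = server.execute([("GET", f"k{i}") for i in range(3)])
```

The saving is purely in round-trip latency; the server still executes the commands one by one.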

26. How should Redis transactions be understood?

A transaction is a single isolated operation: all commands in a transaction are serialized and executed in order, and execution cannot be interrupted by command requests sent by other clients.

A transaction is an atomic operation: either all of its commands are executed, or none of them are.

27. Which commands relate to Redis transactions?

MULTI, EXEC, DISCARD, WATCH.

28. How do you set a key's expiration time, or make it permanent?

The EXPIRE and PERSIST commands.

29. How does Redis optimize memory usage?

Use hashes wherever possible: small hashes (hashes storing only a few fields) use very little memory, so you should abstract your data model into hashes wherever you can.

For example, if your web application has a user object, do not create separate keys for the user's first name, last name, email and password; instead, store all of the user's information in a single hash.
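The advice above can be made concrete by contrasting the two key layouts side by side; the `user:1000` key and its fields are hypothetical examples, modeled here as plain Python dicts.

```python
# Per-field layout: one top-level key per attribute
# (each key carries its own per-key overhead in Redis).
flat_keys = {
    "user:1000:name": "Ada",
    "user:1000:surname": "Lovelace",
    "user:1000:email": "ada@example.com",
}

# Hash layout: one top-level key, attributes stored as hash fields,
# as Redis would hold them after HSET user:1000 name Ada surname ... .
hash_key = {
    "user:1000": {"name": "Ada", "surname": "Lovelace", "email": "ada@example.com"},
}

# Same information, but a third as many top-level keys; small hashes are
# also stored in a compact encoding (ziplist/listpack) in Redis.
top_level_before, top_level_after = len(flat_keys), len(hash_key)
```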

30. How does the Redis eviction process work?

A client runs a new command, adding new data.

Redis checks memory usage; if it exceeds the maxmemory limit, keys are evicted according to the configured policy.

A new command is executed, and so on.

So we continuously cross the memory-limit boundary, repeatedly reaching it and then evicting back below it.

If the result of a single command causes a large amount of memory to be used (for example, storing the intersection of very large sets into a new key), the memory limit can be exceeded by a noticeable amount before long.


          The Data Day: August 31, 2018      

AWS and VMware announce Amazon RDS on VMware. And more.

For @451Research clients: On the Yellowbrick road: data-warehousing vendor emerges with funding and flash-based EDW https://t.co/shKUTosHlS By @jmscrts

― Matt Aslett’s The Data Day (@thedataday) August 31, 2018

For @451Research clients: Automated analytics: the role of the machine in corporate decision-making https://t.co/3PkCXnGfhR By Krishna Roy

― Matt Aslett’s The Data Day (@thedataday) August 28, 2018

For @451Research clients: @prophix does cloud and on-premises CPM, with machine learning up next https://t.co/8FKKvRrJDb By Krishna Roy

― Matt Aslett’s The Data Day (@thedataday) August 28, 2018

AWS and VMware have announced Amazon Relational Database Service on VMware, supporting Microsoft SQL Server, Oracle, PostgreSQL, MySQL, and MariaDB. https://t.co/hy5F1g8dTA

― Matt Aslett’s The Data Day (@thedataday) August 27, 2018

Cloudera has launched Cloudera Data Warehouse (previously Cloudera Analytic DB) as well as Cloudera Altus Data Warehouse as-a-service https://t.co/386z7HaT6Q and also Cloudera Workload XM, an intelligent workload experience management cloud service https://t.co/v5jGb3Hkp0

― Matt Aslett’s The Data Day (@thedataday) August 30, 2018

Alteryx has announced version 2018.3 of the Alteryx analytics platform, including Visualytics for real-time, interactive visualizations https://t.co/8ewTXJqs5T

― Matt Aslett’s The Data Day (@thedataday) August 28, 2018

Informatica has updated its Master Data Management, Intelligent Cloud Services and Data Privacy and Protection products with a focus on hybrid, multi-cloud and on-premises environments. https://t.co/eGGrA28trh

― Matt Aslett’s The Data Day (@thedataday) August 29, 2018

SnapLogic has announced the general availability of SnapLogic eXtreme, providing data transformation support for big data architectures in the cloud. https://t.co/NijnMNLTx0

― Matt Aslett’s The Data Day (@thedataday) August 28, 2018

VoltDB has enhanced its open source VoltDB Community Edition to support real-time data snapshots, advanced clustering technology, exporter services, manual scale-out on commodity servers and access to the VoltDB Management Console. https://t.co/tEHblf4J7v

― Matt Aslett’s The Data Day (@thedataday) August 30, 2018

ODPi has announced the Egeria project for the open sharing, exchange and governance of metadata https://t.co/tEb0jRHV8F

― Matt Aslett’s The Data Day (@thedataday) August 28, 2018

And that’s the data day


          Set up Zen Cart      
I'm a designer and a developer (to some extent). I need one Zen Cart store set up, and maybe another. One has no more than 100 products; the other (tentative) has roughly the same. (Budget: $30 - $250 USD, Jobs: eCommerce, MySQL, PHP)
          configure the captcha ocr api      Cache   Translate Page      
Hi, I want to configure a captcha OCR API with a script, so that the system sends the captcha to the OCR and the OCR replies with the answer to the API. I prefer to use this API: http://www.eve.cm (Budget: $750 - $1500 USD, Jobs: Javascript, MySQL, PHP, Python, Software Architecture)
          add chat in my site between users (my site laravel framwork)      Cache   Translate Page      
I need a new chat system so that users of my site can contact each other; it should be easy to use and fast at sending and receiving messages, with some additional options. (Budget: $30 - $250 USD, Jobs: HTML, Laravel, MySQL, PHP, Website Design)
          Tinder-like app for pets (ionic 3+)      Cache   Translate Page      
We need an app like Tinder for pets. App Features include: -Login/registration -Pet registration with geolocation and reverse geocoding -Match browsing and finding by registration data -Chat between matches... (Budget: $750 - $1500 USD, Jobs: Angular.js, Ionic Framework, Java, Mobile App Development, MySQL)
          Build me an app with a database      Cache   Translate Page      
We want to build an employment database where carers, nurses and cleaners can upload their details about their ability to work. Details would include cv alongside other materials and their available times... (Budget: £100 - £10000 GBP, Jobs: Database Administration, Mobile App Development, MySQL, PHP, User Interface / IA)
          Ticketing and Membership Website      Cache   Translate Page      
I am looking to create a ticketing website with functionality similar to Eventbrite. The website will sell tickets, support discount codes, and offer monthly and annual recurring memberships that allow users to attend events for free... (Budget: $1500 - $3000 USD, Jobs: Graphic Design, HTML, MySQL, PHP, Website Design)
          The State of Java Serialization      Cache   Translate Page      

Key takeaways

  • Java serialization has introduced security vulnerabilities in many libraries.
  • Modularizing serialization is under open discussion.
  • If serialization becomes a module, developers will be able to remove it from the attack surface.
  • Removing other modules eliminates the risks they bring.
  • Instrumentation offers a way to weave in security controls, providing a modern defense mechanism.

For years, Java's serialization feature has been plagued by security vulnerabilities and zero-day attacks, earning it nicknames such as "the gift that keeps on giving" and "the fourth unforgivable curse."

In response, OpenJDK contributors have discussed approaches for limiting access to serialization, such as extracting it into a Jigsaw module that can be removed, so that hackers cannot attack what is not there.

Articles such as "Serialization Must Die" suggest that doing so would help prevent the exploitation of vulnerabilities in certain popular software, such as VCenter 6.5.

What is serialization?

Serialization has been part of the Java platform since the release of JDK 1.1 in 1997.

It is used to share object representations across sockets, or to save an object and its state for later use (deserialization).

In JDK 10 and earlier, serialization is present on every system as part of java.base, through the java.io.Serializable interface.

GeeksForGeeks gives a detailed description of how serialization works.

For more code examples of using serialization, see Baeldung's introduction to Java serialization.
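
To make the mechanism above concrete, here is a minimal round-trip sketch (the Point class and helper names are invented for this example): an object graph is written to a byte stream and later restored from it.

```java
import java.io.*;

public class RoundTrip {

    // A minimal Serializable class; serialVersionUID pins the wire format.
    static class Point implements Serializable {
        private static final long serialVersionUID = 1L;
        final int x, y;
        Point(int x, int y) { this.x = x; this.y = y; }
    }

    // Serialize an object graph to bytes (a socket or file works the same way).
    static byte[] toBytes(Object o) throws IOException {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
            out.writeObject(o);
        }
        return bytes.toByteArray();
    }

    // Deserialize: constructors are not run; state is restored from the stream.
    static Object fromBytes(byte[] data) throws IOException, ClassNotFoundException {
        try (ObjectInputStream in = new ObjectInputStream(new ByteArrayInputStream(data))) {
            return in.readObject();
        }
    }

    public static void main(String[] args) throws Exception {
        Point p = (Point) fromBytes(toBytes(new Point(3, 4)));
        System.out.println(p.x + "," + p.y); // prints 3,4
    }
}
```

The same pair of streams works across a socket or a file, which is exactly what makes the format convenient, and risky when the bytes come from an untrusted source.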

Challenges and limitations of serialization

Serialization's limitations show up mainly in two areas:

  1. Newer object-transfer strategies have appeared, such as JSON, XML, Apache Avro, and Protocol Buffers.
  2. The serialization strategy of 1997 could not foresee how modern internet services would be built and attacked.

The basic premise of a serialization exploit is to find classes that perform privileged operations on deserialized data, and then feed them malicious input. To understand a complete attack, see Matthias Kaiser's 2015 talk "Exploiting Deserialization Vulnerabilities in Java", which provides examples starting at slide 14.
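
To make that premise concrete, the toy class below (invented for illustration; it is not one of the real gadget classes from the research cited here) performs a side effect inside readObject, which ObjectInputStream invokes automatically during deserialization, before the receiving code ever sees the object. A real gadget would do something far more dangerous than recording a string.

```java
import java.io.*;

public class UnsafeGadgetDemo {

    // Deliberately unsafe: readObject acts on data taken from the untrusted stream.
    static class AutoRunTask implements Serializable {
        private static final long serialVersionUID = 1L;
        static String executed;          // records the side effect for the demo
        String command;

        AutoRunTask(String command) { this.command = command; }

        // Invoked automatically by ObjectInputStream during deserialization,
        // before the receiving code ever gets a reference to the object.
        private void readObject(ObjectInputStream in) throws IOException, ClassNotFoundException {
            in.defaultReadObject();
            // A real gadget might call Runtime.getRuntime().exec(command) here;
            // we only record that attacker-controlled data drove a side effect.
            executed = command;
        }
    }

    static byte[] toBytes(Object o) throws IOException {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
            out.writeObject(o);
        }
        return bytes.toByteArray();
    }

    public static void main(String[] args) throws Exception {
        byte[] attackerBytes = toBytes(new AutoRunTask("rm -rf /tmp/x"));
        try (ObjectInputStream in = new ObjectInputStream(new ByteArrayInputStream(attackerBytes))) {
            in.readObject(); // the side effect fires here, not in user code
        }
        System.out.println("side effect ran with: " + AutoRunTask.executed);
    }
}
```

The point of the sketch is that the dangerous code runs during readObject itself; the application never has a chance to inspect the object first.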

Most other security research related to serialization builds on the work of Chris Frohoff, Gabriel Lawrence, and Alvaro Munoz.

Where is serialization? How do I know whether my application uses it?

To remove serialization, you need to start from the java.io package, which is part of the java.base module. The most common usage scenarios are:

Developers who use these methods should consider alternative ways of storing and reading back data. Eishay Smith has published performance metrics for several different serialization libraries. When evaluating performance, include security in the benchmark metrics. Default Java serialization may be "faster", but vulnerabilities arrive at the same speed.

How can we reduce the impact of serialization flaws?

Project Amber includes a discussion about isolating the serialization API. The idea is to move serialization out of java.base into its own module, so that applications can remove it entirely. No conclusion on the proposal was reached when the JDK 11 feature set was decided, but the discussion may continue for a future Java release.

Reducing serialization exposure with runtime protection

A system that monitors risk and automates repeatable security expertise can be useful for many enterprises. Java applications can embed JVMTI tooling into a security monitoring system, planting sensors into the application through instrumentation. Contrast Security is a free product in this space and a Duke's Choice award winner at JavaOne. Like other software projects (such as MySQL or GraalVM), the Contrast Security Community Edition is free for developers.

The benefit of applying runtime instrumentation to Java security is that it requires no code changes and integrates directly into the JRE.

It is somewhat similar to aspect-oriented programming, weaving non-invasive bytecode into sources (entry points where remote data enters the application), sinks (places where data is used in unsafe ways), and transfers (places where security tracking needs to move from one object to another).

By integrating each "sink", such as ObjectInputStream, a runtime protection mechanism can add extra capabilities. Before the deserialization filter was backported from JDK 9, this capability was critical against serialization attacks as well as other attack types (such as SQL injection).
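
As a sketch of what a control at this "sink" looks like with the standard JDK 9+ filter API (the allow-list pattern here is illustrative, not a recommendation): the stream below rejects any class outside java.lang before its bytes are even deserialized.

```java
import java.io.*;
import java.util.HashMap;

public class FilterDemo {

    static byte[] serialize(Object o) throws IOException {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
            out.writeObject(o);
        }
        return bytes.toByteArray();
    }

    public static void main(String[] args) throws Exception {
        byte[] payload = serialize(new HashMap<String, String>());

        try (ObjectInputStream in = new ObjectInputStream(new ByteArrayInputStream(payload))) {
            // Allow only java.lang classes; reject everything else ("!*").
            in.setObjectInputFilter(ObjectInputFilter.Config.createFilter("java.lang.*;!*"));
            in.readObject(); // java.util.HashMap is not on the allow-list...
            System.out.println("unexpectedly allowed");
        } catch (InvalidClassException expected) {
            // ...so the filter rejects it before any of its code can run.
            System.out.println("rejected by filter");
        }
    }
}
```

The same filter pattern syntax can be applied process-wide via the jdk.serialFilter system property instead of per stream.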

Integrating such runtime protection only requires changing the startup flags, adding a javaagent to the launch options. For example, in Tomcat, add the flag in bin/setenv.sh:

CATALINA_OPTS=-javaagent:/Users/ecostlow/Downloads/Contrast/contrast.jar

After startup, Tomcat initializes the runtime protection and injects it into the application. This separation of concerns lets the application focus on business logic while the security analyzer handles security in the right places.

Other useful security techniques

Rather than manually maintaining a long list of items during maintenance, use a system like OWASP Dependency-Check, which identifies dependencies with known security vulnerabilities and suggests upgrades. Also consider automatic library updates through a system like DependABot.

Although well-intentioned, the default Oracle serialization filter has the same design flaw as the SecurityManager and its related sandbox vulnerabilities. Because it conflates roles and permissions and requires advance knowledge of unknowable things, large-scale adoption of the feature has been limited: system administrators do not know what is in the code, so they cannot list the class files; developers do not know the environment; and even DevOps teams usually do not know the requirements of other parts of the system, such as the application server.

Removing the security risk of unused modules

Java 9's modular JDK can create custom runtime images, removing unnecessary modules with a tool named jlink. The benefit of this approach is that hackers cannot attack what is not there.

It will take a while between serialization being modularized as proposed and applications actually being able to make use of that, and of other new serialization capabilities, but as the proverb goes: "The best time to plant a tree was twenty years ago; the second-best time is now."

Stripping out Java's native serialization should also give most applications and microservices better interoperability. By using a standard format such as JSON or XML, developers can communicate more easily between services written in different languages: a Python microservice usually has better integration for reading a JSON document than for a Java 7 binary blob. Still, although the JSON format simplifies object sharing, the "Friday the 13th JSON attacks" against Java and .NET parsers proved that there is no silver bullet (whitepaper).

Until that stripping happens, serialization remains in java.base. These techniques can reduce the risks associated with other modules, and they will still be usable after serialization is modularized.

Example: modularizing JDK 10 for Apache Tomcat 8.5.31

In this example, we will run Apache Tomcat with a modular JRE, removing all the JDK modules it does not need. The result is a custom JRE with a smaller attack surface that can still run the application.

Determining which modules are needed

The first step is to check which modules the application actually uses. The OpenJDK tool jdeps can scan the bytecode of JAR files and list those modules. Like most users, we simply do not know which dependencies or modules are required by code we did not write ourselves, so I used the scanner to detect them and produce a report.

The command to list the modules required by a single JAR file is:

jdeps -s JarFile.jar

It lists the module information:

tomcat-coyote.jar -> java.base
tomcat-coyote.jar -> java.management
tomcat-coyote.jar -> not found

Finally, each module (the right-hand part) should be added to a module file that acts as the application's base module. This file is named module-info.java; the hyphen in the file name signals that it does not follow standard Java conventions and requires special handling.

The following command pipeline lists all the modules in a usable file; run it from the Tomcat root directory:

find . -name *.jar ! -path "./webapps/*" ! -path "./temp/*" -exec jdeps -s {} \; | sed -En "s/.* -\> (.*)/  requires \1;/p" | sort | uniq | grep -v "not found" | xargs -0 printf "module com.infoq.jdk.TomcatModuleExample{\n%s}\n"

The output of these commands is written to the file lib/module-info.java and looks like this:

module com.infoq.jdk.TomcatModuleExample{
  requires java.base;
  requires java.compiler;
  requires java.desktop;
  requires java.instrument;
  requires java.logging;
  requires java.management;
  requires java.naming;
  requires java.security.jgss;
  requires java.sql;
  requires java.xml.ws.annotation;
  requires java.xml.ws;
  requires java.xml;
}

This list is much shorter than the full list of Java modules.

The next step is to put this file into a JAR:

javac lib/module-info.java
jar -cf lib/Tomcat.jar lib/module-info.class

Finally, create a JRE for the application:

jlink --module-path lib:$JAVA_HOME/jmods --add-modules ThanksInfoQ_Costlow --output dist

The output of this command is a runtime containing just the right modules needed to run the application, with no performance overhead and without the potential security risks of unused modules.

Compared with the base JDK 10, only 19 of the 98 core modules are used:

java --list-modules

com.infoq.jdk.TomcatModuleExample
java.activation@10.0.1
java.base@10.0.1
java.compiler@10.0.1
java.datatransfer@10.0.1
java.desktop@10.0.1
java.instrument@10.0.1
java.logging@10.0.1
java.management@10.0.1
java.naming@10.0.1
java.prefs@10.0.1
java.security.jgss@10.0.1
java.security.sasl@10.0.1
java.sql@10.0.1
java.xml@10.0.1
java.xml.bind@10.0.1
java.xml.ws@10.0.1
java.xml.ws.annotation@10.0.1
jdk.httpserver@10.0.1
jdk.unsupported@10.0.1

After running this command, you can use the runtime in the dist folder to run the application.

Look at that list: the deployment plugin (applets) is gone, JDBC (SQL) is gone, JavaFX is gone, and many other modules have disappeared. From a performance standpoint, those modules no longer have any impact. From a security standpoint, hackers cannot attack what is not there. Keeping the modules the application does need is essential, because the application cannot run without them.

About the author

Erik Costlow is an Oracle product manager for Java 8 and 9, focused on security and performance. His security expertise involves threat modeling, code analysis, and instrumentation of security sensors. Before entering the technology field, Erik was a circus performer who juggled fire on a three-wheeled vertical unicycle.

Read the original English article: The State of Java Serialization

Source: http://www.infoq.com/cn/articles/java-serialization-aug18

 


          There are 300,297 duplicate voters in East Java      Cache   Translate Page      
The East Java Bawaslu (Election Supervisory Agency) released its findings on duplicate voter registrations, coordinated with the provincial KPU (General Elections Commission) and political parties, at the East Java provincial Bawaslu office on Wednesday (12/9). Bawaslu oversight commissioner Aang Khunaifi explained that the duplicates were found through analysis performed with a MySQL application.
          Android Developer - Vikas Global Solutions Ltd - Madhavanpark, Karnataka      Cache   Translate Page      
Android Application Development, Android SDKs, APIs, HTML, CSS, MySQL. Android Application Development. Vikas Global Solutions Ltd.... ₹1,00,000 - ₹2,00,000 a year
From Indeed - Mon, 10 Sep 2018 05:26:38 GMT - View all Madhavanpark, Karnataka jobs
          Create import script for OS Property to import reaxml real estate files      Cache   Translate Page      
Develop a script to import REAXML files (containing property listings). REAXML is the Australian standard XML import format for real estate. OS Property currently only accepts regular XML files. OS Property... (Budget: $250 - $750 AUD, Jobs: Joomla, MySQL, PHP, Software Architecture, XML)
          Delete All Prestashop Products and Categories and Change domain name      Cache   Translate Page      
Hey, I want to delete all my Prestashop products and categories, and I want to change one website's domain from a subdomain to a domain and the other from a domain to a subdomain. (Budget: $10 - $30 USD, Jobs: eCommerce, HTML, MySQL, PHP, Prestashop)
          Infrastructure engineer wanted at Yappli, renowned in the industry for its technical strength, by Yappli, Inc.      Cache   Translate Page      
You will take charge of infrastructure construction, design, and operation for "Yappli", a platform for developing and operating high-quality native apps without programming.

[Specifics]
- Server design and construction on AWS
- System improvements in cooperation with web/mobile application engineers
- Introduction and operation of operations-automation and monitoring tools
Chef is used for building and deployment; Dockerization is also planned.

[Development environment]
- Frameworks: Laravel, Nuxt.js, Vuex, gRPC
- Front end and languages: PHP, Go, Scala, JavaScript, ES2015, Sass (SCSS), TypeScript, Swift, Objective-C, Kotlin, RxJava, Retrofit
- Database: MySQL (Aurora)
- Infrastructure: AWS (EC2, ECS, Fargate, OpsWorks, Lambda, SNS, SQS, DynamoDB, RDS), Chef, Docker, GCP
- Communication: Slack
- Others: Kubernetes, Serverspec, Ansible, NewRelic, Datadog, Elasticsearch

[Required skills/experience]
- Experience building and operating infrastructure on AWS (2+ years)
- Experience with OSS deployment tools such as Chef/Ansible/Capistrano (6+ months)
- Experience with monitoring tools such as Zabbix/Nagios/Mackerel/NewRelic/re:dash (6+ months)
* Depending on experience in the following, the years above are negotiable:
- Hands-on experience with Serverspec/Testinfra
- Hands-on experience with Docker

[Preferred skills/experience]
- Understanding of object-oriented (or functional-programming) design patterns and the ability to design loosely coupled, highly extensible systems
- Experience with infrastructure-management tools such as Terraform/CloudFormation/Codenize.tools (6+ months)

≪"Yappli", founded by three Yahoo alumni≫
We develop and provide "Yappli" to serve as the app-development platform for companies without in-house engineers, in other words the 90% of companies that are not IT companies. More than 250 leading domestic brands have built apps on this single product, with downloads in the tens of millions. By 2020 we aim to exceed 1,000 client companies and 100 million downloads, becoming the infrastructure for companies' own apps.

[Engineering organization]
There are currently about 16 engineers:
CTO: 1
Front-end engineers: 2
Server-side/infrastructure engineers: 8
iOS/Android engineers: 5

* We are looking for colleagues to pioneer the new app era together! The platform has grown rapidly in recent years along with the number of client companies, and we are seeking new members for every section. If you are at all interested, please apply!
          MySQL ORDER BY: per-condition fields and ASC/DESC      Cache   Translate Page      
I didn't think this would work, but it does.

SELECT * FROM myTable ORDER BY (CASE WHEN condition1 THEN myField1 END) ASC, (CASE WHEN condition2 THEN myField2 END) DESC

Written this way, rows matching condition1 are sorted by myField1 ASC, and rows matching condition2 are sorted by myField2 DESC.


Example) Sort by myField1, then myField2 DESC, but sort by myField2 ASC only when myField1 is 'A'.
SELECT * FROM myTable ORDER BY myField1, (CASE WHEN myField1 = 'A' THEN myField2 END) ASC, (CASE WHEN myField1 != 'A' THEN myField2 END) DESC


License : Public Domain
          Linux Kernel Vs. Mac Kernel      Cache   Translate Page      
http://www.linuxandubuntu.com/home/difference-between-linux-kernel-mac-kernel

Difference Between Linux Kernel & Mac Kernel
Both the Linux kernel and the macOS kernel are UNIX-based. Some people say that macOS is "just Linux"; some say the two are compatible because of similarities between their commands and file-system hierarchy. Today I want to show a little of both, covering the differences and similarities between the Linux kernel and the Mac kernel, as I mentioned in previous Linux kernel articles.

Kernel of macOS

In 1985, Steve Jobs left Apple due to a disagreement with CEO John Sculley and Apple's board of directors. He then founded a new computer company called NeXT. Jobs wanted a new computer (with a new operating system) to be released quickly. To save time, the NeXT team used the Carnegie Mellon Mach kernel and parts of the BSD code base to create the NeXTSTEP operating system.
NeXTSTEP desktop operating system
NeXT never became a financial success, in part due to Jobs's habit of spending money as if he were still at Apple. Meanwhile, Apple tried unsuccessfully to update its operating system on several occasions, even partnering with IBM. In 1997, Apple bought NeXT for $429 million. As part of the deal, Steve Jobs returned to Apple, and NeXTSTEP became the foundation of macOS and iOS.

Linux kernel

Unlike the macOS kernel, Linux was not created as part of a commercial venture. Instead, it was created in 1991 by computer-science student Linus Torvalds. Originally, the kernel was written to the specifications of Linus's own computer, because he wanted to take advantage of its new 80386 processor. Linus posted the code for his new kernel on the web in August 1991, and he was soon receiving code contributions and feature suggestions from around the world. The following year, Orest Zborowski ported the X Window System to Linux, giving it the ability to support a graphical user interface.

MacOS kernel resources

The macOS kernel is officially known as XNU. The acronym stands for "X is Not Unix." According to Apple's official GitHub page, XNU is "a hybrid kernel that combines the Mach kernel developed at Carnegie Mellon University with FreeBSD and C++ components for the drivers." The BSD subsystem part of the code is "normally implemented as userspace servers in microkernel systems". The Mach part is responsible for low-level work such as multitasking, protected memory, virtual memory management, kernel debugging support, and console I/O.

Map of macOS: the heart of everything is called Darwin; within it, we have separate system utilities and the XNU kernel, which is composed in part of the Mach kernel and in part of the BSD kernel.

Unlike Linux, this kernel is split into what is called a hybrid kernel, allowing one part of it to stop for maintenance while another continues working. In several debates this has also raised the question of whether a hybrid kernel is more stable: if one of its parts stops, the other can start it again.

Linux kernel resources

While the macOS kernel combines the capabilities of a microkernel (Mach) with those of a monolithic kernel (BSD), Linux is just a monolithic kernel. A monolithic kernel is responsible for managing the CPU, memory, inter-process communication, device drivers, the file system, and system service calls. That is, it does everything, without subdivisions.

Naturally, this has generated much discussion, even involving Linus himself and other developers, with claims that a monolithic kernel is more susceptible to errors as well as slower; but Linux proves the opposite year after year and can be optimized like a hybrid kernel. In addition, with the help of Red Hat, the kernel now includes live patching, which allows real-time maintenance with no reboot required.

Differences between MacOS Kernel (XNU) and Linux

  1. The macOS kernel (XNU) has existed longer than Linux and was based on a combination of two even older code bases; this weighs in its favor for stability and history.
  2. On the other hand, Linux is newer, written from scratch, and used on many more devices; so much so that it is present in all of the 500 best supercomputers and in the recently inaugurated North American supercomputer.

At the system level, there is no command-line package manager in the macOS terminal.
The installation of packages in .pkg format, as on BSD, is via this command line, if not through the GUI:
$ sudo installer -pkg /path/to/package.pkg -target /
NOTE: a macOS .pkg is totally different from a BSD .pkg!
Do not assume that macOS supports BSD programs and vice versa. It does not support or install them.
You can have a command equivalent to apt on macOS through two options: installing Homebrew or MacPorts. In the end, you will have the following syntax:
$ brew install PACKAGE
$ port install PACKAGE
Remember that not all programs/packages available for Linux or BSD will be in MacPorts.

Compatibility

In terms of compatibility, there is not much to say; the Darwin core and the Linux kernel are as distinct as the Windows NT kernel is from the BSD kernel. Drivers written for Linux do not run on macOS and vice versa; they must be compiled for each platform beforehand. Curiously, Linux has ports of a number of macOS daemons, including the CUPS print server!

What we do have in common is compatibility at the level of terminal tools, such as the GNU utils packages or Busybox, so we have not only Bash but also gcc, rm, dd, top, nano, vim, etc. This is intrinsic to all UNIX-based systems. In addition, we share the filesystem folder architecture, with the common root folders /, /lib, /var, /etc, /dev, and so on.

Conclusion

macOS and Linux have their similarities and differences, just as BSD does compared with Linux. But because they are UNIX-based, they share patterns that make each environment familiar. Those who use Linux and migrate to macOS, or vice versa, will recognize a number of commands and features. The most striking difference is the graphical interface, and any problem there is a matter of personal adaptation.

          Apache NiFi fails to connect to Solr (LocalHost)      Cache   Translate Page      
Apache NiFi fails to connect to Solr (localhost); I'm using the Hortonworks VM for college coursework. Not sure why it keeps failing; I've tried Googling for hours! (Budget: $10 - $30 USD, Jobs: Apache, Linux, MySQL, Nginx, System Admin)
          [Tutorial] Download Lynda Cleaning Bad Data in R - Cleaning Dirty Data in R      Cache   Translate Page      

Download Lynda Cleaning Bad Data in R - Cleaning Dirty Data in R

R is a programming language and software environment for statistical computing and data science, implemented on the basis of the S and Scheme languages. This open-source software is released under the GNU General Public License and is freely available. Apart from R, the S language has also been implemented by the Insightful company in the commercial software S-PLUS. Although the commands of S-PLUS and R are very similar, the two programs have distinct cores. R offers a wide range of statistical techniques (including linear and nonlinear modeling, classical statistical tests, time-series analysis, classification, clustering, and more) as well as graphical capabilities. ...


http://p30download.com/81895

Related posts:



Category: Downloads » Tutorials » Programming & Web Design
Download link: http://p30download.com/fa/entry/81895


          ANDROID APPLICATION FOR LOGIN/AUTO LOGIN/LOGOUT      Cache   Translate Page      
I have a mobile version of my system where the user has to log in with email/password. I would like to create an application for Android and iOS where the user would log in just once and the credentials would be stored on the phone... (Budget: $30 - $250 USD, Jobs: Android, iPhone, Mobile App Development, MySQL, PHP)
          Navicat Premium / Essentials Premium v12.1.7 (x86/x64)      Cache   Translate Page      
http://i100.fastpic.ru/big/2018/0912/26/9fbc714f0f98bd57bad8e94fd6146326.jpg
Navicat Premium / Essentials Premium v12.1.7 (x86/x64) | 227 Mb

Navicat Essentials is a compact version of Navicat which provides the basic and necessary features you will need to perform simple database development. Navicat Essentials is for commercial use and is available for MySQL, MariaDB, SQL Server, PostgreSQL, Oracle, and SQLite databases. If you need to administer all aforementioned database servers at the same time, there is also Navicat Premium Essentials which allows you to access multiple servers from a single application.
          Setting up the server and setting up the admin page      Cache   Translate Page      
We need to set up the server in AWS, as well as the admin panel. (Budget: $250 - $750 USD, Jobs: Amazon Web Services, HTML, Linux, MySQL, PHP)
          Build a website      Cache   Translate Page      
I am interested in building an "Airbnb style" website. We can start small. I'm happy with a ready-made script, but we need to be able to modify it and make changes in the future. It must be user friendly both for us and for our clientele... (Budget: $1500 - $3000 USD, Jobs: Graphic Design, HTML, MySQL, PHP, Website Design)
          Web Developer - New Roots Herbal - Tessier, SK      Cache   Translate Page      
Strong experience with PHP, MySQL, HTML 5, CSS 3, JavaScript, jQuery, Laravel, Drupal, Joomla; New Roots Herbal is a leading national manufacturer and...
From New Roots Herbal - Tue, 17 Jul 2018 23:03:24 GMT - View all Tessier, SK jobs
          Moodle installation on AWS      Cache   Translate Page      
Hi, Looking for someone who can setup EC2 VM for moodle and help me to setup my development platform for Moodle. (Budget: $8 - $15 AUD, Jobs: Amazon Web Services, Linux, Moodle, MySQL, PHP)
          Configuration center duic 2.2.0 released, adding OpenAPI preview and refined code      Cache   Translate Page      

duic is a configuration center developed in Kotlin with spring-webflux. Configuration is retrieved over HTTP, so it can manage the configuration of applications written in any language. Its design goals are to unify configuration management across different applications, provide a friendlier way to edit configuration, and offer more flexible ways to retrieve it.

  • Supports mongodb, mysql, postgresql, and oracle for storing configuration

  • Supports merging multiple configurations

  • Supports fetching configuration on demand

  • Supports user permission control

  • Supports ip/token access restrictions

  • Supports distributed cluster configuration management

  • Supports docker

Changes in this release:

  • Added an OpenAPI preview feature

  • Replaced Joda-Time with java8 time

  • Avoided printing useless Exception logs

  • Set the editor's default indentation to 2 spaces

  • Replaced the raw webpack build with vue-cli

  • Trim leading and trailing whitespace when adding application configuration

  • Upgraded the Kotlin version

  • Upgraded the spring-boot version

Resources:


          Open-source PHP framework MiniFramework releases version 1.4.0      Cache   Translate Page      

MiniFramework is an ultra-lightweight PHP development framework released under the Apache 2 open-source license, with support for MVC and RESTful. MiniFramework helps developers build web applications quickly with minimal learning cost; beyond covering developers' most basic needs, such as layered development and database and cache access, it stays as lean as possible so that applications built on the framework run efficiently.

MiniFramework released version 1.4.0 on September 13, 2018, with the following changes:

  • Added a Log class for recording code runtime errors and developer-defined debugging information as logs.

  • Added the constant LOG_ON to turn logging on and off (recommended off in production).

  • Added the constant LOG_LEVEL to define which error levels can be written to the log.

  • Added the constant LOG_PATH to define the log storage path.

  • Added the varType method to the Debug class for determining variable types.

  • Improved and optimized exception handling.

MiniFramework 1.4.0 downloads
zip: https://github.com/jasonweicn/MiniFramework/archive/1.4.0.zip
tar.gz: https://github.com/jasonweicn/MiniFramework/archive/1.4.0.tar.gz

MiniFramework quick-start documentation
URL: http://www.miniframework.com/docv1/guide/

Main changes in recent releases:

1.3.0

  • Added a Debug class for debugging program code.

  • Added the commit method to the Session class, committing the data currently held in the $_SESSION variable.

  • Added the status method to the Session class for getting the current session status. (PHP >= 5.4.0)

  • Added the setSaveNameLen method to the Upload class for setting the length of the random file name generated when saving an uploaded file.

  • Added the saveOne method to the Upload class, dedicated to uploading and saving a single file.

  • Improved the save method of the Upload class to support uploading and saving multiple files at once.

1.2.0

  • Added an Upload class for uploading files.

  • Added the global function getFileExtName() for getting a file's extension.

  • Added the global function getHash() for obtaining an INT hash of a specified length in sharded database or table scenarios.

  • Added the constant PUBLIC_PATH for defining the web site root directory.

  • Improved the Model class, adding support for querying data through method chaining.

1.1.1

  • Fixed a naming-conflict bug in the Registry class by renaming its unset method to del.

1.1.0

  • Added a Captcha class for generating and validating image captchas

  • Added the unset method to the Registry class for deleting registered variables

  • Added the global function browserDownload() for making the browser download a file

  • Added a controller named Example to the App directory, containing sample code for some features

1.0.13

  • Improved the execTrans method in Db_Mysql

  • Improved the rendering feature

  • Added the global function isImage() for determining whether a file is in an image format

  • Added the global function getStringLen() for getting string length (supports UTF-8-encoded Chinese characters)

1.0.12

  • Added a Session class for reading and writing session data

1.0.11

  • Improved the mechanism for converting pseudo-static URL separators

  • Optimized routing performance when handling pseudo-static URLs

  • Optimized properties of some core classes

  • Optimized the framework's memory footprint


          XtraBackup 8.0.1 alpha released, compatible with MySQL 8.0      Cache   Translate Page      

Percona XtraBackup 8.0.1 alpha has been released. This is the first test release of XtraBackup for MySQL 8.0 and can be used for MySQL 8.0 backups.

Notable items:

  • The deprecated innobackupex command has been removed.

  • Due to changes in the MySQL 8.0 data directory and redo format, the new XtraBackup for MySQL 8.0 is only compatible with MySQL 8.0 and the upcoming Percona Server for MySQL 8.0.x.

  • To migrate from earlier versions, use XtraBackup 2.4 to back up and restore, then run mysql_upgrade from MySQL 8.0.x.

PXB 8.0.1 alpha is available for the following platforms:

  • RHEL / Centos 6.x

  • RHEL / Centos 7.x

  • Ubuntu 14.04 Trusty *

  • Ubuntu 16.04 Xenial

  • Ubuntu 18.04 Bionic

  • Debian 8 Jessie *

  • Debian 9 Stretch

See the release notes for more details.


          Back-end Developer - R&D - Tundra Technical - Saint-Roch, QC      Cache   Translate Page      
Programming in C#, MySQL, XML, Node.js, HTML, JS, REST. Back-End Developer - R&D....
From Indeed - Thu, 30 Aug 2018 14:50:49 GMT - View all Saint-Roch, QC jobs
          Build me a Website      Cache   Translate Page      
I need a website for an educational software company dealing in school and college ERP. (Budget: ₹1500 - ₹12500 INR, Jobs: Graphic Design, HTML, MySQL, PHP, Website Design)
          Need a PHP Developer for crm tool development      Cache   Translate Page      
Need a PHP Developer for crm tool development (Budget: ₹100 - ₹400 INR, Jobs: HTML, MySQL, PHP, Software Architecture, Website Design)
          build me a website      Cache   Translate Page      
I need a website just like https://www.naukri.com, including an admin dashboard with profile, an employer dashboard with profile, and a job-seeker dashboard with profile: a fully featured job portal. (Budget: ₹25000 - ₹40000 INR, Jobs: Codeigniter, Frontend Development, Graphic Design, MySQL, Website Design)
          Online Construction Project Calculator & Proposal Generator      Cache   Translate Page      
We currently have multiple excel sheets that generate us project estimations, material list, engineering calculators, proposal generator. We would like to build an online platform for all these tools to... (Budget: $1500 - $3000 USD, Jobs: HTML, MySQL, PHP, Software Architecture, Website Design)
          Dolibarr Module Development      Cache   Translate Page      
I just need you to build a module for Dolibarr. Here are the project requirements: Dolibarr has built-in functionality for stock management, called warehouse. Warehouse is used for maintaining the company's stock... (Budget: ₹1500 - ₹12500 INR, Jobs: ERP, Javascript, MySQL, PHP, Software Architecture)
          (USA-OH-Beaver Creek) Cyber Test Engineer - 63428234      Cache   Translate Page      
Cyber Test Engineer - 63428234
Job Code: 63428234
Job Location: Beaver Creek, OH
Category: Software Engineering
Last Updated: 09/12/2018
Apply Now!
Job Description: The candidate will lead the testing of cyber operations products to ensure verification of requirements and validation that the system meets the Concept of Operations.
Requirements: Minimum Qualifications and Desirables:
• Act as the primary interface with the customer regarding test and verification activities of cyber operations products.
• Work with multiple teams to develop stress, functional, acceptance, and ad-hoc tests based on the requirements and operational scenarios.
• Automate the testing and characterization of cyber products to the extent possible given budgets, project time frame, and long-term benefit of the product.
• Design and configure system test components necessary to perform end-to-end testing of cyber products. Develop scripts as necessary to integrate components, exercise new capabilities, facilitate testing, etc.
• Characterize cyber operations products under a multitude of configurations.
• Monitor automated test executions and work with teams to analyze the cause of, and apply fixes for, failures.
• Generate and present to the end customer various milestone packages including: Software Requirement Review, Test Readiness Review, and Acceptance Review.
• Specify and improve test and development network infrastructures.
• Programming skills in scripting languages such as Python, Bash, Expect, PowerShell.
• General understanding of networking protocols.
• General understanding of virtual machines, specifically VMware Workstation and ESXi.
• Setup and configuration of networks using VLANs, switches, and Windows and Linux networking.
• Experience with protocol analysis using Wireshark or other packet analysis tools.
• Experience with test automation frameworks, scripting of automated tests, and capturing and analyzing results.
• Experience with MySQL or other databases.
• Python, C/C++/C# development experience.
• Top Secret security clearance.
For more information, please send your resume and we will get back to you. Equal Opportunity Employer Minorities/Women/Veterans/Disabled
          Magento Check out page issue      Cache   Translate Page      
Hi, we have got some malware on our existing website. After scanning, the scanning software flagged 2-3 files and put them in a quarantine folder. Since then we have had an issue with checkout; we need someone to look into... (Budget: $10 - $30 USD, Jobs: HTML, Javascript, Magento, MySQL, PHP)
          (USA-OK-Oklahoma City) System Administrator      Cache   Translate Page      
System Administrator
EOE Statement: We are an equal employment opportunity and E-Verify employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, gender, national origin, disability status, gender identity, protected veteran status or any other characteristic protected by law.
Description
SUMMARY: The System Administrator is responsible for effective provisioning, installation/configuration, operation, and maintenance of systems hardware and software and related infrastructure. This individual participates in technical research and development to enable continuing innovation within the infrastructure. The Systems Administrator will ensure that system hardware, operating systems, software systems, and related procedures adhere to organizational values, enabling all customers, support teams and development teams. This individual is accountable for the following systems: Linux and Windows systems that support IFS infrastructure; Linux, Windows and Application systems that support the Portal. Responsibilities on these systems include system administration engineering and provisioning, operations and support, maintenance, and research and development to ensure continual innovation.
ESSENTIAL FUNCTIONS/RESPONSIBILITIES:
+ Help tune performance and ensure high availability of infrastructure containing both Linux and Windows-based systems.
+ Maintain infrastructure monitoring and reporting tools.
+ Develop and maintain configuration management solutions.
+ Track, record, and confirm all systems and configuration changes.
+ Create tools to help teams make the most out of the available infrastructure.
+ Perform additional duties as assigned.
Position Requirements
+ Bachelor's degree in Computer Science or related field.
+ Minimum of 8 years of experience serving in the capacity of system administrator, in either Windows or Linux environments.
+ Minimum of 8 years of experience in applying OS patches and maintenance packs.
+ Experience installing, configuring, and maintaining services such as Bind, Apache, MySQL, Memcache, rsyslog, OSSEC, ntpd, Icinga, etc.
+ Experience with Windows servers in virtualized environments.
+ Experience with virtualization technologies, such as VirtualBox and VMWare.
+ Experience with virtual host and SSL setups.
+ Experience with the National Airspace System (NAS) is preferred, but not required.
SPECIFIC KNOWLEDGE, SKILLS, & ABILITIES:
+ Experience with Linux servers in virtualized environments.
+ Experience with Kubernetes, Docker, and Ansible preferred.
+ Familiarity with the fundamentals of Linux scripting languages.
+ Proficient with network tools such as iptables, traceroute, Linux IPVS, HAProxy, etc.
+ Strong grasp of configuration management tools.
+ Familiarity with load balancing, firewalls, etc.
+ Ability to build and monitor services on production servers.
+ Knowledge of servers, switches, routing, network diagrams, etc.
+ Knowledge of CDN and DNS management.
+ Outstanding interpersonal and communication skills with the ability to effectively communicate across diverse audiences and influence cross functionally.
+ Ability to multi-task as well as be strategic, creative and innovative in a dynamic, team environment.
+ Ability to work independently as well as a contributing member of the team.
+ Proven ability to complete goals/projects on time delivering high quality results.
Work is required onsite in Oklahoma City.
Full-Time/Part-Time: Full-Time
Position: System Administrator
Number of Openings: 1
Location: Oklahoma City, OK
About the Organization: Changeis, Inc. is an award-winning 8(a) certified, woman-owned small business that provides management consulting and engineering services to both public and private sectors. Changeis' work has resulted in successful execution of numerous programmatic initiatives, development of acquisition-sensitive deliverables, and establishment of a variety of long-term innovative strategic priorities for its customers. Changeis focuses on delivering unparalleled expertise in the areas of strategy and transformation management, investment analysis and acquisition management, governance, and innovation management. Inc. magazine has ranked the management consulting firm, Changeis Inc., among the top 1000 firms on its 35th annual Inc. 5000, the most prestigious ranking of the nation's fastest-growing private companies.
Changeis' work has resulted in successful execution of numerous programmatic initiatives, development of acquisition-sensitive deliverables, and establishment of a variety of long-term innovative strategic priorities for its customers. Changeis focuses on delivering unparalleled expertise in the areas of strategy and transformation management, investment analysis and acquisition management, governance, and innovation management. Inc. magazine has ranked the management consulting firm, Changeis Inc., among the top 1000 firms on its 35th annual Inc. 5000, the most prestigious ranking of the nation's fastest-growing private companies. Changeis offers a full benefit package that includes medical, dental, and vision, short and long term disability, retirement plan with immediate vesting and company match, and a generous annual leave plan. This position is currently accepting applications.
          (USA-NY-New York) Senior Software Engineer - Full Stack
Want to work with driven and knowledgeable engineers using the latest technology in a friendly, open, and collaborative environment? Expand your knowledge at various levels of a modern, big-data driven, micro service stack, with plenty of room for career growth? At Unified, we empower autonomous teams of engineers to discover creative solutions for real-world problems in the marketing and technology sectors. We take great pride in building quality software that brings joy and priceless business insight to our end-users. We're looking for a talented Senior Software Engineer to bolster our ranks! Could that be you? What you'll do: • Gain valuable technology experience at a rapidly growing big data company • Take on leadership responsibilities, leading projects and promoting high quality standards • Work with a top-notch team of engineers and product managers using Agile development methodologies • Design, build and iterate on novel, cutting-edge software in a young, fast-moving industry • Share accumulated industry knowledge and mentor less experienced engineers • Participate regularly in code reviews • Test your creativity at Unified hack-a-thons Who you are: You're a senior engineer who is constantly learning and honing your skills. You love exploring complex systems to reveal possible architectural improvements. You're friendly and always willing to help others accomplish their goals. You have strong communication skills that enable you to describe technical issues in layman's terms. Must have: • 4+ years of professional development experience in relevant technologies • Willingness to mentor other engineers • Willingness to take ownership of projects • Backend development experience with Python, Golang, or Java • Frontend development experience with JavaScript, HTML, and CSS • React and supporting ecosystem tech, e.g. Redux, React Router, GraphQL • Experience supporting a complex enterprise SaaS software platform • Relational databases, e.g. 
Amazon RDS, PostgreSQL, MySQL • Microservice architecture design principles • Strong personal commitment to code quality • Integrating with third-party APIs • REST API endpoint development • Unit testing • A cooperative, understanding, open, and friendly demeanor • Excellent communication skills, with a willingness to ask questions • Demonstrated ability to troubleshoot difficult technical issues • A drive to make cool stuff and seek continual self-improvement • Able to multitask in a dynamic, early-stage environment • Able to work independently with minimal supervision Nice to have: • Working with agile methodologies • Experience in the social media or social marketing space • Git and Github, including Github Pull-Request workflows • Running shell commands (OS X or Linux terminal) • Ticketing systems, e.g. JIRA Above and beyond: • Amazon Web Services • CI/CD systems, e.g. Jenkins • Graph databases, e.g. Neo4J • Columnar data stores, e.g. Amazon Redshift, BigQuery • Social networks APIs, e.g. Facebook, Twitter, LinkedIn • Data pipeline and streaming tech, e.g. Apache Kafka, Apache Spark
          (USA-NY-New York) Software Engineer - Full-Stack
Want to work with driven and knowledgeable engineers using the latest technology in a friendly, open, and collaborative environment? Expand your knowledge at various levels of a modern, big-data driven, micro service stack, with plenty of room for career growth? At Unified, we empower autonomous teams of engineers to discover creative solutions for real-world problems in the marketing and technology sectors. We take great pride in building quality software that brings joy and priceless business insight to our end-users. We're looking for a talented Software Engineer to bolster our ranks! Could that be you? What you'll do: • Gain valuable technology experience at a rapidly growing big data company • Work with a top-notch team of engineers and product managers using Agile development methodologies • Design, build and iterate on novel, cutting-edge software in a young, fast-moving industry • Participate regularly in code reviews • Test your creativity at Unified hack-a-thons Who you are: You're a software engineer who is eager to get more experience with enterprise-level software development, and constantly learning and honing your skills. You love to learn about large systems and make them better by fixing deficiencies and finding inefficient designs. You're friendly and always willing to help others accomplish their goals. You have strong communication skills that enable you to describe technical issues in layman's terms. Must have: • 1+ years of professional development experience in relevant technologies • Backend development experience with Python, Golang, or Java • Relational databases, e.g. 
Amazon RDS, PostgreSQL, MySQL • Strong personal commitment to code quality • Integrating with third-party APIs • REST API endpoint development • Unit testing • Working with agile methodologies • A cooperative, understanding, open, and friendly demeanor • Excellent communication skills, with a willingness to ask questions • Demonstrated ability to troubleshoot difficult technical issues • A drive to make cool stuff and seek continual self-improvement • Able to multitask in a dynamic, early-stage environment • Able to work independently with minimal supervision Nice to have: • Frontend development experience with JavaScript, HTML, and CSS • React and supporting ecosystem tech, e.g. Redux, React Router, GraphQL • Experience supporting a complex enterprise SaaS software platform • Experience in the social media or social marketing space • Git and Github, including Github Pull-Request workflows • Running shell commands (OS X or Linux terminal) • Microservice architecture design principles • Ticketing systems, e.g. JIRA Above and beyond: • Amazon Web Services • CI/CD systems, e.g. Jenkins • Graph databases, e.g. Neo4J • Columnar data stores, e.g. Amazon Redshift, BigQuery • Social networks APIs, e.g. Facebook, Twitter, LinkedIn • Data pipeline and streaming tech, e.g. Apache Kafka, Apache Spark
          vBulletin 3.8.11 & MySQL 8
Can vBulletin 3.8.11 run with MySQL 8? Ta :)
          (USA-NY-Delhi) Web Developer
Web Developer Category: Staff Department: Marketing and Communications Locations: Delhi, NY Posted: Sep 12, '18 Type: Full-time Share About SUNY Delhi: SUNY Delhi has been delivering a student-centered education in the foothills of the scenic Catskill Mountains for more than 100 years. An innovative approach to student success by expanding baccalaureate programs and adding satellite campuses and online platforms has led to record enrollment of more than 3,500 students. A member of the State University of New York system, SUNY Delhi offers its employees generous benefits, a supportive environment, and the chance to work among energetic colleagues and students on a friendly and inclusive campus. SUNY Delhi values its relationship with the broader community; it has been recognized for forming partnerships that enhance regional economic growth, as well as its commitment to civic engagement and service learning. Its home, the Village of Delhi, has been called one of America's "Coolest Small Towns" for its outdoor recreational opportunities and quaint shops, galleries, and local artisans on a thriving and walkable Main Street. That small-town charm is balanced by easy access to major metropolitan areas like Binghamton, the Capital District, and New York City. Job Description: SUNY Delhi seeks a dynamic, forward thinking and engaging Web Developer to join the Office of Marketing and Communications. SUNY Delhi prides itself on being a welcoming college community for all. Diversity, equity and inclusiveness are integral components of the highest quality academic programs and the strongest campus climate. The college seeks a wide range of applicants for its positions so that inclusive excellence will be affirmed. 
Principal duties include: + Identify and develop new web site features, functionality, information architecture, and digital content + Develop websites that adhere to website style guidelines based on campus branding and assist in creation of new templates for website and digital campaigns + CMS management and troubleshooting; reviewing page content and functionality + Conduct website code reviews for standards and ADA compliance + Conduct CMS and Accessibility trainings with web editors as needed + Website search engine optimization; including the use of Google Analytics, Tag Manager + Help to create and publish web content; assist content editors in maintaining web pages + Assist with the development, assessment, and implementation of digital marketing campaigns + Integrate graphic designs, database functionality, and digital media elements as needed + Monitor website for Accessibility Standards and Quality Assurance + Form design and maintenance Knowledge, Skills, & Abilities: + Skilled understanding of: UI, cross-browser compatibility, web standards, responsive design, and designing ADA compliant sites + Expertise and hands on experience with front-end web technologies and programming languages; HTML5, CSS3, PHP, JavaScript, JQuery, BootStrap, XML (XSL, XSLT), API's, RSS, MySQL, SQL + Ability to Integrate data from various back-end services and databases + Work with vendor designs to match visual design intent, and able to create working website from design files + Proficiency in Adobe Creative Suite and Microsoft Office + Familiarity with Search Engine Optimization, website analytics software including Google Analytics and Google Tag Manager + Experience with Content Management Systems (CMS) such as OmniUpdate, WordPress, Drupal, ESPs + Skilled in consulting with users regarding web site development and/or design issues + Ability to work individually and within a team environment + Ability to communicate effectively verbally and in writing + Experience with 
PhotoShop and video editing software + Experience with end-user training in CMS, Web Accessibility, and Content Creation Requirements: + Bachelor degree in computer science, information technology, or other related field required + Professional web site design and development experience + Experience managing and administrating Web content management systems (Word Press or Drupal) + Experience in content management for large websites Preferred Qualifications: + Experience with OmniUpdate CMS, Bootstrap Framework, JQuery, PHP, third party APIs + Familiarity with website analytics software including google analytics + Prior experience in a higher education environment + Knowledge of web accessibility standards + Knowledge of information architecture, user-centered design, and usability testing + Demonstrated organizational skills, customer service focus, attention to detail, and openness to new approaches and new ideas Additional Information: This is a full-time, professional position that reports to the Web Master. Obligation requires working during regular business hours, and occasional evenings and weekends. Salary range $60,000 to $70,000 dependent upon qualifications and experience + This position offers full New York State UUP (FT) benefits, which are among the most comprehensive in the country with an excellent fringe benefits package + Click here for more Information for Prospective Employees SUNY Delhi has a strong commitment to Affirmative Action and Cultural Diversity. The College welcomes responses from women, minorities, individuals with disabilities and veterans. SUNY Delhi is committed to providing a safe and productive learning and living community for our students and employees. To achieve that goal, we conduct background investigations for all final candidates being considered for employment. Any offer of employment is contingent on the successful completion of the background check. 
Pursuant to Executive Order 161, no State entity, as defined by the Executive Order, is permitted to ask, or mandate, in any form, that an applicant for employment provide his or her current compensation, or any prior compensation history, until such time as the applicant is extended a conditional offer of employment with compensation. If such information has been requested from you before such time, please contact the Governor's Office of Employee Relations at (518) 474-6988 or via email at info@goer.ny.gov. If you need a disability-related accommodation, please contact mortonmb@delhi.edu. Clery Statement: Applicants interested in positions may access the Annual Security Report (ASR) for SUNY Delhi online. The ASR contains information on campus security policies and certain campus crime statistics. Crime statistics are reported in accordance with the Jeanne Clery Disclosure of Campus Security Policy and Campus Crime Statistics Act. Applicants may request a hard copy of the ASR by contacting the SUNY Delhi University Police Department at 607-746-4700. Application Instructions: To apply, please submit: + Letter of interest + Resume or Curriculum Vitae + Contact information for three professional references For full consideration please apply by October 12, 2018 SUNY Delhi is an AA/EO Employer
          Switching session storage so sessions can be shared across servers
# Scenario

When a site runs on multiple web servers, each server stores its own sessions, so sessions cannot be shared between servers. We therefore need to move session storage into a shared space. For read/write speed, sessions are usually kept in an in-memory service such as Redis or MySQL's MEMORY storage engine; this article uses Redis throughout.

## Session sharing

### php.ini configuration changes

```
session....
```
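A minimal php.ini sketch for the Redis-backed session handler (assuming the phpredis extension is installed; the host and port here are illustrative):

```
session.save_handler = redis
session.save_path = "tcp://127.0.0.1:6379"
```

With this in place, every web server reads and writes sessions against the same Redis instance, so a request can land on any server and still find its session.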
          Full-stack Developer (PHP/HTML/JS)
Ha Dong, Hanoi - Dong Da, Hanoi - Develop websites in PHP; build and deploy product projects for the company and its clients. Responsible for schedules and results... programming experience with PHP and MySQL; good knowledge of code security, DB security, and system optimization; strong reasoning and a clear understanding of data structures...
          PHP Developer (salary: 8 - 16 million VND)
Hanoi - Job requirements - Age: 21 to 35 - College/university degree or equivalent certification or higher. - Computer skills: proficient in ..., JavaScript, jQuery, Ajax, XML, ... - At least 1 year of PHP-MySQL programming experience; candidates with experience in one of the ... frameworks are preferred
          Android Developer - Vikas Global Solutions Ltd - Madhavanpark, Karnataka
Android application development, Android SDKs, APIs, HTML, CSS, MySQL. Vikas Global Solutions Ltd.... ₹1,00,000 - ₹2,00,000 a year
From Indeed - Mon, 10 Sep 2018 05:26:38 GMT - View all Madhavanpark, Karnataka jobs
          Comment on MySQL running on Server 2008 by Tutuapp for android
Awesome post! Also check out my blog.
          Provisioning SQL Database (20765C)
Provisioning SQL Database (20765C): this course is designed to teach students how to provision SQL Server databases both on-premises and in SQL Azure. We are looking for a freelance trainer and Willness... (Budget: ₹37500 - ₹75000 INR, Jobs: Azure, Database Administration, Microsoft SQL Server, MySQL, SQL)
          Create PHP single script
Hi, we need a script written in PHP to search for data in a database; if the data is not available, the script will request it from a server API, save it to the database, and send the response back to the partner. The flow will be like this... (Budget: $2 - $8 USD, Jobs: MySQL, PHP, Software Architecture)
          Build me a real-time application
I want to build a real-time application for warehouse stock that can be displayed on screens, like at an airport (Budget: $250 - $750 USD, Jobs: Java, MySQL, PHP, Programming, Software Architecture)
          build a script in PHP
Check the uploaded file - go through it, post questions if any, and let me know your budget / time frame; also type downloadedthefile-letschat so I know you read the project (Budget: $30 - $250 USD, Jobs: HTML, MySQL, PHP, Software Architecture, Website Design)
          Senior Database Engineer (MySQL-MongoDB)
CA-Los Angeles, Senior Database Engineer contract in Los Angeles, CA (Burbank area). Must be able to legally work in the US on our W2. Synopsis: Our client is moving away from a managed services group on the DB side We are looking for an individual contributor who is more of an Infrastructure DBA than development, with MySQL and MongoDB focus (not just Oracle), and experience with enterprise level DB management T
          Web Developer
We need self-motivated and talented software interns to design and maintain in-house web dashboards for managing our client, vendor, and distributor operations. Prior experience with PHP, MySQL, and basic web development skills... (Budget: ₹1500 - ₹12500 INR, Jobs: MySQL, PHP, Web Development, Yii)
          Backend Developer - R&D - Tundra Technical - Saint-Roch, QC
Programming in C#, MySQL, XML, Node.js, HTML, JS, REST. Back-End Developer - R&D....
From Indeed - Thu, 30 Aug 2018 14:50:49 GMT - View all Saint-Roch, QC jobs
          Excellent Wordpress and PHP developer wanted. - Upwork
I'm looking for a developer to help add a new feature to our Wordpress website.

We run a membership website and have lots of users..

The feature I'm looking to add would ideally be packaged into its own separate plugin... so as not to affect any future Wordpress upgrades.

So I'm looking for someone who's highly proficient in the Wordpress code structure with strong object-oriented programming and adheres to SOLID principles.

The functions of the feature are similar to a "task management" app...

After a user logs in, they can navigate to the main page of this feature.. and be able to submit a simple form of "what they're working on".. the form will request they fill in a few details regarding what they're working on.

The app should automatically associate each entry with the logged in user. Their name and email.

The back end of this feature should allow admins to overview ALL entries from ALL users, and have certain entries be highlighted or presented at the top (depending on the data entered on the form)..

For instance.. if a user indicates "YES" to "Do you need an admin to assist you" question... then their entry should be highlighted, so an admin can get in touch with them and help out.

We might also consider having the feature send out email notifications to the admins whenever a user indicates they need help.

I'm only interested in applicants with Excellent English for this project as to minimize communication breakdowns or misunderstandings. Ideally available U.S. morning hours.

I can provide more in-depth details about the feature during our chat.

It's also very likely we'll need some ongoing support as we launch the feature..

Bonus points if you also know Laravel! (We have a separate project that could use Laravel Expertise)

Please start your cover letter with "Rockstar Developer" to help me weed out spam.. and keep it short and sweet.

Super pumped to meet talented people!! :-)

Thank you

Posted On: September 13, 2018 07:56 UTC
ID: 214191468
Category: Web, Mobile & Software Dev > Web Development
Skills: CSS, HTML5, JavaScript, jQuery, MySQL Administration, PHP, Web Design, Website Development, WordPress
Country: United States
click to apply
          PremiumSoft Navicat Premium 12.1.7 / Enterprise 11.2.16 [Latest]

PremiumSoft Navicat Premium Enterprise is a multi-connection database administration tool that lets you connect to MySQL, SQL Server, SQLite, Oracle, and PostgreSQL databases simultaneously within a single application, making administration of multiple kinds of databases easy. Navicat Premium combines the functions of the other Navicat editions and supports most of the features in …

The post PremiumSoft Navicat Premium 12.1.7 / Enterprise 11.2.16 [Latest] appeared first on S0ft4PC.


          The dropdown menu does not show data from a table

The dropdown menu does not show data from a table

Reply to: The dropdown menu does not show data from a table

I already did what was suggested to me, as follows:

```
<p>Seleccione un cliente:</p>
<p><form method="POST" action="consultarclientes1.php">
Cliente: <Select Name="nombre_o_razon_social">
<option value="0">Selección:</option>
<?php
$query = $mysqli->query("SELECT * FROM cliente");
while ($registro = mysqli_fe...
```

Published on September 12, 2018 by Marcela

          ACES ANALYST - Amazon.com - Saskatchewan
Advanced knowledge of relational databases (Access, MySQL, Oracle, MS SQL Server), Excel, HTML, CSS, JavaScript, Java, Visual Basic, Windows OS....
From Amazon.com - Mon, 13 Aug 2018 08:00:07 GMT - View all Saskatchewan jobs
          Amazon Aurora Serverless MySQL Is Now Generally Available

A new capability of Amazon Aurora, AWS's custom-built MySQL- and PostgreSQL-compatible database, is now generally available: Aurora Serverless MySQL. Amazon first previewed the serverless feature at last year's AWS re:Invent.

By Steef-Jan Wiggers; translated by 阪田 浩一
          Principal Data Labs Solution Architect - Amazon.com - Seattle, WA
Implementation and tuning experience in the Big Data Ecosystem, (such as EMR, Hadoop, Spark, R, Presto, Hive), Database (such as Oracle, MySQL, PostgreSQL, MS...
From Amazon.com - Fri, 07 Sep 2018 19:22:14 GMT - View all Seattle, WA jobs
          Laravel developer
Hi there! We have 2 tasks to do in Laravel; we estimate 2-3 hours to complete them. It is a mid-level task. Please do not bid more than the budget we have. Thanks (Budget: €8 - €30 EUR, Jobs: HTML, Laravel, MySQL, PHP, Software Architecture)
          Laravel Expert
I will explain my requirements to candidates in detail. You should know the Laravel framework well. You will need to take a test before I hire you. (Budget: $250 - $750 CAD, Jobs: HTML, Laravel, MySQL, PHP, Website Design)
          Export all tables from the database into a single csv file in python

I've been trying to export all the tables in my database into a single csv file.

I've tried:

```
import mysqldb as dbapi
import sys
import csv
import time

dbname = 'site-local'
user = 'root'
host = '127.0.0.1'
password = ''
date = time.strftime("%d-%m-%Y")
file_name = date + '-portal'
query = 'SELECT * FROM site-local;'  # <---- I'm stuck here

db = dbapi.connect(host=host, user=user, passwd=password)
cur = db.cursor()
cur.execute(query)
result = cur.fetchall()
c = csv.writer(open(file_name + '.csv', 'wb'))
c.writerow(result)
```

I'm a little stuck now; I hope someone can shed some light based on what I have.

Consider iteratively exporting the SHOW CREATE TABLE output (txt files) and the SELECT * FROM output (csv files) for all database tables. From your related earlier questions, since you need to migrate databases, you can then run the create-table statements (adjusting the MySQL syntax for Postgres, such as the ENGINE=InnoDB lines) and then import the data from csv using PostgreSQL's COPY command. The csv files below include table column headers, which are not included in fetchall().

```
db = dbapi.connect(host=host, user=user, passwd=password)
cur = db.cursor()

# RETRIEVE TABLES
cur.execute("SHOW TABLES")
tables = []
for row in cur.fetchall():
    tables.append(row[0])

for t in tables:
    # CREATE TABLE STATEMENT (one record fetch)
    cur.execute("SHOW CREATE TABLE `{}`".format(t))
    temptxt = '{}_table.txt'.format(t)
    with open(temptxt, 'w', newline='') as txtfile:
        txtfile.write(cur.fetchone()[1])

    # SELECT STATEMENT, with column headers
    cur.execute("SELECT * FROM `{}`".format(t))
    tempcsv = '{}_data.csv'.format(t)
    with open(tempcsv, 'w', newline='') as csvfile:
        writer = csv.writer(csvfile)
        writer.writerow([i[0] for i in cur.description])  # COLUMN HEADERS
        for row in cur.fetchall():
            writer.writerow(row)

cur.close()
db.close()
```
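The same enumerate-and-export loop can be exercised end to end with Python's built-in sqlite3 module (the table names and rows below are made up for illustration; with MySQL you would keep the connection above and use SHOW TABLES instead of querying sqlite_master):

```python
import csv
import io
import sqlite3

# Build a throwaway in-memory database with two tables.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (id INTEGER, name TEXT)")
db.execute("CREATE TABLE orders (id INTEGER, total REAL)")
db.execute("INSERT INTO users VALUES (1, 'ana'), (2, 'bob')")
db.execute("INSERT INTO orders VALUES (10, 9.99)")
db.commit()

# Enumerate tables (sqlite's equivalent of MySQL's SHOW TABLES).
cur = db.cursor()
cur.execute("SELECT name FROM sqlite_master WHERE type='table' ORDER BY name")
tables = [row[0] for row in cur.fetchall()]

# Export each table: header row taken from cur.description, then the data rows.
exports = {}
for t in tables:
    cur.execute('SELECT * FROM "{}"'.format(t))
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow([col[0] for col in cur.description])
    writer.writerows(cur.fetchall())
    exports[t] = buf.getvalue()

print(tables)
print(exports["users"].splitlines())
```

Each entry of `exports` holds one table's csv text; writing it to per-table files (as in the answer above) is a one-line change.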
          Technical Architect (Salesforce experience required) Casper - Silverline Jobs - Casper, WY
Competency with Microsoft SQL Server, MYSQL, postgreSQL or Oracle. BA/BS in Computer Science, Mathematics, Engineering, or similar technical degree or...
From Silverline Jobs - Sat, 23 Jun 2018 06:15:28 GMT - View all Casper, WY jobs
          prestashop 1.7.2.4 manual upgrade to 1.7.4.2
Hi, there are no products on it. It is a fresh installation of 1.7.2.4. The upgrade has to be done manually, because after the 1-click upgrade I faced several database problems and function errors. (Budget: $10 - $30 USD, Jobs: eCommerce, HTML, MySQL, PHP, Prestashop)
          SysInfoTools MySQL Database Recovery 18
The MySQL Database Recovery tool can repair MySQL database files.
          Want to lead front-end development in the cryptocurrency × tax space? - by Aerial Partners, Inc.
◆ What you'll do: We are developing G-tax, a product in the cryptocurrency × tax space that automates the complicated income calculations that arise from trading. To bring blockchain into real-world use at a deeper layer, we will go on to build and operate new products and services, along with blockchain-related development. To raise the product's value further, we are looking for a member who can lead our front-end work! Let's solve concrete social problems in the blockchain space, which will grow into major social infrastructure, and build a product that many people truly need! [What we'd like you to take on] Front-end development of the crypto tax calculation app G-tax; front-end development of our owned media and service landing pages: https://www.aerial-p.com/media https://www.aerial-p.com/guardian [Required skills] Able to code websites with HTML and CSS; experience with SCSS or similar; knowledge of CSS architecture such as BEM (full-time of course, but contract work from 3 days a week is also negotiable!). [Nice to have] Experience with JS libraries such as Vue.js. [Come talk to us if you...] want to ship a service with real social impact; are curious about and knowledgeable in cutting-edge technology; get excited when you hear fintech, cryptocurrency, or blockchain; want to make more of your engineering skills. [Reference] Our development environment: infrastructure: AWS (RDS, S3, CloudFront, WAF, etc.); backend: PHP 7.1, Laravel 5.5, MySQL, Redis, version control: Git; frontend: JavaScript (ES6), Vue.js, Nuxt.js, SCSS, version control: Git; tools: Jenkins
          We value growth potential over ready-made skills! Actively hiring university students who want to become programmers! - by CoinOtaku, Inc.
At CoinOtaku we collect and analyze quantitative data on cryptocurrencies and provide a service that rates each currency and forecasts prices. Market prediction with machine learning and evaluating a currency's prospects is a field that securities firms and banks such as Goldman Sachs, and tech giants such as Google, are pursuing aggressively; we do that development in the cryptocurrency space. So far, we analyze several hundred cryptocurrencies on axes such as development activity, performance, market reputation, community activity, the state of mining, and future real demand, publish the resulting scores as paid content, and, by collecting price data from more than 10 exchanges and applying machine learning, we build trading algorithms that return several tens of percent per month in backtests. In a market where fairly evaluating cryptocurrencies is extremely difficult, if you want to use big data to make sense of the crypto market and deliver that to many users, please apply! Technologies and tools: core: machine learning, web; languages: Python 3, JavaScript; libraries: Flask, Vue, Nuxt, Express; infrastructure: EC2, S3, RDS, Redis, MySQL. Required: development experience in Python or JavaScript. Preferred: knowledge of infrastructure such as AWS; knowledge of machine learning and statistics. Who we're looking for: people interested in cryptocurrencies; people who take bold challenges and learn from many failures; people who think and act on their own initiative for the team's success; people who work with ownership and spare no effort to do their best. Let's decode the cryptocurrency market together and support its growth by giving the market convincing answers! To anyone even slightly interested in working at CoinOtaku: we are serious about becoming the world's best cryptocurrency information service, and the team consists of students who share that goal and are serious about achieving it. The results are coming in steadily. Unlike other student internships, the discretion, sense of accomplishment, and fulfillment here are on another level. If any part of you wants to realize this goal and grow a broad skill set along the way, press the "I want to hear more" button. Thanks for reading this far - the whole team looks forward to your application.
          Software Engineer - Microsoft - Bellevue, WA      Cache   Translate Page      
Presentation, Application, Business, and Data layers) preferred. Database development experience with technologies like MS SQL, MySQL, or Oracle....
From Microsoft - Sat, 30 Jun 2018 02:19:22 GMT - View all Bellevue, WA jobs
          Seeking a server-side engineer to create new options for cryptocurrency investing! by Gaia Inc.      Cache   Translate Page      
We are developing "moneco" (https://moneco.jp/), a new platform for cryptocurrency investing whose trial version was released on September 10, 2018, and we are recruiting a server-side engineer to join as a core member for future service growth and new-feature development. You will develop the follow-trade system in PHP with Phalcon, and also manage server instances on GCP. Development proceeds from a specification through spec review, design review, coding, code review, testing, integration testing, and debugging, with a focus on maintainable, stable code. We are looking for someone who can produce readable, maintainable code and designs, lead the team technically, and, with ample development and operations experience, write code that takes security, performance, and scalability into account. Work: development of moneco (https://moneco.jp/). Development environment ・Language: PHP ・Framework: Phalcon ・DB: MySQL ・Server: nginx ・Infrastructure: Google Cloud Platform, AWS ・Project management: git, Wrike, Slack. Requirements ・2+ years of development experience (personal projects count) ・Experience developing large-scale programs in PHP or similar ・Experience designing and operating databases with MySQL or PostgreSQL ・Experience operating large-scale services. Nice to have ・Knowledge of security ・Knowledge of infrastructure (GCP, AWS) ・Knowledge of finance ・Strong with numbers ・Management experience. Who we are looking for ・Sympathy with our vision ・Curiosity and eagerness to try new technologies ・Confidence in your own specialty ・A focus on raising value for users ・Acting proactively to maximize the team's value ・High development skill combining speed and quality
          Senior Data Engineer - Amazon Redshift, MySQL, Postgres      Cache   Translate Page      
CA-Los Angeles, If you are a Senior Data Engineer with Amazon Redshift (preferred), MySQL, Postgres experience, please read on! Based in scenic Playa Vista, we are a technology-driven company dedicated to promoting and enabling an eco-friendly lifestyle for you and your family. Our technology team is ranked as one of the strongest Ruby on Rails development shops; our foundation is comprised of startup gurus and s
          Remote Java HTML and CSS Front End Developer      Cache   Translate Page      
A media company is searching for a person to fill their position for a Remote Java HTML and CSS Front End Developer. Must be able to: Utilize knowledge of XHTML, CSS, DHTML and JavaScript Utilize experience in front-end web development Utilize experience with PHP, jQuery, MySql Qualifications for this position include: Minimum of 10+ years experience in front-end web development Experience with SEO best practices required Strong knowledge of XHTML, CSS, DHTML and JavaScript required Proven experience with PHP, jQuery, MySql Proven experience with common PHP driven platforms/systems such as WordPress, SocialEngine
          Telecommute AWS Application Expert Developer      Cache   Translate Page      
An internet company is in need of a Telecommute AWS Application Expert Developer. Individual must be able to fulfill the following responsibilities: Build highly available and scalable cloud applications on AWS Develop and architect systems at scale Make business decisions and recommendations on the best technology Required Skills: Demonstrated ability to build high performance multi-platform applications and robust APIs Experience working in teams with a DevOps culture Knowledge of AWS, GCP or similar cloud platforms Knowledge of gRPC, Docker, Kubernetes, Linkerd, NodeJs, and/or Cassandra, etc Knowledge of REST APIs in Ruby, Java, Scala, Python, PHP, and/or C#, etc Knowledge of PostgreSQL, MySQL, SQL Server, or another RDBMS
          Need C++, QT expert to upgrade window desktop application      Cache   Translate Page      
Upgrade to CEF 3.3359.1768.g8e7c5d6 / Chromium 66.0.3359.117 from CEF 3.154.1597.0 for a Windows desktop application. The source code is written in C++ and compiled with Qt. We will discuss the details in chat. (Budget: $200 - $250 USD, Jobs: C# Programming, C++ Programming, MySQL, PHP, Qt)
          Build a Car Spare Parts Online Store with TecDoc integration.       Cache   Translate Page      
Hello! First things first: the candidate should show a WORKING similar project made by himself! There are plenty of Russian, Ukrainian, Belarusian, etc. TecDoc modules that DO NOT WORK. Do not propose them to me... (Budget: €1500 - €3000 EUR, Jobs: eCommerce, HTML, MySQL, Shopping Carts, Website Design)
          Navicat Premium 12.1.8 – Combines all Navicat versions in an ultimate version.      Cache   Translate Page      
Navicat Premium is an all-in-one database administration and migration tool for MySQL, SQL Server, Oracle and PostgreSQL. Navicat Premium combines all Navicat versions in an ultimate version and can connect to MySQL, Oracle and PostgreSQL. Navicat Premium allows users to drag and drop tables and data from Oracle to MySQL, PostgreSQL to MySQL, Oracle to PostgreSQL and […]
          Crazy Like a Fox(Pro)      Cache   Translate Page      

“Database portability” is one of the key things that modern data access frameworks try and ensure for your application. If you’re using an RDBMS, the same data access layer can hopefully work across any RDBMS. Of course, since every RDBMS has its own slightly different idiom of SQL, and since you might depend on stored procedures, triggers, or views, you’re often tied to a specific database vendor, and sometimes a version.


And really, for your enterprise applications, how often do you really change out your underlying database layer?

Well, for Eion Robb, it’s a pretty common occurrence. Their software, even their SaaS offering of it, allows their customers a great deal of flexibility in choosing a database. As a result, their PHP-based data access layer tries to abstract out the ugly details, they restrict themselves to a subset of SQL, and have a lot of late nights fighting through the surprising bugs.

The databases they support are the big ones- Oracle, SQL Server, MySQL, and FoxPro. Oh, there are others that Eion’s team supports, but it’s FoxPro that’s the big one. Visual FoxPro’s last version was released in 2004, and the last service pack it received was in 2007. Not many vendors support FoxPro, and that’s one of Eion’s company’s selling points to their customers.

The system worked, mostly. Until one day, when it absolutely didn’t. Their hosted SaaS offering crashed hard. So hard that the webserver spinlocked and nothing got logged. Eion had another late night, trying to trace through and figure out: which customer was causing the crash, and what were they doing?

Many hours of debugging and crying later, Eion tracked down the problem to some code which tracked sales or exchanges of product- transactions which might not have a price when they occur.

$query .= odbc_iif("SUM(price) = 0", 0, "SUM(priceact)/SUM(" . odbc_iif("price != 0", 1, 0) . ")") . " AS price_avg ";

odbc_iif was one of their abstractions- an iif function, aka a ternary. In this case, if the SUM(price) isn’t zero, then divide the SUM(priceact) by the number of non-zero prices in the price column. This ensures that there is at least one non-zero price entry. Then they can average out the actual price across all those non-zero price entries, ignoring all the “free” exchanges.
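Stripped of the SQL, the intent of that expression is: if every price is zero, report 0; otherwise divide the summed actual prices by the count of rows with a non-zero price. A rough Python sketch of the same logic (hypothetical data, assuming non-negative prices; the real thing runs as SQL through the odbc_iif abstraction):

```python
# Sketch of the odbc_iif expression's intent (illustrative, not the real query).
def price_avg(rows):
    """rows: list of (price, priceact) pairs."""
    if sum(price for price, _ in rows) == 0:  # the SUM(price) = 0 guard
        return 0
    # SUM(IIF(price != 0, 1, 0)): count of rows with a non-zero price
    nonzero = sum(1 for price, _ in rows if price != 0)
    return sum(priceact for _, priceact in rows) / nonzero

print(price_avg([(0, 0), (10, 9.5), (10, 10.5)]))  # 10.0
```

The zero-sum guard is what keeps the free exchanges from turning the average into a division by zero.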

This line wasn’t failing all the time, which added to Eion’s frustration. It failed when two very specific things were true. The first factor was the database- it only failed in FoxPro. The second factor was the data- it only failed when the first product in the resultset had no entries with a price greater than zero.

Why? Well, we have to think about where FoxPro comes from. FoxPro’s design goal was to be a data-driven programming environment for non-programmers. Like a lot of those environments, it tries its best not to yell at you about types. In fact, if you’re feeding data into a table, you don’t even have to specify the type of the column- it will pick the “correct” type by looking at the first row.

So, look at the iif again. If the SUM(price) = 0 we output 0 in our resultset. Guess what FoxPro decides the datatype must be? A single digit number. If the second row has an average price of, say, 9.99, that’s not a single digit number, and FoxPro explodes and takes down everything else with it.

Eion needed to fix this in a way that didn’t break their “database agnostic” code, and thus would continue to work in FoxPro and all the other databases, with at least predictable errors (that don’t crash the whole system). In the moment, suffering through the emergency, Eion changed the code to this:

$query .= "SUM(priceact)/SUM(" . odbc_iif("price != 0", 1, 0) . ") AS price_avg ";

Without the zero check, any products which had no sales would trigger a divide-by-zero error. This was a catchable, trappable error, even in FoxPro. Eion made the change in production, got the system back up and their customers happy, and then actually put the change in source control with a very apologetic commit message.


          Gamerquizz - 13/09/2018 06:48 EDT      Cache   Translate Page      
It will be a quiz platform where gamers can predict the answer, and they will get the reward as credit. Answers will be locked 24 hours before the result is announced. Gamers will predict the... (Budget: ₹1500 - ₹12500 INR, Jobs: Bootstrap, Django, MySQL, Python, Website Design)
          Understanding Data Structures and Algorithms with Pictures: Bubble Sort      Cache   Translate Page      

Preface

This launches a new series, "Understanding Data Structures and Algorithms with Pictures," which mainly uses pictures to describe common data structures and algorithms so they can be read and grasped easily. The series will cover the various heaps, queues, lists, trees, graphs, sorting algorithms, and so on, across a few dozen articles.

Bubble sort

Bubble sort is a very simple sorting algorithm. The main idea is to walk through the sequence to be sorted repeatedly, comparing only two adjacent elements at a time; if the pair is in the wrong order, swap them, and repeat until no adjacent pair needs swapping. Over these repeated passes, the larger elements are swapped step by step toward the top of the sequence; because they rise like bubbles, the algorithm is called bubble sort.

Key points

  1. Compare two adjacent elements; if the earlier one is larger than the later one, swap their positions.

  2. Starting from the beginning, apply comparison 1 to every adjacent pair up to the final pair at the end; after one full pass, that pass's largest element has been moved to the end.

  3. Run passes of steps 1 and 2 over the elements; each pass of step 2 moves that pass's maximum to its end, and that final element no longer takes part in the next pass.

  4. Repeat step 3 over fewer and fewer elements until no pair remains to compare.

Sorting process

Suppose we have the following five elements, 72, 58, 22, 34, 14, and we now bubble-sort them. (The step-by-step illustrations from the original article are omitted here.)

First pass, comparing each pair of adjacent elements:

72 is larger than 58, so they are swapped; then 72 is compared with the next element.

72 is larger than 22, so they are swapped; then 72 is compared with the next element.

72 is larger than 34, so they are swapped; then 72 is compared with the next element.

72 is larger than 14, so they are swapped. 72 has reached the top of the sequence; it is this pass's largest element. The next pass excludes 72 and only compares 58, 22, 34, 14:

58 is larger than 22, so they are swapped; then 58 is compared with the next element.

58 is larger than 34, so they are swapped; then 58 is compared with the next element.

58 is larger than 14, so they are swapped. 58 has reached the top of this pass's sequence; it is this pass's largest element. The next pass excludes 58 and only compares 22, 34, 14:

22 is smaller than 34, so they are not swapped; 34 is compared with the next element.

34 is larger than 14, so they are swapped. 34 has reached the top of this pass's sequence; it is this pass's largest element. The next pass excludes 34 and only compares 22, 14:

22 is larger than 14, so they are swapped. 22 has reached the top of this pass's sequence; it is this pass's largest element. Only one element remains after 22, so comparison stops, and the whole sort is complete.
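The pass-by-pass procedure described above reduces to a short nested loop; a Python sketch (including the common early-exit optimization when a pass makes no swaps):

```python
# Bubble sort: each pass bubbles that pass's largest element to the end,
# so the range still being compared shrinks by one each time.
def bubble_sort(seq):
    a = list(seq)
    for end in range(len(a) - 1, 0, -1):  # each pass places a[end]
        swapped = False
        for i in range(end):              # compare adjacent pairs
            if a[i] > a[i + 1]:
                a[i], a[i + 1] = a[i + 1], a[i]
                swapped = True
        if not swapped:                   # no swaps: already sorted
            break
    return a

print(bubble_sort([72, 58, 22, 34, 14]))  # [14, 22, 34, 58, 72]
```

Running it on the article's example sequence reproduces the walkthrough: 72, then 58, then 34, then 22 bubble to the end in successive passes.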



          BT (BaoTa) Linux Panel Command Reference      Cache   Translate Page      

Installing the BT Panel

CentOS install script
yum install -y wget && wget -O install.sh http://download.bt.cn/install/install.sh && sh install.sh
Ubuntu/Deepin install script
wget -O install.sh http://download.bt.cn/install/install-ubuntu.sh && sudo bash install.sh
Debian install script
wget -O install.sh http://download.bt.cn/install/install-ubuntu.sh && bash install.sh
Fedora install script
wget -O install.sh http://download.bt.cn/install/install.sh && bash install.sh

Managing the BT Panel

Stop
/etc/init.d/bt stop
Start
/etc/init.d/bt start
Restart
/etc/init.d/bt restart
Uninstall
/etc/init.d/bt stop && chkconfig --del bt && rm -f /etc/init.d/bt && rm -rf /www/server/panel
View the current panel port
cat /www/server/panel/data/port.pl
Change the panel port, e.g. to 8881 (CentOS 6)
echo '8881' > /www/server/panel/data/port.pl && /etc/init.d/bt restart
iptables -I INPUT -p tcp -m state --state NEW -m tcp --dport 8881 -j ACCEPT
service iptables save
service iptables restart
Change the panel port, e.g. to 8881 (CentOS 7)
echo '8881' > /www/server/panel/data/port.pl && /etc/init.d/bt restart
firewall-cmd --permanent --zone=public --add-port=8881/tcp
firewall-cmd --reload
Force-reset the MySQL root password, e.g. to 123456
cd /www/server/panel && python tools.pyc root 123456
Change the panel password, e.g. to 123456
cd /www/server/panel && python tools.pyc panel 123456
View the panel log
cat /tmp/panelBoot.pl
View the software installation log
cat /tmp/panelExec.log
Site configuration file location
/www/server/panel/vhost
Remove the panel's domain binding
rm -f /www/server/panel/data/domain.conf
Clear login restrictions
rm -f /www/server/panel/data/*.login
View the panel's authorized IPs
cat /www/server/panel/data/limitip.conf
Disable access restrictions
rm -f /www/server/panel/data/limitip.conf
View permitted domains
cat /www/server/panel/data/domain.conf
Disable panel SSL
rm -f /www/server/panel/data/ssl.pl && /etc/init.d/bt restart
View the panel error log
cat /tmp/panelBoot
View the database error log
cat /www/server/data/*.err
Site configuration file directory (nginx)
/www/server/panel/vhost/nginx
Site configuration file directory (apache)
/www/server/panel/vhost/apache
Default site directory
/www/wwwroot
Database backup directory
/www/backup/database
Site backup directory
/www/backup/site
Site logs
/www/wwwlogs

Nginx service management

nginx install directory
/www/server/nginx
Start
/etc/init.d/nginx start
Stop
/etc/init.d/nginx stop
Restart
/etc/init.d/nginx restart
Reload
/etc/init.d/nginx reload
nginx configuration file
/www/server/nginx/conf/nginx.conf

Apache service management

apache install directory
/www/server/httpd
Start
/etc/init.d/httpd start
Stop
/etc/init.d/httpd stop
Restart
/etc/init.d/httpd restart
Reload
/etc/init.d/httpd reload
apache configuration file
/www/server/apache/conf/httpd.conf

MySQL service management

mysql install directory
/www/server/mysql
phpmyadmin install directory
/www/server/phpmyadmin
Data storage directory
/www/server/data
Start
/etc/init.d/mysqld start
Stop
/etc/init.d/mysqld stop
Restart
/etc/init.d/mysqld restart
Reload
/etc/init.d/mysqld reload
mysql configuration file
/etc/my.cnf

FTP service management

ftp install directory
/www/server/pure-ftpd
Start
/etc/init.d/pure-ftpd start
Stop
/etc/init.d/pure-ftpd stop
Restart
/etc/init.d/pure-ftpd restart
ftp configuration file
/www/server/pure-ftpd/etc/pure-ftpd.conf

PHP service management

php install directory
/www/server/php
Start (adjust to the installed PHP version, e.g.: /etc/init.d/php-fpm-54 start)
/etc/init.d/php-fpm-{52|53|54|55|56|70|71} start
Stop (adjust to the installed PHP version, e.g.: /etc/init.d/php-fpm-54 stop)
/etc/init.d/php-fpm-{52|53|54|55|56|70|71} stop
Restart (adjust to the installed PHP version, e.g.: /etc/init.d/php-fpm-54 restart)
/etc/init.d/php-fpm-{52|53|54|55|56|70|71} restart
Reload (adjust to the installed PHP version, e.g.: /etc/init.d/php-fpm-54 reload)
/etc/init.d/php-fpm-{52|53|54|55|56|70|71} reload
Configuration file (adjust to the installed PHP version, e.g.: /www/server/php/52/etc/php.ini)
/www/server/php/{52|53|54|55|56|70|71}/etc/php.ini

Redis service management

redis install directory
/www/server/redis
Start
/etc/init.d/redis start
Stop
/etc/init.d/redis stop
redis configuration file
/www/server/redis/redis.conf

Memcached service management

memcached install directory
/usr/local/memcached
Start
/etc/init.d/memcached start
Stop
/etc/init.d/memcached stop
Restart
/etc/init.d/memcached restart
Reload
/etc/init.d/memcached reload

          I've finished programming an audio script and would like your opinion      Cache   Translate Page      

Peace be upon you, brothers,

I have finished an audio-clips script.

Here is a link to the script; I programmed it myself using php & mysql:

http://cutt.us/rAvGX

As for the design and the style, I had purchased those beforehand.

I would like your opinions on the programming only.


          Technical Architect (Salesforce experience required) Casper - Silverline Jobs - Casper, WY      Cache   Translate Page      
Competency with Microsoft SQL Server, MySQL, PostgreSQL, or Oracle. BA/BS in Computer Science, Mathematics, Engineering, or similar technical degree or...
From Silverline Jobs - Sat, 23 Jun 2018 06:15:28 GMT - View all Casper, WY jobs
          The Definitive Guide to MySQL, 2nd Edition      Cache   Translate Page      
Author: Michael Kofler
Title: The Definitive Guide to MySQL
Publisher: Apress
Year: 2003
Series: Books for Professionals By Professionals
ISBN: 1590591445
Language: English
Format: chm, epub
Size: 16.2 MB
Pages: 824

This second edition of Michael Kofler's acclaimed MySQL book has been updated and expanded to cover MySQL 4.0, the most recent production release of the popular open source database, which boasts more than 4 million users worldwide.
          ONLINE PORTAL      Cache   Translate Page      
SCHOOL PORTAL WITH DIFFERENT MODULES LIKE ADMISSION, FEE COLLECTION, TC, BUS SERVICE, ETC. (Budget: ₹1500 - ₹12500 INR, Jobs: Graphic Design, HTML, MySQL, PHP, Website Design)
          Sr. Software Engineer 3 - The Kenjya-Trusant Group, LLC - Annapolis Junction, MD      Cache   Translate Page      
Java/JEE, JavaScript, Java Expression Language (JEXL), J1BX, Flex, EXT - JS, JSP, .NET, AJAX, SEAM, C, C++, PHP, Ruby / Ruby-on-Rails, SQL, MS SQL Server, MySQL...
From The Kenjya-Trusant Group, LLC - Fri, 10 Aug 2018 06:03:55 GMT - View all Annapolis Junction, MD jobs
          WEB BASED SIMPLE RESULT CHECKER      Cache   Translate Page      
A simple student result checker that can hold student results for multiple schools, design a report card template for every school, and print out a PDF of their results. (Budget: $30 - $250 USD, Jobs: MySQL, PHP, Website Design, WordPress)
          Android Developer - vikas Global Solutions Ltd - Madhavanpark, Karnataka      Cache   Translate Page      
Android Application Development, Android SDKs, API, HTML, CSS, MySQL. Android Application Development. Vikas Global Solutions Ltd.... ₹1,00,000 - ₹2,00,000 a year
From Indeed - Mon, 10 Sep 2018 05:26:38 GMT - View all Madhavanpark, Karnataka jobs
          Re: build mysql bd from hash files ?      Cache   Translate Page      
by Howard Miller.  

By 'hash files' I assume you mean the 'moodledata' files area. 

If, as it sounds like, you have lost the complete database then the files stuff represents only one of hundreds of tables that go to make up the Moodle database. If you have genuinely lost your database with no backup then you have lost your site. 

Sorry to be bearer of bad news. 


          build mysql bd from hash files ?      Cache   Translate Page      
by Dan Lawless.  

We recently performed a migration and upgrade on our moodle site but somehow seem to have lost the mysql db.  We have all the hash files from moodledata but are unable to use those files.  Is there any way to perform a reverse db build that will organize the hash files appropriately?  

Thanks in advance for any help.

Dan L
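Background on why the hash files alone can't rebuild the database: Moodle's moodledata/filedir stores each file under its SHA-1 content hash, nested in directories named after the hash's first characters, so the path encodes nothing about which course, user, or table row owned the file — that mapping lives only in the (lost) database. A small Python sketch of the naming scheme, assuming standard Moodle behaviour:

```python
# How Moodle's filedir names a stored file: by the SHA-1 hash of its
# contents, under <first 2 hex chars>/<next 2 hex chars>/<full hash>.
import hashlib

def filedir_path(content: bytes) -> str:
    h = hashlib.sha1(content).hexdigest()
    return f"{h[:2]}/{h[2:4]}/{h}"

print(filedir_path(b"hello"))
# aa/f4/aaf4c61ddcc5e8a2dabede0f3b482cd9aea9434d
```

Because the hash depends only on file content, two different uploads of the same file share one entry — another reason the directory tree can't be mapped back to courses without the database.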


          After Upgrade No Inbox or Sent Box      Cache   Translate Page      

Replies: 0

Hi Everyone,
I have upgraded to BuddyPress 3.1.0 and now I can't seem to access messages. I am using the Thrive Nouveau child theme.

Browser developer console errors

JQMIGRATE: Migrate is installed, version 1.4.1
http://www.google-analytics.com/analytics.js:1 Failed to load resource: net::ERR_BLOCKED_BY_CLIENT
underscore.min.js?ver=1.8.3:5 Uncaught SyntaxError: Unexpected token ;
at new Function (<anonymous>)

System Info

// Generated by the Send System Info Plugin //

Multisite: No

WordPress Version: 4.9.8
Permalink Structure: /%postname%/
Active Theme: Thrive Nouveau Child

PHP Version: 7.2.9
MySQL Version: 5.6.40
Web Server Info: Apache

WordPress Memory Limit: 64MB
PHP Safe Mode: No
PHP Memory Limit: 256M
PHP Upload Max Size: 1000M
PHP Post Max Size: 1000M
PHP Upload Max Filesize: 1000M
PHP Time Limit: 600
PHP Max Input Vars: 10000
PHP Arg Separator: &
PHP Allow URL File Open: No

WP_DEBUG: Disabled

WP Remote Post: wp_remote_post() works

Session: Enabled
Session Name: PHPSESSID
Cookie Path: /
Save Path: /var/cpanel/php/sessions/ea-php72
Use Cookies: On
Use Only Cookies: On

DISPLAY ERRORS: N/A
FSOCKOPEN: Your server supports fsockopen.
cURL: Your server supports cURL.
SOAP Client: Your server has the SOAP Client enabled.
SUHOSIN: Your server does not have SUHOSIN installed.

ACTIVE PLUGINS:

404 to 301: 3.0.1
Absolutely Glamorous Custom Admin: 6.4.1
bbP private groups: 3.6.7
bbPress: 2.5.14
BP Block Users: 1.0.2
BP Local Avatars: 2.2
BP Simple Front End Post: 1.3.4
BuddyBlog: 1.3.2
BuddyKit: 0.0.3
BuddyPress: 3.1.0
BuddyPress Clear Notifications: 1.0.4
BuddyPress Docs: 2.1.0
BuddyPress Global Search: 1.1.9
BuddyPress Security Check: 3.2.2
BuddyPress User To-Do List: 1.0.4
GA Google Analytics: 20180828
GD bbPress Toolbox Pro: 5.2.1
GDPR: 2.1.0
Gears: 4.1.9
Gravity Forms: 2.3.2
Hide Admin Bar From Front End: 1.0.0
Image Upload for BBPress: 1.1.15
Insert Headers and Footers: 1.4.3
Invite Anyone: 1.3.20
iThemes Security Pro: 5.4.8
Kirki Toolkit: 3.0.33
MediaPress: 1.4.2
MediaPress – Downloadable Media: 1.0.3
Menu Icons: 0.11.2
Nifty Menu Options: 1.0.1
Really Simple SSL: 3.0.5
reSmush.it Image Optimizer: 0.1.16
Restricted Site Access: 7.0.1
s2Member Framework: 170722
SeedProd Coming Soon Page Pro: 5.10.8
Send System Info: 1.3
Slider Revolution: 5.4.7.3
SSL Insecure Content Fixer: 2.7.0
TaskBreaker – Group Project Management: 1.5.1
The Events Calendar: 4.6.23
The Events Calendar PRO: 4.4.30.1
Users Insights: 3.6.5
WP-Polls: 2.73.8
WPBakery Page Builder: 5.5.4
WP Mail SMTP: 1.3.3
WP Nag Hide: 1.0
WPS Hide Login: 1.4.3
Yoast SEO: 8.2
YouTube Live: 1.7.10

Thanks

Rockforduk


          how to add limit = 1 with prepare statement?      Cache   Translate Page      
Forum: MySQL Posted By: piano0011 Post Time: Sep 13th, 2018 at 05:31 AM
          Upgrade MYSQL code to MYSQLi      Cache   Translate Page      
Hi, I have a full website built with MYSQL. The website was built in 2014(ish). The code is built in PHP/MySQL. The code calls old functions such as mysql_connect and session_register(). I need somebody to go through all the code, searching for old functions and upgrading them to new ones... (Budget: £20 - £250 GBP, Jobs: HTML, MySQL, PHP, Software Architecture, Website Design)
          Automate web scraping using Apify      Cache   Translate Page      
Scrape data of a website and sync existing database with it using Apify. This should be automated for scraping on regular basis. (Budget: ₹1500 - ₹12500 INR, Jobs: Data Mining, MySQL, PHP, Software Architecture, Web Scraping)
          Regarding website security      Cache   Translate Page      
I want to protect my site from hacking. Currently I know about XSS and SQL injection. Do I need to use mysqli instead of mysql? And why? When should I use `htmlentities()` and `striptags()`? I also don't want users to upload malicious files, and since I accept file uploading, is it enough to ...
          Paging a JTable      Cache   Translate Page      

Paging a JTable

Hi folks, I'm opening this topic because I've run out of ideas and don't know how to do this. I need to paginate a JTable with records from a MySQL DB, and honestly, searching around I can't find examples that lay the steps out well, since there are methods and functions I don't know in depth; I'd need a video or a page that shows, step by step, how the pagination is built, so I can understand it. If you know of a page, a video, or a method for paginating a JTable, I'd appreciate it.
Thanks in advance...

Published on September 13, 2018 by Valentin

          PICAR Application for 10.1 inch Tablet      Cache   Translate Page      
I need an API built for an app that runs on a 10.1-inch tablet. The two videos for the project are already being made on Freelancer. Once the videos are complete, there is a prompt to start the form. Once the form is completed, an email is sent to me with the location it was sent from, and the data is recorded in a DB... (Budget: $250 - $750 AUD, Jobs: HTML, Javascript, MySQL, PHP, Software Architecture)
          mangetoooi      Cache   Translate Page      
Develop a PHP script to be used as an API for Magento 1.9. I need to create products (simple, configurable...), update price and stock, get orders, add invoices, and add tracking. I already have a simple... (Budget: $250 - $750 USD, Jobs: HTML, Magento, MySQL, PHP, Software Architecture)
          Transform XML files to PDF and create a web portal for users to download these files      Cache   Translate Page      
I need to transform XML files to PDF and create a web portal for users to download these files. (Budget: $30 - $250 USD, Jobs: HTML, MySQL, PHP, Website Design, XML)
          Mysqli_query() expects parameter 1 to be mysqli      Cache   Translate Page      

If the connection is not the problem, double-check your SELECT statement and the spelling of your column fields.

Hope it helps


          Notes on analyzing a MySQL semaphore crash (with an easter egg)      Cache   Translate Page      
A detailed analysis of a MySQL semaphore crash — there is an easter egg in the article!
          MySQL string operations      Cache   Translate Page      
MySQL functions such as substring_index(). 1. Take characters from the left of a string: left(str, length) — left(column_to_cut, number_of_characters), e.g. select left(content,200) as abstract from my_content_t. 2. Take characters from the right of a string: right(str, length) — right(column_to_cut, number_of_characters), e.g. select right(content,200) as abstract ...
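For intuition, the semantics of LEFT(), RIGHT(), and SUBSTRING_INDEX() can be mimicked with Python slicing and splitting (illustrative analogues only — the real functions run inside MySQL):

```python
# Python analogues of MySQL's LEFT(), RIGHT(), and SUBSTRING_INDEX().
def left(s, n):
    return s[:n]

def right(s, n):
    return s[-n:] if n else ""  # guard: s[-0:] would be the whole string

def substring_index(s, delim, count):
    # Positive count: everything left of the count-th delimiter;
    # negative count: everything right of the count-th-from-last delimiter.
    parts = s.split(delim)
    return delim.join(parts[:count] if count > 0 else parts[count:])

print(left("content body", 7))                   # 'content'
print(right("content body", 4))                  # 'body'
print(substring_index("www.mysql.com", ".", 2))  # 'www.mysql'
```

As in MySQL, a delimiter that never occurs makes substring_index return the whole string.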
          CodeIgniter expert needed, full-stack developer, 5 years minimum      Cache   Translate Page      
CodeIgniter expert needed; full-stack developer, 5 years minimum. 1- Complete fully responsive page (dev + E2E testing) 2- Link front and back end to display results using best practice 3- Update HTML layout... (Budget: $30 - $250 USD, Jobs: HTML, HTML5, Javascript, MySQL, PHP)
          Re: bidirectional mysql - android data sync? is AppSync a good fit?      Cache   Translate Page      
Thanks!
...
          Analyzing Amazon Aurora Slow Logs with pt-query-digest      Cache   Translate Page      

In this blog post we shall discuss how you can analyze slow query logs from Amazon Aurora for MySQL (referred to as Amazon Aurora in the remainder of this post). The tools and techniques explained here apply to the other MySQL-compatible services available under Amazon Aurora. However, we’ll focus specifically on analyzing slow logs from Amazon […]

The post Analyzing Amazon Aurora Slow Logs with pt-query-digest appeared first on Percona Database Performance Blog.


          Getting variables out of python script      Cache   Translate Page      
I have a Python script that is being run as an external script, using Python 3, and I want to get a variable out of my script to post in a notification. The script queries a MySQL database, creates a CSV file from the database, and then emails it. All I want to do is get a single variable out of my script and use it in the notification, but I can't seem to get notifications running at all with the workflow. Even if I try to just post a "Success" notification, nothing appears. What am I missing?
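Without knowing which workflow tool is involved, the usual pattern is to have the script print the value (and only that value) to stdout and let the calling step capture it for the notification — a hedged sketch with stand-in values (42 here is a made-up placeholder for whatever the real MySQL query computes):

```python
# Sketch: pass one value out of an external script via stdout.
# The inline script below stands in for the real query-and-CSV script.
import subprocess
import sys

SCRIPT = "row_count = 42\nprint(row_count)"  # print ONLY the value

result = subprocess.run(
    [sys.executable, "-c", SCRIPT],
    capture_output=True, text=True, check=True,
)
row_count = result.stdout.strip()  # the captured variable
print(f"Report generated with {row_count} rows")
```

If the workflow tool can't capture stdout, writing the value to a small temp file and reading it in the notification step works the same way.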
          iDB - 0.5.0 Alpha SVN 877 / Internet Discussion Boards      Cache   Translate Page      

Internet Discussion Boards is a message board system by Game Maker 2k. It's very easy to set up. Your web host needs PHP and CUBRID / MySQL / MariaDB / PostgreSQL / SQLite. Internet Discussion Boards is open source, so you can make changes to it.


          Programming Intern - Cevisa Soluções - Americana, SP      Cache   Translate Page      
Knowledge of the tools Delphi, Zeos, Firebird, and MySQL. Internship with a career plan leading to a permanent position. Job type: Internship. Education: * Ensino...
From Indeed - Thu, 13 Sep 2018 13:59:48 GMT - View all Americana, SP jobs
          HA Mysql and Share Storage on Ubuntu 18.04      Cache   Translate Page      
none
          phpMyAdmin 4.8.3      Cache   Translate Page      
A tool that handles the basic administration of MySQL over the Web.
          NodeJS + AppJS + Sqlite3      Cache   Translate Page      

I am trying to build the SQLite3 module into my project. If I run NPM install sqlite3 it fails. Here is my npm-debug.log relevant:

235 info install sqlite3@2.1.5
236 verbose unsafe-perm in lifecycle true
237 silly exec cmd "/c" "node-gyp rebuild"
238 silly cmd,/c,node-gyp rebuild,C:\NodeWorkbench\AppJS Workspace\template\data\node_modules\sqlite3 spawning
239 info sqlite3@2.1.5 Failed to exec install script
240 info C:\NodeWorkbench\AppJS Workspace\template\data\node_modules\sqlite3 unbuild
241 verbose from cache C:\NodeWorkbench\AppJS Workspace\template\data\node_modules\sqlite3\package.json
242 info preuninstall sqlite3@2.1.5
243 info uninstall sqlite3@2.1.5
244 verbose true,C:\NodeWorkbench\AppJS Workspace\template\data\node_modules,C:\NodeWorkbench\AppJS Workspace\template\data\node_modules unbuild sqlite3@2.1.5
245 info postuninstall sqlite3@2.1.5
246 error sqlite3@2.1.5 install: `node-gyp rebuild`
246 error `cmd "/c" "node-gyp rebuild"` failed with 1
247 error Failed at the sqlite3@2.1.5 install script.
247 error This is most likely a problem with the sqlite3 package,
247 error not with npm itself.
247 error Tell the author that this fails on your system:
247 error node-gyp rebuild
247 error You can get their info via:
247 error npm owner ls sqlite3
247 error There is likely additional logging output above.
248 error System windows_NT 6.1.7600
249 error command "C:\\Program Files\\nodejs\\\\node.exe" "C:\\Program Files\\nodejs\\node_modules\\npm\\bin\\npm-cli.js" "install" "sqlite3"
250 error cwd C:\NodeWorkbench\AppJS Workspace\template\data
251 error node -v v0.8.14
252 error npm -v 1.1.65
253 error code ELIFECYCLE
254 verbose exit [ 1, true ]

I have node-gyp installed, as well as python (3.3 I believe). Thanks for the help. I really need this resolved ASAP, so if you could point me in a direction I would appreciate it greatly!

Ideally, I would like to use Nano and couchdb for my project. JSON from front to back would be great. But nano was throwing C++ exceptions during run time so I had to recompile the stack and start over (it recompiled AppJS when I installed nano which I assume put some faulty extensions in and messed up the whole works) My stack is as follows:

Database > AppJS (NodeJS included in this) > SocketIO > AngularJS

The point of this project is to assemble a stack that I can use as a replacement for server2go. My company has had severe stability issues regarding server2go, including data loss and DB corruption (MyISAM with mysql).

Problem courtesy of: Andrew Rhyne


          Defining Roles-based Security ACLs and Supporting Multitenancy in the Node.js St ...      Cache   Translate Page      

Strongloop Loopback ( https://loopback.io ) is a Node.js framework that extends Express.js and makes it easy for developers to create REST-based CRUD APIs in minutes.

In my experience, their proclamations about RAD API development generally hold true. However, their documentation, and in particular their tutorials, really need a lot of work in terms of organization. While a lot of information is presented, it doesn’t really flow to tell the complete story of how you would solve real-world problems and how all of the pieces fit together.

Hopefully you’ll find that this post helps bridge that gap. I assume that you already have a very basic understanding of the Loopback framework and that you are also a movie nerd.

Let’s say, for example, that you have an e-commerce website that you plan to offer as Software as a Service. It has the following three tables:

  • Stores: a registry containing the stores that are being hosted.
  • Users: includes username, password, email, and a foreign key store_id that maps back to a record in the Stores table.
  • Orders: orders placed by a user. Contains a foreign key to the Users table and a foreign key to the Stores table.

We need to create CRUD services for each of these tables, secure them with roles-based security, and also support multi-tenancy. For example, an administrator for store ID #2 should only be able to see Orders and Users that were added for store ID #2. A superuser should be able to see everything.

Creating Tables to Support Roles-Based Security

Loopback has connectors for most commonly used databases. Connection data is stored in your project’s server/datasources.json file. In this case, I’ve created a custom datasource named “ecommerce”:

{
  "db": {
    "name": "db",
    "connector": "memory"
  },
  "ecommerce": {
    "host": "[some mysql host on Amazon]",
    "port": 3306,
    "database": "ecommerce",
    "name": "ecommerce",
    "user": "quizartshaderach",
    "password": "ForH3IsTh3",
    "connector": "mysql"
  }
}

Loopback supports roles-based security out of the box. You simply need to have the framework automagically add its tables to your database by placing the following script in your project’s /server/ folder and running it.

var server = require('./server');
var ds = server.dataSources.ecommerce;
var lbTables = ['Application', 'User', 'AccessToken', 'ACL', 'RoleMapping', 'Role'];
ds.automigrate(lbTables, function(er) {
  if (er) throw er;
  console.log('Loopback tables [' + lbTables + '] created in ', ds.adapter.name);
  ds.disconnect();
});

COOL!

Defining Roles

In the generated Role table, we’re going to add two roles: a “superuser” who is the ultimate supreme being (think Keanu Reeves as John Wick), and a “storeadmin” who manages a specific store or stores in our SaaS system. Loopback has $unauthenticated, $authenticated, and $everyone as built-in, immutable roles. So, we got that going for us:


[screenshot]
Defining Users

You may have noted that Loopback has its own User table definition. You should actually hide this from the API and instead extend its properties into your own custom table as illustrated by the following screenshot:


[screenshot]

Your corresponding /common/models/users.json file should resemble the following snippet and only list out properties that are *not* baked into the native loopback User model. Also note that I’m already using ACLs to deny general access to unauthenticated users (a built-in immutable loopback role) and grant full access to John Wick, my superuser who can kill five angry customers in a bar with a pencil. A f****ing pencil.

{
  "name": "users",
  "plural": "users",
  "base": "User",
  "idInjection": true,
  "options": { "validateUpsert": true },
  "properties": {
    "firstname": { "type": "string" },
    "lastname": { "type": "string" },
    "disabled": { "type": "boolean" },
    "creationDate": { "type": "date" },
    "store_id": { "type": "number" }
  },
  "validations": [],
  "relations": {},
  "acls": [],
  "methods": {}
}

Creating a Superuser/SuperAdmin Account

Now that the basics of your security framework are in place, you can use Loopback’s APIs to create the superuser account. Loopback automatically generates a Swagger UI for all of its services, and it will hash the password and store it in your database.


[screenshot]

Note that Loopback will default to using the npm crypto library for hashing passwords. This dependency caused us some issues when deploying on Amazon Elastic Beanstalk, so we had to use the pure JavaScript cryptojs library instead, which is also supported by the framework. So you might need to execute the following commands to install the appropriate library for your deployment platform:

npm uninstall crypto
npm install cryptojs

After your superuser account has been added to your users table, you can assign it the superuser role by inserting a record to the RoleMapping table:


[screenshot]
Logging In

After you’ve added your account, verify that you can use it to log in by executing the users/login service from the Loopback Explorer. If your login was successful, the service will return an access token id.

So the following POST call: http://localhost:3000/api/v1/users/login?include=user would return something similar to:

{
  "id": "OcCGkGHCKKQrLNr2mS1DskVMXxae7IJlOezkEttVvvYXjk74gwRdpBrW7LEBufG8",
  "ttl": 1209600,
  "created": "2018-09-12T11:17:14.824Z",
  "userId": 1,
  "user": {
    "firstname": "John",
    "lastname": "Wick",
    "disabled": false,
    "creationDate": "2018-09-05T01:47:07.000Z",
    "store_id": 1,
    "realm": null,
    "username": "Administrator",
    "email": "jwick@thecontinentalhotel.com",
    "emailVerified": true,
    "id": 1
  }
}

You can then copy and paste the returned id into Loopback Explorer’s accessToken field. Loopback Explorer will then automatically append a query string variable named access_token to every request.
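If you are calling the API from your own client instead of the Explorer, the same convention applies. A minimal sketch (the helper name withAccessToken is mine, not part of Loopback; it simply mirrors what the Explorer does):

```javascript
// Hypothetical helper mirroring Loopback Explorer's behavior: append the
// returned token id as an access_token query-string variable.
function withAccessToken(url, tokenId) {
  const u = new URL(url); // WHATWG URL, global in Node.js
  u.searchParams.set('access_token', tokenId);
  return u.toString();
}

// e.g. withAccessToken('http://localhost:3000/api/v1/orders', 'OcCGkGH...')
// → 'http://localhost:3000/api/v1/orders?access_token=OcCGkGH...'
```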

Securing Services

There are at least 4 ways to secure services:

  • Statically apply ACLs
  • Define ACLs in the ACL database table
  • Programmatically hide public methods
  • Dynamically verify permissions at runtime with an observer event

Statically applying ACLs

We use static ACLs to restrict methods of our ORDERS table to specific groups. In this particular user story, we want to deny access to anyone who hasn’t logged in, allow full access for anyone in the “superuser” role, and allow read/write access to anyone who’s authenticated:

(common/models/orders.json)

"acls": [
  { "accessType": "*", "principalType": "ROLE", "principalId": "$unauthenticated", "permission": "DENY" },
  { "accessType": "*", "principalType": "ROLE", "principalId": "superuser", "permission": "ALLOW" },
  { "accessType": "READ", "principalType": "ROLE", "principalId": "$authenticated", "permission": "ALLOW" },
  { "accessType": "WRITE", "principalType": "ROLE", "principalId": "$authenticated", "permission": "ALLOW" }
]

Defining ACLs in the Database

Note that you could also implement these rules by adding records to the ACL database table, which would give you a little bit more flexibility for tweaking security without having to modify source code or restart the node service.
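Wherever the rules live (static file or ACL table), the resolution behaves roughly like a default-deny match over the rule list. The sketch below is my own simplification for illustration (Loopback's real resolver weighs rules by specificity; checkAcl and its argument shape are not framework APIs):

```javascript
// Simplified, hypothetical model of ACL resolution: default DENY, and a rule
// applies when its access type and principal both match. This is NOT
// Loopback's exact algorithm, which scores rules by specificity.
function checkAcl(acls, roles, accessType) {
  let decision = 'DENY';
  for (const acl of acls) {
    const typeMatches = acl.accessType === '*' || acl.accessType === accessType;
    const roleMatches = roles.includes(acl.principalId);
    if (typeMatches && roleMatches) decision = acl.permission;
  }
  return decision;
}

// The four rules from orders.json, reduced to the fields the sketch uses:
const orderAcls = [
  { accessType: '*',     principalId: '$unauthenticated', permission: 'DENY'  },
  { accessType: '*',     principalId: 'superuser',        permission: 'ALLOW' },
  { accessType: 'READ',  principalId: '$authenticated',   permission: 'ALLOW' },
  { accessType: 'WRITE', principalId: '$authenticated',   permission: 'ALLOW' },
];
```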


[screenshot]
Hiding Public Methods

Our user story dictates that no user should be able to delete an order, so we attack this requirement by programmatically hiding the method in the common/models/orders.js file. We also want to disable a user’s ability to update all records, along with any other services that we’re not intending to use in our application:
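As an aside, the repeated disableRemoteMethodByName calls in the file below can be wrapped in a small helper (hideMethods is a hypothetical name, not a Loopback API):

```javascript
// Hypothetical convenience wrapper around Loopback's
// Model.disableRemoteMethodByName, so the hidden routes live in one list.
function hideMethods(Model, methodNames) {
  methodNames.forEach(function(name) {
    Model.disableRemoteMethodByName(name);
  });
}

// usage: hideMethods(Orders, ['deleteById', 'updateAll']);
```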

'use strict';

module.exports = function(Orders) {
  // remove unused methods
  Orders.disableRemoteMethodByName('deleteById');                // Removes (DELETE) /orders/:id
  Orders.disableRemoteMethodByName('prototype.patchAttributes'); // Removes (PATCH) /orders/:id
  Orders.disableRemoteMethodByName('createChangeStream');        // Removes (GET|POST) /orders/change-stream
  Orders.disableRemoteMethodByName('updateAll');                 // Removes (POST) /orders/update
};

Handling Multi-tenancy and Record-Level Permissions with Observers

Loopback includes a series of Operation Hooks that are triggered from all methods that execute a particular high-level create, read, update, or delete operation. For our user story, we want to implement the following business rules:

  • John Wick can do whatever he needs to do because he’s wearing bulletproof eveningwear.
  • A store admin can only access records that are related to their specific store ID.
  • Only a superuser or a storeadmin can update an Order record.
  • A user can only access records that they “own” (user_id foreign key relationship).
  • Any authenticated user can create a new order.

Each observer gets passed a context (ctx) object. Through the context object, you can access the user’s ID and dynamically modify the SQL WHERE clause before it is passed to the database server. To implement our business rules, however, we also need additional information from the users table for the logged-in account. This necessitates adding the following script to the server/boot folder, which will look up additional user info on every request and add it to the context object:

(server/boot/attach-user-info.js)

module.exports = function(app) {
  app.remotes().phases
    .addBefore('invoke', 'options-from-request')
    .use(function(ctx, next) {
      if (!ctx.args.options || !ctx.args.options.accessToken) return next();
      // attach user info to context options
      const User = app.models.users;
      User.findById(ctx.args.options.accessToken.userId, function(err, user) {
        if (err) return next(err);
        ctx.args.options.currentUser = user;
        next();
      });
    });
};

Now that we’ve marshalled all of our user information, dynamically adding where clauses to queries at runtime through observers becomes a relatively trivial process:

(common/models/orders.js)

// handle multitenancy read operations
Orders.observe('access', function limitToTenant(ctx, next) {
  let authorizedRoles = ctx.options.authorizedRoles;
  let userId = ctx.options.accessToken.userId;
  let storeId = ctx.options.currentUser.store_id;
  if (!authorizedRoles.superuser) {
    // non super-duper admins ("storeadmin") can only see orders bound to their "store"
    ctx.query.where = ctx.query.where || {};
    ctx.query.where.store_id = storeId;
    if (!authorizedRoles.storeadmin) {
      // non store admins can only see orders that they "own"
      ctx.query.where.user_id = userId;
    }
  }
  next();
});

// only allow admins to update records
Orders.observe('before save', function preventUpdatesFromNonAdmins(ctx, next) {
  let authorizedRoles = ctx.options.authorizedRoles;
  // the presence of ctx.instance indicates an "insert" operation
  // the presence of ctx.instance.id indicates an update
  if (!ctx.instance || ctx.instance.id) {
    if (!authorizedRoles.superuser && !authorizedRoles.storeadmin) {
      throw new Error('Security Exception - only admins can update an order record.');
    }
  }
  next();
});

And in the end…

John was once an associate of ours. They call him Baba Yaga. Well, John wasn’t exactly the boogeyman. He was the one you sent to kill the f***ing boogeyman!
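The narrowing logic inside the 'access' observer is easy to unit-test if you lift it into a pure function. A sketch (limitWhereToTenant is my own name; the role flags and ids mirror the observer above):

```javascript
// Pure, hypothetical version of the tenant filter applied by the 'access'
// observer: superusers see everything, store admins are pinned to their
// store_id, and everyone else is pinned to their store_id AND user_id.
function limitWhereToTenant(where, authorizedRoles, userId, storeId) {
  const out = Object.assign({}, where);
  if (!authorizedRoles.superuser) {
    out.store_id = storeId;
    if (!authorizedRoles.storeadmin) {
      out.user_id = userId;
    }
  }
  return out;
}
```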

If you’ve found this post helpful in defeating the Strongloop Loopback learning-curve boogeyman, please add a comment below.

Happy coding!


          [Macintosh] Download Lingon X v6.3.1 MacOSX - software for running programs and commands on the Mac

Download Lingon X v6.3.1 MacOSX - software for running programs and commands on the Mac

Lingon X can run any program or resource and monitor it continuously. It automatically runs the programs and resources you specify, on the schedule you define, as soon as the Mac starts up and without requiring any user to log in. If it detects that a program has crashed, is malfunctioning, or is using too much memory or other resources, it immediately restarts it. This software puts all of its capabilities to work so that ...


http://p30download.com/76997

Download link: http://p30download.com/fa/entry/76997


          Percona Server for MySQL 5.7.23-23 Released

Percona Server for MySQL 5.7.23-23 has been released. It is based on MySQL 5.7.23 and includes all of its bug fixes. Percona Server for MySQL 5.7.23-23 is currently the latest stable release in the 5.7 series. The changes are as follows:

New Features

  • The max_binlog_files variable is deprecated, and the binlog_space_limit variable should be used instead of it. The behavior of binlog_space_limit is consistent with the variable relay-log-space-limit used for relay logs; both variables have the same semantics. For more information, see #275.

  • Starting with 5.7.23-23, it is possible to encrypt all data in the InnoDB system tablespace and in the parallel double write buffer. This feature is considered ALPHA quality. A new variable innodb_sys_tablespace_encrypt is introduced to encrypt the system tablespace. The encryption of the parallel double write buffer file is controlled by the variable innodb_parallel_dblwr_encrypt. Both variables are OFF by default. For more information, see #3822.

  • Changing rocksdb_update_cf_options returns any warnings and errors to the client instead of printing them to the server error log. For more information, see #4258.

  • rocksdb_number_stat_computers and rocksdb_rate_limit_delay_millis variables have been removed. For more information, see #4780.

  • A number of new variables were introduced for MyRocks: rocksdb_rows_filtered to show the number of rows filtered out for TTL in MyRocks tables, rocksdb_bulk_load_allow_sk to allow adding secondary keys using the bulk loading feature, rocksdb_error_on_suboptimal_collation to toggle between a warning and an error in case of an index creation on a char field where the table has a sub-optimal collation, rocksdb_stats_recalc_rate specifying the number of indexes to recalculate per second, rocksdb_commit_time_batch_for_recovery to toggle writing the commit time write batch into the database, and rocksdb_write_policy specifying when two-phase commit data are actually written into the database.
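For example, the new binlog_space_limit option from the first item above would be set in my.cnf like this (a hypothetical fragment; the 1G value is purely illustrative, not a recommended default):

```ini
[mysqld]
# Replaces the deprecated max_binlog_files; same semantics as relay-log-space-limit.
binlog_space_limit = 1G
```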

Bug Fixes

  • The statement SELECT...ORDER BY produced inconsistent results with the euckr charset or euckr_bin collation. Bug fixed #4513 (upstream #91091).

  • InnoDB statistics could incorrectly report zeros in the slow query log. Bug fixed #3828.

  • With the FIPS mode enabled and performance_schema=off, the instance crashed when running the CREATE VIEW command. Bug fixed #3840.

  • The soft limit of the core file size was set incorrectly starting with PS 5.7.21-20. Bug fixed #4479.

  • The option innodb-optimize-keys could fail when a dumped table has two columns such that the name of one of them contains the other as a prefix and is defined with the AUTO_INCREMENT attribute. Bug fixed #4524.

  • When innodb_temp_tablespace_encrypt was set to ON the CREATE TABLE command could ignore the value of the ENCRYPTION option. Bug fixed #4565.

  • If FLUSH STATUS was run from a different session, a statement could be counted twice in GLOBAL STATUS. Bug fixed #4570 (upstream #91541).

  • In some cases, it was not possible to set the flush_caches variable on systems that use systemd. Bug fixed #3796.

  • A message in the MyRocks log file did not clearly inform whether fast CRC32 was supported. Bug fixed #3988.

  • mysqld could not be started on Ubuntu if the database recovery had taken longer than ten minutes. Bug fixed #4546 (upstream #91423).

  • The ALTER TABLE command was slow when the number of dirty pages was high. Bug fixed #3702.

  • Setting the global variable version_suffix to NULL could lead to a server crash. Bug fixed #4785.

For details, see the release notes.

Downloads


          imi v0.0.11 Released: Adds Model Relations and Deferred Receive Support

imi is already being used in real projects; every week a large number of issues are fixed and practical features are added.

Changes in v0.0.11:

Added:

  • Support for one-to-one, one-to-many, and many-to-many model relations (create, read, update, and delete)

  • Support for coroutine Redis deferred receive

  • Support for coroutine MySQL deferred receive

  • The command-line tool now prints the imi logo on startup

  • A project initialization event (requests are not handled until this event has finished executing)

  • Query builder support for setData/setField/setFieldExp/setFieldInc/setFieldDec

Improved:

  • Optimized parts of the code

  • Filled in missing IQuery interface methods

Fixed:

  • Fixed an error when calling container-instantiated class methods that return by reference (returning by reference is still unsupported; only the error is resolved)

  • Fixed uploaded files missing the form field name

  • Adjusted the return type in the getUploadedFiles method docblock to support IDE hints

  • Fixed the ITaskHandler $server type issue

Introduction

imi is a coroutine-based PHP development framework built on Swoole, with full support for HTTP, WebSocket, TCP, and UDP development. Its advantages include resident memory and coroutine-based asynchronous non-blocking IO.

imi is richly documented and easy to get started with, aiming to let developers feel as at home as they would with a traditional MVC framework.

The imi core is developed with strong typing, which makes it easier to maintain and more performant. It supports AOP, supports injection via annotations and configuration files, and fully complies with the PSR-3, 4, 7, 11, 15, and 16 standards.

The framework is highly extensible: developers can write their own drivers to extend it according to their actual needs, going beyond the features and components the framework itself provides!

Website: https://www.imiphp.com/
Documentation: https://doc.imiphp.com/

Code repositories:
Gitee: https://gitee.com/yurunsoft/IMI
Github: https://github.com/Yurunsoft/IMI

Project skeleton: https://gitee.com/yurunsoft/empty-imi-demo
Feature demos: https://gitee.com/yurunsoft/imi-demo

Looking for Contributors

I hope that in the future, PHPers can use PHP with confidence, without mature projects being rewritten in another language late in their life.

An open-source project cannot rely on just one or two people; it needs everyone to help improve and grow it.

We need you to join us in order to improve:

  • Code contributions (bug fixes, new feature development, etc.)

  • Richer documentation (documentation is very important)

  • Tutorial and blog-post sharing


          Fix my custom plugin - wc vendors distance rate shipping
I had a custom plugin created for me that would integrate WC Vendors and Distance Rate Shipping. Each store can set their rates in the back end, but all are currently set at £3.50 and £4.00 (depending on... (Budget: £250 - £750 GBP, Jobs: HTML, MySQL, PHP, Software Architecture, WordPress)
          Teach ZOHO CRM implementation and Admin on 1 to 1
I need to learn how to implement and administer Zoho CRM, which is why I need somebody with good knowledge of this CRM to teach me. I really do not know the hours needed to learn it, but I am willing to take all the hours needed until I master it... (Budget: $2 - $8 USD, Jobs: Cloud Computing, CRM, MySQL, Software Testing, Website Management)
          Payment Software / Digital Receipt
We are looking for a programmer to develop payment software. We want to integrate several in-person payment platforms so that the receipt reaches the customer's email instead of... (Budget: $15 - $25 USD, Jobs: HTML, Javascript, MySQL, PHP, Software Architecture)
          Qr code and DataBase
I need a page where, when I scan a QR code with my webcam, it inserts into my database that the user is at my gym, and when I scan it again, it records that he left the gym. The main page will have just the login for the gym admin (me)... (Budget: €30 - €250 EUR, Jobs: HTML, MySQL, PHP, SQL, Website Testing)
          Admin Panel for Product Ordering
We would like to develop a laravel back-end panel for our company. It will be used for our franchise owners to order products, and for us to manage the payments, shipping, inventory, and other processes... (Budget: $15 - $25 USD, Jobs: eCommerce, Laravel, MySQL, PHP, Website Design)
          J3.x:Developing an MVC Component/Adding Checkout

Database section added

← Older revision Revision as of 15:17, 13 September 2018
Line 25: Line 25:
 
== Approach ==
 
== Approach ==
  
As usual, Joomla provides a lot of core functionality which we can reuse, provide we align with how the standard Joomla approach.  
+
As usual, Joomla provides a lot of core functionality which we can reuse, provided we align with the standard Joomla approach.  
  
 
To our database record we need to add 2 fields
 
To our database record we need to add 2 fields
Line 41: Line 41:
 
# include a Checkin button in the toolbar.
 
# include a Checkin button in the toolbar.
  
 +
== Updating the Database ==
 +
Add the checkout fields to the database record:
 +
 +
<span id="admin/sql/install.mysql.utf8.sql">
 +
<tt>admin/sql/install.mysql.utf8.sql</tt>
 +
<source lang="sql" highlight="8-9">
 +
DROP TABLE IF EXISTS `#__helloworld`;
 +
 +
CREATE TABLE `#__helloworld` (
 +
`id`      INT(11)    NOT NULL AUTO_INCREMENT,
 +
`asset_id` INT(10)    NOT NULL DEFAULT '0',
 +
`created`  DATETIME    NOT NULL DEFAULT '0000-00-00 00:00:00',
 +
`created_by`  INT(10) UNSIGNED NOT NULL DEFAULT '0',
 +
`checked_out` INT(10) NOT NULL DEFAULT '0',
 +
`checked_out_time` DATETIME NOT NULL DEFAULT '0000-00-00 00:00:00',
 +
`greeting` VARCHAR(25) NOT NULL,
 +
`alias`  VARCHAR(40)  NOT NULL DEFAULT '',
 +
`language`  CHAR(7)  NOT NULL DEFAULT '*',
 +
`published` tinyint(4) NOT NULL DEFAULT '1',
 +
`catid`     int(11)    NOT NULL DEFAULT '0',
 +
`params`  VARCHAR(1024) NOT NULL DEFAULT '',
 +
`image`  VARCHAR(1024) NOT NULL DEFAULT '',
 +
`latitude` DECIMAL(9,7) NOT NULL DEFAULT 0.0,
 +
`longitude` DECIMAL(10,7) NOT NULL DEFAULT 0.0,
 +
PRIMARY KEY (`id`)
 +
)
 +
ENGINE =MyISAM
 +
AUTO_INCREMENT =0
 +
DEFAULT CHARSET =utf8;
 +
 +
CREATE UNIQUE INDEX `aliasindex` ON `#__helloworld` (`alias`, `catid`);
 +
 +
INSERT INTO `#__helloworld` (`greeting`,`alias`,`language`) VALUES
 +
('Hello World!','hello-world','en-GB'),
 +
('Goodbye World!','goodbye-world','en-GB');
 +
</source>
 +
</span>
 +
 +
New SQL update file:
 +
 +
<span id="admin/sql/updates/mysql/0.0.24.sql">
 +
<tt>/admin/sql/updates/mysql/0.0.24.sql</tt>
 +
<source lang="sql">
 +
ALTER TABLE `#__helloworld` ADD COLUMN `checked_out` INT(10) NOT NULL DEFAULT '0' AFTER `created_by`;
 +
ALTER TABLE `#__helloworld` ADD COLUMN `checked_out_time` DATETIME NOT NULL DEFAULT '0000-00-00 00:00:00' AFTER `checked_out`;
 +
</source>
 +
</span>
  
 
== Packaging the Component ==
 
== Packaging the Component ==

          Configure a MySQL master-slave replication on Alibaba ECS instances

@foolishneo wrote:

Replication gives you the ability to maintain multiple copies of your data: one MySQL database server (the master) replicates its data to one or more MySQL database servers (the slaves). You can configure MySQL to replicate all databases, selected databases, or even selected tables within a database.

Replication can be helpful for many reasons, including:

  • High performance: the master server is dedicated to writes and updates, whereas reads are distributed among multiple slaves.

  • Security: data backups can be done on a slave without interrupting the master.

  • Data analytics: the master records live data while a slave handles the data analysis, which preserves the master’s high performance.

This tutorial will use two Alibaba Cloud ECS instances to demonstrate a simple scenario of MySQL replication: one master sends information to a single slave. It will use the traditional method of replication: replicating events from the master’s binary log and synchronizing the log files between master and slave.

Step 1: Create two ECS instances

We will need two separate ECS instances to play the roles of master and slave. In this tutorial I have selected Ubuntu 16.04 as the operating system. You can follow this guide to create the ECS instances.

Assume that IP address of the master server is 10.0.0.11

Install MySQL if you haven’t got it installed. You need sudo privileges to run the installation command:

$ sudo apt-get install mysql-server mysql-client

Step 2: Configure the master server

Setting up the replication master involves establishing a unique server ID and enabling binary logging. It can be done in the configuration file:

$ sudo nano /etc/mysql/my.cnf

Firstly, we need to set bind-address to the master IP address:

bind-address = 10.0.0.11

More info about bind-address is here.

Secondly, set the server-id which is located in the [mysqld] section.

server-id = 1

It can be any number provided that it must be unique within the replication topology. Usually, we pick number 1 for the master and following numbers for the slaves.

Next, we set the location of the binary log file. This file contains records of the data changes to be sent to slave databases. Whenever a replication slave connects to a master, the master will feed the slave with replication events from the binary log.

log_bin = /var/log/mysql/mysql-bin.log

Finally, save all the changes to the configuration file and restart the server:

$ sudo service mysql restart

Step 3: Creating a user for replication

This step will create a user account on the master which the slave will use to connect.Log in to the MySQL shell

$ mysql -u root -p

Then create a user and grant it the privileges required for replication:

mysql> CREATE USER 'repl_user'@'%' IDENTIFIED BY 'your_password';
mysql> GRANT REPLICATION SLAVE ON *.* TO 'repl_user'@'%';
mysql> FLUSH PRIVILEGES;

In this example repl_user is the username of the account.

Step 4: Get the master binary log coordinates

This step involves checking the master’s current coordinates within its binary log so that the slave could start the replication process at the correct point. Firstly, flush all tables and block write statements

mysql> FLUSH TABLES WITH READ LOCK;

Secondly, view the current position and the current binary log file:

mysql> SHOW MASTER STATUS;

+------------------+----------+--------------+------------------+
| File             | Position | Binlog_Do_DB | Binlog_Ignore_DB |
+------------------+----------+--------------+------------------+
| mysql-bin.000003 | 73       | test         | manual,mysql     |
+------------------+----------+--------------+------------------+

In this example, the binary log file is mysql-bin.000003 and the position is 73. These are the replication coordinates at which the slave should begin processing new updates from the master.

Step 5: Configure the slave server

Now log in to the second ECS instance (the slave) and install MySQL if necessary. Similar to what we did on the master, open the configuration file

$ sudo nano /etc/mysql/mysql.conf.d/mysqld.cnf

and make the following changes

server-id = 2
relay-log = /var/log/mysql/mysql-relay-bin.log
log_bin = /var/log/mysql/mysql-bin.log

The slave reads binary log events from the master as they come in and copies them over to a local log file called the relay log.

Save the changes and restart the server

$ sudo service mysql restart

Log in MySQL console

$ mysql -u root -p

then configure the slave database with the values we used in the previous step

mysql> STOP SLAVE;
mysql> CHANGE MASTER TO
    -> MASTER_HOST='10.0.0.11',
    -> MASTER_USER='repl_user',
    -> MASTER_PASSWORD='your_password',
    -> MASTER_LOG_FILE='mysql-bin.000003',
    -> MASTER_LOG_POS=73;
mysql> START SLAVE;
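If you script this step, the CHANGE MASTER TO statement can be generated from the coordinates collected in Step 4. A Node.js sketch (changeMasterSql and its argument shape are my own, not part of any MySQL client library):

```javascript
// Hypothetical helper: render the CHANGE MASTER TO statement from the
// file/position pair reported by SHOW MASTER STATUS on the master.
function changeMasterSql(opts) {
  return [
    'CHANGE MASTER TO',
    "  MASTER_HOST='" + opts.host + "',",
    "  MASTER_USER='" + opts.user + "',",
    "  MASTER_PASSWORD='" + opts.password + "',",
    "  MASTER_LOG_FILE='" + opts.file + "',",
    '  MASTER_LOG_POS=' + opts.position + ';'
  ].join('\n');
}
```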

Summary

We have gone through the steps to set up a MySQL master-slave replication topology on two Alibaba Cloud ECS instances. This tutorial is just a brief overview to familiarize you with the basic concepts of MySQL replication. There are many more options and capabilities of MySQL replication, which you can learn about in the MySQL manual and other tutorials.



          Wordpress: Pods and WP Import Pro Fix
I am using this plugin https://wordpress.org/plugins/custom-fields-csv-xml-importer/ and WP Import Pro to try and import information I cannot get the relationship field to import correctly and need someone to help troubleshoot with me... (Budget: $30 - $250 USD, Jobs: HTML, MySQL, PHP, Software Architecture, WordPress)
          Build a Wordpress site Phase 2
I need a website for internal use in my company to handle customers: The website will be secured and only be accessed using access information. Following skills are required for this project: MYSql (Advance... (Budget: €30 - €250 EUR, Jobs: CSS, jQuery / Prototype, PHP, Website Design, WordPress)
          System Engineer, Senior - RGS - Martinsburg, WV
Databases (Microsoft SQL, MySQL, Postgres, MongoDB, etc.). USfalcon, Inc., recognized as one of the fastest growing, privately held companies in the United...
From RGS - Tue, 21 Aug 2018 14:58:44 GMT - View all Martinsburg, WV jobs
          SYSTEM ENGINEER, SR. T4NG IOSS RFP 0073-KNG-082018 - PWS 5.4.1 - All In Solutions LLC - Martinsburg, WV
Databases (Microsoft SQL, MySQL, Postgres, MongoDB, etc.). Systems Engineer provides technical support in system architecture, system design, system integration...
From All In Solutions LLC - Wed, 22 Aug 2018 10:19:16 GMT - View all Martinsburg, WV jobs
          VueJs-Mid Level Developer - Core10 - Huntington, WV
Experience with other database technologies (postgresql, mysql, mssql, mongodb). The Mid Developer is responsible for working as part of a collaborative...
From Core10 - Wed, 29 Aug 2018 19:17:59 GMT - View all Huntington, WV jobs
          NodeJs-Mid Level Developer - Core10 - Huntington, WV
Experience with other database technologies (postgresql, mysql, mssql, mongodb). The Mid Developer is responsible for working as part of a collaborative...
From Core10 - Wed, 29 Aug 2018 19:17:59 GMT - View all Huntington, WV jobs
          Associate DevOps Engineer - Infor - Charleston, WV
MS SQL Server, MySQL, Oracle, MongoDB, Neo4j, Titan, etc. The Infor Ming.le™ team seeks a motivated individual with solid technical credentials for the position... $60,000 - $70,000 a year
From Indeed - Fri, 31 Aug 2018 13:01:09 GMT - View all Charleston, WV jobs
          RS Form Pro - Joomla column increase
Hi, I need someone to increase the number of columns I can used to make forms on a component called RS From Pro for Joomla 3.9. This is a web link to the component: https://www.rsjoomla.com/joomla-extensions/joomla-form.html I have attached a screenshot of the area I need changing... (Budget: £250 - £750 GBP, Jobs: CSS, HTML, Joomla, MySQL, PHP)
          PHP messaging Developer
Ongoing work for qualified PHP developers with sms/email expertise. Yii Framework. (Budget: $15 - $25 USD, Jobs: Google App Engine, MySQL, PHP, Yii)
          Specialist Java Web Programmer Analyst - EMPHASYS IT SOLUTIONS - São Paulo, SP
Professional activities: systems development, Java programming language, development for Unix/Linux environments, microservices, DevOps, Mesos, Docker, Marathon, hproxy "load balance". Oracle, Cassandra and MySQL, TDD and BDD, client-server architecture development, developmen...
          Expert Mongo DBA Consultant - KOGHI S.A.S - Bogotá
We are looking for an Expert Mongo DBA Consultant. Requirements: professionals with a degree in systems engineering, computer science, electronics, mechatronics, or related fields. At least 3 years of experience as a database expert using the MYSQL, POSTGRES, and MONGO database engines; knowledge of transactional databases, stored procedures, and triggers. Verifiable knowledge of web development in PHP. Maintenance of systems developed in MySQL...
          MySQL Developer - DBA - KOGHI S.A.S - Bogotá
We are looking for a MySQL Developer - DBA. Requirements: professionals with a degree in systems engineering, computer science, electronics, mechatronics, or related fields. At least 3 years of experience as a senior developer using the MYSQL and POSTGRES database engines; knowledge of transactional databases, stored procedures, and triggers. Verifiable knowledge of web development in PHP. Maintenance of systems developed in MySQL Server...
          Push FileMaker Data to a WordPress Site      Cache   Translate Page      
Looking for an experienced FileMaker developer who can push 5 text fields to a WordPress site. Would prefer to use the FileMaker API to push to a PHP page. Using FileMaker Server 17. (Budget: $750 - $1500 USD, Jobs: FileMaker, HTML, MySQL, PHP, WordPress)
          Professional CodeIgniter frontend developer for a large project      Cache   Translate Page      
As a first, very simple project, some list items should be hidden or shown based on user experience level or active projects, as a UX improvement. After this first warm-up, it is planned to start directly with further tasks... (Budget: $10 - $30 USD, Jobs: Codeigniter, HTML, MySQL, PHP, Website Design)
          How do I install the local MAMP server on Mac OS X?      Cache   Translate Page      
Do you need a local server on Mac OS X? Here MAMP (which stands for Mac, Apache, MySQL, PHP) rules the roost. There is also a paid Pro version. You can learn about the difference by following this link (the original page no longer exists, so I copied it to Google Docs) to the "MAMP vs..." page
          magento 2 installation      Cache   Translate Page      
SSH - URL + TEMPLATE FTP CREATION (Budget: €8 - €30 EUR, Jobs: HTML, Linux, Magento, MySQL, PHP)
          Dropdown menu does not display data from a table      Cache   Translate Page      

A dropdown menu does not display data from a table

Reply to: A dropdown menu does not display data from a table

It is a customer search; here is a correction of the code:
<?php
if(isset($_POST['idcliente']) && isset($_POST['nombre_o_razon_social'])){
//$idcliente = $_POST['idcliente'];
include('connect_db.php');
$sql="SELECT * FROM cliente WHERE nombre_repuesto='$idcliente'";
$ressql=mysqli_query($mysqli,$sql);
if($registro=mysqli_fetch_array ($ressql)){
echo "<table border='1' cellspacing=0 bgcol...

Posted on September 13, 2018 by Marcela

          Dropdown menu does not display data from a table      Cache   Translate Page      

A dropdown menu does not display data from a table

Reply to: A dropdown menu does not display data from a table

if(isset($_POST['idcliente'])) {echo("idcliente ok");}
if(isset($_POST['nombre_o_razon_social'])){echo("nombre ok");}

Add these at the top; whichever one is missing will not be echoed.

You generate the combo based on the clients, OK:

$query = $mysqli -> query ("SELECT * FROM cliente");

but then you do the search in repuestos (spare parts)?

$sql="SELECT * FROM repuesto WHERE nombre_repuesto='$idcliente'"...

Posted on September 13, 2018 by Gonzalo

          WP - Initial sorting of a non-trivial table      Cache   Translate Page      
In a WordPress site I have a table based on 3 queries (see the attached file for the table code). I'm using a plugin to make it sortable and it works, but I'd like it to apply an automatic initial sort as soon as the page loads... (Budget: €8 - €15 EUR, Jobs: HTML, MySQL, PHP, WordPress)
          Comment on Percona Server for MySQL 5.7.23-23 Is Now Available by Bruno Cabral      Cache   Translate Page      
Is there any estimate on when Percona 8.0 will be released?
          Enabling Redis and Memcache in WeEngine (微擎)      Cache   Translate Page      
Open /data/config.php and find: $config['setting']['cache'] = 'mysql'; // this can be changed to redis or memcache, depending on which caching method you choose. Enabling memcache requires installing the memcache extension on the server. Then add the following code at the bottom of /data/config.php: // ------------------- ...
          A genealogy site with PHP and MySQL      Cache   Translate Page      
I want to design a genealogy site with PHP and MySQL (Budget: $10 - $30 USD, Jobs: HTML, Javascript, MySQL, PHP, Website Design)
          Project Seller Needed      Cache   Translate Page      
We have some projects under development, such as: leave management system (PHP, CSS, Bootstrap, JavaScript, HTML, MySQL), recipe project (Spring, Java, CSS, Bootstrap, JavaScript, HTML, MySQL), hotel booking (PHP, CSS, Bootstrap, JavaScript, HTML, MySQL)... (Budget: ₹12500 - ₹37500 INR, Jobs: Java, Market Research, Marketing, PHP, SEO)
          Using MySQL Databases With Python      Cache   Translate Page      

Using MySQL Databases With Python: learn MySQL databases with Python the fast and easy way!

The post Using MySQL Databases With Python appeared first on Online Classes.


          TorrentPier Bison v2.3.0.1 - torrent tracker engine      Cache   Translate Page      


TorrentPier Bison is a torrent-tracker engine with a forum, similar to the rutracker engine. It is an open BitTorrent tracker engine based on a modified and improved phpBB. It is relatively popular among Russian users because of its visual similarity to the rutracker.org BitTorrent tracker. TorrentPier is written in PHP and uses the MySQL DBMS for data storage. It has built-in search (MySQL, Sphinx) and supports several caching methods, and overall the engine is well optimized for high loads.
          PbootCMS V1.2.1 released: detail optimizations and adjustments      Cache   Translate Page      

PbootCMS V1.2.1 build 2018-09-12

1. Added a red-dot notification when a new version is available for online upgrade;

2. Added case-insensitive handling of the virtual directory when the program is deployed outside the web root;

3. Added security checks on form submission frequency;

4. Adjusted: the html suffix is no longer shown when the program is not using pseudo-static URLs;

5. Adjusted: custom forms other than the built-in message form no longer require a captcha;

6. Fixed abnormal home-page pagination introduced in the previous version;

7. Optimized and improved the default template;

PbootCMS is an open-source PHP enterprise website-building system with an all-new kernel, developed by Aoyun Technology. The system aims to be efficient, concise and powerful, and can meet the needs of building all kinds of corporate websites.

The system uses efficient, concise template tags; anyone who knows HTML can quickly build a corporate website.

The system is developed in PHP, using a self-developed high-speed MVVM multi-layer development framework and multi-level caching.

By default the system uses the lightweight SQLite database, so it works as soon as it is dropped into a PHP hosting space; MySQL, PostgreSQL and other databases can also be selected to meet various storage needs.

The system has a responsive admin backend, so the site can be managed from any device at any time.

Featured functions:

Supports custom content models;

Supports multi-language, multi-region sites;

Supports integration with mini-programs, apps, etc.;

Supports custom message forms;

Supports multi-condition filtering and search;

Supports site-wide pseudo-static URLs and front-end dynamic caching;

Supports online upgrades from the backend;

Concise and powerful tags: 1. Global tags:
{pboot:sitetitle} site title
{pboot:sitelogo} site logo
2. List page tags:
{pboot:list num=10 order=date}
<p><a href="[list:link]">[list:title]</a></p>
{/pboot:list}
3. Content page tags:
{content:title} title
{content:subtitle} subtitle
{content:author} author
{content:source} source
For more tags (so simple you could cry), see the tag manual... Contact us:

Technical discussion QQ group: 137083872

Official website: https://www.pbootcms.com/

Gitee repository: https://gitee.com/hnaoyun/PbootCMS


          In a PHP image upload using move_uploaded_file ...      Cache   Translate Page      

In a PHP image upload, the temp file should be moved to the user folder, but it is not moving to the new folder.

HTML code

<form method="POST" action="db.php" enctype="multipart/form-data"> <input type="file" name="myimage"> <input type="submit" name="submit_image" value="Upload"> </form>

php code

<?php $dbhost = 'localhost'; $dbuser = 'root'; $dbpass = 'password11'; $conn = mysql_connect($dbhost, $dbuser, $dbpass); mysql_select_db('loginn'); $upload_image=$_FILES['myimage'] ['tmp_name']; echo $upload_image; $folder="images/"; move_uploaded_file($_FILES[' myimage '][' tmp_name '],'$folder'.$_FILES[' myimage '][' name ']); $insert_path="INSERT INTO demo(path,image) VALUES('$folder','$upload_image')"; $var=mysql_query($insert_path); ?>

HTML Code:

<form method="POST" action="db.php" enctype="multipart/form-data"> <input type="file" name="myimage"> <input type="submit" name="submit_image" value="Upload"> </form>

PHP CODE:

$tmp_name = $_FILES["myimage"]["tmp_name"]; $name = "images/".$_FILES['myimage']['name']; $filename=$_FILES['myimage']['name']; if(move_uploaded_file($tmp_name,$name)) { $insert_path="INSERT INTO image_table (path,name) VALUES('images/','$filename')"; $var=mysql_query($insert_path); if(!$var) { die('Could not enter data: ' . mysql_error()); } }

Change the code like this... Don't leave spaces inside the array key names:

[' myimage '][' tmp_name ']
          Implementing a crude registration and login page in PHP      Cache   Translate Page      

A PHP novice writes.

Today I'm posting something of no practical use (grinning face). The code is crude and bloated and doesn't cover every case; please go easy on me, and I'd be grateful for any advice.

First, I figured at least four pages are needed: register.html, register.php, login.html, login.php.

register.html looks like this:

<!DOCTYPE html> <html> <meta http-equiv="Content-Type" content="text/html; charset=UTF-8"> <head> <title>Registration page</title> </head> <body> <form action="register.php" method="post"> Username: <input type="text" name="name"></input> <br /> Password: <input type="password" name="password"></input> <br /> <input type="submit" value="Register"></input> </form> </body> </html>

register.php looks like this:

<?php header("Content-type:text/html;charset=utf-8"); $conn=new mysqli('localhost','wy','000000','test'); if ($conn->connect_error){ die("Failed to connect to the server!"); } $name=$_POST["name"]; $password=$_POST["password"]; $sql="insert into new_info values('$name',$password)"; $res=$conn->query($sql); if(!$res){ echo "Registration failed!"; }else{ if($conn->affected_rows>0){ sleep(2); header("Location:login.html"); exit; }else{ echo "Registration failed"; } } $conn->close(); ?>

login.html looks like this:

<!DOCTYPE html> <html> <meta http-equiv="Content-Type" content="text/html; charset=UTF-8"> <head> <title>Login page</title> </head> <body> <p>Registration successful, please log in!</p> <form action="login.php" method="post"> Username: <input type="text" name="name"></input> <br /> Password: <input type="password" name="password"></input> <br /> <input type="submit" value="Log in"></input> </form> </body> </html>

login.php looks like this:

<?php header("Content-type:text/html;charset=utf-8"); $conn=new mysqli('localhost','wy','000000','test'); if ($conn->connect_error){ die("Failed to connect to the server!"); } $name=$_POST["name"]; $password=$_POST["password"]; $sql_name="select name from new_info where name='$name'"; $res_sql=$conn->query($sql_name); if($conn->affected_rows==0){ die("Wrong username or password"); }else{ $sql_pass="select password from new_info where name='$name'"; $res_pass=$conn->query($sql_pass); $row_pass=$res_pass->fetch_row(); if($row_pass[0]==$password){ echo "Login successful!"; }else{ echo "Wrong username or password"; } } $conn->close(); ?>

Now let's look at the result:

[screenshots: registration page and its result]

And the database:

[screenshot: database contents]

You can see the data has been written to the database.

Now let's try logging in:

[screenshots: login page and success message]

And with a wrong password:

[screenshots: login page and error message]

#### 2018-09-04 fixes:

1. User password hashing

2. Database encoding

1. User password hashing

The register.php page:

<?php header("Content-type:text/html;charset=utf-8"); $conn=new mysqli('192.168.134.128','root','123456','test'); if ($conn->connect_error){ die("Failed to connect to the server!"); } $name=$_POST["name"]; $password=$_POST["password"]; $password=md5($password); // md5-hash the password entered by the user $sql="insert into test values('$name','$password')"; $res=$conn->query($sql); if(!$res){ echo "Registration failed!"; }else{ if($conn->affected_rows>0){ sleep(2); header("Location:login.html"); }else{ echo "Registration failed"; } } $conn->close(); ?>

The login.php page:

<?php header("Content-type:text/html;charset=utf-8"); $conn=new mysqli('192.168.134.128','root','123456','test'); if ($conn->connect_error){ die("Failed to connect to the server!"); } $name=$_POST["name"]; $password=$_POST["password"]; $password=md5($password); // md5-hash the password entered by the user $sql_name="select name from test where name='$name'"; $res_sql=$conn->query($sql_name); if($conn->affected_rows==0){ die("Wrong username or password!"); }else{ $sql_pass="select password from test where name='$name'"; $res_pass=$conn->query($sql_pass); $row_pass=$res_pass->fetch_row(); if($row_pass[0]==$password){ // compare the hashed input against the stored hash echo "Login successful!"; }else{ echo "Wrong username or password"; } } $conn->close(); ?>

2. Database encoding

Run the command set names utf8 on the database to change its encoding to utf8.

After that, registration and login work with Chinese usernames as well.

#### 2018-09-06 fix:

Empty username or password at registration or login

The register.php page:

<?php header("Content-type:text/html;charset=utf-8"); $conn=new mysqli('192.168.134.128','root','123456','test'); if ($conn->connect_error){ die("Failed to connect to the server!"); } $name=$_POST["name"]; $password=$_POST["password"]; if(empty($name) || empty($password)){ // check for an empty username or password at registration die('Username and password must not be empty!'); }else{ $password=md5($password); $sql="insert into test values('$name','$password')"; $res=$conn->query($sql); if(!$res){ echo "Registration failed!"; }else{ if($conn->affected_rows>0){ sleep(2); header("Location:login.html"); }else{ echo "Registration failed"; } } $conn->close(); } ?>

The login.php page:

<?php header("Content-type:text/html;charset=utf-8"); $conn=new mysqli('192.168.134.128','root','123456','test'); if ($conn->connect_error){ die("Failed to connect to the server!"); } $name=$_POST["name"]; $password=$_POST["password"]; if(empty($name) || empty($password)){ // check for an empty username or password at login die('Username and password must not be empty!'); }else{ $password=md5($password); //var_dump($password); $sql_name="select name from test where name='$name'"; $res_sql=$conn->query($sql_name); if($conn->affected_rows==0){ die("Wrong username or password!"); }else{ $sql_pass="select password from test where name='$name'"; $res_pass=$conn->query($sql_pass); $row_pass=$res_pass->fetch_row(); if($row_pass[0]==$password){ echo "Login successful!"; }else{ echo "Wrong username or password"; } } $conn->close(); } ?>

Apologies for the bloated code.
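A side note on the queries above: interpolating $name and $password straight into the SQL string is open to SQL injection, and the fix is parameterized queries (prepared statements in mysqli). A minimal sketch of the same register/login flow, written here in Python with the built-in SQLite module standing in for MySQL so it is self-contained; the table name and credentials just mirror the tutorial, and md5 is kept only to match it:

```python
import sqlite3
import hashlib

# SQLite stands in for the tutorial's MySQL database so the sketch runs anywhere.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE test (name TEXT PRIMARY KEY, password TEXT)")

def register(name, password):
    # Placeholders (?) keep user input out of the SQL text entirely,
    # the equivalent of mysqli prepared statements with bound parameters.
    hashed = hashlib.md5(password.encode()).hexdigest()  # md5 only to mirror the tutorial
    conn.execute("INSERT INTO test VALUES (?, ?)", (name, hashed))

def login(name, password):
    hashed = hashlib.md5(password.encode()).hexdigest()
    row = conn.execute(
        "SELECT password FROM test WHERE name = ?", (name,)
    ).fetchone()
    return row is not None and row[0] == hashed

register("wy", "000000")
print(login("wy", "000000"))             # True
print(login("wy", "wrong"))              # False
print(login("' OR '1'='1", "anything"))  # False: the payload is just a strange name
```

In real PHP code, password_hash() and password_verify() would be preferable to md5 for storing passwords.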


          Verify the result before using $stmt->fetch()      Cache   Translate Page      

This question already has an answer here:

Why does mysqli num_rows always return 0? 2 answers

probably a stupid question. I have the following code:

while ($stmt -> fetch()) { echo "<p><strong>" .$name. "</strong><br />" .$comment. "</p>"; }

I would like to execute the while loop ONLY if $stmt->fetch() returns at least one row (this is a SELECT query), but if I do

if ($stmt -> fetch())

that call already consumes the first row, which I want to avoid. Any clue?

Thanks in advance.

http://php.net/manual/en/mysqli-stmt.num-rows.php

Try using $stmt->num_rows > 0

$stmt->store_result(); if( $stmt->num_rows > 0 ) { while( $stmt->fetch() ) { echo "<p><strong>" .$name. "</strong><br />" .$comment. "</p>"; } }

Why mysqli_result has a fetch_all function and mysqli_stmt doesn't, I have no idea.

Longer, but may work

$result = $stmt->fetch(); if( !empty( $result ) ) { do { echo "<p><strong>" .$name. "</strong><br />" .$comment. "</p>"; } while( $result = $stmt->fetch() ); }
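The same buffer-then-check pattern can be sketched in Python, with SQLite standing in for MySQL (the table and row are made up for the demo): fetch the whole result set first, then test it for emptiness, which mirrors $stmt->store_result() plus num_rows without consuming the first row.

```python
import sqlite3

# SQLite stands in for MySQL; table and data are invented for the demo.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE comments (name TEXT, comment TEXT)")
conn.execute("INSERT INTO comments VALUES ('Ann', 'First!')")

cur = conn.execute("SELECT name, comment FROM comments")
rows = cur.fetchall()   # buffer the whole result set, like $stmt->store_result()
if rows:                # emptiness test without consuming the first row
    for name, comment in rows:
        print(f"<p><strong>{name}</strong><br />{comment}</p>")
else:
    print("<p>No comments yet.</p>")
```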


          PHP Developer      Cache   Translate Page      
Logic Manpower Services - Bhubaneswar, Odisha - 1.Overall 3+ years of strong programming experience with PHP/MYSQL/Laravel/Angular/Node JS. 2. Good knowledge of Core PHP / Word press...
          PHP Programmer in Kharghar Navi Mumbai ( 1 year plus experience )      Cache   Translate Page      
Capricorn HR Consultancy - Navi Mumbai, Maharashtra - Following is the detail of requirement for PHP Software Programmer profile: Minimum experience:1 year plus Candidate must know the... technology : PHP,MYSQL,JQUERY,AJAX,JAVA SCRIPT, HTML,BOOTSTRAP,CSS,MOBILE PAGE CREATION Monthly salary: Rs.20,000/- to Rs.25,000/- Preferred...
          Walkin for PHP Developer / Senior PHP Developer      Cache   Translate Page      
IndiaMART - Noida, Uttar Pradesh - using PHP, JavaScript, MySQL and AJAX. Key Responsibility Areas: Create, design and modify websites to suit the requirements of a client.... PHP Developers need to have a thorough knowledge of developing cross platforms that are compatible with Web, mobile Web applications. Sound...
          PHP Developers      Cache   Translate Page      
ABS Systems - Mohali, Punjab - Chandigarh - Maintain and manage clear plus complete documentation Desired Candidate Profile Must be proficient in PHP, MySQL, CSS, HTML, Javascript,\AJAX, XML...
          Team Lead PHP      Cache   Translate Page      
Myforexeye Fintech ( P) Ltd - Delhi - Currently handling a PHP team of not less than 5 people. Must have worked on AWS server-less architecture Must have worked on large scale... projects which have grown and scaled at a rapid pace Strong experience in PHP (Version 7+), MySQL, Clear on OOPs Concepts, HTML5, CSS3, Angular 2.0...
          PHP Developer - Noida Sector      Cache   Translate Page      
Faris Technology Private Limited - Noida, Uttar Pradesh - Engineers require the capacity to theme and create custom themes and plugins for WordPress and core PHP websites. As an engineer at Faris Technology... WooCommerce integrations are a plus but not required. Experience in PHP, MYSQL, Javascript, AJAX, Jquery, CSS/3, HTML/5, Bootstrap, Wordpress Theme...
          PHP Developer      Cache   Translate Page      
TIGI HR SOLUTION PVT. LTD - Surat, Gujarat - We have Urgent Requirement for the post of PHP Developer @ Surat Salary : Upto 25k Experience : 1-4 years... applications and web services using Core PHP At least 1 year of experience using Laravel framework Experience in RDBMS such as Sqlserver/ MySQL...
          Web Application Developer - Tech Talent Staffing - Aurora, ON      Cache   Translate Page      
Experience with PHP and MySQL. Software web design, software development, software testing.... $15 an hour
From Indeed - Fri, 31 Aug 2018 19:07:30 GMT - View all Aurora, ON jobs
          Web Developer - Hamilton Wentworth District School - Hamilton, ON      Cache   Translate Page      
Solid understanding of HTML 5, CSS, PHP, MySQL, IIS 7, Responsive Web Design. Human Resources Services.... $58,892 a year
From Hamilton Wentworth District School - Thu, 13 Sep 2018 18:05:45 GMT - View all Hamilton, ON jobs
          Programador API Google Calendario      Cache   Translate Page      
I need a programmer who can set up different calendars using the same API (Budget: $30 - $250 USD, Jobs: HTML, Javascript, MySQL, PHP, Software Architecture)
          Percona Toolkit 3.0.12 Is Now Available      Cache   Translate Page      
percona toolkit

Percona announces the release of Percona Toolkit 3.0.12 on September 13, 2018. Percona Toolkit is a collection of advanced open source command-line tools, developed and used by the Percona technical staff, that are engineered to perform a variety of MySQL®, MongoDB® and system tasks that are too difficult or complex to perform manually. With over 1,000,000 downloads, […]

The post Percona Toolkit 3.0.12 Is Now Available appeared first on Percona Database Performance Blog.


          Magento 2 SEO Link FIxes      Cache   Translate Page      
On a multi-store Magento 2 installation, for some reason the URL-key setting in the affected store was left unticked; as a result many product URLs are in a non-SEO structure. I have attached an image for reference. We are looking... (Budget: $14 - $30 NZD, Jobs: Javascript, Magento, MySQL)
          Website Designer at Kings Elite Media      Cache   Translate Page      
Kings Elite is a Lagos based Digital Marketing and Website Design Agency. We are looking forward to recruiting talented individuals that will work together as a team to deliver high quality Digital services to our clients. Our major goal is helping businesses utilize digital technology to improve their business and attract more sales.Job Description We are looking for a talented & creative web Designer to develop high quality websites The ideal candidate should have an eye for clean design and possess superior user interface design skills. The candidate must understand how to design graphics using Photoshop and other Adobe tools The right candidate will have prior experience bringing products to life and has demonstrated experience in designing usable web-based interfaces The ideal candidate must be a critical thinker with a strong design sense, a strong technical background, and an eye for making things better The individual is proactive, creative, collaborative, and passionate about design in an entrepreneurial environment Key Responsibilities Design websites for clients and also offer web related support Validate designs through rapid prototyping, user research, design review and collaboration with the design, development and product management team. 
Contribute directly to the development of products and features by providing front-end assets and deliverables Help maintain a consistent graphic system and visual language for our clients brand Assisting with the creation of a platform-wide style guide and its components Develop wireframes, rapid prototypes, user interfaces, and comp designs Design UI using front-end code in HTML, CSS, JQuery, and JavaScript to interface with backend code Use advanced CSS techniques to solve design issues Perform usability tests on interface design to insure cross-browser compatibility Use Test and Target to serve up segmented content and for A/B and multivariate testing Provide expert design guidance and recommendations for improving websites Collaborate with content, marketing, and programming teams The ideal candidate will be a hybrid, with strong graphic design creativity as well as front-end development skills, utilizing dynamic web programming experience. Qualifications 1-3 years of relevant work experience, including demonstrated experience in designing usable web-based interfaces preferred Dedicated, action-oriented, flexible and strong attention to detail Passion for website design Strong creative, design and interactive skills; strong communication and presentation skills Requirements, Education and Experience: Experience in designing websites with a capacity for simplifying complexity Proficiency in HTML, CSS, php/Mysql and JavaScript including responsive design techniques Experience with enterprise Content Management Systems (Wordpress and Joomla) A clear understanding of design-centered processes, proven methodologies for identifying and solving problems A diverse portfolio that exhibits excellent use of typography, color, imagery and graphic elements Proficiency in Photoshop, Illustrator, or other visual design and wire-framing tools Up-to-date with the latest Web trends, techniques, and technologies A strong understanding of brand development and multi-channel 
marketing concepts Ability to teach others about website design.
          JetBrains DataGrip 2018.2.3      Cache   Translate Page      

JetBrains DataGrip 2018.2.3 | 155 MB

DataGrip is the multi-engine database environment. It supports MySQL, PostgreSQL, Microsoft SQL Server, Oracle, Sybase, DB2, SQLite, HyperSQL, Apache Derby and H2. If the DBMS has a JDBC driver, you can connect to it via DataGrip. For any supported engine it provides database introspection and various tools to add, edit, and clone data rows. Navigate through the data by foreign keys and use the text search to find anything in the data displayed in the table editor.


          BE Programmer - 91bnb - El Monte, CA      Cache   Translate Page      
Familiar with using the MySQL database, with SQL performance-optimization experience. Assist quality-control staff in testing the functionality and performance of the...
From 91bnb - Thu, 13 Sep 2018 19:07:33 GMT - View all El Monte, CA jobs
          Android Developer - vikas Global Solutions Ltd - Madhavanpark, Karnataka      Cache   Translate Page      
Android Application Development, Android SDK’s, API, HTML,CSS, MYSQL. Android Application Development. Vikas Global Solutions Ltd.... ₹1,00,000 - ₹2,00,000 a year
From Indeed - Mon, 10 Sep 2018 05:26:38 GMT - View all Madhavanpark, Karnataka jobs
          PHP Job Board APIs save to MySQL      Cache   Translate Page      
• MUST HAVE CLEAN (reasonably) documented PHP and MYSQL code! • Need .php 7 script written to pull from various Job Board APIs (plenty of source code examples on the web, many from job api providers themselves)... (Budget: $30 - $250 USD, Jobs: JSON, MySQL, PHP)
          PHP Job Board APIs save to MySQL      Cache   Translate Page      
• MUST HAVE CLEAN (reasonably) documented PHP and MYSQL code! • Need .php 7 script written to pull from various Job Board APIs (plenty of source code examples on the web, many from job api providers themselves)... (Budget: $30 - $250 USD, Jobs: JSON, MySQL, PHP)
          CUSTOMER DATA      Cache   Translate Page      
I am a businessman. I have a finance business for which I need a software in which the following things should be covered. 1. Name date of birth Father's name Loan amount Loan date Rate Of Interest... (Budget: ₹1500 - ₹12500 INR, Jobs: HTML, Javascript, MySQL, PHP, Software Architecture)
          CUSTOMER DATA      Cache   Translate Page      
I am a businessman. I have a finance business for which I need a software in which the following things should be covered. 1. Name date of birth Father's name Loan amount Loan date Rate Of Interest... (Budget: ₹1500 - ₹12500 INR, Jobs: HTML, Javascript, MySQL, PHP, Software Architecture)
          Upgrading from 3.5.1 to 3.5.2      Cache   Translate Page      
by Paul Roberts.  

Get the following error messages as part of the upgrade:

database: mysql (10.1.35-MariaDB-cll-lve)

Wrong $CFG->dbtype: you need to change it in your config.php file, from 'mysql' to 'mariadb'

unsupported_db_table_row_format

Your database has tables using Antelope as the file format. You are recommended to convert the tables to the Barracuda file format. See the documentation Administration via command line for details of a tool for converting InnoDB tables to Barracuda.

mysql_full_unicode_support

The current setup of MySQL or MariaDB is using 'utf8'. This character set does not support four byte characters which include some emoji. Trying to use these characters will result in an error when updating a record, and any information being sent to the database will be lost. Please consider changing your settings to 'utf8mb4'. See the documentation for full details.


The system will not let me proceed and I just do not know what to do.







          Build an application in Java that reads from a database and compares data between tables      Cache   Translate Page      
This project is for building a Java application for a company providing solutions to financial institutions in Latin America. The IDE you will be working on will be anyone you are familiar with, including Eclipse, Spring or Intellij... (Budget: $3000 - $5000 USD, Jobs: Java, jQuery / Prototype, MySQL, Software Architecture, SQL)
          Modification of an application in Android Studio      Cache   Translate Page      
We need to modify a taxi-oriented application. We have the Android Studio code, which works with an app server and MySQL. Some modifications can be made directly in Android Studio; one modification in particular requires working with the application's database... (Budget: $30 - $250 USD, Jobs: Android, Java, Mobile App Development, MySQL, SQLite)
          Analytics Architect - GoDaddy - Kirkland, WA      Cache   Translate Page      
Implementation and tuning experience in the big data Ecosystem (Amazon EMR, Hadoop, Spark, R, Presto, Hive), database (Oracle, mysql, postgres, Microsoft SQL...
From GoDaddy - Tue, 07 Aug 2018 03:04:25 GMT - View all Kirkland, WA jobs
          Principal Data Architect - DBS Customer Advisory - Amazon.com - Seattle, WA      Cache   Translate Page      
Implementation and tuning experience in the Big Data Ecosystem, (EMR, Hadoop, Spark, R, Presto, Hive), Database (Oracle, mysql, postgres, MS SQL Server), NoSQL...
From Amazon.com - Wed, 12 Sep 2018 13:23:03 GMT - View all Seattle, WA jobs
          Principal Data Labs Solution Architect - Amazon.com - Seattle, WA      Cache   Translate Page      
Implementation and tuning experience in the Big Data Ecosystem, (such as EMR, Hadoop, Spark, R, Presto, Hive), Database (such as Oracle, MySQL, PostgreSQL, MS...
From Amazon.com - Fri, 07 Sep 2018 19:22:14 GMT - View all Seattle, WA jobs
          Facial Recognition      Cache   Translate Page      
Hello. We are seeking proposals for the development of a facial-recognition system using active cameras, for access control in condominiums. The system must identify residents previously registered in the system through facial recognition at the building entrance or the garage entrance... (Budget: $1500 - $3000 USD, Jobs: AJAX, HTML5, Javascript, jQuery / Prototype, MySQL)
          Technical Architect (Salesforce experience required) Casper - Silverline Jobs - Casper, WY      Cache   Translate Page      
Competency with Microsoft SQL Server, MYSQL, postgreSQL or Oracle. BA/BS in Computer Science, Mathematics, Engineering, or similar technical degree or...
From Silverline Jobs - Sat, 23 Jun 2018 06:15:28 GMT - View all Casper, WY jobs
          Internship in Web Development - Confidential Company - Porto Alegre, RS      Cache   Translate Page      
Professional activities: web development for sites and assistance with web systems development projects, testing, etc. Knowledge of PHP and MySQL, HTML, JavaScript and image editing. Experience and/or qualifications: knowledge of PHP and MySQL, HTML, JavaScript and image editing. Des...
          MongoDB Performance Tuning: Everything You Need to Know      Cache   Translate Page      

MongoDB is one of the most popular document databases. It’s the M in the MEAN stack (MongoDB, Express, Angular, and Node.js). Unlike relational databases such as MySQL or PostgreSQL, MongoDB uses JSON-like documents for storing data. MongoDB is free, open-source, and incredibly performant. However, just as with any other database, certain issues can cost MongoDB its edge and drag it down. In ...

Read More

The post MongoDB Performance Tuning: Everything You Need to Know appeared first on Stackify.


          Warning: mysqli_connect(): (HY000/2005): Unknown MySQL server host      Cache   Translate Page      
Hi, I have an error: Warning: mysqli_connect(): (HY000/2005): Unknown MySQL server host 'gator3206.hostgator.com:3306' (0) in /home3/vibrasyonfm/public_html/wp-content/db.php on line 358 (the same warning is repeated several more times) […]
          Need changes done to PHP Booked Scheduler website      Cache   Translate Page      
I need changes made to PHP Booked Scheduler. Mainly visual changes. Also, emails aren't being sent correctly. I also need to fix the front-page PHP, which is only showing the next few days. This is needed urgently (Budget: £20 - £250 GBP, Jobs: Graphic Design, HTML, MySQL, PHP, Website Design)
          Fix my adult website issues, just to start with; it's an ongoing project      Cache   Translate Page      
Hi. Fix my adult website issues, just to start with; it's an ongoing project (Budget: $10 - $30 USD, Jobs: HTML, MySQL, PHP, Website Design, WordPress)
          Is it possible that MySQL can not update some data?      Cache   Translate Page      

Ok, let's say we have two columns:

member_id, name

Let's say member_id = 15 and name = 'John';

I want to UPDATE this data and run the following query:

mysql_query("UPDATE members SET member_id = 14, name = 'Peter' WHERE member_id = 15");

This is just an example, but is it possible that mysql would fail and UPDATE, for example, only the name column? So, after completing the mysql_query above, it would end up as member_id = 15 and name = 'Peter';

It is just an example. Today, a similar situation happened on my website; I checked my code a hundred times and I see no errors, and there had been no such errors before at all.

So, should I recheck my code a hundred times more, or can this happen?

Thank you very much.

According to the spec, single UPDATE statements are atomic; i.e., either the statement updates all columns or it doesn't update any of them.

So no, it shouldn't happen. But of course there could be a bug with MySQL.
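This guarantee is easy to see with any engine that implements atomic statements. Below is a small sketch using Python's sqlite3 module (SQLite, not MySQL, but the same single-statement atomicity applies): a two-column UPDATE that hits a uniqueness error changes neither column, never just one.

```python
import sqlite3

# In-memory table mirroring the question: a unique member_id plus a name.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE members (member_id INTEGER UNIQUE, name TEXT)")
conn.executemany("INSERT INTO members VALUES (?, ?)",
                 [(14, "Anna"), (15, "John")])

# This UPDATE touches two columns but collides with the existing member_id 14,
# so it must fail as a unit: neither column may be changed.
try:
    conn.execute("UPDATE members SET member_id = 14, name = 'Peter' "
                 "WHERE member_id = 15")
except sqlite3.IntegrityError:
    pass

row = conn.execute("SELECT member_id, name FROM members "
                   "WHERE member_id = 15").fetchone()
print(row)  # (15, 'John') -- both columns kept, not just one
```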


          How do you remove duplicate records from a MySQL database?

An ordinary MySQL table may contain duplicate rows. Sometimes duplicates are acceptable and sometimes they are not, in which case you need to find and delete them. The concrete techniques follow.

Method 1: prevent duplicates from entering the table

Before any data is inserted, you can declare the relevant column as a PRIMARY KEY or UNIQUE index in the MySQL table to guarantee uniqueness.

For example, if the student number no in a student table must not repeat, make no the primary key and disallow NULL:

CREATE TABLE student (
  no   CHAR(12) NOT NULL,
  name CHAR(20),
  sex  CHAR(10),
  PRIMARY KEY (no)
);

Method 2: find, filter, and delete duplicates

For data already in the table, removing duplicates takes three steps: counting the duplicates, filtering them, and deleting them.

1. Count the duplicates

mysql> SELECT COUNT(*) AS repetitions, no
    -> FROM student
    -> GROUP BY no
    -> HAVING repetitions > 1;

This query returns the duplicated records in the student table.

2. Filter the duplicates

To read only distinct data, use the DISTINCT keyword in the SELECT statement:

mysql> SELECT DISTINCT no
    -> FROM student;

GROUP BY can also read the distinct data from the table:

mysql> SELECT no
    -> FROM student
    -> GROUP BY (no);

3. Delete the duplicates

To delete duplicate rows from the table, you can copy the de-duplicated data into a new table and swap it in:

mysql> CREATE TABLE tmp SELECT no, name, sex FROM student GROUP BY no, sex;
mysql> DROP TABLE student;
mysql> ALTER TABLE tmp RENAME TO student;

You can also delete the duplicate records by adding an INDEX or PRIMARY KEY with IGNORE:

mysql> ALTER IGNORE TABLE student
    -> ADD PRIMARY KEY (no);
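The count/filter/delete steps above can be sketched end to end. This uses Python's sqlite3 module as a stand-in for MySQL (the table and column names mirror the article's example); the swap-in trick works the same way.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE student (no TEXT, name TEXT, sex TEXT)")
conn.executemany("INSERT INTO student VALUES (?, ?, ?)", [
    ("001", "Zhang", "F"),
    ("001", "Zhang", "F"),   # duplicate student number
    ("002", "Li", "M"),
])

# Step 1: count duplicated numbers, as in the article's HAVING query.
dups = conn.execute("SELECT no, COUNT(*) AS repetitions FROM student "
                    "GROUP BY no HAVING COUNT(*) > 1").fetchall()
print(dups)                 # [('001', 2)]

# Step 3: copy de-duplicated rows into tmp and swap it in for the original.
conn.executescript("""
    CREATE TABLE tmp AS SELECT no, name, sex FROM student GROUP BY no;
    DROP TABLE student;
    ALTER TABLE tmp RENAME TO student;
""")
rows = conn.execute("SELECT no FROM student ORDER BY no").fetchall()
print(rows)                 # [('001',), ('002',)]
```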



          MySQL in depth: locks, transactions, and concurrency control

This post summarizes locking, transactions, and concurrency control in MySQL. Many articles online describe these topics very inaccurately; if you disagree with anything here, well-reasoned objections are welcome!

MySQL server logical architecture

[Figure: MySQL server logical architecture]

Each connection gets a thread on the MySQL server (managed internally by a thread pool). When, say, a SELECT statement arrives, MySQL first checks whether the query cache already holds this SELECT's result set; if not, it continues through parsing, optimization, and execution, otherwise it returns the result set straight from the cache.

MySQL concurrency control: shared and exclusive locks

Shared locks

A shared lock is also called a read lock. Read locks allow multiple connections to read the same resource concurrently at the same moment without interfering with each other.

Exclusive locks

An exclusive lock is also called a write lock. A write lock blocks other write or read locks, guaranteeing that only one connection can write at a time while preventing other users from reading or writing that data.

Lock strategies

Locks are fairly expensive. A lock strategy is really a balancing act between guaranteeing thread safety and obtaining maximum performance.

MySQL lock strategy: table lock

The table lock is MySQL's most basic strategy and the cheapest one; it locks the entire table.

Concretely: if a user is performing a write, it acquires an exclusive "write lock", which may lock the whole table and block other users' reads and writes; if a user is performing a read, it first acquires a shared "read lock", which allows other read locks to read the table concurrently without interference. As long as no write lock intervenes, read locks can read the same resource concurrently.

This typically happens with DDL statements and DML that does not use an index, e.g. the DML update table set columnA="A" where columnB="B". If the columnB column has no index (and is not a composite-index prefix), all records are locked, i.e. the table is locked. If the statement's execution can use an index on columnB, only the rows satisfying the WHERE clause are locked (row locks).

MySQL lock strategy: row lock

Row locks support concurrency to the greatest extent, and of course bring the greatest overhead; as the name implies, the lock granularity is the individual row of data.

Transactions

A transaction is a group of atomic SQL statements, or in other words an independent unit of work. That is, the MySQL engine either executes the whole group of SQL statements or executes none of them (for instance if one of the statements fails).

For example, Tim wants to transfer 100 to Bill:

1. check that Tim's account balance is greater than 100;

2. subtract 100 from Tim's account;

3. add 100 to Bill's account.

These three operations form one transaction and must run as a package: either all succeed or none execute; failure of any one of them causes all three to "not execute", i.e. to roll back.

CREATE DATABASE IF NOT EXISTS employees;
USE employees;
CREATE TABLE `employees`.`account` (
  `id` BIGINT (11) NOT NULL AUTO_INCREMENT,
  `p_name` VARCHAR (4),
  `p_money` DECIMAL (10, 2) NOT NULL DEFAULT 0,
  PRIMARY KEY (`id`)
);
INSERT INTO `employees`.`account` (`id`, `p_name`, `p_money`) VALUES ('1', 'tim', '200');
INSERT INTO `employees`.`account` (`id`, `p_name`, `p_money`) VALUES ('2', 'bill', '200');
START TRANSACTION;
SELECT p_money FROM account WHERE p_name="tim";              -- step 1
UPDATE account SET p_money=p_money-100 WHERE p_name="tim";   -- step 2
UPDATE account SET p_money=p_money+100 WHERE p_name="bill";  -- step 3
COMMIT;
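The transfer above can be modeled in code. This sketch uses Python's sqlite3 module (not MySQL) purely to show the all-or-nothing behavior: any failed step rolls back every step.

```python
import sqlite3

# isolation_level=None puts sqlite3 in autocommit mode, so we can issue
# BEGIN/COMMIT/ROLLBACK explicitly, mirroring START TRANSACTION ... COMMIT.
conn = sqlite3.connect(":memory:", isolation_level=None)
conn.execute("CREATE TABLE account (id INTEGER PRIMARY KEY, p_name TEXT, p_money NUMERIC)")
conn.executemany("INSERT INTO account VALUES (?, ?, ?)",
                 [(1, "tim", 200), (2, "bill", 200)])

def transfer(src, dst, amount):
    conn.execute("BEGIN")
    try:
        (balance,) = conn.execute("SELECT p_money FROM account WHERE p_name=?",
                                  (src,)).fetchone()                    # step 1
        if balance < amount:
            raise ValueError("insufficient funds")
        conn.execute("UPDATE account SET p_money = p_money - ? WHERE p_name=?",
                     (amount, src))                                     # step 2
        conn.execute("UPDATE account SET p_money = p_money + ? WHERE p_name=?",
                     (amount, dst))                                     # step 3
        conn.execute("COMMIT")
    except Exception:
        conn.execute("ROLLBACK")   # any failure undoes all three steps

transfer("tim", "bill", 100)       # succeeds
transfer("tim", "bill", 500)       # fails the balance check: rolled back
rows = conn.execute("SELECT p_name, p_money FROM account ORDER BY id").fetchall()
print(rows)  # [('tim', 100), ('bill', 300)]
```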

A sound transaction system must satisfy the ACID properties.

The ACID properties of transactions

A, atomicity: a transaction must guarantee that its operations either all execute or all roll back; it is impossible for only part of them to execute.

C, consistency: data must move from one consistent state to another consistent state.

For example, if the system crashed while executing step 2 of the transaction above, the data would not end up with one account changed and the other untouched. Either everything stays as it was (full rollback), or Tim is down 100 while Bill is up 100; only these two consistent states are possible.

I, isolation: until a transaction has finished executing, other sessions normally cannot see its results.

D, durability: once a transaction commits, the data is saved; even if the system crashes right after the commit, the data is not lost.

Isolation levels

[Figure: MySQL transaction isolation levels]

-- view the global isolation level
select @@global.tx_isolation;
-- view the current session's isolation level
select @@tx_isolation;
-- set the current session's isolation level
SET session TRANSACTION ISOLATION LEVEL serializable;
-- set the global isolation level
SET GLOBAL TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;

READ UNCOMMITTED (dirty reads possible): a transaction's changes are visible to other sessions even before they are committed.

Uncommitted data can be read: a dirty read. Dirty reads cause many problems, so this isolation level is generally not used.

Example:

-- ------------------------- READ UNCOMMITTED example ------------------------------
SET GLOBAL TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
-- Session A
START TRANSACTION;
SELECT * FROM USER;
UPDATE USER SET NAME="READ UNCOMMITTED";
-- commit;

-- Session B
SELECT * FROM USER;
-- Session B's console shows Session A's uncommitted changes,
-- visible from another session: the so-called dirty read
id  name
2   READ UNCOMMITTED
34  READ UNCOMMITTED

READ COMMITTED (non-repeatable reads, phantoms possible)

Most databases default to this isolation level (MySQL does not). It guarantees that until a transaction has fully succeeded (COMMIT has finished), the operations inside it are invisible to other sessions.

-- ------------------------- READ COMMITTED example ------------------------------
SET GLOBAL TRANSACTION ISOLATION LEVEL READ COMMITTED;
-- Session A
START TRANSACTION;
SELECT * FROM USER;
UPDATE USER SET NAME="READ COMMITTED";
-- COMMIT;

-- Session B
SELECT * FROM USER;
-- console output:
id  name
2   READ UNCOMMITTED
34  READ UNCOMMITTED
---------------------------------------------------
-- once Session A executes COMMIT, Session B gets:
id  name
2   READ COMMITTED
34  READ COMMITTED

This verifies that at the READ COMMITTED level, data modified before the COMMIT completes is invisible to other sessions and becomes visible only after COMMIT executes.

We can see that Session B's two queries returned different data.

The READ COMMITTED level solves the dirty read problem, but a session can still observe two inconsistent reads (because another session committed a transaction in between, changing the consistent state).

REPEATABLE READ

Executing the same read SQL several times within one transaction returns the same result.

This level solves the dirty read problem and the phantom read problem. This refers to InnoDB's RR level: InnoDB uses next-key locks to lock "current reads", locking the rows plus the insert positions where phantoms could appear, preventing newly inserted data from producing phantom rows.

Detailed analysis below.

SERIALIZABLE

The strongest isolation level. By locking every row the transaction reads and taking write locks for writes, it guarantees no phantom reads, but it causes many timeouts and heavy lock contention.

Multi-version concurrency control (MVCC)

MVCC (multiple version concurrency control) is a variant of row-level locking: it avoids locking entirely for plain reads, so its overhead is lower.

Implementations differ, but they generally provide non-blocking reads and, for write operations, lock only the necessary rows.

A consistent read (reading the snapshot): select * from table ...;
A current read (reading the actual persisted data): the special read operations below, plus insert/update/delete, are current reads; they operate on the current data and must take locks.

select * from table where ? lock in share mode;
select * from table where ? for update;
insert;
update;
delete;

Note: select ... from ... where ... (with no extra locking suffix) uses MVCC, guaranteeing a snapshot read (MySQL calls it a consistent read). A consistent read, or snapshot read, reads the snapshot of the data as it was before the current transaction began; updates made after this transaction started will not be read. Details under the description of Select below.

For locking reads (SELECT with FOR UPDATE, an exclusive lock, or LOCK IN SHARE MODE, a shared lock) and for update and delete statements, we must consider whether the statement is an equality lookup on a unique index.

Write locks: record lock, gap lock, next-key lock

Equality lookup using a unique index: e.g. where columnA="...". If the index on columnA is used, a row lock is taken on the records satisfying the WHERE clause (FOR UPDATE takes an exclusive lock, LOCK IN SHARE MODE a shared one; other write operations take exclusive locks). This is a row-level lock, the record lock.

Range query (using a non-unique index): e.g. (a range query) where columnA between 10 and 30. Other sessions become unable to insert any data after 10 (next-key lock), which solves the phantom read problem.

Here the next-key lock covers all the rows involved: next-key lock = record lock + gap lock. It locks not only the matching data but also the boundaries, thereby avoiding phantoms completely.

No index: table lock.

This typically happens with DDL statements and DML that does not use an index, e.g. the DML update table set columnA="A" where columnB="B".

If the columnB column has no index (and is not a composite-index prefix), all records are locked, i.e. the table is locked. If the statement's execution can use an index on columnB, the rows satisfying the WHERE clause are locked (row locks).
InnoDB's MVCC is usually implemented by storing two hidden columns after each row (actually three; the third is used for transaction rollback and omitted here): one holds the row's creation version number, the other holds the row's update version number (the version number when the data was last updated). The version number belongs to each transaction and increases monotonically.

This guarantees that InnoDB can read data correctly without taking locks on read operations.

MVCC: lock-free SELECT and version-number maintenance

Below, under MySQL's default REPEATABLE READ isolation level, let's look at the MVCC operations in detail:

Select (snapshot read; a snapshot read returns the data as it was before the current transaction):
a. InnoDB only returns rows whose creation version number is no later than the current version number. This guarantees the data read either committed before this transaction began (earlier than the current version) or was created within this transaction itself (equal to the current version).
b. The row's update version number must be either undefined or greater than the current version number (so the transaction can still read old data). This guarantees the transaction does not read rows that were updated after the current transaction began.

Note: this SELECT must not carry FOR UPDATE or LOCK IN SHARE MODE.

In short, only rows satisfying the following condition are returned, achieving the snapshot-read effect:

(row creation version <= current version) && (row update version == NULL || row update version > current version)
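The condition above can be written as a tiny predicate. This is a toy model of the visibility rule, not InnoDB's actual code:

```python
def visible(row_create_ver, row_update_ver, current_ver):
    """Snapshot-read visibility: the row was created no later than our
    transaction, and was not superseded by a newer version before we began."""
    return (row_create_ver <= current_ver and
            (row_update_ver is None or row_update_ver > current_ver))

# Transaction with version 5 reading:
assert visible(3, None, 5)       # committed before us, never updated: seen
assert visible(5, None, 5)       # created by our own transaction: seen
assert not visible(7, None, 5)   # inserted after we started: hidden
assert visible(2, 8, 5)          # old version of a row updated later: still seen
assert not visible(2, 4, 5)      # superseded before we started: hidden
```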

Insert

For each row newly inserted in this transaction, InnoDB stores the current transaction's version number as the row's creation version number.

Delete

For each deleted row, InnoDB stores the current transaction's version number as the row's deletion mark.

Update

Two rows result: the current version number is kept as the creation version number of the updated (new) row, and the current version number is also stored as the update version number of the old row.

current version --write--> new row's creation version && current version --write--> old row's update version

Dirty read vs. phantom read vs. non-repeatable read

Dirty read: another session reads a transaction's uncommitted, intermediate updates. While a transaction is accessing and modifying data that has not yet been committed to the database (COMMIT not executed), another session that reads this data reads dirty data, since it is uncommitted, and operations based on the dirty data may also be incorrect.

Non-repeatable read: simply put, data read within one transaction can change; READ COMMITTED is also called non-repeatable read.

In the same transaction, reading the same data multiple times returns different results. In other words, later reads can see update data committed by another session's transaction. Conversely, "repeatable read" guarantees that reading the same data multiple times within one transaction returns the same result, i.e. later reads cannot see updates committed by another session's transaction.

Phantom read: session T1 runs a query; session T2 then inserts a new row that happens to satisfy the condition of T1's query. T1 then retrieves from the table again using the same query, but this time sees the row transaction T2 just inserted. The new row is called a "phantom" because to T1 it appears out of nowhere.

InnoDB's RR level cannot completely avoid phantoms, as the following analysis shows.

---------------------------------- setup ----------------------------------
-- create the table
mysql> CREATE TABLE `t_bitfly` (
         `id` bigint(20) NOT NULL DEFAULT '0',
         `value` varchar(32) DEFAULT NULL,
         PRIMARY KEY (`id`)
       );
-- confirm the current isolation level is the default RR
mysql> select @@global.tx_isolation, @@tx_isolation;
+-----------------------+-----------------+
| @@global.tx_isolation | @@tx_isolation  |
+-----------------------+-----------------+
| REPEATABLE-READ       | REPEATABLE-READ |
+-----------------------+-----------------+
1 row in set (0.00 sec)
---------------------------------- start ----------------------------------
session A                                    | session B
mysql> START TRANSACTION;                    | mysql> START TRANSACTION;
mysql> SELECT * FROM test.t_bitfly;          | mysql> SELECT * FROM test.t_bitfly;
Empty set (0.00 sec)                         | Empty set (0.00 sec)
                                             | mysql> INSERT INTO t_bitfly VALUES (1, 'test');
mysql> SELECT * FROM test.t_bitfly;          |
Empty set (0.00 sec)                         |
                                             | mysql> commit;
mysql> SELECT * FROM test.t_bitfly;          |
Empty set (0.00 sec)                         |
-- the two reads returned the same data,     |
-- but that does not rule out a phantom;     |
-- watch:                                    |
mysql> INSERT INTO t_bitfly VALUES (1, 'test');
ERROR 1062 (23000): Duplicate entry '1' for key 'PRIMARY'
-- the table looks empty, yet the primary key is duplicated: the phantom appears!

How can the RR level be guaranteed to produce absolutely no phantoms?

Add FOR UPDATE (exclusive lock) or LOCK IN SHARE MODE (shared lock) to the SELECT ... WHERE statements used. This effectively locks the data that could produce phantoms, blocking writes to it.

It works because data writes (INSERT, UPDATE) must first acquire a write lock; since the phantom-prone range already holds a lock of some kind, another session can acquire the write lock only after the current session releases all the locks produced by its locking statements.

MySQL deadlocks

A deadlock is a circular wait chain: I wait for your resource while you wait for mine; both of us wait on each other, neither releases the resources it holds, and we wait endlessly.

For example:

-- Session A
START TRANSACTION;
UPDATE account SET p_money=p_money-100 WHERE p_name="tim";
UPDATE account SET p_money=p_money+100 WHERE p_name="bill";
COMMIT;

-- Session B
START TRANSACTION;
UPDATE account SET p_money=p_money+100 WHERE p_name="bill";
UPDATE account SET p_money=p_money-100 WHERE p_name="tim";
COMMIT;

When thread A executes its first statement, UPDATE account SET p_money=p_money-100 WHERE p_name="tim", it locks the row with p_name="tim" and then tries to acquire the p_name="bill" data.

At that very moment, thread B happens to execute its first statement, UPDATE account SET p_money=p_money+100 WHERE p_name="bill", locking the p_name="bill" data while trying to acquire the p_name="tim" data.

At this point the two threads have entered a deadlock: neither can acquire the resource it wants, and both enter an endless wait, until a timeout!
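One standard fix, acquiring the row locks in a fixed global order, can be demonstrated with two Python threads standing in for the two sessions. With ordered acquisition the circular wait described above cannot form, so both transfers always complete:

```python
import threading

lock_tim, lock_bill = threading.Lock(), threading.Lock()
locks = {"tim": lock_tim, "bill": lock_bill}
balances = {"tim": 200, "bill": 200}

def transfer(src, dst, amount):
    # Always lock in one global order (alphabetical by name), regardless of
    # the transfer direction, so no cycle of waits is possible.
    first, second = sorted((src, dst))
    with locks[first], locks[second]:
        balances[src] -= amount
        balances[dst] += amount

# The same opposite-direction pair that deadlocks without ordering:
t1 = threading.Thread(target=transfer, args=("tim", "bill", 100))
t2 = threading.Thread(target=transfer, args=("bill", "tim", 100))
t1.start(); t2.start(); t1.join(); t2.join()
print(balances)  # {'tim': 200, 'bill': 200} -- both finished, no deadlock
```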

innodb_lock_wait_timeout, rolling back a transaction on lock wait timeout: the intuitive approach is that when two transactions wait on each other and one has waited past a configured threshold, that transaction is rolled back so the other can continue executing. This is simple and effective; in InnoDB the parameter innodb_lock_wait_timeout sets the timeout.

Active deadlock detection with the wait-for graph algorithm: InnoDB also provides a wait-for graph algorithm for active deadlock detection. It is triggered whenever a lock request cannot be satisfied immediately and has to wait.

How to avoid deadlocks as much as possible

1) Access tables and rows in a fixed order. For example, with two data-updating transactions, transaction A updating rows in order 1, 2 while transaction B updates in order 2, 1 is more likely to cause a deadlock.

2) Split big transactions into small ones. Big transactions are more deadlock-prone; split them if the business allows it.

3) Within one transaction, try as far as possible to lock all the resources you need at once, reducing the probability of deadlock.

4) Lower the isolation level. If the business allows it, lowering the isolation level is a good option; for example, going from RR to RC avoids many deadlocks caused by gap locks.

5) Add sensible indexes to your tables. As seen above, without an index every row of the table gets a lock, and the probability of deadlock rises sharply.

Implicit vs. explicit locks

Implicit locks: all the locks described above, which need no extra locking syntax.

Explicit locks:

SELECT ... LOCK IN SHARE MODE;  -- shared lock
SELECT ... FOR UPDATE;          -- exclusive lock

Details as discussed above.

The following SQL shows the lock wait situation:

select * from information_schema.innodb_trx where trx_state="lock wait";
-- or
show engine innodb status;

Transactions in MySQL

show variables like "autocommit";
set autocommit=0;  -- 0 means autocommit is off
set autocommit=1;  -- 1 means autocommit is on

Autocommit (the MySQL default)

MySQL defaults to autocommit mode: every SQL statement is its own transaction, and no explicit transaction is needed.

If autocommit is off, every SQL statement implicitly opens a transaction, and the transaction is committed only when an explicit "commit" is executed.

          Unforeseen use case of my GTID work: replicating from AWS Aurora to Google Cloud ...

A colleague brought an article to my attention. I did not see it on Planet MySQL, where I get most of my MySQL news (or it did not catch my eye there). As it is interesting replication material, I think it is important to bring it to the attention of the MySQL Community, so I am writing this short post.

The surprising part for me is that it uses my 4-year-old work for online migration to GTID with MySQL 5.6. This is a completely unforeseen use case of my work, as I never thought my hack would be useful after Oracle included an online migration path to GTID in MySQL 5.7 (Percona did something similar for MySQL 5.6).

The thing that my 4-year-old hack allows, which is not provided by Oracle or Percona, is to generate GTIDs on the replication path. Both the Oracle and Percona migration paths only generate GTIDs on the master. In that respect, the Oracle and Percona migration paths allow migrating online to GTID, but both are all-in migrations: if something goes wrong in production because of GTIDs, you need to fix it quickly, and your production might be down in the meantime.

My work not only allows migrating online, it also allows experimenting with GTIDs. With the ANONYMOUS_IN-GTID_OUT intermediate master, part of the replication setup uses GTIDs while the rest (including the master) is still in legacy mode. This is a subtle but important difference. If GTIDs break something, only the subset of replication using them is broken, and the master with the non-GTID slaves keeps working. This is exactly what the people at YoungCapital needed to migrate from AWS Aurora to Google CloudSQL.

ANONYMOUS_IN-GTID_OUT allows to migrate

from AWS Aurora to Google CloudSQL

I have no experience with Aurora, so what I am writing here is my understanding from the YoungCapital article. What I gathered is that Aurora does not use GTIDs. But CloudSQL is only working with GTIDs. As the people at YoungCapital wanted to migrate from Aurora to CloudSQL, they were facing a challenging problem. To solve it, they used my hack; the details are in their post .

I did a follow-up post on my GTID migration path in 2015 (one year after the original work was published), but I never thought that it would still be useful 3 years later. As pointed out by Karst Hammer in the YoungCapital article, my feature request ( Bug#71543 ) about the ANONYMOUS_IN-GTID_OUT mode is still open (actually, it was reopened at the request of Simon Mudd). Maybe AIGO (this is the short name we gave internally at Booking.com to the ANONYMOUS_IN-GTID_OUT mode) will eventually land in a GA version of Oracle MySQL or Percona Server, so people needing it will not have to recompile MySQL.

This reminds me of other unforeseen use cases of my work. When I started working on Binlog Servers, it was to allow extreme replication scaling . As the work evolved, it became obvious that it could also be used for master failover . Then, after a Binlog Server talk at Percona Live, someone came to me suggesting using it for point-in-time recovery . Facebook also published related work and I am sure there is more about this out there...

This is the power of the MySQL Community: I find this very cool!

So if you solve an interesting problem, blog about it to allow others to build on your work. You do not have a blog, then use the Percona Community Blog .

And the MySQL Community European Conference - Percona Live

- is in two months in Frankfurt, join us there.


          Insert benchmark for MyISAM from MySQL 5.0 to 8.0

This post explains performance for the insert benchmark with MyISAM from mysql versions 5.0 to 8.0. The goal is to understand how performance has changed across releases. This is for an in-memory workload with an i3 NUC and i5 NUC. The i5 NUC is newer and faster.

tl;dr - from 5.0.96 to 8.0.3

Regressions are frequently larger on the i3 NUC than the i5 NUC. Maybe modern MySQL and the Core i3 NUC aren't great together, because regressions are also larger on the i3 NUC for InnoDB. The insert rate decreased by 16% on the i5 NUC and 20% on the i3 NUC. The query rate decreased by 6% on the i5 NUC and 10% on the i3 NUC for the test with 100 inserts/second.

tl;dr - from 5.6.35 to 8.0.3

Most of the drop in performance from 5.0 to 8.0 occurs between 5.6.35 and 8.0.3. The drop is similar for MyISAM and InnoDB. I assume the drop is from code above the storage engine. The insert rate decreased by 11% on the i5 NUC and 9% on the i3 NUC The query rate decreased by 8% on the i5 NUC and 9% on the i3 NUC for the test with 100 inserts/second

Configuration

The tests used MyISAM from upstream MySQL versions 5.0.96, 5.1.72, 5.5.51, 5.6.35, 5.7.17 and 8.0.3. All tests used jemalloc with mysqld. The i3 and i5 NUC servers are described here. The insert benchmark is described here. The my.cnf files are here for the i3 NUC and i5 NUC . The i5 NUC has more RAM, faster CPUs and faster storage than the i3 NUC. I tried to tune my.cnf for all engines including: disabled SSL for MySQL 5.7, used the same charset and collation. For all tests the binlog was enabled but fsync was disabled for the binlog and database redo log. I compiled all of the MySQL versions on the test servers and did not use binaries from upstream.

The database fits in RAM as the test table has ~10M rows. The i3 NUC has 8gb of RAM and the i5 NUC has 16gb. The insert benchmark loaded the table with 10M rows, then did a full scan of each index on the table (PK, 3 secondary, PK again), then two read-write tests. The first read-write test tries to insert 1000 rows/second with one client while the other client does short range scans as fast as possible. The second read-write test is similar except the insert rate limit is 100/second.

Results

All of the data for the tests is here .

Results: load

The graph below has the insert rate for each release relative to the rate for MyISAM in 5.0.96. There is a small regression over time in the insert rate. The loss from 5.7 to 8.0 is the largest. Fortunately, 8.0 is not GA yet and maybe this can be improved.


Insert benchmark for MyISAM from MySQL 5.0 to 8.0

Additional metrics help to explain performance. The metrics are explained here. The CPU overhead per insert (Mcpu/i) has increased with each release (more features == more instructions to execute) and that explains the decrease in the insert rate. Otherwise the metrics look good. The increase in Mcpu/i from 5.0.96 to 8.0.3 is 19% for the i3 NUC and 16% for the i5 NUC. This matches the change in the insert rate.

i3 NUC

IPS wKB/i Mcpu/i size(GB)

5.0.96 27174 1.01 1404 2.1

5.1.72 28249 1.06 1372 2.1

5.5.51 24691 1.08 1555 2.1

5.6.35 23866 1.11 1592 1.7

5.7.17 24691 0.94 1543 1.7

8.0.3 21645 0.93 1675 1.7


i5 NUC

IPS wKB/i Mcpu/i size(GB)

5.0.96 33113 0.64 1123 1.7

5.1.72 34843 0.59 1108 1.7

5.5.51 31348 0.61 1192 1.8

5.6.35 31250 0.70 1193 2.1

5.7.17 29674 0.65 1236 1.9

8.0.3 27701 0.65 1305 2.0

Results: scan

Below are tables that show the number of seconds for each full index scan: 1 is the PK, 2/3/4 are the secondary indexes and 5 is the PK again. The scan doesn't take long and the result is rounded to a whole number so the numbers aren't that useful. If there is a regression from 5.0 to 8.0 it isn't apparent in this result.

#seconds to scan an index, i3 NUC

1 2 3 4 5 2+3+4 engine

- - - - - ----- ------

2 2 3 5 2 10 5.0.96

2 3 2 6 1 11 5.1.72

2 2 3 5 2 10 5.5.51

2 3 3 6 2 12 5.6.35

2 3 3 5 2 11 5.7.17

2 3 3 6 2 12 8.0.3


#seconds to scan an index, i5 NUC

1 2 3 4 5 2+3+4 engine

- - - - - ----- ------

1 3 2 4 2 9 5.0.96

1 2 3 4 2 9 5.1.72

1 2 3 4 2 9 5.5.51

2 2 3 5 2 10 5.6.35

1 3 2 5 1 10 5.7.17

2 2 3 4 2 9 8.0.3

Results: read-write, 1000 inserts/second

This section has results for the read-write tests where the writer does 1000 inserts/second. The graph has the query rate relative to the rate for MySQL 5.0.96. The QPS regression from MySQL 5.0.96 to 8.0.3 is 13% for the i3 NUC and 8% for the i5 NUC. That is small, which is good.


Insert benchmark for MyISAM from MySQL 5.0 to 8.0

All of the engines were able to sustain the target insert rate on average (ips.av). The value is 999 rather than 1000 because of implementation artifacts. The 99th percentile insert rate is 998 which means there were few write stalls. Additional metrics help explain the performance and more detail on the metrics is here. The increase in the CPU overhead per query (CPU/q) is 17% for the i3 NUC and 9% for the i5 NUC. More CPU overhead probably explains the drop in QPS.

i3 NUC

IPS.av IPS.99 wKB/i QPS.av QPS.99 CPU/q engine

999 998 10.28 5158 4706 5160 5.0.96

999 998 9.97 5443 5028 4816 5.1.72

999 998 10.09 5192 4980 5107 5.5.51

999 998 10.01 4956 4757 5456 5.6.35

999 998 10.00 4672 4464 5792 5.7.17

999 998 9.93 4470 4306 6049 8.0.3


i5 NUC

IPS.av IPS.99 wKB/i QPS.av QPS.99 CPU/q engine

999 998 10.79 5764 5383 4574 5.0.96

999 998 10.79 6144 5702 4215 5.1.72

999 998 10.84 5850 5694 4465 5.5.51

999 998 10.67 5744 5551 4613 5.6.35

999 998 10.58 5481 5285 4824 5.7.17

999 998 10.64 5284 5094 4994 8.0.3

Results: read-write, 100 inserts/second

This section has results for the read-write tests where the writer does 100 inserts/second. The graph has the query rate relative to the rate for MySQL 5.0.96. The QPS regression from MySQL 5.0.96 to 8.0.3 is 10% on the i3 NUC and 6% on the i5 NUC. That is small, which is good.


Insert benchmark for MyISAM from MySQL 5.0 to 8.0

          [MySQL 5.7] Creating users, dropping users, and granting privileges
Create a user

CREATE USER 'username'@'host' IDENTIFIED BY 'password';

username is the user name.

host is where the user is allowed to log in from; "%" allows any source.

password is the login password and may be empty.

Examples:

CREATE USER 'test'@'localhost' IDENTIFIED BY '123456';

CREATE USER 'test2'@'%' IDENTIFIED BY '';

Drop a user

DROP USER 'username'@'host';

The parameters correspond to those of CREATE USER.

Grant privileges

GRANT privileges ON dbname.tablename TO 'username'@'host';

privileges is the privilege list; ALL grants all privileges.

dbname is the database name.

tablename is the table name.

Both of these parameters may be *.

Examples:

GRANT ALL PRIVILEGES ON *.* TO 'test'@'%';

GRANT SELECT,INSERT ON mydb.* TO 'test2'@'localhost';
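When provisioning many users, such statements can be assembled programmatically. This is a hypothetical Python helper (the function names are mine, not a MySQL API), shown only to illustrate the statement shapes; in real code, take care with quoting/escaping rather than interpolating raw strings.

```python
# Hypothetical helpers that assemble the statements shown above.
def create_user_sql(user, host, password=""):
    return f"CREATE USER '{user}'@'{host}' IDENTIFIED BY '{password}';"

def grant_sql(privileges, db, table, user, host):
    return f"GRANT {privileges} ON {db}.{table} TO '{user}'@'{host}';"

print(create_user_sql("test2", "%"))
# CREATE USER 'test2'@'%' IDENTIFIED BY '';
print(grant_sql("SELECT,INSERT", "mydb", "*", "test2", "localhost"))
# GRANT SELECT,INSERT ON mydb.* TO 'test2'@'localhost';
```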


          From theory to practice: dissecting MySQL query optimization
Preface

In the earlier article "From I/O to indexes" I discussed the role indexes play in database queries: they improve execution efficiency mainly by reducing the number of lookups, the root cause being a cut in I/O cost. This article works through a concrete MySQL optimization example, connecting queries with indexes to strengthen practical skills!

About MySQL

Query optimization is inseparable from the use of indexes. This article takes InnoDB, MySQL's commonly used engine, as its subject of study and briefly explains the B+-tree index structure InnoDB uses.

InnoDB's B+ tree

Suppose we create a table Student with primary key id:

CREATE TABLE `Student` (
  `id` int(16) NOT NULL AUTO_INCREMENT,
  `name` varchar(10) DEFAULT NULL,
  PRIMARY KEY (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=1 DEFAULT CHARSET=utf8;

and insert 12 rows:

insert into Student(id,name) values(1,'XiaoHong');
insert into Student(id,name) values(2,'XiaoMing');
insert into Student(id,name) values(3,'XiaoFang');
...
insert into Student(id,name) values(12,'XiaoYou');

The InnoDB engine now builds a B+-tree index structure keyed on the primary key id on its own, which we abstract as in the figure below:

[Figure: B+-tree organization of the Student table]

How should we understand the structure's shape in the figure?

Table data is first stored on disk in primary-key order. The leaf nodes in the figure hold the actual data of each row in the table, from which we can recognize that the table data itself is part of the primary-key index. As the next figure shows, each row is stored in order of its id:

[Figure: rows stored sequentially by primary key]

We set id to be an Int taking 4 bytes and the name column a fixed 10-byte Char, so each Student row occupies 14 bytes of disk space. In the ideal case, we can simplify to this picture: assume the first row (1, XiaoHong) in the figure is at disk address 0x01; then the second row (2, XiaoMing) is at disk address 0x0f (0x01 + 14 = 0x0f), and so on.

Non-leaf nodes hold index values and the corresponding pointers. We see the 12 rows in the figure split across five nodes (one non-leaf node and four leaves). In a real environment MySQL exploits the disk's block-wise reads by setting each disk block (also understandable as a page; usually 4KB, and InnoDB sets the page size to 16KB) to the size of one tree node, so the content produced by a single disk I/O fetches all the data of the corresponding node.

In a non-leaf node, the pointer to the left of each index value points to the node address of the data smaller than that index value, and the pointer to the right of the index value points to the node address of the data greater than or equal to that index value:

[Figure: pointer ranges around index value 4]

As the figure above shows, the node data reached via the left pointer of index value 4 must all be smaller than 4, and the node range reached via the corresponding right pointer must be greater than or equal to 4. Moreover, for a fixed number of index values, the B+ tree keeps the tree height as small as possible by requiring each non-leaf node to store as much data as it can; typically the number of index values in a non-leaf node must be at least (n-1)/2, where n is the maximum number of values one page block can hold. Under the construction assumed in the figure, we know each page block can hold at most three index values or three rows of data (in reality far more). Under this premise, if we keep inserting row data, a leaf node will first run out of space for new data, at which point the leaf splits to add a new leaf node and complete the save:

[Figure: leaf node split]

Imaginably, if we try to insert 2 more rows:

insert into Student(id,name) values(13,'XiaoRui');
insert into Student(id,name) values(14,'XiaoKe');

the tree finally takes the following shape:

[Figure: tree after inserting rows 13 and 14]

Since each non-leaf node holds at most 3 index values and the 4 corresponding pointers (the fan-out), the whole lookup has complexity O(log4 N), where N is the table's row count. For a Student table holding 1000 student rows, a lookup by id costs log4 1000 ≈ 5. Here, lookup complexity in the B+ tree can be understood directly as the tree's height: the non-leaf levels narrow the decision level by level until the page-block address holding the target data is located.
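The height estimate can be checked directly; a small Python sketch of the log4 N claim:

```python
import math

# Height of a B+ tree with fan-out 4, as in the toy structure above:
# the number of levels needed to narrow N rows down to one page.
def btree_height(rows, fanout):
    return math.ceil(math.log(rows, fanout))

print(btree_height(1000, 4))    # 5 lookups for the 1000-row Student table
print(btree_height(10**6, 4))   # 10 -- a million rows only adds 5 levels
```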

So an InnoDB table's data is organized through the primary-key index structure, with the rows stored in the leaf nodes: a B+-tree file organization. Locating a row through the primary key is therefore extremely efficient, which is why, whether or not you explicitly define a primary-key index when creating a table, the engine automatically creates one for the table internally and thereby builds the B+-tree file organization. In practice, when querying some data by primary key, the engine first walks the B+ tree to the address of the specific leaf node; since leaf nodes are sized to whole multiples of contiguous disk blocks, a fast I/O over contiguous addresses loads the entire node's contents into memory, where the node contents are then filtered to find the target data!

But the InnoDB engine also lets us build separate indexes on the table's other columns, the commonly mentioned secondary indexes. For instance, we can create the Student table like this:

CREATE TABLE `Student` (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `name` varchar(10) DEFAULT NULL,
  PRIMARY KEY (`id`),
  KEY `index_name` (`name`)
) ENGINE=InnoDB AUTO_INCREMENT=1 DEFAULT CHARSET=utf8;

and insert sample data:

insert into Student(id,name) values(1,'A');
insert into Student(id,name) values(2,'A');
insert into Student(id,name) values(3,'B');
...
insert into Student(id,name) values(12,'F');

What form does the index structure on the name column take? Straight to the figure:

[Figure: secondary index on name]

As you can see, a secondary index likewise builds a B+-tree index structure, except its leaf nodes store primary-key id values; non-numeric index values are ordered within the index structure according to the pre-configured collation's sorting rules, so for example name=A sorts before B under the corresponding collation.

Given the structure in the figure, suppose we perform:

select * from Student where name='A';

The secondary index is first used to locate leaf node 1, which is loaded into memory; the in-memory scan finds two matching primary keys, ids 1 and 2, and then those ids are looked up in the primary-key index to load the full rows!

Among secondary indexes, InnoDB also supports the composite-index form, combining several columns in order into one index. For example, we create this Student table:

CREATE TABLE `StudentTmp` (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `name` varchar(10) DEFAULT NULL,
  `age` int(11) DEFAULT NULL,
  PRIMARY KEY (`id`),
  KEY `index_name_age` (`name`,`age`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;

name and age together form one index, whose corresponding B+-tree structure looks like:

[Figure: composite index on (name, age)]

In this composite index, the leaf-node contents are first sorted by the name column, and among equal name values sorted by the age column, so a query using name and age as combined conditions can fully exploit the two columns' ordering relation to achieve multi-level index positioning.
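This two-level ordering is exactly tuple ordering; a Python sketch (the values are made up for illustration):

```python
# Composite (name, age) entries, sorted the way the index stores them:
# by name first, then by age among equal names.
rows = [("B", 30), ("A", 25), ("B", 20), ("A", 18)]
index_order = sorted(rows)          # Python tuple comparison = multi-level key
print(index_order)
# [('A', 18), ('A', 25), ('B', 20), ('B', 30)]

# An equality condition on the name prefix plus a range on age can then
# walk the ordered entries directly:
match = [r for r in index_order if r[0] == "A" and r[1] >= 20]
print(match)                        # [('A', 25)]
```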

Good; we won't dwell on further B+-tree details. All we need is to hold the rough shape of the index structure in mind, picture an index query as stepping level by level down a tree structure until the target data is located, and recognize that query complexity is linked to the number of pointers in the B+ tree's non-leaf nodes. This is very helpful for developing a sensitivity to query cost.

Understanding query cost with EXPLAIN

Having felt the efficiency boost the index construction brings, how do we learn, in practice, how each query SQL uses the indexes? The EXPLAIN method can help us judge before execution; its parameters are a great help especially for complex queries with multiple levels of nesting or joins.

Prefix the query SQL with the explain keyword to perform the plan analysis:

explain select id from Student where id=1;

Executing it gives the following result:

[Figure: EXPLAIN output for the simple query]

We see the result table has ten columns: id, table, select_type, ... each with a corresponding value. Next, let's get to know them step by step.

id: marks the execution order of the subqueries; the larger the value, the higher the execution priority.

The query above is just a simple one, so id only has a single 1. Now we add a subquery:

explain select name from Student where id=(select max(id) from Student);

which gives:

[Figure: EXPLAIN output with a subquery]

We can see two result rows, meaning this SQL has two query plans. The table column is used to name the table each query plan corresponds to, while the role of the id values is to tell us which query plan executes first.

table: names the table associated with the corresponding query plan.

In the id column example above, we noticed that the query plan with id=2 (select max(id) from Student) has an empty table name. That seems irregular; does this query plan involve no table operation? In the Extra column we find this note: Select tables optimized away. This statement tells us the engine optimized this query plan: an index-level optimization, like min/max operations or count(*), that need not wait until the execution phase to scan the table; the value may be saved in advance somewhere and read directly. One case I would guess at: since the id column itself belongs to the Student table's primary-key index, the engine itself keeps the values of min(id) and max(id) available in real time for queries, or obtains them by directly reading the first and last leaf nodes of the primary-key index tree, so query plans of this kind have great execution efficiency in practice.

select_type: marks the type of the query plan.

select_type mainly has the following different types:

SIMPLE: a simple SELECT, using no UNION or subquery.
PRIMARY: if the query contains any complex subpart, the outermost SELECT is marked PRIMARY.
UNION: the second or later SELECT statement in a UNION.
SUBQUERY: the first SELECT in a subquery.
DERIVED: a derived table (a subquery of the FROM clause).

For explain select id from Student where id=1;

select_type is SIMPLE, meaning this SQL is a query of the simplest form.

For explain select name from Student union select name from Course; we get:

[Figure: EXPLAIN output for the UNION query]

We see two query plans. The query on the outermost Student table is PRIMARY, meaning the statement is complex and contains other query plans; the contained plan is the Course query plan, whose select_type is UNION, which confirms the description of the UNION type above. Combining this with the meaning of the id column, we learn the engine executes the Course table plan first and the Student table plan afterward.

For explain select id,(select count(*) from Course) as count from Student; we get:

[Figure: EXPLAIN output for the scalar subquery]

This time there are again two query plans, but the difference is that we built a subquery statement on the Course table, so the corresponding select_type is SUBQUERY. From the ids we know this SQL will execute the Course table's query plan before executing the Student table's query plan.

For explain select name from (select name from Student where id=1) tb; we get:

[Figure: EXPLAIN output for the derived table]

The special point of this statement is that the subquery plan on the Student table is wrapped by an outer layer, so its select_type is DERIVED.

By this point, we recognize that an SQL statement is split during execution into one or more query plans, with a certain execution priority between the plans, and that select_type neatly defines the form in which different plans exist. This allows us to decompose complex SQL structurally and analyze the different query plans one by one, ultimately completing the overall optimization.

Next we start to focus on the other key columns of the EXPLAIN analysis table:

type
possible_keys: the indexes the query plan might use
key: the index the query plan actually adopts
rows: the query complexity, which can also be simply understood as the number of rows the query plan must process

These columns are tightly linked to indexes and will provide the real reference for analyzing query cost; through them we can judge index usage very well.

type: the retrieval method used when querying data from the table.

type indicates whether the query plan uses an index structure, and the specific characteristics of the retrieval. The specific categories are:

ALL: no index used; MySQL will traverse the whole table to find matching rows.
index: uses only the index structure; in InnoDB this can be understood as a full scan over the B+ tree, without operating on the table directly.
range: retrieves only rows in a given range, using an index to select the rows.
ref: retrieval through an index, except the index is non-unique, so several records with the same value may be retrieved.
eq_ref: like ref, the difference being that the index used is unique; for each index key value, only one record in the table matches. Simply put, a multi-table join that uses a primary key or unique key as the join condition.
const, system: used when MySQL optimizes part of the query and converts it into a constant. For instance, placing the primary key in the WHERE list lets MySQL turn the query into a constant; system is a special case of const, used when the queried table has only one row.
NULL: MySQL decomposes the statement during optimization and at execution need not even access the table or index, e.g. selecting the minimum value from an indexed column can be completed by a single index lookup.

For explain select name from Student where name='学生1'; we get:

[Figure: EXPLAIN output for the name lookup]
          Data backup and recovery across MySQL storage engines

The purpose of data backup is direct and simple: to avoid devastating losses from unpredictable, accidental events; the more important the data and the more frequently it changes, the more it needs backing up. Having previously given a rough overview of data backup using MySQL as the example, this article again uses MySQL to discuss how to back up and recover data in the face of different storage engines.

To handle different data-processing needs, MySQL provides more than a dozen different storage engines. There is no need to learn them one by one, though, because as anyone familiar with MySQL knows, the commonly used engines are two: MyISAM and InnoDB.

MyISAM is MySQL's extended ISAM format and its historical default engine. It supports neither transactions nor foreign keys, but its advantage is fast access; tables for applications with no transactional-integrity requirements, dominated by SELECT and INSERT, can basically be built with this engine. It is commonly used for read-heavy application databases and supports three different storage formats: static, dynamic, and compressed.

InnoDB provides transaction safety with commit, rollback, and crash-recovery capability, supports auto-increment columns, and supports foreign-key constraints. Compared with the MyISAM engine, InnoDB's write processing is somewhat less efficient, and it occupies more disk space to retain data and indexes.

[Figure: MySQL storage engine backup and recovery]

Having covered MySQL's two common storage engines, let's see how to back up and recover data in each.

MyISAM backup

Because MyISAM stores tables as files, many backup methods are available; and since most virtual-host and internet-platform providers only allow the MyISAM format, mastering MyISAM data backup is especially important.

Method 1: file copy

To keep the backup consistent, we can run LOCK TABLES on the relevant tables and FLUSH TABLES on them. Of course, you only need to block write operations, which guarantees that while copying the data, other operations can still query the tables; FLUSH TABLES is used to ensure that all active index pages are written to disk before the backup starts.

Standard flow: lock the tables, flush the tables to disk, copy the files, unlock.

Method 2: SQL-statement backup

SELECT INTO ... OUTFILE and BACKUP TABLE can both perform SQL-level table backups. Note that with both methods, if same-named files exist, it is best to remove them first. Also, the BACKUP TABLE backup requires attention to the output directory's permissions, and the method only backs up the MYD and frm files; it does not back up the indexes.

Method 3: mysqlhotcopy backup

mysqlhotcopy is a Perl script that uses LOCK TABLES, FLUSH TABLES, and cp or scp to back up databases quickly, but it can only run on the machine where the database directory resides, and it is only used to back up MyISAM.

shell> mysqlhotcopy db_name [/path/to/new_directory]
shell> mysqlhotcopy db_name_1 ... db_name_n /path/to/new_directory

Method 4: mysqldump backup

mysqldump can back up both table structure and data; it can back up a single table, a single database, or all databases, and its output is a file of SQL statements or another database-compatible format. A previous article introduced mysqldump in some detail, so this article won't repeat it.

shell> mysqldump [options] db_name [tables]
shell> mysqldump [options] --databases DB1 [DB2 DB3...]
shell> mysqldump [options] --all-databases

Method 5: cold backup

The cold-backup method is simple and blunt: with the MySQL server stopped, copy all the table files.

MyISAM backup recovery

Each backup method has its corresponding recovery method:

For a mysqldump backup, the recovery method is mysql -u root < backup_file.
For a mysqlhotcopy or cold/hot file-copy backup, the recovery method is to stop the MySQL service and overwrite the existing files with the backup files.
For a BACKUP TABLE backup, use RESTORE TABLE to recover.
For a SELECT INTO ... OUTFILE backup, use LOAD DATA or the mysqlimport command to recover the data.

InnoDB backup

MyISAM's lack of transactions and foreign keys often presents MySQL users with challenges, so naturally, when InnoDB arrived with transaction and foreign-key support, it won MySQL users' favor even though it is slower.

Method 1: mysqldump

Looks familiar? Right: it is also one of the MyISAM backup methods above. mysqldump can likewise provide a non-physical, online logical hot backup for InnoDB, used similarly to MyISAM.

Method 2: file copy

At the storage layer, InnoDB keeps data and metadata in the ibdata*, *.ibd, *.frm, and *.ib_logfile* files, so backing up these files is equivalent to backing up the InnoDB data.

Method 3: SELECT INTO

Same usage as with MyISAM.

Method 4: commercial tools

Many commercial tools are available for InnoDB backup, for example InnoDB Hotbackup, an online backup tool that can back up an InnoDB database while it is running; and ibbackup, which backs up the data contents pointed to by the live my.cnf into the data directory pointed to by my.backup.cnf.

InnoDB backup recovery

Before turning to method-specific recovery, InnoDB has two generic recovery options: InnoDB's automatic log recovery, i.e. restarting the mysql service, and the "universal fix": rebooting the computer.

For a mysqldump full backup, first restore the full backup, then restore the incremental log backups taken after it.
For a SELECT INTO table backup, use LOAD DATA or mysqlimport to recover.
For a file copy, stop the MySQL service, overwrite the current files with the backup files, and replay the incremental log backups taken since the last full backup.
          MySQL: Calculate the free space in IBD files

If you use mysql with InnoDB, chances are you've seen growing IBD data files. Those are the files that actually hold your data within MySQL. By default, they only grow -- they don't shrink. So how do you know if you still have free space left in your IBD files?

There's a query you can use to determine that:

SELECT round((data_length+index_length)/1024/1024,2)
FROM information_schema.tables
WHERE
table_schema='zabbix'
AND table_name='history_text';

The above will check a database called zabbix for a table called history_text . The result will be the size that MySQL has " in use " in that file. If that returns 5.000 (five thousand MB) as a value, you have 5GB of data in there.

In my example, it showed the data size to be 16GB. But the actual IBD file was over 50GB large.

$ ls -alh history_text.ibd
-rw-r----- 1 mysql mysql 52G Sep 10 15:26 history_text.ibd

In this example I had 36GB of wasted space on the disk (52GB according to the OS, 16GB in use by MySQL). If you run MySQL with innodb_file_per_table=ON , you can individually shrink the IBD files. One way is to run an OPTIMIZE query on that table.
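The arithmetic above, as a trivial check (sizes in GB, taken from the numbers in this post):

```python
# Wasted space = OS file size minus the data+index size reported by
# information_schema, both expressed in GB here.
def wasted_gb(file_size_gb, in_use_gb):
    return file_size_gb - in_use_gb

print(wasted_gb(52, 16))                    # 36 GB reclaimable before OPTIMIZE
print(round((52 - 11) / 52 * 100))          # 79 (% of the file reclaimed here)
```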

Note: this can be a blocking operation, depending on your MySQL version. WRITE and READ I/O can be blocked to the table for the duration of the OPTIMIZE query.

MariaDB [zabbix]> OPTIMIZE TABLE history_text;
Stage: 1 of 1 'altering table' 93.7% of stage done
Stage: 1 of 1 'altering table' 100% of stage done
+---------------------+----------+----------+-------------------------------------------------------------------+
| Table | Op | Msg_type | Msg_text |
+---------------------+----------+----------+-------------------------------------------------------------------+
| zabbix.history_text | optimize | note | Table does not support optimize, doing recreate + analyze instead |
| zabbix.history_text | optimize | status | OK |
+---------------------+----------+----------+-------------------------------------------------------------------+
2 rows in set (55 min 37.37 sec)

The result is quite a big file size savings:

$ ls -alh history_text.ibd
-rw-rw---- 1 mysql mysql 11G Sep 10 16:27 history_text.ibd

The file that was previously 52GB in size, is now just 11GB.


          How MySQL Records Slow Queries: Principles and Output Explained

These are my study notes; they may contain mistakes, so bear with me. I also list some source-code entry points for anyone interested in debugging.

Source version: Percona Server 5.7.14

This article does not explain how to enable the slow query log; it only dissects the important parts. For enabling it, refer to the official documentation:

5.4.5 The Slow Query Log

This article uses a Percona build with the log_slow_verbosity parameter enabled, which yields more detailed slow-log entries. The log normally contains less information, but always a subset of what is shown here; the meaning of this Percona parameter is also explained below.

I. Time in the slow query log

The times in the slow query log are wall-clock times obtained from the operating system. On Linux, the time is fetched like this:

while (gettimeofday(&t, NULL) != 0)
{}
newtime= (ulonglong)t.tv_sec * 1000000 + t.tv_usec;
return newtime;

In other words, the time comes from the OS API gettimeofday().

II. Criteria for recording a slow query

long_query_time: record the statement as a slow query if its execution time exceeds this value.

log_queries_not_using_indexes: record the statement as a slow query if it does not use an index.

log_slow_admin_statements: whether to record administrative statements (such as ALTER TABLE, ANALYZE TABLE, CHECK TABLE, CREATE INDEX, DROP INDEX, OPTIMIZE TABLE, and REPAIR TABLE).

This article mainly discusses the meaning of the long_query_time parameter.

III. What the long_query_time parameter actually means

If we define a statement's elapsed time as:

total elapsed time = execution time + lock wait time

then long_query_time actually bounds the execution time. So a statement with a very long total elapsed time may still not be recorded as a slow query, if most of that time was spent waiting for locks.

Let's look at a snippet of the log_slow_applicable function:

res= cur_utime - thd->utime_after_lock;
if (res > thd->variables.long_query_time)
  thd->server_status|= SERVER_QUERY_WAS_SLOW;
else
  thd->server_status&= ~SERVER_QUERY_WAS_SLOW;

This confirms the point above: this is the function that decides whether a statement counts as a slow query, which makes it very important. We can clearly see the formula:

res (execution time) = cur_utime (total elapsed time) - thd->utime_after_lock (lock wait time)

What the slow log actually records is:

Query_time: total elapsed time

Lock_time: lock wait time

but the criterion for being a slow query is the execution time, i.e. Query_time - Lock_time.
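To make the criterion concrete, here is a hedged sketch of the same decision in Python (the names are mine, not MySQL's; times are in microseconds, mirroring the source's utime values):

```python
def is_slow_query(total_elapsed_us, lock_wait_us, long_query_time_us):
    """Mirror the logic of log_slow_applicable: only the execution
    time (total elapsed minus lock wait) is compared against
    long_query_time."""
    execution_us = total_elapsed_us - lock_wait_us
    return execution_us > long_query_time_us

# 12 s elapsed, but 11 s of it was spent waiting for locks:
# with long_query_time = 2 s, this statement is NOT logged.
print(is_slow_query(12_000_000, 11_000_000, 2_000_000))  # → False
print(is_slow_query(12_000_000, 1_000_000, 2_000_000))   # → True
```

This is why a statement blocked for minutes on a metadata lock can be absent from the slow log even though it "felt" slow to the client.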

The lock wait time (Lock_time) includes, as far as I currently know:

MDL lock waits at the MySQL layer (Waiting for table metadata lock).

MyISAM table lock waits at the MySQL layer (Waiting for table level lock).

Row lock waits at the InnoDB layer.

IV. How MySQL records the lock time

As the formula shows, recording utime_after_lock (the lock wait time, Lock_time) is the key to the whole calculation, so let's try to debug it.

1. How the MySQL layer records utime_after_lock

Both MDL lock waits and MyISAM table lock waits are recorded at the MySQL layer. The recording is simply done by THD::set_time_after_lock, which is called at the end of mysql_lock_tables:

void set_time_after_lock()
{
  utime_after_lock= my_micro_time();
  MYSQL_SET_STATEMENT_LOCK_TIME(m_statement_psi,
                                (utime_after_lock - start_utime));
}

So all the time spent before the end of mysql_lock_tables is counted in utime_after_lock, which is not entirely precise. Still, both MDL lock acquisition and MyISAM table lock acquisition are included, which is why even SELECT statements show a non-zero Lock_time. Here is the stack trace:

#0  THD::set_time_after_lock (this=0x7fff28012820) at /root/mysql5.7.14/percona-server-5.7.14-7/sql/sql_class.h:3414
#1  0x0000000001760d6d in mysql_lock_tables (thd=0x7fff28012820, tables=0x7fff28c16b58, count=1, flags=0) at /root/mysql5.7.14/percona-server-5.7.14-7/sql/lock.cc:366
#2  0x000000000151dc1a in lock_tables (thd=0x7fff28012820, tables=0x7fff28c165b0, count=1, flags=0) at /root/mysql5.7.14/percona-server-5.7.14-7/sql/sql_base.cc:6700
#3  0x00000000017c4234 in Sql_cmd_delete::mysql_delete (this=0x7fff28c16b50, thd=0x7fff28012820, limit=18446744073709551615) at /root/mysql5.7.14/percona-server-5.7.14-7/sql/sql_delete.cc:136
#4  0x00000000017c84ba in Sql_cmd_delete::execute (this=0x7fff28c16b50, thd=0x7fff28012820) at /root/mysql5.7.14/percona-server-5.7.14-7/sql/sql_delete.cc:1389
#5  0x00000000015a7814 in mysql_execute_command (thd=0x7fff28012820, first_level=true) at /root/mysql5.7.14/percona-server-5.7.14-7/sql/sql_parse.cc:3729
#6  0x00000000015adcd6 in mysql_parse (thd=0x7fff28012820, parser_state=0x7ffff035b600) at /root/mysql5.7.14/percona-server-5.7.14-7/sql/sql_parse.cc:5836
#7  0x00000000015a1b95 in dispatch_command (thd=0x7fff28012820, com_data=0x7ffff035bd70, command=COM_QUERY) at /root/mysql5.7.14/percona-server-5.7.14-7/sql/sql_parse.cc:1447
#8  0x00000000015a09c6 in do_command (thd=0x7fff28012820) at /root/mysql5.7.14/percona-server-5.7.14-7/sql/sql_parse.cc:1010

2. How the InnoDB layer records row-lock waits in utime_after_lock

The InnoDB engine layer records this through thd_set_lock_wait_time, which calls thd_storage_lock_wait. The stack trace is:

#0  thd_storage_lock_wait (thd=0x7fff2c000bc0, value=9503561) at /root/mysql5.7.14/percona-server-5.7.14-7/sql/sql_class.cc:798
#1  0x00000000019a4b2a in thd_set_lock_wait_time (thd=0x7fff2c000bc0, value=9503561) at /root/mysql5.7.14/percona-server-5.7.14-7/storage/innobase/handler/ha_innodb.cc:1784
#2  0x0000000001a4b50f in lock_wait_suspend_thread (thr=0x7fff2c088200) at /root/mysql5.7.14/percona-server-5.7.14-7/storage/innobase/lock/lock0wait.cc:363
#3  0x0000000001b0ec9b in row_mysql_handle_errors (new_err=0x7ffff0317d54, trx=0x7ffff2f2e5d0, thr=0x7fff2c088200, savept=0x0) at /root/mysql5.7.14/percona-server-5.7.14-7/storage/innobase/row/row0mysql.cc:772
#4  0x0000000001b4fe61 in row_search_mvcc (buf=0x7fff2c087640 "\377", mode=PAGE_CUR_G, prebuilt=0x7fff2c087ac0, match_mode=0, direction=0) at /root/mysql5.7.14/percona-server-5.7.14-7/storage/innobase/row/row0sel.cc:5940
#5  0x00000000019b3051 in ha_innobase::index_read (this=0x7fff2c087100, buf=0x7fff2c087640 "\377", key_ptr=0x0, key_len=0, find_flag=HA_READ_AFTER_KEY) at /root/mysql5.7.14/percona-server-5.7.14-7/storage/innobase/handler/ha_innodb.cc:9104
#6  0x00000000019b4374 in ha_innobase::index_first (this=0x7fff2c087100, buf=0x7fff2c087640 "\377") at /root/mysql5.7.14/percona-server-5.7.14-7/storage/innobase/handler/ha_innodb.cc:9551
#7  0x00000000019b462c in ha_innobase::rnd_next (this=0x7fff2c087100, buf=0x7fff2c087640 "\377") at /root/mysql5.7.14/percona-server-5.7.14-7/storage/innobase/handler/ha_innodb.cc:9656
#8  0x0000000000f66f1b in handler::ha_rnd_next (this=0x7fff2c087100, buf=0x7fff2c087640 "\377") at /root/mysql5.7.14/percona-server-5.7.14-7/sql/handler.cc:3099
#9  0x00000000014c61b6 in rr_sequential (info=0x7ffff03189e0) at /root/mysql5.7.14/percona-server-5.7.14-7/sql/records.cc:520
#10 0x00000000017c56c3 in Sql_cmd_delete::mysql_delete (this=0x7fff2c006ae8, thd=0x7fff2c000bc0, limit=1) at /root/mysql5.7.14/percona-server-5.7.14-7/sql/sql_delete.cc:454
#11 0x00000000017c84ba in Sql_cmd_delete::execute (this=0x7fff2c006ae8, thd=0x7fff2c000bc0) at /root/mysql5.7.14/percona-server-5.7.14-7/sql/sql_delete.cc:1389

The function itself is very simple; it just accumulates the value:

void thd_storage_lock_wait(THD *thd, long long value)
{
  thd->utime_after_lock+= value;
}

V. Percona's log_slow_verbosity parameter

Here is Percona's explanation:

Specifies how much information to include in your slow log. The value is a comma-delimited string, and can contain any combination of the following values:

microtime: Log queries with microsecond precision (mandatory).

query_plan: Log information about the query’s execution plan (optional).

innodb: Log InnoDB statistics (optional).

minimal: Equivalent to enabling just microtime.

standard: Equivalent to enabling microtime,innodb.

full: Equivalent to all other values OR’ed together.

In short, on Percona Server you can set this parameter to obtain more detailed information, roughly in this format:

# Time: 2018-05-30T09:30:12.039775Z
# User@Host: root[root] @ localhost []  Id: 10
# Schema: test  Last_errno: 1317  Killed: 0
# Query_time: 19.254508  Lock_time: 0.001043  Rows_sent: 0  Rows_examined: 0  Rows_affected: 0
# Bytes_sent: 44  Tmp_tables: 0  Tmp_disk_tables: 0  Tmp_table_sizes: 0
# InnoDB_trx_id: 0
# QC_Hit: No  Full_scan: No  Full_join: No  Tmp_table: No  Tmp_table_on_disk: No
# Filesort: No  Filesort_on_disk: No  Merge_passes: 0
# InnoDB_IO_r_ops: 0  InnoDB_IO_r_bytes: 0  InnoDB_IO_r_wait: 0.000000
# InnoDB_rec_lock_wait: 0.000000  InnoDB_queue_wait: 0.000000
# InnoDB_pages_distinct: 0
SET timestamp=1527672612;
select count(*) from z1 limit 1;

VI. Detailed explanation of the output

This section explains the output in detail. All of the slow-log output is produced by the function File_query_log::write_slow; interested readers can study it themselves. Below I give the source of each field and its meaning; some of the meanings are quoted from comments in the source code.

1. Part one: the time

# Time: 2018-05-30T09:30:12.039775Z

The corresponding code:

my_snprintf(buff, sizeof buff,"# Time: %s\n", my_timestamp);

where my_timestamp comes from

thd->current_utime();

which is, in effect:

while (gettimeofday(&t, NULL) != 0)
{}
newtime= (ulonglong)t.tv_sec * 1000000 + t.tv_usec;
return newtime;

As you can see, it is simply the current system time obtained via the gettimeofday system call.

Note:

In 5.6 there is one additional check:

if (current_time != last_time)

If two consecutive entries fall within the same second, the time line is not printed, and the time can only be inferred from the

SET timestamp=1527753496;

line described later. I did not find this code in 5.7.14.

2. Part two: user information

# User@Host: root[root] @ localhost [] Id: 10

The corresponding code:

buff_len= my_snprintf(buff, 32, "%5u", thd->thread_id());
if (my_b_printf(&log_file, "# User@Host: %s  Id: %s\n",
                user_host, buff) == (uint) -1)
  goto err;

user_host is a string; see this code:

size_t user_host_len= (strxnmov(user_host_buff, MAX_USER_HOST_SIZE,
                                sctx->priv_user().str ?
                                  sctx->priv_user().str : "", "[",
                                sctx_user.length ? sctx_user.str :
                                  (thd->slave_thread ? "SQL_SLAVE" : ""),
                                "] @ ",
                                sctx_host.length ? sctx_host.str : "",
                                " [", sctx_ip.length ? sctx_ip.str : "",
                                "]", NullS) - user_host_buff);

Explained as follows:

root: m_priv_user - the user privilege we are using. May be "" for an anonymous user.

[root]: m_user - the user of the client; set to NULL until the user has been read from the connection.

localhost: m_host - the host of the client.

[]: m_ip - the client IP.

Id: 10: thd->thread_id(), which is the Id shown by SHOW PROCESSLIST.

3. Part three: schema and related information

# Schema: test Last_errno: 1317 Killed: 0

The corresponding code:

"# Schema: %s  Last_errno: %u  Killed: %u\n"
(thd->db().str ? thd->db().str : ""),
thd->last_errno, (uint) thd->killed,

Schema:

m_db - Name of the current (default) database. If there is a current (default) database, "db" contains its name. If there is no current (default) database, "db" is NULL and "db_length" is 0. In other words, "db" must either be NULL or contain a valid database name.

Last_errno:

Variable last_errno contains the last error/warning acquired during query execution.

Killed: this is the error code of the termination. From the source:

enum killed_state
{
  NOT_KILLED=0,
  KILL_BAD_DATA=1,
  KILL_CONNECTION=ER_SERVER_SHUTDOWN,
  KILL_QUERY=ER_QUERY_INTERRUPTED,
  KILL_TIMEOUT=ER_QUERY_TIMEOUT,
  KILLED_NO_VALUE      /* means neither of the states */
};

The corresponding error codes are:

{ "ER_SERVER_SHUTDOWN", 1053, "Server shutdown in progress" },

{ "ER_QUERY_INTERRUPTED", 1317, "Query execution was interrupted" },

{ "ER_QUERY_TIMEOUT", 1886, "Query execution was interrupted, max_statement_time exceeded" },
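As an illustration only, the Killed value from the log can be decoded with a small lookup table. The mapping below is hand-copied from the error codes above, not generated from the source:

```python
# Map killed_state error codes (the slow log's "Killed:" field)
# to their MySQL error names and messages; copied from the table above.
KILLED_CODES = {
    0: ("NOT_KILLED", "not killed"),
    1: ("KILL_BAD_DATA", "bad data"),
    1053: ("ER_SERVER_SHUTDOWN", "Server shutdown in progress"),
    1317: ("ER_QUERY_INTERRUPTED", "Query execution was interrupted"),
    1886: ("ER_QUERY_TIMEOUT",
           "Query execution was interrupted, max_statement_time exceeded"),
}

def describe_killed(code):
    name, message = KILLED_CODES.get(code, ("UNKNOWN", "unknown code"))
    return f"{name}: {message}"

print(describe_killed(1317))  # → ER_QUERY_INTERRUPTED: Query execution was interrupted
```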

4. Part four: execution information

This is probably the part most readers care about; much of it is also present in the default output.

# Query_time: 19.254508  Lock_time: 0.001043  Rows_sent: 0  Rows_examined: 0  Rows_affected: 0
# Bytes_sent: 44  Tmp_tables: 0  Tmp_disk_tables: 0  Tmp_table_sizes: 0
# InnoDB_trx_id: 0

The corresponding code:

my_b_printf(&log_file,
            "# Schema: %s  Last_errno: %u  Killed: %u\n"
            "# Query_time: %s  Lock_time: %s  Rows_sent: %llu"
            " Rows_examined: %llu  Rows_affected: %llu\n"
            "# Bytes_sent: %lu",
            (thd->db().str ? thd->db().str : ""),
            thd->last_errno, (uint) thd->killed,
            query_time_buff, lock_time_buff,
            (ulonglong) thd->get_sent_row_count(),
            (ulonglong) thd->get_examined_row_count(),
            (thd->get_row_count_func() > 0) ?
              (ulonglong) thd->get_row_count_func() : 0,
            (ulong) (thd->status_var.bytes_sent - thd->bytes_sent_old))

my_b_printf(&log_file,
            " Tmp_tables: %lu  Tmp_disk_tables: %lu "
            "Tmp_table_sizes: %llu",
            thd->tmp_tables_used, thd->tmp_tables_disk_used,
            thd->tmp_tables_size)

snprintf(buf, 20, "%llX", thd->innodb_trx_id);   /* plus thd->innodb_trx_id */

Query_time: the statement's total elapsed time.

Lock_time: the sum of the MDL lock, InnoDB row lock and MyISAM table lock wait times, i.e. the lock wait time described earlier (strictly speaking it is not all lock waiting; lock waits are merely included in it).

Let's see where Query_time and Lock_time come from. They are computed in Query_logger::slow_log_write as follows:

query_utime= (current_utime > thd->start_utime) ?
             (current_utime - thd->start_utime) : 0;
lock_utime= (thd->utime_after_lock > thd->start_utime) ?
            (thd->utime_after_lock - thd->start_utime) : 0;

Here current_utime comes from current_utime= thd->current_utime(); which again is just

while (gettimeofday(&t, NULL) != 0)
{}
newtime= (ulonglong)t.tv_sec * 1000000 + t.tv_usec;
return newtime;

i.e. the current wall-clock time. How thd->utime_after_lock is set was described earlier, so I won't repeat it.

Rows_sent: the number of rows sent to the MySQL client. From the source comment:

Number of rows we actually sent to the client

Rows_examined: the number of rows scanned at the InnoDB layer. From the source comment (see stack trace 1 in the appendix):

Number of rows read and/or evaluated for a statement. Used for slow log reporting.

An examined row is defined as a row that is read and/or evaluated

according to a statement condition, including in create_sort_index(). Rows may be counted more than once, e.g., a statement including ORDER BY could possibly evaluate the row in filesort() before reading it for e.g. update.

Rows_affected: the number of affected rows when the statement modifies data (e.g. DML). From the source:

for DML statements: to the number of affected rows;

for DDL statements: to 0.

Bytes_sent: the number of bytes actually sent to the client; it comes from

(ulong) (thd->status_var.bytes_sent - thd->bytes_sent_old)

Tmp_tables: the number of temporary tables used.

Tmp_disk_tables: the number of on-disk temporary tables used.

Tmp_table_sizes: the total size of the temporary tables.

These three metrics come from:

thd->tmp_tables_used
thd->tmp_tables_disk_used
thd->tmp_tables_size

They are incremented in the free_tmp_table function:

thd->tmp_tables_used++;
if (entry->file)
{
  thd->tmp_tables_size += entry->file->stats.data_file_length;
  if (entry->file->ht->db_type != DB_TYPE_HEAP)
    thd->tmp_tables_disk_used++;
}

InnoDB_trx_id: the transaction id, i.e. trx->id.  /*!< transaction id */

5. Part five: optimizer information

# QC_Hit: No  Full_scan: No  Full_join: No  Tmp_table: No  Tmp_table_on_disk: No
# Filesort: No  Filesort_on_disk: No  Merge_passes: 0

These lines come from the following code:

my_b_printf(&log_file,
            "# QC_Hit: %s  Full_scan: %s  Full_join: %s  Tmp_table: %s "
            "Tmp_table_on_disk: %s\n"
            "# Filesort: %s  Filesort_on_disk: %s  Merge_passes: %lu\n",
            ((thd->query_plan_flags & QPLAN_QC) ? "Yes" : "No"),
            ((thd->query_plan_flags & QPLAN_FULL_SCAN) ? "Yes" : "No"),
            ((thd->query_plan_flags & QPLAN_FULL_JOIN) ? "Yes" : "No"),
            ((thd->query_plan_flags & QPLAN_TMP_TABLE) ? "Yes" : "No"),
            ((thd->query_plan_flags & QPLAN_TMP_DISK) ? "Yes" : "No"),
            ((thd->query_plan_flags & QPLAN_FILESORT) ? "Yes" : "No"),
            ((thd->query_plan_flags & QPLAN_FILESORT_DISK) ? "Yes" : "No"),

Note the technique here: each bit of query_plan_flags carries one flag. This packs a lot of information into very little storage, a common idiom in C/C++.
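The same bit-flag idiom can be sketched in Python. Note the numeric values below are made up for this sketch; they are not MySQL's real QPLAN_* constants:

```python
# Illustrative bit flags in the style of query_plan_flags; the values
# are assumptions for this sketch, not MySQL's actual QPLAN_* values.
QPLAN_QC        = 1 << 0
QPLAN_FULL_SCAN = 1 << 1
QPLAN_FULL_JOIN = 1 << 2

def format_flags(query_plan_flags):
    """Render the packed flags the way the slow log prints them."""
    return {
        "QC_Hit": "Yes" if query_plan_flags & QPLAN_QC else "No",
        "Full_scan": "Yes" if query_plan_flags & QPLAN_FULL_SCAN else "No",
        "Full_join": "Yes" if query_plan_flags & QPLAN_FULL_JOIN else "No",
    }

flags = QPLAN_FULL_SCAN | QPLAN_FULL_JOIN
print(format_flags(flags))  # → {'QC_Hit': 'No', 'Full_scan': 'Yes', 'Full_join': 'Yes'}
```

Setting a flag is a single OR, testing it a single AND, which is why this layout is used on the hot path of every statement.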

QC_Hit: whether the query cache was hit.

Full_scan: equivalent to Select_scan; whether a full scan was performed, including full index scans (using index).

Full_join: equivalent to Select_full_join; whether the driven tables used an index. YES if no index was used.

Consider the following execution plan:

mysql> desc select *,sleep(1) from testuin a,testuin1 b where a.id1=b.id1;
+----+-------------+-------+------------+------+---------------+------+---------+------+------+----------+----------------------------------------------------+
| id | select_type | table | partitions | type | possible_keys | key  | key_len | ref  | rows | filtered | Extra                                              |
+----+-------------+-------+------------+------+---------------+------+---------+------+------+----------+----------------------------------------------------+
|  1 | SIMPLE      | a     | NULL       | ALL  | NULL          | NULL | NULL    | NULL |    5 |   100.00 | NULL                                               |
|  1 | SIMPLE      | b     | NULL       | ALL  | NULL          | NULL | NULL    | NULL |    5 |    20.00 | Using where; Using join buffer (Block Nested Loop) |
+----+-------------+-------+------------+------+---------------+------+---------+------+------+----------+----------------------------------------------------+
2 rows in set, 1 warning (0.00 sec)

which produces this output:

# QC_Hit: No Full_scan: Yes Full_join: Yes

Tmp_table: whether a temporary table was used; set in the create_tmp_table function.

Tmp_table_on_disk: whether an on-disk temporary table was used; for the InnoDB engine this is set in the create_innodb_tmp_table function.

Filesort: whether a sort was performed; set in the filesort function.

Filesort_on_disk: whether an on-disk sort was used; also set in filesort, after checking whether a disk sort file is needed.

Merge_passes: the number of merge passes in a multi-way merge sort. From the source comment:

Variable query_plan_fsort_passes collects information about file sort passes

acquired during query execution.

6. Part six: InnoDB information

# InnoDB_IO_r_ops: 0  InnoDB_IO_r_bytes: 0  InnoDB_IO_r_wait: 0.000000
# InnoDB_rec_lock_wait: 0.000000  InnoDB_queue_wait: 0.000000
# InnoDB_pages_distinct: 0

These lines come from the following code:

char buf[3][20];
snprintf(buf[0], 20, "%.6f", thd->innodb_io_reads_wait_timer / 1000000.0);
snprintf(buf[1], 20, "%.6f", thd->innodb_lock_que_wait_timer / 1000000.0);
snprintf(buf[2], 20, "%.6f", thd->innodb_innodb_que_wait_timer / 1000000.0);
if (my_b_printf(&log_file,
                "# InnoDB_IO_r_ops: %lu  InnoDB_IO_r_bytes: %llu "
                "InnoDB_IO_r_wait: %s\n"
                "# InnoDB_rec_lock_wait: %s  InnoDB_queue_wait: %s\n"
                "# InnoDB_pages_distinct: %lu\n",
                thd->innodb_io_reads, thd->innodb_io_read,
                buf[0], buf[1], buf[2],
                thd->innodb_page_access) == (uint) -1)

InnoDB_IO_r_ops: the number of physical read I/O operations.

InnoDB_IO_r_bytes: the total number of bytes read by physical I/O.

InnoDB_IO_r_wait: the time spent waiting for physical reads. InnoDB marks a page as busy with a physical read using BUF_IO_READ; see the buf_wait_for_read function.

InnoDB_rec_lock_wait: the time spent waiting for row locks; set in the que_thr_end_lock_wait function.

InnoDB_queue_wait: the time spent waiting to enter the InnoDB engine; set in the srv_conc_enter_innodb_with_atomics function. (See http://blog.itpub.net/7728585/viewspace-2140446/ )

InnoDB_pages_distinct: the number of pages accessed by InnoDB, including both physical and logical I/O; set at the end of buf_page_get_gen via the _increment_page_get_statistics function.
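If you need these values programmatically, here is a minimal hedged sketch of pulling the "Key: value" pairs out of one of these slow-log comment lines (the parsing approach and function name are mine, not from any MySQL tool):

```python
import re

def parse_metrics(comment_line):
    """Parse 'Key: value' numeric pairs from a '# ...' slow-log line."""
    return {key: float(value)
            for key, value in re.findall(r"(\w+):\s*([\d.]+)", comment_line)}

line = "# InnoDB_IO_r_ops: 0 InnoDB_IO_r_bytes: 0 InnoDB_IO_r_wait: 0.000000"
print(parse_metrics(line))
# → {'InnoDB_IO_r_ops': 0.0, 'InnoDB_IO_r_bytes': 0.0, 'InnoDB_IO_r_wait': 0.0}
```

A real parser (pt-query-digest, for example) handles many more field types, but this is enough to aggregate the numeric InnoDB counters across entries.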

7. Part seven: SET timestamp

SET timestamp=1527753496;

This line comes from the source below; as the comment explains, it is simply the server's current time (current_utime).

/*
  This info used to show up randomly, depending on whether the query
  checked the query start time or not. now we always write current
  timestamp to the slow log
*/
end= my_stpcpy(end, ",timestamp=");
end= int10_to_str((long) (current_utime / 1000000), end, 10);
if (end != buff)
{
  *end++=';';
  *end='\n';
  if (my_b_write(&log_file, (uchar*) "SET ", 4) ||
      my_b_write(&log_file, (uchar*) buff + 1, (uint) (end-buff)))
    goto err;
}

VII. Summary

By digging into the source, this article explained what criteria MySQL uses to decide whether a statement is recorded as a slow query, and what each field in the output means. These are of course just my own conclusions; I'm happy to discuss differing opinions.

Appendix: stack trace 1

This trace follows the changes to Rows_examined, i.e. the increments of join->examined_rows++:

(gdb) info b
Num Type       Disp Enb Address            What
1   breakpoint keep y   0x0000000000ebd5f3 in main(int, char**) at /root/mysql5.7.14/percona-server-5.7.14-7/sql/main.cc:25
    breakpoint already hit 1 time
4   breakpoint keep y   0x000000000155b94f in do_select(JOIN*) at /root/mysql5.7.14/percona-server-5.7.14-7/sql/sql_executor.cc:872
    breakpoint already hit 5 times
5   breakpoint keep y   0x000000000155ca39 in evaluate_join_record(JOIN*, QEP_TAB*) at /root/mysql5.7.14/percona-server-5.7.14-7/sql/sql_executor.cc:1473
    breakpoint already hit 20 times
6   breakpoint keep y   0x00000000019b4313 in ha_innobase::index_first(uchar*) at /root/mysql5.7.14/percona-server-5.7.14-7/storage/innobase/handler/ha_innodb.cc:9547
    breakpoint already hit 4 times
7   breakpoint keep y   0x00000000019b45cd in ha_innobase::rnd_next(uchar*) at /root/mysql5.7.14/percona-server-5.7.14-7/storage/innobase/handler/ha_innodb.cc:9651
8   breakpoint keep y   0x00000000019b2ba6 in ha_innobase::index_read(uchar*, uchar const*, uint, ha_rkey_function) at /root/mysql5.7.14/percona-server-5.7.14-7/storage/innobase/handler/ha_innodb.cc:9004
    breakpoint already hit 3 times
9   breakpoint keep y   0x00000000019b4233 in ha_innobase::index_next(uchar*) at /root/mysql5.7.14/percona-server-5.7.14-7/storage/innobase/handler/ha_innodb.cc:9501
    breakpoint already hit 5 times

#0  ha_innobase::index_next (this=0x7fff2cbc6b40, buf=0x7fff2cbc7080 "\375\n") at /root/mysql5.7.14/percona-server-5.7.14-7/storage/innobase/handler/ha_innodb.cc:9501
#1  0x0000000000f680d8 in handler::ha_index_next (this=0x7fff2cbc6b40, buf=0x7fff2cbc7080 "\375\n") at /root/mysql5.7.14/percona-server-5.7.14-7/sql/handler.cc:3269
#2  0x000000000155fa02 in join_read_next (info=0x7fff2c007750) at /root/mysql5.7.14/percona-server-5.7.14-7/sql/sql_executor.cc:2660
#3  0x000000000155c397 in sub_select (join=0x7fff2c007020, qep_tab=0x7fff2c007700, end_of_records=false) at /root/mysql5.7.14/percona-server-5.7.14-7/sql/sql_executor.cc:1274
#4  0x000000000155bd06 in do_select (join=0x7fff2c007020) at /root/mysql5.7.14/percona-server-5.7.14-7/sql/sql_executor.cc:944
#5  0x0000000001559bdc in JOIN::exec (this=0x7fff2c007020) at /root/mysql5.7.14/percona-server-5.7.14-7/sql/sql_executor.cc:199
#6  0x00000000015f9ea6 in handle_query (thd=0x7fff2c000b70, lex=0x7fff2c003150, result=0x7fff2c006cd0, added_options=0, removed_options=0) at /root/mysql5.7.14/percona-server-5.7.14-7/sql/sql_select.cc:184
#7  0x00000000015acd05 in execute_sqlcom_select (thd=0x7fff2c000b70, all_tables=0x7fff2c006688) at /root/mysql5.7.14/percona-server-5.7.14-7/sql/sql_parse.cc:5391
#8  0x00000000015a5320 in mysql_execute_command (thd=0x7fff2c000b70, first_level=true) at /root/mysql5.7.14/percona-server-5.7.14-7/sql/sql_parse.cc:2889
#9  0x00000000015adcd6 in mysql_parse (thd=0x7fff2c000b70, parser_state=0x7ffff035b600) at /root/mysql5.7.14/percona-server-5.7.14-7/sql/sql_parse.cc:5836
#10 0x00000000015a1b95 in dispatch_command (thd=0x7fff2c000b70, com_data=0x7ffff035bd70, command=COM_QUERY) at /root/mysql5.7.14/percona-server-5.7.14-7/sql/sql_parse.cc:1447
#11 0x00000000015a09c6 in do_command (thd=0x7fff2c000b70) at /root/mysql5.7.14/percona-server-5.7.14-7/sql/sql_parse.cc:1010
#12 0x00000000016e29d0 in handle_connection (arg=0x3859ae0) at /root/mysql5.7.14/percona-server-5.7.14-7/sql/conn_handler/connection_handler_per_thread.cc:312
#13 0x0000000001d7bfdc in pfs_spawn_thread (arg=0x38607b0) at /root/mysql5.7.14/percona-server-5.7.14-7/storage/perfschema/pfs.cc:2188
#14 0x0000003f74807aa1 in start_thread () from /lib64/libpthread.so.0
#15 0x0000003f740e8bcd in clone () from /lib64/libc.so.6


          Multi-master with MariaDB 10: a tutorial

The goal of this tutorial is to show you how to use multi-master to aggregate databases with the same name, but different data from different masters, on the same slave.

Example :

master1 => a French subsidiary
master2 => a British subsidiary

Both have the same database PRODUCTION but the data are totally different.


Multi-master with MariaDB 10   a tutorial
This screenshot comes from my own monitoring tool, PmaControl. Note that master2 should read 10.10.16.232, not 10.10.16.235 (my sysadmin's fault! :p).

We will start with three servers (two masters and one slave); you can add more masters if needed. For this tutorial, I used Ubuntu 12.04. I’ll let you choose the right procedure for your distribution from Downloads.

Scenario

10.10.16.231 : first master (referred to subsequently as master1) => a French subsidiary
10.10.16.232 : second master (referred to subsequently as master2) => a British subsidiary
10.10.16.233 : slave (multi-master) (referred to subsequently as slave)

If you already have your three servers correctly installed, you can scroll down directly to "Dump your master1 and master2 databases from slave".

Default installation on all three servers

apt-get -y install python-software-properties
apt-key adv --recv-keys --keyserver hkp://keyserver.ubuntu.com:80 0xcbcb082a1bb943db

The main reason I put this in a separate file is that we use Chef as our configuration manager, which overwrites /etc/apt/sources.list. The other reason is that if any trouble occurs, you can simply remove this file and restart with the default configuration.

echo "deb http://mirror.stshosting.co.uk/mariadb/repo/10.0/ubuntu precise main" > /etc/apt/sources.list.d/mariadb.list
apt-get update
apt-get install mariadb-server

The goal of this small script is to take the server's IP address and compute a CRC32 of it to generate a unique server-id. The crc32 command is generally not installed, so we will use the one from MySQL. For the account and password we use the Debian/Ubuntu maintenance account.

Even if your server has more interfaces, you should have no trouble because the IP address should be unique.

user=`egrep user /etc/mysql/debian.cnf | tr -d ' ' | cut -d '=' -f 2 | head -n1 | tr -d '\n'`
passwd=`egrep password /etc/mysql/debian.cnf | tr -d ' ' | cut -d '=' -f 2 | head -n1 | tr -d '\n'`
ip=`ifconfig eth0 | grep "inet addr" | awk -F: '{print $2}' | awk '{print $1}' | head -n1 | tr -d '\n'`
crc32=`mysql -u $user -p$passwd -e "SELECT CRC32('$ip')"`
id_server=`echo -n $crc32 | cut -d ' ' -f 2 | tr -d '\n'`
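For reference, the same server-id derivation can be reproduced without a MySQL client, since MySQL's CRC32() is the standard CRC-32 checksum. A hedged sketch (the function name is mine):

```python
import zlib

def server_id_from_ip(ip):
    """Derive a MySQL server-id from an IP address, like the shell
    snippet above: server-id = CRC32(ip). MySQL's CRC32() is the
    standard CRC-32, so zlib.crc32 should give the same value."""
    return zlib.crc32(ip.encode("ascii")) & 0xFFFFFFFF

print(server_id_from_ip("10.10.16.231"))
```

The result fits in 32 bits, which matches the valid range of server-id, and different IPs give different ids in practice (collisions are theoretically possible but very unlikely in a small fleet).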

This configuration file is not one I use in production, but a minimal version that’s shown just as an example. The config may work fine for me, but perhaps it won’t be the same for you, and it might just crash your MySQL server.

If you’re interested in my default install of MariaDB 10, you can see it here: https://raw.githubusercontent.com/Esysteme/Debian/master/mariadb.sh (this script has been kept updated for the past 4 years).

Example:

./mariadb.sh -p 'secret_password' -v 10.3 -d /src/mysql

cat >> /etc/mysql/conf.d/mariadb10.cnf << EOF
[client]
# default-character-set = utf8

[mysqld]
character-set-client-handshake = FALSE
character-set-server = utf8
collation-server = utf8_general_ci
bind-address= 0.0.0.0
external-locking= off
skip-name-resolve

#make a crc32 of ip server
server-id=$id_server

#to prevent auto start of thread slave
skip-slave-start

[mysql]
default-character-set = utf8
EOF

We restart the server

/etc/init.d/mysql restart
 * Stopping MariaDB database server mysqld      [ OK ]
 * Starting MariaDB database server mysqld      [ OK ]
 * Checking for corrupt, not cleanly closed and upgrade needing tables.

Repeat these actions on all three servers.

Create users on both masters

Create the replication user on both masters

on master1 (10.10.16.231)

mysql -u root -p -e "GRANT REPLICATION SLAVE, REPLICATION CLIENT ON *.* TO 'replication'@'%' IDENTIFIED BY 'passwd';"

on master2 (10.10.16.232)

mysql -u root -p -e "GRANT REPLICATION SLAVE, REPLICATION CLIENT ON *.* TO 'replication'@'%' IDENTIFIED BY 'passwd';"

Create a user for external backup

On master1 and on master2

mysql -u root -p -e "GRANT SELECT, LOCK TABLES, RELOAD, REPLICATION CLIENT, SUPER ON *.* TO 'backup'@'10.10.16.%' IDENTIFIED BY 'passwd' WITH GRANT OPTION;"

If you are just testing…

If you don’t have a such a configuration and you want to set up tests:

Create a database on master1 (10.10.16.231)

master1 [(NONE)]> CREATE DATABASE PRODUCTION;

Create a database on master2 (10.10.16.232)

master2 [(NONE)]> CREATE DATABASE PRODUCTION;

Dump your master1 and master2 databases from slave (10.10.16.233)

All the commands from now until the end have to be carried out on the slave server

--master-data=2 : get the file (binary log) and its position, and add it to the beginning of the dump as a comment
--single-transaction : this option issues a BEGIN SQL statement before dumping data from the server (this works only on tables with the InnoDB storage engine)

mysqldump -h 10.10.16.231 -u root -p --master-data=2 --single-transaction PRODUCTION > PRODUCTION_10.10.16.231.sql
mysqldump -h 10.10.16.232 -u root -p --master-data=2 --single-transaction PRODUCTION > PRODUCTION_10.10.16.232.sql

Create both new databases:

slave[(NONE)]> CREATE DATABASE PRODUCTION_FR; slave[(NONE)]> CREATE DATABASE PRODUCTION_UK;

Load the data :

mysql -h 10.10.16.233 -u root -p PRODUCTION_FR < PRODUCTION_10.10.16.231.sql
mysql -h 10.10.16.233 -u root -p PRODUCTION_UK < PRODUCTION_10.10.16.232.sql

Set up both replications on the slave

Edit both dumps to get file name and position of the binlog, and replace it here: (use the command “less” instead of other commands in huge files)

French subsidiary (master1)

less PRODUCTION_10.10.16.231.sql

get the line : (the MASTER_LOG_FILE and MASTER_LOG_POS values will be different to this example)

-- CHANGE MASTER TO MASTER_LOG_FILE='mariadb-bin.000010', MASTER_LOG_POS=771;

replace the file and position in this command:

CHANGE MASTER 'PRODUCTION_FR' TO
  MASTER_HOST = "10.10.16.231",
  MASTER_USER = "replication",
  MASTER_PASSWORD = "passwd",
  MASTER_LOG_FILE='mariadb-bin.000010',
  MASTER_LOG_POS=771;

English subsidiary (master2)

less PRODUCTION_10.10.16.232.sql

get the line: (the MASTER_LOG_FILE and MASTER_LOG_POS values will be different to this example, and would normally
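Extracting the binlog coordinates from a dump header can also be scripted instead of eyeballing it with less. A hedged sketch (the function name and error handling are mine):

```python
import re

def binlog_coordinates(dump_header):
    """Extract MASTER_LOG_FILE and MASTER_LOG_POS from the commented
    CHANGE MASTER line that mysqldump --master-data=2 writes near the
    top of a dump."""
    match = re.search(
        r"MASTER_LOG_FILE='([^']+)',\s*MASTER_LOG_POS=(\d+)", dump_header)
    if not match:
        raise ValueError("no CHANGE MASTER line found")
    return match.group(1), int(match.group(2))

header = "-- CHANGE MASTER TO MASTER_LOG_FILE='mariadb-bin.000010', MASTER_LOG_POS=771;"
print(binlog_coordinates(header))  # → ('mariadb-bin.000010', 771)
```

In practice you would read only the first few hundred lines of the dump file and feed them to this function, then splice the result into the CHANGE MASTER 'connection_name' statement shown above.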
          Using ProxySQL to connect to IPv6-only databases over IPv4

Using ProxySQL to connect to IPv6-only databases over IPv4
It’s 2018. Maybe now is the time to start migrating your network to IPv6, and your database infrastructure is a great place to start. Unfortunately, many legacy applications don’t offer the option to connect to MySQL directly over IPv6 (sometimes even if passing a hostname). We can work around this by using ProxySQL’s IPv6 support, which was added in version 1.3. This allows us to proxy incoming IPv4 connections to IPv6-only database servers.

Note that by default ProxySQL only listens on IPv4. We don’t recommend changing that until this bug is resolved. The bug causes ProxySQL to segfault frequently if listening on IPv6.

In this example I’ll use centos7-pxc57-1 as my database server. It’s running Percona XtraDB Cluster (PXC) 5.7 on CentOS 7, and is only accessible over IPv6. This is one node of a three-node cluster, but I treat this one node as a standalone server for this example. One node of a synchronous cluster can be thought of as equivalent to the entire cluster, and vice versa. Using the PXC plugin for ProxySQL to split reads from writes is the subject of a future blog post.

The application server, centos7-app01 , would be running the hypothetical legacy application.

Note: We use default passwords throughout this example. You should always change the default passwords.

We have changed the IPv6 address in these examples. Any resemblance to real IPv6 addresses, living or dead, is purely coincidental.

2a01:5f8:261:748c::74 is the IPv6 address of the ProxySQL server
2a01:5f8:261:748c::71 is the Percona XtraDB node

Step 1: Install ProxySQL for your distribution

Packages are available here, but in this case I’m going to use the version provided by the Percona yum repository:

[...]
Installed:
  proxysql.x86_64 0:1.4.9-1.1.el7
Complete!

Step 2: Configure ProxySQL to listen on IPv4 TCP port 3306 by editing /etc/proxysql.cnf and starting it

[root@centos7-app1 ~]# vim /etc/proxysql.cnf
[root@centos7-app1 ~]# grep interfaces /etc/proxysql.cnf
interfaces="127.0.0.1:3306"
[root@centos7-app1 ~]# systemctl start proxysql

Step 3: Configure ACLs on the destination database server to allow ProxySQL to connect over IPv6

mysql> GRANT SELECT on sys.* to 'monitor'@'2a01:5f8:261:748c::74' IDENTIFIED BY 'monitor';
Query OK, 0 rows affected, 1 warning (0.25 sec)

mysql> GRANT ALL ON legacyapp.* TO 'legacyappuser'@'2a01:5f8:261:748c::74' IDENTIFIED BY 'super_secure_password';
Query OK, 0 rows affected, 1 warning (0.25 sec)

Step 4: Add the IPv6 address of the destination server to ProxySQL and add users

We need to configure the IPv6 server as a mysql_server inside ProxySQL. We also need to add a user to ProxySQL as it will reuse these credentials when connecting to the backend server. We’ll do this by connecting to the admin interface of ProxySQL on port 6032:

[root@centos7-app1 ~]# mysql -h127.0.0.1 -P6032 -uadmin -padmin
mysql: [Warning] Using a password on the command line interface can be insecure.
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 4
Server version: 5.5.30 (ProxySQL Admin Module)

Copyright (c) 2009-2018 Percona LLC and/or its affiliates
Copyright (c) 2000, 2018, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its affiliates. Other names may be trademarks of their respective owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> INSERT INTO mysql_servers(hostgroup_id,hostname,port) VALUES (1,'2a01:5f8:261:748c::71',3306);
Query OK, 1 row affected (0.00 sec)

mysql> INSERT INTO mysql_users(username, password, default_hostgroup) VALUES ('legacyappuser', 'super_secure_password', 1);
Query OK, 1 row affected (0.00 sec)

mysql> LOAD MYSQL USERS TO RUNTIME;
Query OK, 0 rows affected (0.00 sec)

mysql> SAVE MYSQL USERS TO DISK;
Query OK, 0 rows affected (0.27 sec)

mysql> LOAD MYSQL SERVERS TO RUNTIME;
Query OK, 0 rows affected (0.01 sec)

mysql> SAVE MYSQL SERVERS TO DISK;
Query OK, 0 rows affected (0.30 sec)

mysql> LOAD MYSQL VARIABLES TO RUNTIME;
Query OK, 0 rows affected (0.00 sec)

mysql> SAVE MYSQL VARIABLES TO DISK;
Query OK, 95 rows affected (0.12 sec)

Step 5: Configure your application to connect to ProxySQL over IPv4 on localhost4 (IPv4 localhost)

This is application specific and so not shown here, but I’d configure my application to use localhost4 as this is in /etc/hosts by default and points to 127.0.0.1 and not ::1

Step 6: Verify

As I don’t have the application here, I’ll verify with mysql-client. Remember that ProxySQL is listening on 127.0.0.1 port 3306, so we connect via ProxySQL on IPv4 (the usage of 127.0.0.1 rather than a hostname is just to show this explicitly):

[root@centos7-app1 ~]# mysql -h127.0.0.1 -ulegacyappuser -psuper_secure_password
mysql: [Warning] Using a password on the command line interface can be insecure.

mysql> SELECT host FROM information_schema.processlist WHERE ID=connection_id();
+-----------------------------+
| host                        |
+-----------------------------+
| 2a01:5f8:261:748c::74:57546 |
+-----------------------------+
1 row in set (0.00 sec)

mysql> CREATE TABLE legacyapp.legacy_test_table(id int);
Query OK, 0 rows affected (0.83 sec)

The query above shows the remote host (from MySQL's point of view) for the current connection. As you can see, MySQL sees the connection as established over IPv6. To recap: we connected to MySQL on an IPv4 address (127.0.0.1) and were successfully proxied to a backend IPv6 server.


          Exploring Oracle's MySQL Cloud Service

As the maker of the world's most popular open source database, Oracle is ideally positioned to deliver a MySQL cloud service. With the cloud/DBaaS market expected to grow from $1.1 billion in 2014 to $14 billion by 2019 (forecast by MarketsAndMarkets), Oracle has officially thrown its hat into the DBaaS ring with Oracle MySQL Cloud Service. Built on MySQL Enterprise Edition, Oracle MySQL Cloud Service is a simple, automated, integrated, and enterprise-ready MySQL database service in the cloud.

If you're among the many organizations considering moving your database infrastructure to the Cloud, you may want to include Oracle's MySQL Cloud Service in your short list. To better help you evaluate Oracle MySQL Cloud Service, this tutorial will give you a rundown on how to get up and running with the free account option.

Signing up for an Oracle Public Cloud Account

On the Oracle Cloud home page, you'll see a "Try for Free" button in the top-right corner of the page. Clicking it will take you to the /tryit page, where you'll find a couple of "Create a Free Account" buttons. Click one to receive $300 of free credits, valid for up to 30 days. We'll use those to go through today's tutorial.

Enter an account name, select a data region, and specify your contact information. Verify your identity with your mobile number and a verification code.
Exploring Oracle's MySQL Cloud Service
Try it for Free

Enter your credit card details. You won't be charged unless you elect to upgrade the account. You may see a small, temporary charge on your bill; this is a verification hold that will be removed automatically. Accept the terms and conditions, click "Complete", and then wait for a Welcome email.

Once the signup process is complete, you'll be directed to the Getting Started Page. You can also access it via the link in your welcome email:


Exploring Oracle's MySQL Cloud Service
Getting Started Page

On the Getting Started Page, you can get familiar with your account by clicking one of the tiles on the Guided Journey.


Exploring Oracle's MySQL Cloud Service
Guided Journey

The site will then prompt you with some questions, and guide you to some suggested cloud services, videos, and tutorials.

Creating a MySQL Cloud Service Instance

The Dashboard is your starting point for all your Oracle Cloud Account management actions.

Sign into the Oracle Cloud by clicking the link in your Welcome email, or go to https://cloud.oracle.com and click "Sign In". Select "Traditional Cloud Account" from the first dropdown, choose a Data Center from the second one, and click on "My Services".
Exploring Oracle's MySQL Cloud Service
MySQL Cloud Service Instance

On the next page, type the name of your Identity Domain and click "Go".
Exploring Oracle's MySQL Cloud Service
Sign into Oracle Cloud

That will take you to the Dashboard.


Exploring Oracle's MySQL Cloud Service
Oracle Cloud Dashboard

There, you'll notice that your remaining free credits and days are shown.

Now it's time to create our MySQL instance.

Be sure to select the item without the "(traditional)" from the Identity Domain dropdown, or you won't see all of the available service options.
Exploring Oracle's MySQL Cloud Service
Identity Domain dropdown

Click the "Create Instance" button and select "MySQL" from the popup dialog.

That will bring up the MySQL Cloud Service page.

Initially, the page shows that you have no MySQL Cloud Service instances, so you'll have to create one by clicking the "Create Instance" button.
Exploring Oracle's MySQL Cloud Service
Create Instance

The first stop is the Basic Information page. There, you can provide the instance name and description, as well as some other pertinent information:
Exploring Oracle's MySQL Cloud Service
Instance Page

The Service Details page is where you'll specify most of the instance parameters. These include configuration details as well as backup and recovery options.

When you create a MySQL Cloud Service instance, you have to provide an SSH key pair, which will allow you to connect to the instance from an SSH client.

You can generate a public/private key pair with your SSH client, or during the instance creation process. The private key remains on your local machine, and you can install the public key on any machine you want to connect to via SSH.

In the Configuration section, specify the following:

Usable Database Storage (GB): The default is 25 GB.
Administration User and Password: The MySQL administration user credentials, usually "root".
Database Schema Name: The default database.

Note that you can also configure MySQL Enterprise Monitor in this section if you want to monitor the instance from the same machine that hosts it. This is not the best practice, so for this tutorial select "No" from the "Configure MySQL Enterprise Monitor" dropdown.

In the Backup and Configuration section, note that backups (using MySQL Enterprise Backup) are enabled by default. You need an Oracle Cloud Service container so that backups can be stored both in the cloud and on the local compute node. For this tutorial, select "None" from the "Backup Destination" drop-down list to disable backups on the instance.

Click the "Next" button in the Create Service wizard. The third and final page of the wizard displays a summary of your settings. Verify the settings for your instance and then click "Create".

The instance will now be displayed with a status of "Creating service...". You can click the refresh button to monitor the status of the creation request:
Exploring Oracle's MySQL Cloud Service
In-Progress Operation Messages

When the instance is ready, it will appear in the list of available services. Then, you can click the service name link to view the service details, which include the public IP address of the virtual machine that hosts the instance. You'll also be able to perform all your typical administration tasks such as backups, patching, and scaling, from the Instance Summary screen.

Conclusion

All in all, the process was not too difficult. I had a few issues along the way with pages expiring, and got locked out at one point. That said, getting back on track was easy enough.

Now that we've got a MySQL instance to play with, in upcoming articles, we'll look at adding users, assigning roles, as well as managing our account.

See all articles by Rob Gravelle


          JD Finance Data Analysis Case Study (Part 1)

Copyright notice: This is an original article by the author and may not be reproduced without permission. https://blog.csdn.net/mingyunxiaohai/article/details/82428664

Data description:

The data provided is business-scenario data; all of it has been sampled and anonymized, so field values and distributions differ from the real business data. It covers the period 2016-08-03 through 2016-11-30 and includes users' behavior data on mobile, purchase records, historical loan information, and the total loan amount for November. The dataset can be downloaded from: https://pan.baidu.com/s/1hk8hARHxkQcMS8SgABmcHQ password: fc7z

The files are user.csv, order.csv, click.csv, loan.csv, and loan_sum.csv.

Preface: big data projects generally fall into two categories, batch processing and stream processing. In this exercise, batch processing is done with Hive and Presto, and stream processing with Spark Streaming + Kafka.

Task 1

Usually, user data lives in a relational database, so here we load the t_user data into MySQL first and then import it from MySQL into HDFS; the other three files, t_click, t_loan, and t_loan_sum, are imported into HDFS directly.

MySQL can load CSV files natively, so:

First create the database and the user table:

create database jd;
use jd;
create table t_user (uid INT NOT NULL, age INT, sex INT, active_date varchar(40), initial varchar(40));

Import the data:

LOAD DATA LOCAL INFILE '/home/chs/Documents/t_user.csv' INTO TABLE t_user CHARACTER SET utf8 FIELDS TERMINATED BY ',' ENCLOSED BY '"' LINES TERMINATED BY '\n' IGNORE 1 ROWS;

Task 2

Use Sqoop to import the t_user table from MySQL into HDFS.

List the available databases:

sqoop list-databases --connect jdbc:mysql://master:3306 --username root --password ''
// the following databases are listed
information_schema
jd
mysql
performance_schema

List the tables:

sqoop list-tables --connect jdbc:mysql://master:3306/jd --username root --password ''
// there is only one table here
t_user

Use Sqoop to import the t_user table from MySQL into the /data/sq directory on HDFS:

sqoop import --connect jdbc:mysql://master:3306/jd --username root --password '' --table t_user --target-dir /data/sq

This fails:

18/08/21 13:44:26 ERROR tool.ImportTool: Import failed: No primary key could be found for table t_user. Please specify one with --split-by or perform a sequential import with '-m 1'.

The table has no primary key. We could have defined one when creating the table, or we can use --split-by to tell Sqoop which column to split on:

sqoop import --connect jdbc:mysql://master:3306/jd --username root --password '' --table t_user --target-dir /data/sq --split-by 'uid'

Another error:

Host 'slave' is not allowed to connect to this MySQL server
Host 'slave2' is not allowed to connect to this MySQL server

The cause: my Hadoop cluster runs on three VMs, and slave and slave2 do not have permission to access MySQL as root.

Open the MySQL console:

use mysql

select host,user,password from user;

+-----------+------+----------+
| host      | user | password |
+-----------+------+----------+
| localhost | root |          |
| master    | root |          |
| 127.0.0.1 | root |          |
| ::1       | root |          |
| localhost |      |          |
| master    |      |          |
+-----------+------+----------+

Only master currently has access; grant access to slave and slave2 as well:

grant all PRIVILEGES on jd.* to root@'slave' identified by '';
grant all PRIVILEGES on jd.* to root@'slave2' identified by '';

Now the import succeeds.

Check the target directory on HDFS:

hdfs dfs -ls /data/sq

-rw-r--r-- 1 chs supergroup      0 2018-08-21 14:06 /data/sq/_SUCCESS
-rw-r--r-- 1 chs supergroup 807822 2018-08-21 14:06 /data/sq/part-m-00000
-rw-r--r-- 1 chs supergroup 818928 2018-08-21 14:06 /data/sq/part-m-00001
-rw-r--r-- 1 chs supergroup 818928 2018-08-21 14:06 /data/sq/part-m-00002
-rw-r--r-- 1 chs supergroup 818964 2018-08-21 14:06 /data/sq/part-m-00003

Inspect the data in each part:

hdfs dfs -cat /data/sq

17107,30,1,2016-02-13,5.9746772897
11272,25,1,2016-02-17,5.9746772897
14712,25,1,2016-01-10,6.1534138563
16152,30,1,2016-02-10,5.9746772897
10005,30,1,2015-12-17,5.7227683627
......

The import is complete. The remaining CSV files can simply be uploaded to the corresponding HDFS directories with Hadoop's put command.

Task 3

Use Presto to produce the following results and visualize them on the web:

Total value of goods purchased per day by consumers in each age group

Daily loan amounts of male and female consumers

When using Presto for data analysis, we usually pair it with Hive: first create the tables in Hive, then analyze them with Presto.

Start Hive

// start the hive metastore
nohup hive --service metastore >> /home/chs/apache-hive-2.1.1-bin/metastore.log 2>&1 &
// start hive server
nohup hive --service hiveserver2 >> /home/chs/apache-hive-2.1.1-bin/hiveserver.log 2>&1 &
// start the beeline client and connect
beeline
beeline> !connect jdbc:hive2://master:10000/default hadoop hadoop

Create the user table:

create table if not exists t_user ( uid STRING, age INT, sex INT, active_date STRING, limit STRING )ROW FORMAT DELIMITED FIELDS TERMINATED BY ',' STORED AS TEXTFILE;

Load the data from HDFS:

load data inpath '/data/sq' overwrite into table t_user;

Create the order table:

create table if not exists t_order ( uid STRING, buy_time STRING, price DOUBLE, qty INT, cate_id INT, discount DOUBLE )ROW FORMAT DELIMITED FIELDS TERMINATED BY ',' STORED AS TEXTFILE;

Load the data from HDFS:

load data inpath '/data/t_order.csv' overwrite into table t_order;

Create the click table:

create table if not exists t_click ( uid STRING, click_time STRING, pid INT, param INT )ROW FORMAT DELIMITED FIELDS TERMINATED BY ',' STORED AS TEXTFILE;

Load the data from HDFS:

load data inpath '/data/t_click.csv' overwrite into table t_click;

Create the loan information table t_loan:

create table if not exists t_loan ( uid STRING, loan_time STRING, loan_amount STRING, plannum INT )ROW FORMAT DELIMITED FIELDS TERMINATED BY ',' STORED AS TEXTFILE;

Load the data from HDFS:

load data inpath '/data/t_loan.csv' overwrite into table t_loan;

Create the monthly loan total table t_loan_sum:

create table if not exists t_loan_sum ( uid STRING, month STRING, loan_sum STRING )ROW FORMAT DELIMITED FIELDS TERMINATED BY ',' STORED AS TEXTFILE;

Load the data from HDFS:

load data inpath '/data/t_loan_sum.csv' overwrite into table t_loan_sum;

Start Presto

In the installation directory, run bin/launcher start

Run the CLI client: bin/presto --server master:8080 --catalog hive --schema default

Connect to Hive: !connect jdbc:hive2://master:10000/default hadoop hadoop

Run the queries

Query 1

select t_user.age,t_order.buy_time,sum(t_order.price*t_order.qty-t_order.discount) as sum from t_user join t_order on t_user.uid=t_order.uid group by t_user.age,t_order.buy_time;

Partial results:

+-------------+-------------------+----------------------+--+
| t_user.age  | j_order.buy_time  | sum                  |
+-------------+-------------------+----------------------+--+
| 20          | 2016-11-17        | 1.7227062320000002   |
| 25          | 2016-10-15        | 5.386111459          |
| 25          | 2016-10-19        | 0.45088435299999996  |
| 25          | 2016-10-20        | 2.8137519620000004   |
| 25          | 2016-10-21        | 3.548087797          |
| 25          | 2016-10-22        | 2.788946585          |
| 25          | 2016-10-26        | 2.469814958          |
| 25          | 2016-10-27        | 0.4795708140000001   |
| 25          | 2016-10-30        | 2.8022007390000003   |
| 25          | 2016-10-31        | 6.995954644          |
......
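The aggregation in Query 1 can be sanity-checked outside Presto. A small Python sketch (toy data, not the real dataset) computing sum(price*qty - discount) per (age, buy_time):

```python
from collections import defaultdict

# Toy rows standing in for t_user (uid -> age) and t_order rows
# (uid, buy_time, price, qty, discount); all values are made up.
users = {"u1": 20, "u2": 25}
orders = [
    ("u1", "2016-11-17", 2.0, 1, 0.5),
    ("u2", "2016-10-15", 3.0, 2, 1.0),
    ("u2", "2016-10-15", 1.0, 1, 0.0),
]

totals = defaultdict(float)
for uid, buy_time, price, qty, discount in orders:
    totals[(users[uid], buy_time)] += price * qty - discount

print(totals[(25, "2016-10-15")])  # 6.0
```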

Query 2

select t_user.sex,SUBSTRING(t_loan.loan_time,0,10) as time,sum(t_loan.loan_amount) as sum from t_user join t_loan on t_user.uid=t_loan.uid group by t_user.sex ,SUBSTRING(t_loan.loan_time,0,10);

Partial results:

+--------
          MySQL Instance Recovery

Copyright notice: This is an original article by the author. You are welcome to share it; please be sure to credit the source. https://blog.csdn.net/robinson_0612/article/details/82588176

When a MySQL instance is restarted after an abnormal crash, it automatically performs instance recovery. Because MySQL is a multi-engine database, note that "instance recovery" really means transaction recovery, i.e. InnoDB recovery. This article briefly describes the steps of MySQL instance recovery and then walks through a demonstration to observe the process.

1. The MySQL instance

A MySQL instance is the mysqld background process plus its threads and the memory allocated to them.

MySQL Instance Recovery

2. Steps of MySQL instance recovery

MySQL Instance Recovery

3. The InnoDB recovery process

InnoDB crash recovery consists of several steps:

1. Apply the redo log

Applying the redo log is the first step, performed during instance initialization, before any connections are accepted. If all changes had been flushed from the buffer pool to the tablespaces (the ibdata and .ibd files) at the time of the shutdown or crash, redo log application can be skipped. InnoDB also skips it if the redo log files are missing at startup. Even if some data loss is acceptable, deleting the redo logs to speed up recovery is not recommended; it is only considered acceptable after a clean shutdown, with innodb_fast_shutdown set to 0 or 1.

2. Roll back incomplete transactions

Any transactions that were active (uncommitted) at the time of the crash are rolled back. The time needed to roll back an incomplete transaction can be three or four times as long as the transaction had been active before it was interrupted, depending on server load. A rollback in progress cannot be cancelled. In extreme cases the rollback can take especially long; it can also be skipped by setting innodb_force_recovery to 3 or higher.

3. Change buffer merge

Changes recorded in the change buffer (part of the system tablespace) are applied to the leaf pages of secondary indexes as those index pages are read into the buffer pool.

4. Purge

Delete-marked records that are no longer visible to any active transaction are removed. The steps after redo log application do not depend on the redo log (apart from writing redo for their own changes) and run in parallel with normal processing. Of these, only the rollback of incomplete transactions is special to crash recovery; insert buffer merge and purge are also performed during normal operation.

5. Accept client requests as soon as possible to reduce downtime

As part of crash recovery, InnoDB rolls back any transactions that were uncommitted, or in XA PREPARE state, when the server crashed. The rollback is performed by a background thread, in parallel with transactions from new connections. Until the rollback completes, new connections may hit lock conflicts with the recovered transactions. In most cases recovery happens automatically, even if the MySQL server was killed unexpectedly under heavy activity, and the DBA needs to do nothing. If a hardware failure or severe system error corrupted the InnoDB data, MySQL may refuse to start.
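The redo-then-undo flow above can be sketched as a toy write-ahead-log replay (an illustration only, nothing like InnoDB's actual on-disk format): committed writes are redone, and writes of transactions that never committed are rolled back from newest to oldest.

```python
# Toy crash recovery: redo all logged writes, then roll back the
# writes of transactions that never logged a commit.
log = [
    ("T1", "write", "a", 1),
    ("T1", "commit", None, None),
    ("T2", "write", "b", 2),   # T2 never commits: must be rolled back
]

committed = {txid for txid, op, _, _ in log if op == "commit"}
data = {}
undo = []
for txid, op, key, val in log:
    if op == "write":
        undo.append((txid, key, data.get(key)))  # remember old value
        data[key] = val
# Roll back writes of uncommitted transactions, newest first.
for txid, key, old in reversed(undo):
    if txid not in committed:
        if old is None:
            data.pop(key, None)
        else:
            data[key] = old

print(data)  # {'a': 1}
```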

4. Demonstrating instance recovery

[root@centos7 ~]# more /etc/redhat-release
CentOS Linux release 7.2.1511 (Core)

(root@localhost)[(none)]> show variables like 'version';
+---------------+------------+
| Variable_name | Value      |
+---------------+------------+
| version       | 5.7.23-log |
+---------------+------------+

(root@localhost)[(none)]> drop table if exists sakila.t20;
(root@localhost)[(none)]> create table sakila.t20(id int,descr varchar(20));
Query OK, 0 rows affected (0.02 sec)

(root@localhost)[(none)]> insert into sakila.t20 values(1,'Instrecovery');
Query OK, 1 row affected (0.00 sec)

(root@localhost)[(none)]> set autocommit=0;
Query OK, 0 rows affected (0.00 sec)

(root@localhost)[(none)]> insert into sakila.t20 values(2,'lost');
Query OK, 1 row affected (0.01 sec)

[root@centos7 ~]# ps -ef|grep mysqld
mysql 6012 1 0 21:56 ? 00:00:01 /usr/sbin/mysqld --daemonize --pid-file=/var/run/mysqld/mysqld.pid
root 6819 5007 0 22:12 pts/2 00:00:00 grep --color=auto mysqld
[root@centos7 ~]# kill -9 6012

mysqld restarts automatically; watch the log output:

[root@centos7 ~]# tail -fn 100 /var/lib/mysql/mysqld.log
2018-08-17T22:13:58.162282+08:00 0 [Warning] TIMESTAMP with implicit DEFAULT value is deprecated. Please use --explicit_defaults_for_timestamp server option (see documentation for more details).
2018-08-17T22:13:58.163716+08:00 0 [Note] /usr/sbin/mysqld (mysqld 5.7.23-log) starting as process 6902 ...
2018-08-17T22:13:58.169230+08:00 0 [Note] InnoDB: PUNCH HOLE support available
2018-08-17T22:13:58.169343+08:00 0 [Note] InnoDB: Mutexes and rw_locks use GCC atomic builtins
2018-08-17T22:13:58.169363+08:00 0 [Note] InnoDB: Uses event mutexes
2018-08-17T22:13:58.169371+08:00 0 [Note] InnoDB: GCC builtin __atomic_thread_fence() is used for memory barrier
2018-08-17T22:13:58.169377+08:00 0 [Note] InnoDB: Compressed tables use zlib 1.2.3
2018-08-17T22:13:58.169383+08:00 0 [Note] InnoDB: Using Linux native AIO
2018-08-17T22:13:58.169973+08:00 0 [Note] InnoDB: Number of pools: 1
2018-08-17T22:13:58.170188+08:00 0 [Note] InnoDB: Using CPU crc32 instructions
2018-08-17T22:13:58.172706+08:00 0 [Note] InnoDB: Initializing buffer pool, total size = 128M, instances = 1, chunk size = 128M
2018-08-17T22:13:58.184610+08:00 0 [Note] InnoDB: Completed initialization of buffer pool
2018-08-17T22:13:58.187623+08:00 0 [Note] InnoDB: If the mysqld execution user is authorized, page cleaner thread priority can be changed. See the man page of setpriority().
2018-08-17T22:13:58.202938+08:00 0 [Note] InnoDB: Highest supported file format is Barracuda.
2018-08-17T22:13:58.206243+08:00 0 [Note] InnoDB: Log scan progressed past the checkpoint lsn 831040248
  -- checkpoint position; log records past the checkpoint must be checked for replay or rollback
2018-08-17T22:13:58.206338+08:00 0 [Note] InnoDB: Doing recovery: scanned up to log sequence number 831040257
  -- log position
2018-08-17T22:13:58.206363+08:00 0 [Note] InnoDB: Database was not shutdown normally!
  -- the database was shut down abnormally
2018-08-17T22:13:58.206372+08:00 0 [Note] InnoDB: Starting crash recovery.
  -- crash recovery starts; the lines below show one transaction with undo that must be rolled back
2018-08-17T22:13:58.221492+08:00 0 [Note] InnoDB: 1 transaction(s) which must be rolled back or cleaned up in total 1 row operations to undo
2018-08-17T22:13:58.221564+08:00 0 [Note] InnoDB: Trx id counter is 38144
  -- the next line shows the binlog position
2018-08-17T22:13:58.223102+08:00 0 [Note] InnoDB: Last MySQL binlog file position 0 1004, file name mysqlbin.000012
2018-08-17T22:13:58.358681+08:00 0 [Note] InnoDB: Removed temporary tablespace data file: "ibtmp1"
  -- remove and recreate the temporary tablespace
2018-08-17T22:13:58.358732+08:00 0 [Note] InnoDB: Creating shared tablespace for temporary tables
2018-08-17T22:13:58.358824+08:00 0 [Note] InnoDB: Setting file './ibtmp1' size to 12 MB. Physically writing the file full; Please wait ...
2018-08-17T22:13:58.374256+08:00 0 [Note] InnoDB: Starting in background the rollback of uncommitted transactions
2018-08-17T22:13:58.374350+08:00 0 [Note] InnoDB: Rolling back trx with id 37646, 1 rows to undo
  -- rollback begins
2018-08-17T22:13:58.384470+08:00 0 [Note] InnoDB: Rollback of trx with id 37646 completed
  -- rollback complete
2018-08-17T22:13:58.384553+08:00 0 [Note] InnoDB: Rollback of non-prepared transactions completed
2018-08-17T22:13:58.385412+08:00 0 [Note] InnoDB: File './ibtmp1' size is now 12 MB.
2018-08-17T22:13:58.387835+08:00 0 [Note] InnoDB: 96 redo rollback segment(s) found. 96 redo rollback segment(s) are active.
2018-08-17T22:13:58.387882+08:00 0 [Note] InnoDB: 32 non-redo rollback segment(s) are active.
2018-08-17T22:13:58.389416+08:00 0 [Note] InnoDB: Waiting for purge to start
  -- the purge thread cleans up rollback segment information
2018-08-17T22:13:58.441475+08:00 0 [Note] InnoDB: 5.7.23 started; log sequence number 831040257
2018-08-17T22:13:58.443981+08:00 0 [Note] InnoDB: Loading buffer pool(s) from /var/lib/mysql/ib_buffer_pool
2018-08-17T22:13:58.444282+08:00 0 [Note] Plugin 'FEDERATED' is disabled.
2018-08-17T22:13:58.454856+08:00 0 [Note] Recovering after a crash using /var/lib/mysql/mysqlbin
2018-08-17T22:13:58.455101+08:00 0 [Note] Starting crash recovery...
2018-08-17T22:13:58.455185+08:00 0 [Note] Crash recovery finished.
  -- all crash recovery complete
2018-08-17T22:13:58.462891+08:00 0 [Note] Found ca.pem, server-cert.pem and server-key.pem in data directory. Trying to enable SSL support using them.
2018-08-17T22:13:58.463297+08:00 0 [Warning] CA certificate ca.pem is self signed.
2018-08-17T22:13:58.466277+08:00 0 [Note] Server hostname (bind-address): '*'; port: 3306
2018-08-17T22:13:58.466650+08:00 0 [Note] IPv6 is available.
2018-08-17T22:13:58.466734+08:00 0 [Note] - '::' resolves to '::';
2018-08-17T22:13:58.466830+08:00 0 [Note] Server socket created on IP: '::'.
2018-08-17T22:13:58.482404+08:00 0 [Note] Event Scheduler: Loaded 0 events
2018-08-17T22:13:58.482703+08:00 0 [Note] /usr/sbin/mysqld: ready for connections.
  -- the server starts accepting connections
Version: '5.7.23-log' socket: '/var/lib/mysql/mysql.sock' port: 3306 MySQL Community Server (GPL)
2018-08-17T22:13:59.473531+08:00 0 [Note] InnoDB: Buffer pool(s) load completed at 180817 22:13:59

[root@centos7 ~]# mysql
(root@localhost)[(none)]> select * from sakila.t20;
+------+--------------+
| id   | descr        |
+------+--------------+
|    1 | Instrecovery |
+------+--------------+

The uncommitted insert of row (2,'lost') was rolled back during recovery; only the committed row remains.
          MySQL (Part 3): Locking in MySQL

The previous two posts took a quick look at indexes in MySQL; today let's talk about locks in MySQL (the InnoDB engine) and how transactions are implemented.

MySQL (Part 1): A Brief Look at MySQL Indexes

MySQL (Part 2): Common MySQL Optimizations

Locks should be familiar to everyone. In Java, for instance, optimistic locks are commonly implemented with the CAS algorithm (the atomic classes are the typical example, spinning on CAS to perform atomic updates), while pessimistic locks are usually implemented with Synchronized and Lock.
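As a toy sketch of the version-checked optimistic update described in this post (a compare-and-set on a version column; the field names are made up for illustration):

```python
def optimistic_update(row, expected_version, new_value):
    # Apply the update only if the version still matches what the
    # reader originally saw; bump the version on success.
    if row["version"] != expected_version:
        return False  # someone updated in between: stale read, retry
    row["value"] = new_value
    row["version"] += 1
    return True

row = {"value": "old", "version": 3}
print(optimistic_update(row, 3, "new"))    # True, version becomes 4
print(optimistic_update(row, 3, "other"))  # False, version 3 is now stale
```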

Optimistic locking vs. pessimistic locking

Optimistic locking: every read assumes that no one else will modify the data, so no lock is taken. Instead, at update time we check whether anyone else updated the data in the meantime, typically with a version-number mechanism. In a database this can be implemented by adding a version-number column to the table: read the version together with the data, and increment it on every update. When updating, compare the current version of the record in the table with the version read initially; if they match, apply the update, otherwise treat the data as stale. Optimistic locking suits read-heavy applications and can improve throughput.

Pessimistic locking: every read assumes that others will modify the data, so a lock is taken on every read, and anyone else trying to read the data blocks. MySQL uses many mechanisms of this kind, such as row locks and table locks, read locks and write locks, all taken before the operation.

Shared locks vs. exclusive locks

Shared lock: also called a read lock or S lock. After a shared lock is added, until the transaction ends other transactions can only add more shared locks and can only read, not write; no other type of lock can be added.

# add lock in share mode
SELECT description FROM book_book lock in share mode;

Exclusive lock: also called a write lock or X lock. Once a transaction adds an exclusive lock to data, only that transaction can read and write it. Until that transaction ends, other transactions cannot add any lock to it; they can read the data but not write it, and must wait for the lock to be released.

# add for update
SELECT description FROM book_book for update;

Row locks vs. table locks

Row locks and table locks differ in granularity. The InnoDB engine supports both row locks and table locks (MyISAM supports only table locks), but InnoDB uses row-level locks only when the data is located through an index; otherwise it uses table locks.

Table lock: low overhead, fast to acquire; no deadlocks; coarse granularity, so lock conflicts are more likely and concurrency is lowest.
Row lock: higher overhead, slower to acquire; deadlocks can occur; fine granularity, so lock conflicts are less likely and concurrency is high.

A somewhat puzzling point: why do table locks never deadlock? In MyISAM there are no transactions: the lock is released as soon as a single SQL statement finishes, so there is no circular wait, only blocking. In InnoDB, which does have transactions, this is more puzzling; pointers from readers who understand it are welcome @-@

Two examples illustrating the locks above:

# transaction 1
BEGIN;
SELECT description FROM book_book where name = 'JAVA编程思想' lock in share mode;

# transaction 2
BEGIN;
UPDATE book_book SET name = 'new book' WHERE name = 'new';

# check transaction state
SELECT * FROM INFORMATION_SCHEMA.INNODB_TRX;

trx_id          trx_state  trx_started          trx_tables_locked  trx_rows_locked
39452           LOCK WAIT  2018-09-08 19:01:39  1                  1
282907511143936 RUNNING    2018-09-08 18:58:47  1                  38

Transaction 1 added a shared lock to the book table, and transaction 2 blocked when it tried to modify the table. The transaction state shows that transaction 1 did not go through an index and therefore used a table lock.

# transaction 1
BEGIN;
SELECT description FROM book_book WHERE id = 2 lock in share mode;

# transaction 2
BEGIN;
UPDATE book_book SET name = 'new book' WHERE id = 1;

# check transaction state
SELECT * FROM INFORMATION_SCHEMA.INNODB_TRX;

trx_id          trx_state  trx_started          trx_tables_locked  trx_rows_locked
39454           RUNNING    2018-09-08 19:10:44  1                  1
282907511143936 RUNNING    2018-09-08 19:10:35  1                  1

Transaction 1 added a shared lock to the book table, and transaction 2's update did not block. Both transactions went through the index, so row locks were used and no blocking occurred.

Intention locks (InnoDB-specific)

The point of intention locks is to make it easy to detect conflicts between table locks and row locks.

Intention lock: a table-level lock indicating the intent to operate on some row. There are intention shared locks (IS) and intention exclusive locks (IX).

The conflict between row locks and table locks: transaction A adds a shared lock to one row of a table, so that row can only be read, not written. Transaction B then requests an exclusive lock on the whole table. If B's request succeeded, it could modify any row in the table, which conflicts with the row lock A holds. InnoDB introduced intention locks to detect this kind of conflict.

Without intention locks: 1. check whether the table is already locked by another transaction's table lock; 2. check every row of the table for a row lock, which requires scanning the whole table and is very inefficient.

With intention locks: 1. check whether the table is already locked by another transaction's table lock; 2. check whether the table carries an intention lock.

Requesting locks when intention locks exist: taking an intention lock is done by the database itself. In the example above, when transaction A requests a row lock, the database first takes an intention lock on the table; when transaction B requests the table-level exclusive lock, it detects the intention lock and blocks.

Can intention locks conflict with each other? No. An intention lock only indicates the intent to operate on some row.

Compatibility of the lock types:

     IX          IS          X           S
IX   compatible  compatible  conflict    conflict
IS   compatible  compatible  conflict    compatible
X    conflict    conflict    conflict    conflict
S    conflict    compatible  conflict    compatible

Deadlock

Concept: two or more transactions waiting on each other because they contend for resources while executing.

Conditions: 1. mutual exclusion: a resource can be used by only one transaction at a time. 2. hold and wait: a transaction blocked requesting a resource holds on to the resources it has already acquired. 3. no preemption: resources already acquired cannot be forcibly taken away before they are fully used. 4. circular wait: a head-to-tail circular chain of waiting.

Resolving a deadlock in progress: abort one of the transactions.

MVCC (multi-version concurrency control)

MVCC allows InnoDB to implement the REPEATABLE READ transaction isolation level well.
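Under MVCC, a SELECT returns a row version only if its creation version is at or below the transaction's version and its deletion version is unset or newer. A minimal sketch of that visibility rule (illustrative names, not InnoDB internals):

```python
def visible(created, deleted, txn_version):
    # A row version is visible to a transaction iff it was created at or
    # before the transaction's version and not yet deleted from its view.
    return created <= txn_version and (deleted is None or deleted > txn_version)

print(visible(5, None, 10))   # True: created earlier, never deleted
print(visible(5, 12, 10))     # True: deleted only by a later transaction
print(visible(11, None, 10))  # False: created after this transaction began
print(visible(5, 8, 10))      # False: already deleted when it began
```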

Instead of relying purely on row locks for the database's concurrency control, InnoDB combines row locks with multiple versions of each row; at very small cost this enables non-locking reads, greatly improving the database's concurrency.

Implementation: InnoDB implements MVCC by storing three extra hidden fields with every row:
1. DB_TRX_ID: a 6-byte identifier whose value increases by 1 for each transaction processed; it can be inspected with "show engine innodb status".
2. DB_ROLL_PTR: 7 bytes, pointing to the undo log record written to the rollback segment.
3. DB_ROW_ID: 6 bytes, increasing monotonically as new rows are inserted.

SELECT: a returned row must satisfy two conditions: 1. the row's creation version must be less than or equal to the transaction's version; 2. the row's deletion version (a special delete-mark bit set on the row) must be undefined or greater than the current transaction's version, which guarantees the row was not deleted before the current transaction began.

INSERT: InnoDB records the current system version number as the creation version of each newly inserted row.
DELETE: InnoDB records the current system version number as the deletion version of each deleted row.
UPDATE: InnoDB copies the row; the copy's version number is the current system version number, which is also recorded as the old row's deletion version.

Note: the reads here are non-locking selects; MVCC implements repeatable read by reading already-committed data from undo, without blocking. On insert, the "creation time" = DB_ROW_ID and the "deletion time" is undefined; on update, the copied new row's "creation time" = DB_ROW_ID with its deletion time undefined, while the old row's "creation time" is unchanged and its deletion time = the transaction's DB_ROW_ID; on delete, the row's "creation time" is unchanged and its deletion time = the transaction's DB_ROW_ID.

Gap locks (next-key locks)

Gap locks let InnoDB solve the phantom read problem; combined with MVCC, InnoDB's RR isolation level achieves the effect of the serializable level while keeping fairly good concurrency.

Definition: when we request shared or exclusive locks while retrieving data with a range condition, InnoDB locks the index entries of existing records that match the condition; for key values that fall within the range but do not exist, called gaps (GAP), InnoDB adds locks as well. This locking mechanism is the gap lock.

For example: the book table contains records with bookId 1-80 and 90-99. SELECT * FROM book WHERE bookId < 100 FOR UPDATE locks not only the records with bookId 1-80 and 90-99 but also the gap between 81 and 89 (records that do not exist). This avoids phantom reads under the repeatable read isolation level.
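The bookId example can be made concrete by listing the gaps between existing keys, i.e. the non-existent key ranges that a locking range scan also covers (a sketch only, not InnoDB's actual next-key bookkeeping):

```python
def gaps(existing_keys):
    # Open intervals between consecutive existing keys: the "gaps"
    # that a locking range scan covers in addition to the records.
    ks = sorted(existing_keys)
    return [(a, b) for a, b in zip(ks, ks[1:]) if b - a > 1]

existing = list(range(1, 81)) + list(range(90, 100))  # bookId 1-80, 90-99
print(gaps(existing))  # [(80, 90)] -> keys 81-89 form the locked gap
```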

Readers who spot problems are welcome to point them out and discuss; please credit the source when reposting.

References:

https://dev.mysql.com/doc/refman/5.7/en/innodb-multi-versioning.html

https://www.cnblogs.com/chenpingzhao/p/5065316.html
          Mysql adding hours on time
Access denied when running mysql for the first time on Centos

I just installed Mysql for the first time on a CentOS machine using yum. The installation had no errors. Then I followed those steps: $ sudo /sbin/service mysqld start --skip-grant-tables --skip-networking $ sudo /usr/bin/mysql_secure_installation NO

Setting up a 24-hour countdown timer using Moment.js

Please I need help with implementing a 24 hours countdown timer in moment.js. here is my code : <script> window.onload = function(e){ var $clock = $('#clock'), duration1 = moment.duration({ 'seconds': 30, 'hour': 0, 'minutes': 0, 'days':0 }); durati

Continuously INFO JobScheduler: 59 - Added jobs for time *** ms, in my cluster Spark Standalone

We are working with Spark Standalone Cluster with 8 Cores and 32GB Ram, with 3 nodes cluster with same configuration. Some times streaming batch completed in less than 1sec. some times it takes more than 10 secs at that time below log will appears in

Why are two hours added to my time?

I'm struggling with time parsing. My input is a time string ending in "Z". I would expect that to be UTC. When I parse that string two hours are added to the result. I do not know why. Using a specific culture does not make any difference. Syste
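A common cause of this symptom is parsing a "Z" (UTC) timestamp into a naive value and letting the runtime apply the local offset. A Python sketch of parsing it explicitly as UTC (the sample timestamp is made up):

```python
from datetime import datetime, timezone

stamp = "2018-09-08T12:00:00Z"
# Attach the UTC zone explicitly instead of leaving the value naive,
# so no local offset (e.g. +02:00) sneaks in later.
utc = datetime.strptime(stamp, "%Y-%m-%dT%H:%M:%SZ").replace(tzinfo=timezone.utc)
print(utc.isoformat())  # 2018-09-08T12:00:00+00:00
```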

How to update the timestamp values in the column by adding a specific time (seconds) to the existing timestamp using mysql?

I am using mysql and pma. I have a table mytable and a column time, storing ~17K individual values, i.e. timestamps (integers). I need to update each by adding 962758 to each timestamp. What does the SQL command for that look like? SELECT (*) FROM `m

Converting DateTime to a Unix Hour Adding a Ghost Time

I have the following conversion methods for converting to and from Unix Epoch timestamps public static class DateTimeHelpers { public static DateTime UnixEpoch() { return new DateTime(1970, 1, 1, 0, 0, 0, DateTimeKind.Utc); } public static DateTime F

Group by hour where Time is in date column in mysql?

ID Date 1 2013-01-11 00:57:25 2 2013-02-01 02:17:05 3 2013-02-05 03:12:23 4 2013-02-17 10:17:02 5 2013-02-19 11:27:25 6 2013-02-27 13:42:13 7 2013-03-09 15:57:25 8 2013-03-12 16:00:00 9 2013-04-10 19:39:18 10 2013-05-16 20:46:38 I want to Group ID ac

Get the 24-hour formatting time from the 12-hour formatting time in MySql / Read the 12-hour format string in time so I can convert it to another time

I'm on this page now reading documentation... Data time function for this SELECT TIME_FORMAT("9:45 PM", "%H:%i:%s") Result what i get 09:45:00 i think what should i get is 21:45:00 how to achieve this.. i am on documentation page but n
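Outside of SQL, the same 12-hour to 24-hour conversion can be sketched in Python, where the %I and %p directives parse a value like "9:45 PM":

```python
from datetime import datetime

def to_24h(t12):
    # %I = 12-hour clock hour, %p = AM/PM marker
    return datetime.strptime(t12, "%I:%M %p").strftime("%H:%M:%S")

print(to_24h("9:45 PM"))  # 21:45:00
```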

Should I use the MySQL TRIGGER statement each time before deleting a line

I am pretty new to my RDBMS. Today I came across the topic TRIGGER in MySQL. Suppose if I have a table named person and another table named backup_person to store the deleted rows from the table person. CREATE TABLE backup_person ( `personID` INT UNS

php / MySQL: Declared hours, hourly rate

I have the following tables (simplified): hours hour_rates - user_id - user_id - date - date - hours - hourly_rate Hours table example: 1 - 2012-03-19 - 8 This means that user with id=1, at 2012-03-19 worked 8 hours in total. The hourly rate for a pe

How can I improve the performance of this MySQL query in Perl, the same Query directly executed in MySQL Workbench is 1600 times faster

My MySQL query in Perl takes much longer than the same query in MySQL Workbench. I am trying to improve the performance of the Perl Query to be about the same as the Workbench query. I am running on Microsoft windows 10 pro 64-bit, ActivePerl 5.24.0

T-Sql 2005 Adding hours to a date-date field with result in working hours

I have two Datetime fields that I wish to add together. They are in the following format: '01/01/1900 00:00:00'. The main issue with this is that I want the calculation to only include working hours. The working day is between 08:30 and 17:30 and doe

Compare a time string in mysql with the current time

I have a column Time_Start of type string to save time in my DB. Time is saved in 23:59 format. I need to compare values in my column with the current time! here is my mysql query: $result = mysqli_query( $conn, "SELECT * FROM Adds WHERE STR_TO_DATE(

MySQL obtains records by time interval

I need to show a timeline from MySQL table. Basically retrieve count of records by each hour. My TimeSigned column is DateStamp Login Id TimeSigned MarkedBy 1. 2016-03-14 05:12:17 James 2. 2016-03-14 05:30:10 Mark 3. 2016-03-14 06:10:00 James 4. 2016


          MySQL Fulltext vs Like

Background

I have a table with a maximum of 2000 rows; the user should be able to search up to 6 columns.

I don't know in advance what he's looking for, and I want a concatenated search (search1 AND search2 AND ...).

Problem

In these columns I have an ID, not the plain description (i.e. I have the ID of the town, not its name). So I was thinking about two solutions:

1. Create another table where I put keywords (one key per row) and then search it using LIKE 'search1%' OR LIKE 'search2%' ...
2. Add a field to the existing table where I put all the keywords, and then run a FULLTEXT search on that

Which one is the best? I know the rows are so few that there won't be big performance problems, but I hope they'll keep growing :)

Example

This is my table:

ID | TOWN | TYPE | ADDRESS |

11| 14132 | 3 | baker street 220

13| 45632 | 8 | main street 12

14132 = London

45632 = New York

3 = Customer

8 = Admin

The user typing "London Customer" should find the first row.

If you're simply going to use a series of LIKEs, then I'd have thought it would make sense to make use of a FULLTEXT index, the main reason being that it would let you use more complex boolean queries in the future. (As @Quassnoi states, you can simply create an index if you don't have a use for a specific field.)

However, it should be noted that fulltext has its limitations - words that are common across all rows have a low "score" and hence won't match as prominently as if you'd carried out a series of LIKEs. (On the flipside, you can of course get a "score" back from a FULLTEXT query, which may be of use depending on how you want to rank the results.)
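The keyword-table idea with AND semantics (every search term must prefix-match some keyword of the row, as with chained LIKE 'term%') can be sketched outside SQL; the table contents here are made up:

```python
def matches(row_keywords, terms):
    # AND across terms: each term must prefix-match at least one keyword,
    # mimicking LIKE 'term%' against a per-row keyword list.
    return all(
        any(k.lower().startswith(t.lower()) for k in row_keywords)
        for t in terms
    )

row_keywords = ["London", "Customer", "baker street 220"]
print(matches(row_keywords, ["London", "Customer"]))  # True
print(matches(row_keywords, ["London", "Admin"]))     # False
```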


          Unknown column error Mysql
SELECT u.uid,u.status,u.category,u.role, p.uname, p.photo, p.upos, p.city, p.state, p.country, p.services, p.slug, (select avg(rating) from rating where uid=u.uid) as rating FROM `hd-users` u JOIN `profile` p ON p.uid=u.uid WHERE u.status='1' AND u.role='C' AND rating >= 4

This is my SQL query. I'm joining three tables, and while joining I take the average from the rating table. Everything works, but whenever I try to compare the avg rating value with a number I get the error: Unknown column 'rating' in 'where clause'

The error is because of this line:

AND rating >= 4

Here, rating is generated by an aggregate function, and you cannot put a WHERE condition on an aggregated column name.

Use having like:

having rating >= 4

Note: WHERE filters the records before aggregation; HAVING works after aggregation.
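That ordering can be illustrated in plain Python: aggregation happens first, and HAVING filters the aggregated rows afterwards, which is why the aggregated value is only usable there (toy data):

```python
ratings = {"u1": [5, 4], "u2": [2, 3]}

# "GROUP BY uid" + AVG(rating): aggregate per user first
avg = {uid: sum(rs) / len(rs) for uid, rs in ratings.items()}

# "HAVING rating >= 4": filter on the aggregated value afterwards
kept = {uid: a for uid, a in avg.items() if a >= 4}
print(kept)  # {'u1': 4.5}
```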


          Senior JEE Analyst Developer
Aplicaciones Computacionales SPA - Santiago, Región Metropolitana - We are looking for professionals with a degree in Engineering, Systems Analysis, or a related IT field to fill the position of Senior JEE Analyst Developer. Requirements: experience with WebServices, REST, SOAP; proficiency with the Spring 3 framework, MyBatis, MySQL, ...



Site Map 2018_05_01
Site Map 2018_05_02
Site Map 2018_05_03
Site Map 2018_05_04
Site Map 2018_05_05
Site Map 2018_05_06
Site Map 2018_05_07
Site Map 2018_05_08
Site Map 2018_05_09
Site Map 2018_05_15
Site Map 2018_05_16
Site Map 2018_05_17
Site Map 2018_05_18
Site Map 2018_05_19
Site Map 2018_05_20
Site Map 2018_05_21
Site Map 2018_05_22
Site Map 2018_05_23
Site Map 2018_05_24
Site Map 2018_05_25
Site Map 2018_05_26
Site Map 2018_05_27
Site Map 2018_05_28
Site Map 2018_05_29
Site Map 2018_05_30
Site Map 2018_05_31
Site Map 2018_06_01
Site Map 2018_06_02
Site Map 2018_06_03
Site Map 2018_06_04
Site Map 2018_06_05
Site Map 2018_06_06
Site Map 2018_06_07
Site Map 2018_06_08
Site Map 2018_06_09
Site Map 2018_06_10
Site Map 2018_06_11
Site Map 2018_06_12
Site Map 2018_06_13
Site Map 2018_06_14
Site Map 2018_06_15
Site Map 2018_06_16
Site Map 2018_06_17
Site Map 2018_06_18
Site Map 2018_06_19
Site Map 2018_06_20
Site Map 2018_06_21
Site Map 2018_06_22
Site Map 2018_06_23
Site Map 2018_06_24
Site Map 2018_06_25
Site Map 2018_06_26
Site Map 2018_06_27
Site Map 2018_06_28
Site Map 2018_06_29
Site Map 2018_06_30
Site Map 2018_07_01
Site Map 2018_07_02
Site Map 2018_07_03
Site Map 2018_07_04
Site Map 2018_07_05
Site Map 2018_07_06
Site Map 2018_07_07
Site Map 2018_07_08
Site Map 2018_07_09
Site Map 2018_07_10
Site Map 2018_07_11
Site Map 2018_07_12
Site Map 2018_07_13
Site Map 2018_07_14
Site Map 2018_07_15
Site Map 2018_07_16
Site Map 2018_07_17
Site Map 2018_07_18
Site Map 2018_07_19
Site Map 2018_07_20
Site Map 2018_07_21
Site Map 2018_07_22
Site Map 2018_07_23
Site Map 2018_07_24
Site Map 2018_07_25
Site Map 2018_07_26
Site Map 2018_07_27
Site Map 2018_07_28
Site Map 2018_07_29
Site Map 2018_07_30
Site Map 2018_07_31
Site Map 2018_08_01
Site Map 2018_08_02
Site Map 2018_08_03
Site Map 2018_08_04
Site Map 2018_08_05
Site Map 2018_08_06
Site Map 2018_08_07
Site Map 2018_08_08
Site Map 2018_08_09
Site Map 2018_08_10
Site Map 2018_08_11
Site Map 2018_08_12
Site Map 2018_08_13
Site Map 2018_08_15
Site Map 2018_08_16
Site Map 2018_08_17
Site Map 2018_08_18
Site Map 2018_08_19
Site Map 2018_08_20
Site Map 2018_08_21
Site Map 2018_08_22
Site Map 2018_08_23
Site Map 2018_08_24
Site Map 2018_08_25
Site Map 2018_08_26
Site Map 2018_08_27
Site Map 2018_08_28
Site Map 2018_08_29
Site Map 2018_08_30
Site Map 2018_08_31
Site Map 2018_09_01
Site Map 2018_09_02
Site Map 2018_09_03
Site Map 2018_09_04
Site Map 2018_09_05
Site Map 2018_09_06
Site Map 2018_09_07
Site Map 2018_09_08
Site Map 2018_09_09
Site Map 2018_09_10
Site Map 2018_09_11
Site Map 2018_09_12
Site Map 2018_09_13