
Associate Architect, AI Innovation, Chief Technology Office, Enterprise - Microsoft - Redmond, WA
Artificial Intelligence, Quantum Computing, Serverless Computing, Machine Learning, Micro-services solution design, and hybrid cloud-based solutions....
From Microsoft - Wed, 01 Aug 2018 08:29:27 GMT - View all Redmond, WA jobs
The Developer’s Guide to Microsoft Azure eBook – August update is now available

Today, we’re pleased to introduce the new and updated Developer’s Guide to Microsoft Azure eBook. Extensively updated since the previous edition, the new eBook is designed to help you get up to speed with Azure in the shortest time possible and includes practical, real-world scenarios.

This book includes all the updates from Microsoft Build, along with new services and features announced since then. In addition to covering these important services, we focused on practical examples that you’ll use in the real world, and included a table and reference architectures that show you “what to use when” for databases, containers, serverless scenarios, and more. We also put a key focus on security, to help you stop potential threats to your business before they happen. You’ll also find brand-new sections on IoT, DevOps, and AI/ML that you can take advantage of today.

In the 20+ pages of demos, we’ll dive into topics ranging from creating and deploying .NET Core web apps and SQL Server databases to Azure from scratch, to extending the application to analyze its data with Cognitive Services. Once the app is in place, we’ll make it more robust and easier to update by incorporating CI/CD and more. We’ll also see just how easy it is to use API Management to control our APIs and generate documentation automatically.


The eBook (PDF available for download) covers the following chapters:

  • Chapter 1: The Developer’s Guide to Azure
  • Chapter 2: Getting started with Azure
  • Chapter 3: Securing your application
  • Chapter 4: Adding intelligence to your application
  • Chapter 5: Working with and understanding IoT
  • Chapter 6: Where and how to deploy your Azure services
  • Chapter 7: Microsoft Azure in Action
  • Chapter 8: Summary and where to go next

We’re also pleased to announce EPUB and Mobi support in addition to PDF (to download, use “Save link as”, or Option+Click on Mac / Alt+Left Click on Windows). You can also download all formats at once if you wish. Now you have multiple options and can get up to speed with Azure using your eReader or tablet of choice.


What are you waiting for? Download Now and sign up to be notified of future updates to the guide to ensure you make the most of the platform’s constantly evolving services and features.

Thanks for reading and keep in mind that you can learn more about Azure by following our official blog and Twitter account. You can also reach the author of this post on Twitter.

How to track versions of deployed apps?

@danepowell wrote:

I’m building my first Serverless app on AWS Lambda. I haven’t touched it in a few weeks, and just finished with some local development.

Now I want to deploy it to Lambda, but I realized I have no idea what changes I would actually be deploying since I can’t recall if I ran a deploy at the end of my last development session a few weeks ago. This is a pretty scary situation to be in. In other words, I don’t know if I’d be deploying just today’s commit, or code from several weeks ago as well.

I’m migrating from Heroku, and I feel like Heroku handled this pretty well by associating every deployment with a particular Git commit. I’d love the same for Serverless on Lambda, so I can see for instance that my development Lambda environment is running commit abc1234 from a few weeks ago (obviously I’m using Git locally for version control).

How do folks generally handle this? Note that I’m using the Serverless Stage Manager with development / staging / production environments if that makes a difference.
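One common pattern (and effectively what Heroku does for you) is to stamp every deploy with the current commit, e.g. by exporting `git rev-parse --short HEAD` into an environment variable or function tag at deploy time, then comparing the recorded value with local HEAD before the next deploy. A minimal Python sketch; the function names and the place you record the commit (env var, Lambda tag) are hypothetical:

```python
import subprocess
from typing import Optional

def head_commit() -> str:
    """Short hash of the local HEAD; requires a local git checkout."""
    return subprocess.check_output(
        ["git", "rev-parse", "--short", "HEAD"], text=True
    ).strip()

def pending_changes(deployed_commit: Optional[str], local_commit: str) -> str:
    """Summarise what a deploy would ship, given the commit recorded for the
    currently deployed stage (None if it was never recorded)."""
    if deployed_commit is None:
        return f"baseline unknown; deploy ships everything up to {local_commit}"
    if deployed_commit == local_commit:
        return "stage already matches local HEAD; deploy is a no-op"
    return f"deploy ships commits {deployed_commit}..{local_commit}"
```

If the recorded commit matches HEAD the deploy is a no-op; otherwise `git log <deployed>..HEAD` shows exactly what would ship.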

Posts: 1

Participants: 1

Read full topic

Solutions Architect - Amazon Web Services - San Francisco, CA
DevOps, Big Data, Machine Learning, Serverless computing etc. High level of comfort communicating effectively across internal and external organizations....
From - Thu, 26 Jul 2018 08:17:05 GMT - View all San Francisco, CA jobs
Serverless Computing Services Market 2018 Analysis and Growth by Top Key Players - Google, Alibaba, Huawei, Dell Boomi, IBM Cloud, Microsoft, Joyent, Salesforce
(EMAILWIRE.COM, August 09, 2018) The “Global Serverless Computing Services Analysis to 2025” is a specialized and in-depth study of the Serverless Computing Services industry with a focus on the global market trend. The report aims to provide an overview of global Serverless Computing Services with...
Re: NAT Gateway Needs a Free Tier
What about "pay as much as you use it"? What is that stupid fixed quota? It can't be justified. It contradicts the idea of the serverless cloud.
serverless aws service without lambda limitations
Hi everyone,
Thinking of saying goodbye to your servers? We'll show you how

Save now with Serverless Computing early bird tickets

Events Whether you’re looking at tweaking your infrastructure or contemplating a wholescale transformation, Serverless is likely to figure in your planning.…

bolt-aws-sam 0.1
Bolt tasks for AWS serverless projects
Going real-time with SignalR Core and the Azure SignalR Service | On .NET

Online chat, real-time dashboards, social media sites, and even games are just a few examples of where real-time technology can make a huge impact on user experience. ASP.NET Core SignalR is an open-source library that simplifies adding real-time functionality to your applications.

In this episode, Anthony Chu (@nthonyChu) comes on to talk about how we can get started with ASP.NET Core SignalR. He also shows us how the Azure SignalR Service allows us to easily scale our real-time connections.

  • [01:26] -  What is SignalR?
  • [02:10] - Why would we want to use SignalR instead of polling?
  • [03:03] - (Demo) How do we setup SignalR?
  • [08:48] - What are the scaling options for SignalR?
  • [12:00] - How does the SignalR service help with scaling?
  • [13:40] - (Demo) How do we add the SignalR Service to an application?
  • [18:07] - How can other languages or services integrate with Azure Functions?
  • [19:23] - (Demo) How can we wire up the Azure Functions SignalR Binding?
  • [25:17] - Where can we learn more and check out the demos?


Ask environment variable

@appzone wrote:

My environment variable is coming through as “[object Object]”.

My serverless.yml file is:
environment: ${file(secrets-${self:provider.stage}.yml)}  # loads from secrets-dev.yml

and my secrets-dev.yml file is:
OK: 'OK'

How can I access status.ERROR?
When I try to print process.env.status it returns “[object Object]”.

I tried converting it into a JSON file, with the same result.

Any advice?
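The underlying issue is that environment variables are flat strings: if a whole map from the secrets file ends up as the value of a single variable, the Node runtime stringifies it to "[object Object]". Each key must become its own variable, which is what the Serverless Framework's `${file(...)}` reference does when merged directly under `environment:` (note the leading `$`). A Python sketch of that flattening rule, just to make the constraint concrete (key names hypothetical):

```python
def flatten_secrets(secrets: dict) -> dict:
    """Turn a secrets mapping into environment-variable pairs.
    Nested values are rejected because env vars can only hold strings."""
    env = {}
    for key, value in secrets.items():
        if isinstance(value, (dict, list)):
            raise ValueError(f"{key}: nested values cannot become env vars")
        env[key] = str(value)
    return env
```

With `OK: 'OK'` in secrets-dev.yml this yields `{'OK': 'OK'}`, i.e. a readable `process.env.OK`, rather than one variable holding the whole object.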


Posts: 1

Participants: 1

Read full topic

Spring Boot integration with Cognito
AWS Cognito is primarily meant for serverless user authentication from mobile or web applications (JavaScript).
Cognito Pre-Token Generation Lambda sample

@simon10says wrote:

Does anyone have a sample in .NET/Java?

I’m thinking of looking up the user against my custom rules (stored in DynamoDB) and customizing the claims.
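Not .NET or Java, but the trigger contract is the same in every runtime; here is a Python sketch of a Pre Token Generation handler (the event/response shape follows the Cognito trigger documentation; `lookup_role` is a hypothetical stand-in for the DynamoDB rules lookup):

```python
def lookup_role(username: str) -> str:
    """Hypothetical stand-in for a DynamoDB query against your custom rules table."""
    return {"alice": "admin"}.get(username, "user")

def handler(event, context):
    """Pre Token Generation trigger: add custom claims to the issued token."""
    role = lookup_role(event["userName"])
    event["response"]["claimsOverrideDetails"] = {
        "claimsToAddOrOverride": {"role": role}
    }
    return event
```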


Posts: 1

Participants: 1

Read full topic

Towards Progressive Delivery
At RedMonk we generally try to avoid coming up with new terms for technologies and trends – after all, where there is an available term in common use, why not just adopt it? The pragmatic approach means we often end up using terms that seem kind of silly (Ajax, NoSQL, Serverless, even Cloud), but…
Node Best Practices, Machine Learning in Node with TensorFlow.js and more

#250 — August 9, 2018


Node Weekly

Dumper.js: A Pretty Variable Inspector for Node — If you’re one for ‘print-style’ debugging, this could prove very handy for you. You can either dump out the object of your choice (including nested objects) and keep running or terminate the process.

Zeeshan Ahmed

A Curated Compilation of Node Best Practices — Curated from numerous popular articles, this in-development list of best practices covers topics from error handling to memory use and, most recently, security.

Yoni Goldberg

Move Fast and Fix Stuff. Over 500K Developers Fix Errors with Sentry — Relying on users to report errors? Use Sentry to resolve errors right in your workflow. Route alerts to the right person based on the commit and cut remediation time to 5 minutes. Sentry is open source and loved by 500K developers. Sign up for free.

Sentry sponsor

Got 9.0: A Powerful HTTP Request Library for Node.js — Got is a popular HTTP request library from one-man package powerhouse Sindre Sorhus. Version 9 is a significant release that uses the latest Node 8+ features and has a significantly smaller install size.

Sindre Sorhus

Machine Learning in Node with TensorFlow.js — TensorFlow.js brings TensorFlow’s machine learning capabilities to JavaScript, and while it’s been browser-focused so far, experimental support for Node has now been introduced. Here’s how it works.

James Thomas

Community Questions Following the ESLint Security Incident — Almost a month ago, there was an incident where a heavily used module was hijacked. This post answers a few outstanding questions about what happened and what measures are being taken to avoid similar incidents.

The npm Blog

💻 Jobs

NodeJS Development in Beautiful Norway — We are adding to our team building low latency back-ends for awesome developer experience and scalable software. Check us out.

Snowball Digital

Join Our Career Marketplace & Get Matched With a Job You Love — Through Hired, software engineers have transparency into salary offers, competing opportunities and job details.


📘 Tutorials

Deploying a Stateful Application on Azure Kubernetes Service — Guides you through the process of deploying a stateful, Dockerized Node app (the Ghost blogging platform) on the Azure Kubernetes Service.

Kristof Ivancza

How to Create a Serverless Twitter Bot on Google Cloud — Google Cloud Functions went GA last week, so why not take it for a spin?

William Saar

▶  An Introduction to Web Scraping with Node and Cheerio — Cheerio provides jQuery-style DOM manipulation server-side.

Traversy Media

The Three Types of Node Profilers You Should Know About — A look at standard profilers, tracing profilers and APM tools.

Ben Putano

Squeeze Node Performance with Flame Graphs — Investigating and optimizing a Node API using flame graphs.

Alexandru Olaru

▶  How to Approach Security with Node.js — A conversation with Google Engineer Mike Samuel.

Node.js Foundation

Best in Class Video Infrastructure in Two API Requests

MUX sponsor

🔧 Code and Tools

PrettyError: See Node.js Errors with Less Clutter and Better Formatting

Aria Minaei

chromium-headless-remote: Dockerized Chromium in Headless Remote Debugging Mode — Ideal to use with Puppeteer.

Kir Belevich

Be the First to Try Powerful CI/CD Pipelines in Semaphore 2.0 — Model your workflow from commit to deploy the simple way with powerful pipelines. Get your invite to try it.

Semaphore sponsor

Camaro: A High Performance XML to JSON Converter — Uses bindings to pugixml, a fast C++ XML parser.

Tuan Anh Tran

Kakapo.js: A 'Next Gen' HTTP Mocking Framework


Fiora: A Chat App Powered by Koa, MongoDB and React


fast-memoize: The 'Fastest Possible' JS Memoization Library

Caio Gondim

What Serverless Computing is and who it is for

Can you imagine a world without servers? Some companies, such as Amazon, IBM, Google, and Microsoft, can. A topic that has recently come back into focus in the software architecture market is Serverless Computing, also known as FaaS, or Function as a Service. Serverless Computing is a model in which developers neither provision nor manage the servers their applications run on, and applications can scale automatically. This means developers can focus on building great products, the core of their business, instead of worrying about managing and operating servers.

The term boomed in 2016, and many analysts went so far as to tip 2017 as the year of Serverless Computing, but that didn't happen. According to Ian Massingham, AWS's global evangelist, the company has seen growing interest in and adoption of Serverless Computing across all industries in recent years, but the big surge is still expected over the next 5 to 10 years. In Brazil, companies betting on the technology include Nubank, SemParar, and MaxMilhas, while globally Netflix and Thomson Reuters also rely on serverless computing.

"We believe most computing will be done in the cloud over the next 5 to 10 years, and we are still at the beginning of this major shift. Going forward we can expect significant geographic expansion, as well as large investments in new technologies such as machine learning, artificial intelligence, and IoT. That's why Amazon Aurora, our database and the fastest-growing service in AWS's history, has been so successful, and why we will see more and more serverless computing in the cloud with AWS Lambda, which launched in 2014 and pioneered the serverless movement," he said in an exclusive interview with Canaltech.

For Red Hat's director of solution architects, Boris Kuszka, despite the ambitious name, serverless computing does not actually eliminate servers. Serverless computing is similar in principle to edge computing, in that it brings typical server functionality closer to where it is consumed, and it is a natural extension of the cloud concept. The term serverless is, however, a misnomer in a technical sense.

"When it comes to management and cost reduction, serverless computing can simplify operations and reduce server-related costs. In simple terms, if you are not maintaining or managing your own infrastructure to run your application, you pay according to your usage (never for idle time), and you automatically get the required level of high availability, scalability, and fault tolerance from the vendor, then you are running a serverless application," he explains.

Like any other cloud service, serverless computing can benefit organizations of all sizes, in every industry and every geographic region. Despite all its advantages and benefits, however, it remains to be seen whether the technology is mature enough for the market to exploit its full potential. For those interested in learning more, four major vendors in the sector offer FaaS: Amazon Web Services with Lambda, Microsoft Azure with Azure Functions, Google Cloud Platform with Cloud Functions, and IBM Bluemix with OpenWhisk.
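To make the model concrete: in FaaS the unit of deployment is a single function that the platform invokes on demand. A minimal AWS Lambda-style handler in Python (the event fields here are illustrative):

```python
def handler(event, context):
    """Entry point invoked by the platform; there is no server to provision
    or manage. Scaling, patching, and availability are the provider's problem."""
    name = event.get("name", "world")
    return {"statusCode": 200, "body": f"Hello, {name}!"}
```

The developer writes only this function; the provider runs as many concurrent copies as traffic demands and bills per invocation.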

Viorel Tabara: An Overview of Amazon RDS & Aurora Offerings for PostgreSQL

AWS PostgreSQL services fall under the RDS umbrella, which is Amazon’s DaaS offering for all known database engines.

Managed database services offer certain advantages that are appealing to customers seeking independence from infrastructure maintenance and highly available configurations. As always, there isn't a one-size-fits-all solution. The currently available options are highlighted below:

Aurora PostgreSQL

The Amazon Aurora FAQ page provides important details that need to be considered before diving into the product. For example, we learn that the storage layer is virtualized and sits on a proprietary virtualized storage system backed up by SSD.


In terms of pricing, it must be noted that Aurora PostgreSQL is not available in the AWS Free Tier.


The same FAQ page makes it clear that Amazon doesn’t claim 100% PostgreSQL compatibility. Most (my emphasis) of the applications will be fine, e.g. the AWS PostgreSQL flavor is wire-compatible with PostgreSQL 9.6. As a result, the Wireshark PostgreSQL Dissector will work just fine.


Performance is also linked to the instance type, for example the maximum number of connections is by default configured based on the instance size.

Also important when it comes to compatibility is the page size, which has been kept at 8 KiB, the PostgreSQL default. Speaking of pages, it's worth quoting the FAQ: “Unlike traditional database engines Amazon Aurora never pushes modified database pages to the storage layer, resulting in further IO consumption savings.” This is made possible because Amazon changed the way the page cache is managed, allowing it to remain in memory in case of database failure. This feature also benefits database restart following a crash, allowing recovery to happen much faster than with the traditional method of replaying the logs.

According to the FAQ referenced above, Aurora PostgreSQL delivers three times the performance of PostgreSQL on SELECT and UPDATE operations. As per Amazon's PostgreSQL Benchmark White Paper, the tools used to measure the performance were pgbench and sysbench. Notable is the performance dependency on the instance type, region selection, and network performance. Wondering why INSERT isn't mentioned? Because of PostgreSQL's MVCC design, an UPDATE never modifies a row in place: it writes a new row version (an insert) and marks the old one dead, so UPDATE performance effectively covers the insert path as well.

In order to take full advantage of the performance improvements, Amazon recommends that applications are designed to interact with the database using large numbers of concurrent queries and transactions. This important factor is often overlooked, leading to poor performance that gets blamed on the implementation.
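The recommendation above can be sketched as follows: instead of issuing queries one after another, fan them out across a pool of workers, each holding its own connection (`run_query` is a hypothetical stand-in for a real database call):

```python
from concurrent.futures import ThreadPoolExecutor

def run_query(sql: str) -> str:
    """Hypothetical stand-in for executing one query over its own connection."""
    return f"ok: {sql}"

def run_concurrently(queries, max_workers=8):
    """Fan queries out across workers, keeping many in flight at once,
    as Amazon recommends for Aurora workloads."""
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(run_query, queries))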


There are some limitations to be considered when planning the migration:

  • huge_pages cannot be modified; however, it is on by default:

    template1=> select aurora_version();
    (1 row)
    template1=> show huge_pages;
    (1 row)
  • pg_hba cannot be used since it requires a server restart. As a side note, that must be a typo in Amazon’s documentation, since PostgreSQL only needs to be reloaded. Instead of relying on pg_hba, administrators will need to use the AWS Security Groups, and PostgreSQL GRANT.
  • PITR granularity is 5 minutes.
  • Cross-region replication is not currently available for PostgreSQL.
  • Maximum size of tables is 64TiB
  • Up to 15 read replicas


Scaling the database instance up and down is currently a manual process that can be done via the AWS Console or CLI. Automatic scaling is in the works, but according to the Amazon Aurora FAQ it will only be available for Aurora MySQL.

Event log scaling computing resources

In order to scale horizontally, applications must take advantage of the AWS SDK APIs, for example to achieve fast failover.
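For example, rather than waiting for DNS re-resolution, an application can ask the RDS API which cluster member is currently the writer. A sketch of the selection logic, operating on the member shape returned by the RDS DescribeDBClusters call (the sample data below is illustrative, echoing the instance names used later in this post, not real API output):

```python
def current_writer(cluster: dict) -> str:
    """Given one element of the DBClusters list returned by the RDS
    DescribeDBClusters API, return the current writer's instance id."""
    for member in cluster["DBClusterMembers"]:
        if member["IsClusterWriter"]:
            return member["DBInstanceIdentifier"]
    raise RuntimeError("no writer found (failover may be in progress)")

# Illustrative response fragment:
sample = {"DBClusterMembers": [
    {"DBInstanceIdentifier": "testdb", "IsClusterWriter": False},
    {"DBInstanceIdentifier": "testdb-us-east-1b", "IsClusterWriter": True},
]}
```

In practice the cluster dict would come from `boto3.client("rds").describe_db_clusters()`, polled or refreshed on connection errors.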

High Availability

Moving on to high availability: in case of primary node failure, Aurora PostgreSQL provides a cluster endpoint as a DNS A record, which is automatically updated internally to point to the replica selected to become the new master.


It's worth mentioning that if the database is deleted, any manual backup snapshots are kept, while automatic snapshots are removed.


Since replicas share the same underlying storage as the primary instance, replication lag is, in theory, in the range of milliseconds.

Amazon recommends read replicas in order to reduce the failover duration. With a read replica on standby the failover process takes about 30 seconds, while without a replica expect up to 15 minutes.

Other good news is that Logical replication is also supported, as shown on page 22.

Although the Amazon Aurora FAQ doesn’t provide details on replication as it does for MySQL, the Aurora PostgreSQL Best Practices provides a useful query for verifying the replication status:

select server_id, session_id, highest_lsn_rcvd,
       cur_replay_latency_in_usec, now(), last_update_timestamp
from aurora_replica_status();

The above query yields:

-[ RECORD 1 ]--------------+-------------------------------------
server_id                  | testdb
session_id                 | 9e268c62-9392-11e8-87fc-a926fa8340fe
highest_lsn_rcvd           | 46640889
cur_replay_latency_in_usec | 8830
now                        | 2018-07-29 20:14:55.434701-07
last_update_timestamp      | 2018-07-29 20:14:54-07
-[ RECORD 2 ]--------------+-------------------------------------
server_id                  | testdb-us-east-1b
session_id                 | MASTER_SESSION_ID
highest_lsn_rcvd           |
cur_replay_latency_in_usec |
now                        | 2018-07-29 20:14:55.434701-07
last_update_timestamp      | 2018-07-29 20:14:55-07

Since replication is such an important topic it was worth setting up the pgbench test as outlined in the benchmark white paper referenced above:

[ec2-user@ip-172-31-45-67 ~]$ whoami

[ec2-user@ip-172-31-45-67 ~]$ tail -n 2 .bashrc
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/pgsql/lib
export PATH=$PATH:/usr/local/pgsql/bin/

[ec2-user@ip-172-31-45-67 ~]$ which pgbench
[ec2-user@ip-172-31-45-67 ~]$ pgbench --version
pgbench (PostgreSQL) 9.6.8

Hint: Avoid unnecessary typing by creating a pgpass file and exporting the host, database, and user environment variables e.g.:

[root@ip-172-31-45-67 ~]# tail -n 3 ~/.bashrc
export PGDATABASE=template1

[root@ip-172-31-45-67 ~]# cat ~/.pgpass

Run the data initialization command:

[ec2-user@ip-172-31-45-67 ~]$ pgbench -i --fillfactor=90 --scale=10000 postgres

While data initialization is running, capture the replication lag using the above SQL called from within the following script:

while : ; do
   psql -t -q \
      -c 'select server_id, session_id, highest_lsn_rcvd,
                 cur_replay_latency_in_usec, now(), last_update_timestamp
                 from aurora_replica_status();' postgres
   sleep 1
done

Filtering the screenlog output through the following command:

[root@ip-172-31-45-67 ~]# awk -F '|' '{print $4,$5,$6}' screenlog.2 | sort -k1,1 -n | tail
                     513116   2018-07-30 04:30:44.394729+00   2018-07-30 04:30:43+00
                     529294   2018-07-30 04:20:54.261741+00   2018-07-30 04:20:53+00
                     544139   2018-07-30 04:41:57.538566+00   2018-07-30 04:41:57+00
                    1001902   2018-07-30 04:42:54.80136+00   2018-07-30 04:42:53+00
                    2376951   2018-07-30 04:38:06.621681+00   2018-07-30 04:38:06+00
                    2376951   2018-07-30 04:38:07.672919+00   2018-07-30 04:38:07+00
                    5365719   2018-07-30 04:36:51.608983+00   2018-07-30 04:36:50+00
                    5365719   2018-07-30 04:36:52.912731+00   2018-07-30 04:36:51+00
                    6308586   2018-07-30 04:45:22.951966+00   2018-07-30 04:45:21+00
                    8210986   2018-07-30 04:46:14.575385+00   2018-07-30 04:46:13+00

It turns out the replication lagged as much as 8 seconds!
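As a quick cross-check of that claim: the first column of the filtered output is cur_replay_latency_in_usec, so the worst sample converts to seconds like this (a trivial sketch):

```python
def latency_seconds(awk_row: str) -> float:
    """Field 1 of the awk output above is cur_replay_latency_in_usec;
    convert it to seconds."""
    return int(awk_row.split()[0]) / 1_000_000

# Worst sample from the log above:
worst = "8210986   2018-07-30 04:46:14.575385+00   2018-07-30 04:46:13+00"
```

`latency_seconds(worst)` gives roughly 8.21 seconds, matching the observed lag.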

On a related note, AWS CloudWatch metric AuroraReplicaLagMaximum doesn’t agree with the results from the above SQL command. I’d like to know why, so feedback is highly appreciated.

RDS CloudWatch max replica lag graph


  • Encryption is available and it must be enabled when the database is created, as it cannot be changed afterwards.


One important bit: ensure that the PostgreSQL work_mem is tuned appropriately, so that sorting operations do not write data to disk.
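As a rough way to reason about that advice: an in-memory sort needs on the order of rows × average row width bytes, and PostgreSQL spills the sort to disk when that exceeds work_mem. A back-of-the-envelope estimator (a heuristic only; real memory use is higher due to per-tuple overhead, and 4096 KB below is PostgreSQL's default work_mem of 4 MB):

```python
def sort_likely_spills(rows: int, avg_row_bytes: int, work_mem_kb: int) -> bool:
    """Coarse check: does an in-memory sort of this size likely exceed
    work_mem and spill to disk? Ignores per-tuple overhead, so this is a
    lower bound on actual memory use."""
    return rows * avg_row_bytes > work_mem_kb * 1024
```

For example, sorting a million ~100-byte rows against the 4 MB default clearly spills, while ten thousand such rows fit comfortably.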


Just follow the setup wizard in the AWS Console:

  1. Open up the Amazon RDS management console.

    RDS management console
  2. Select Amazon Aurora and PostgreSQL edition.

    Aurora PostgreSQL wizard
  3. Specify the DB details and note the Aurora PostgreSQL password limitations:

    Master Password must be at least eight characters long, as in
    "mypassword". It can be any printable ASCII character except "/", '"', or "@".
    Aurora PostgreSQL wizard database details
  4. Configure the database options:

    • As of this writing only PostgreSQL 9.6 is available. Use PostgreSQL on Amazon RDS if you need support for more recent versions, including beta previews.
  5. Configure the failover priority, and select the number of replicas.

  6. Set the backup retention (maximum is 35 days).

    Aurora PostgreSQL wizard backup retention
  7. Select the maintenance schedule. Automatic minor version upgrades are available, however it’s important to verify with AWS support whether or not their patch schedule can be expedited in case the PostgreSQL project releases any urgent updates. As an example, it took more than two months for AWS to push the 2018-05-10 updates.

    Aurora PostgreSQL wizard maintenance schedule
  8. If the database has been created successfully, a link to instructions on how to connect to it will be displayed:

    Aurora PostgreSQL wizard setup complete

Connecting to the database

Review the detailed instructions for the available connection options, based on the infrastructure setup. In the simplest scenario, the connection is made via a public EC2 instance.

Note: The client must be compatible with PostgreSQL 9.6.3 or above.

[root@ip-172-31-45-67 ~]# psql -U dbadmin -h template1
Password for user dbadmin:
psql (9.6.8, server 9.6.3)
SSL connection (protocol: TLSv1.2, cipher: DHE-RSA-AES256-GCM-SHA384, bits: 256, compression: off)
Type "help" for help.


Amazon provides various metrics for monitoring the database; the example below shows instance metrics:

RDS instance metrics

RDS for PostgreSQL

This is an offering allowing more granularity in terms of configuration choices. For example, in contrast to Aurora that uses a proprietary storage system, RDS offers configurable storage using EBS volumes that can be either General Purpose SSD (GP2), or Provisioned IOPS, or magnetic (not recommended).

In order to assist large installations that require customization not available in the Aurora offering, Amazon has recently released its best practices recommendations, which apply only to RDS.

High availability must be configured manually (or automated using any of the known AWS tools), and it is recommended to set up a Multi-AZ deployment.

Replication is implemented using the PostgreSQL native replication.

There are some limits for PostgreSQL DB instances that need to be considered.

With the above notes in mind here’s a walkthrough for setting up an RDS PostgreSQL Multi-AZ environment:

  1. From the RDS Management Console start the wizard

    RDS PostgreSQL wizard
  2. Choose between a production and a development setup.

    RDS PostgreSQL wizard database use case selection
  3. Enter the details about your new database cluster.

    RDS PostgreSQL wizard DB details
    RDS PostgreSQL wizard database settings
  4. On the next page setup networking, security, and maintenance schedule:

    RDS PostgreSQL wizard advanced settings
    RDS PostgreSQL wizard security and maintenance


Amazon RDS Services for PostgreSQL include RDS PostgreSQL and Aurora PostgreSQL, both managed DaaS offerings. Packed with plenty of features and solid backend storage, they do have some limitations compared to a traditional setup; with careful planning, however, these offerings can provide a well-balanced cost-to-functionality ratio. Amazon RDS for PostgreSQL is targeted at users requiring more options for configuring their environments, and is generally more expensive. The majority of users will benefit from starting with Aurora PostgreSQL and working their way up to more complex configurations.

Knative build component extends Kubernetes
(IT) AWS DevOps Consultant - Cardiff

Rate: £450 - £550.00 per Day   Location: Cardiff, Wales   

My client, based in Cardiff, is urgently looking for an AWS Engineer (Automation & Serverless Specialist) to join the team on a lengthy contract basis. The suitable engineer will help with a Big Data cloud project. The job will entail leading the design, provisioning, build and pipelines of Azure cloud infrastructure to support the Big Data statistical modelling projects developed using R and Java. The suitable engineer will need to be fluent and a visionary with a cloud toolbox of automation tools (application & database), providing assistance to development, test and operational teams with the execution and rollout of a Big Data project with its software procedures and supporting pipelines. Engagement with multiple vendors and integration points is a must, as is respect for integration paths and an understanding of dependencies and limitations. The contract will initially be 3 to 6 months (likely to be extended), based in Cardiff, 5 days a week, with potential European travel.

Technical competencies:
  • Operational background: Linux (RedHat/OVM)
  • Networking: TCP/IP, DNS, iptables, tcpdump
  • Network, application and server hardening techniques
  • Scripting experience; knowledge of Puppet (essential); configuration management
  • Perl, SSH, .sh, PowerShell scripting; general scripting (desirable)
  • AD/LDAP setup & management
  • Knowledge of DNS, NFS, rsync

Core experience:
  • A senior (5-6 years of Azure and/or AWS IaaS/PaaS experience) cloud expert
  • Understanding and experience of software design
  • Programming languages such as Java and Python; awareness of the R language (experience of CI/CD of R projects an advantage)
  • Engineering CI/CD automation, from an application release/developer point of view
  • Understanding, building and delivery of container technology: Docker, Kubernetes, Vagrant
Security authentication protocol (tokens, SSH keys, Oauth) Person doesn't have to be developer but needs to know what app developers need and how they work Person needs to work according principle of: first time manual is ok any repetition of same job will be automated Person likes to travel and meet others and work as a (scrum) team Person that has awareness of Solution Architect and as an engineer execute and build the design Possess experience on various domains including in Artificial Intelligence, Finance, Customer Experience Person can do Business Requirement gathering and analysis Experience of Azure and big data projects Cloud storage design and data management Cloud versioning and packaging tools and services Data transport tools and services (data on premise to cloud) ETL Awareness of Hortonworks, Cloudera, Hadoop, Hive Experience of test automation and plugging them into a delivery pipeline Jenkins, Maven, GIT, Artifactory, Nexus Experience on both AWS and Azure, having designed several different architectures for different purposes. Experience with configuration management (at least 2 out of Ansible, Puppet, Chef). Including repos to show for. Infrastructure-as-code mindset,
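The "automate any repetition" principle above is the heart of configuration management: every step is written as an idempotent check-then-apply action that reports whether it changed anything. As a minimal, hypothetical sketch (the `ensure_line` helper below is illustrative, not part of any named tool; it mirrors the "changed" vs "ok" convention used by tools such as Ansible and Puppet):

```python
import os


def ensure_line(path, line):
    """Idempotently ensure `line` is present in the file at `path`.

    Returns True if a change was made, False if the desired state
    already holds -- so repeated runs converge and report 'ok'.
    """
    try:
        with open(path) as f:
            lines = f.read().splitlines()
    except FileNotFoundError:
        lines = []
    if line in lines:
        return False  # already compliant: do nothing
    with open(path, "a") as f:  # apply the change
        f.write(line + "\n")
    return True
```

Run twice against the same file, the first call applies the change and returns True; the second detects the state already holds and returns False, which is what makes the step safe to automate in a pipeline.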
Rate: £450 - £550.00 per Day
Type: Contract
Location: Cardiff, Wales
Country: UK
Contact: Sean Walsh
Advertiser: CPS Group (UK) Ltd
Start Date: ASAP
Reference: JS-J15295
