
          AWS Announces General Availability Of Amazon Aurora Serverless


          Solutions Architect - Amazon Web Services - San Francisco, CA
DevOps, Big Data, Machine Learning, Serverless computing etc. High level of comfort communicating effectively across internal and external organizations....
From - Thu, 26 Jul 2018 08:17:05 GMT - View all San Francisco, CA jobs
          Aurora Serverless MySQL Enters GA
AWS has announced that the auto-scaling Aurora Serverless MySQL has reached GA ("Aurora Serverless MySQL Generally Available"). For now, the available regions are limited: Aurora Serverless for Aurora MySQL is available now in US East (N. Virginia), US East (Ohio), US West (Oregon), Europe (Ireland). Billing is per second, but with a 5-minute minimum: You pay a flat rate per second of ACU usage, with a minimum of 5 minutes of […]
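The pricing rule quoted above (a flat per-second ACU rate, with a 5-minute minimum) can be sketched as a tiny cost function. This is an illustrative model only; the rate argument is a placeholder, not AWS's actual regional price:

```python
def aurora_serverless_charge(active_seconds, avg_acus, rate_per_acu_second):
    """Per-second ACU billing with a 5-minute (300 s) minimum per usage
    period, as described in the announcement. `rate_per_acu_second` is a
    placeholder parameter; look up the real price for your region."""
    billed_seconds = max(active_seconds, 300)
    return billed_seconds * avg_acus * rate_per_acu_second

# A database active for only 60 s at 2 ACUs is still billed for 300 s:
cost = aurora_serverless_charge(60, 2, 0.0001)
```

The interesting consequence of the minimum is that very short, very frequent activations are billed as if they each lasted five minutes.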
          AWS Announces General Availability of Amazon Aurora Serverless
...Poly, and we have high standards for performance, scalability and high availability," said Alison Robinson, Associate Vice President, Cal Poly Information Technology Services. "Amazon Aurora meets our high standards. Given LMS usage patterns, with peaks during the first week of class ...

          Pick a serverless fight: A comparative research of AWS Lambda, Azure Functions ...

The saturation point is nowhere to be seen in the serverless discussion, with tons of news coming online every day and numerous reports trying to take the pulse of one of the hottest topics out there.

This time, however, we are not going to discuss any of the above. This article is going to be a bit more…academic!

At the USENIX Annual Technical Conference ’18, which took place in Boston, USA in mid-July, an amazingly interesting piece of academic research was presented.

The paper “Peeking Behind the Curtains of Serverless Platforms” is a comparative study and analysis of the three big serverless providers: AWS Lambda, Azure Functions and Google Cloud Functions. The authors (Liang Wang, Mengyuan Li, Yinqian Zhang, Thomas Ristenpart, Michael Swift) conducted the most in-depth study so far of resource management and performance isolation in these three providers.

SEE ALSO: The state of serverless computing: Current trends and future prospects

The study systematically examines a series of issues related to resource management, including how quickly function instances can be launched, function instance placement strategies, and function instance reuse. What’s more, the authors examine the allocation of CPU, I/O and network bandwidth among functions and the ensuing performance implications, as well as a couple of exploitable resource accounting bugs.

Did I get your attention now?

In this article, we present an overview of the most interesting results from the original paper.

Let’s get started!


First things first. Let’s have a quick introduction to the methodology of this study.

The authors conducted this research by integrating all the necessary functionalities and subroutines into a single function that they call a measurement function.

According to the definition found in the paper, this function performs two tasks:

- Collect invocation timing and function instance runtime information
- Run specified subroutines (e.g., measuring local disk I/O throughput, network throughput) based on received messages
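The two tasks above can be pictured as a single AWS Lambda handler. The following is a hypothetical reconstruction for illustration, not the paper's actual measurement code; the handler name, the `/proc/uptime` probe, and the disk I/O subroutine are my assumptions about what such a function might contain:

```python
import os
import time

def read_uptime():
    """Host VM uptime in seconds from /proc/uptime (None if unavailable)."""
    try:
        with open("/proc/uptime") as f:
            return float(f.read().split()[0])
    except OSError:
        return None

def measure_disk_io(path="/tmp/io_probe", size=1024 * 1024):
    """Time a 1 MB synced write to local disk as a crude I/O throughput probe."""
    start = time.perf_counter()
    with open(path, "wb") as f:
        f.write(os.urandom(size))
        f.flush()
        os.fsync(f.fileno())
    elapsed = time.perf_counter() - start
    os.remove(path)
    return size / elapsed  # bytes per second

def handler(event, context=None):
    """Measurement function: always collect timing/runtime info, then run
    whichever subroutines the invocation message asks for."""
    result = {"invoked_at": time.time(), "vm_uptime_s": read_uptime()}
    if "disk_io" in event.get("subroutines", []):
        result["disk_io_bps"] = measure_disk_io()
    return result
```

Bundling everything into one function lets the researchers reuse warm instances across many different measurements, which matters when probing instance reuse itself.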

In order to have a clear overview of the specifications for each provider, the following table provides a comparison of function configuration and billing in the three services.

[Table omitted: comparison of function configuration and billing in AWS Lambda, Azure Functions, and Google Cloud Functions]

The authors examined how instances and VMs are scheduled in the three serverless platforms in terms of instance coldstart latency, lifetime, scalability, and idle recycling, and the results are extremely interesting.

Scalability and instance placement

One of the most intriguing findings, in my opinion, concerns the scalability and instance placement of each provider. There is a significant discrepancy among the three big services, with AWS being the best regarding support for concurrent execution:

AWS: “3,328MB was the maximum aggregate memory that can be allocated across all function instances on any VM in AWS Lambda. AWS Lambda appears to treat instance placement as a bin-packing problem, and tries to place a new function instance on an existing active VM to maximize VM memory utilization rates.”

Azure: Despite the fact that Azure documentation states that it will automatically scale up to at most 200 instances for a single Nodejs-based function, with at most one new function instance launched every 10 seconds, the authors' tests of Nodejs-based functions showed “at most 10 function instances running concurrently for a single function”, no matter how the interval between invocations was changed.

Google: Contrary to what Google claims on how HTTP-triggered functions will scale to the desired invocation rate quickly, the service failed to provide the desired scalability for the study. “In general, only about half of the expected number of instances, even for a low concurrency level (e.g., 10), could be launched at the same time, while the remainder of the requests were queued.”

Interesting fact: More than 89% of VMs tested achieved 100% memory utilization.
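The bin-packing behavior attributed to AWS Lambda above can be sketched with a simple first-fit placement over the observed 3,328 MB per-VM memory cap. This is an illustrative model, not AWS's actual scheduler:

```python
VM_MEMORY_MB = 3328  # max aggregate instance memory per VM observed for AWS Lambda

def place_instances(instance_sizes_mb, vm_capacity=VM_MEMORY_MB):
    """First-fit placement: put each new instance on the first active VM
    with enough free memory, starting a new VM only when none fits.
    This maximizes memory utilization on existing VMs, matching the
    behavior the paper describes."""
    vms = []  # each VM is represented as a list of instance memory sizes
    for size in instance_sizes_mb:
        for vm in vms:
            if sum(vm) + size <= vm_capacity:
                vm.append(size)
                break
        else:
            vms.append([size])
    return vms

# Twenty 512 MB instances pack 6 per VM (3,072 of 3,328 MB), so 4 VMs total:
vms = place_instances([512] * 20)
```

A greedy first-fit policy like this also explains the "more than 89% of VMs at 100% memory utilization" observation: the scheduler keeps topping up existing VMs before spinning up new ones.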

Coldstart and VM provisioning

Concerning coldstart (the process of launching a new function instance) and VM provisioning, AWS Lambda appears to be at the top of its game:

AWS: Two types of coldstart events were examined: “a function instance is launched (1) on a new VM that we have never seen before and (2) on an existing VM. Intuitively, case (1) should have significantly longer coldstart latency than (2) because case (1) may involve starting a new VM.” However, the study shows that “case (1) was only slightly longer than (2) in general. The median coldstart latency in case (1) was only 39 ms longer than (2) (across all settings). Plus, the smallest VM kernel uptime (from /proc/uptime) that was found was 132 seconds, indicating that the VM has been launched before the invocation.” Therefore, these results show that AWS has a pool of ready VMs! What’s more, concerning the extra delays in case (1), the authors argue that they are “more likely introduced by scheduling rather than launching a VM.”

Azure: According to the findings, it took much longer to launch a function instance in Azure, despite the fact that their instances are always assigned 1.5GB memory. The median coldstart latency was 3,640 ms in Azure.

Google: “The median coldstart latency in Google ranged from 110 ms to 493 ms. Google also allocates CPU proportionally to memory, but in Google memory size has a greater impact on coldstart latency than in AWS.”
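The VM-pool inference in the AWS paragraph above boils down to a simple check: if the VM kernel has been up much longer than the function instance itself, the VM must have been launched before the invocation that created the instance. A sketch of that logic (the 5-second slack threshold is my assumption, not the paper's):

```python
def vm_prewarmed(vm_uptime_s, instance_age_s, slack_s=5.0):
    """Infer whether a VM came from a pool of ready VMs: the kernel being
    up noticeably longer than the function instance implies the VM was
    provisioned ahead of the invocation. `slack_s` absorbs measurement
    noise and the instance's own startup time."""
    return (vm_uptime_s - instance_age_s) > slack_s

# The paper's smallest observed kernel uptime, 132 s, on a fresh coldstart
# implies a pre-warmed VM:
prewarmed = vm_prewarmed(132.0, 0.5)
```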

SEE ALSO: What do developer trends in the cloud look like?

In addition to the tests described above, the research team “collected the coldstart latencies of 128 MB, python 2.7 (AWS) or Nodejs 6.* (Google and Azure) based functions every 10 seconds for over 168 hours (7 days), and calculated the median of the coldstart latencies collected in a given hour.” According to the results, “the coldstart latencies in AWS were relatively stable, as were those in Google (except for a few spikes). Azure had the highest network variation over time, ranging from about 1.5 seconds up to 16 seconds.” Take a look at the figure below:


Source: “Peeking Behind the Curtains of Serverless Platforms”, Figure 8, p. 139
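The per-hour median aggregation the team describes is straightforward to reproduce. A sketch of the aggregation step (not the authors' code) over `(timestamp, latency)` samples collected every 10 seconds:

```python
from statistics import median

def hourly_medians(samples):
    """samples: iterable of (timestamp_s, latency_ms) pairs.
    Groups samples into hour-long buckets and returns a dict mapping
    hour index -> median coldstart latency for that hour."""
    by_hour = {}
    for ts, latency in samples:
        by_hour.setdefault(int(ts // 3600), []).append(latency)
    return {hour: median(vals) for hour, vals in sorted(by_hour.items())}
```

Taking the median per hour (rather than the mean) is what keeps occasional spikes, like the ones the authors saw in Google, from distorting the hour's value.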

Instance lifetime

The research team defines instance lifetime as “the longest time a function instance stays active.”

Keeping in mind that users prefer longer lifetimes, the results show Azure winning this one: Azure Functions provides significantly longer lifetimes than AWS and Google, as you can see in the figures below:


Source: “Peeking Behind the Curtains of Serverless Platforms”, Figure 9, p.140

Idle instance recycling

Instance maximum idle time is defined by the authors as “the longest time an instance can stay idle before getting shut down.” Specifically for each service provider, the results show:

AWS: An instance could usually stay inactive…
          Aurora Serverless MySQL Generally Available
You may have heard of Amazon Aurora, a custom-built MySQL and PostgreSQL compatible database born and built in the cloud. You may have also heard of AWS Lambda, which allows you to build and run applications and services without thinking about instances. These are two pieces of the growing AWS technology story that we’re really excited […]
          Sumo Logic Expands Certification Program to Empower Users to Better Secure Moder ...
New Certification Level Helps Users Learn How to Bolster the Security of Environments with Advanced Cloud Security Analytics and Threat Intelligence to Enable DevSecOps in the Cloud


REDWOOD CITY, Calif., Aug. 8, 2018 -- Sumo Logic, the leading cloud-native, machine data analytics platform that delivers continuous intelligence, today announced a new certification level focused on security analytics as part of its existing certification program. The Sumo Security User certification will be available to all Sumo Logic users as part of Sumo Logic’s upcoming Illuminate user conference, taking place Sept. 12-13, 2018 in Burlingame, Calif.

A recent survey indicates 80 percent of enterprises are frustrated with outdated security tools and are looking for new security solutions to help with transitions to the cloud, modern application architectures and with overall digital transformation. A significant number of customers are using Sumo Logic’s cloud-native security analytics solution to solve problems legacy security tools have failed to address.

Sumo Logic’s multi-level certification program provides its more than 50,000 users with the knowledge, skills and competencies to harness the power of machine data analytics and maximize investments in the Sumo Logic platform. The new Sumo Security User certification will help users learn how to leverage Sumo Logic’s centralized security monitoring, threat detection, correlation and alert investigation capabilities across all the phases of the security operations workflow.

The Sumo Logic certification program now includes four levels of certification ― Pro User, Power User, Power Admin and Sumo Security User ― based on the level of usage and expertise of the Sumo Logic platform. Specifically:

Sumo Pro User: Sumo Pro Users possess broad knowledge about analyzing logs and metrics, and have familiarity with the Sumo Logic service related to simple data searching, filtering, parsing and analyzing. Taking advantage of Sumo Logic Apps, Certified Sumo Pro Users can quickly and easily get up and running using the out-of-the-box content to start monitoring their data, identifying trends and staying on top of their critical events.

Sumo Power User: Sumo Power Users possess deep technical knowledge on how to analyze and correlate their logs and metrics to easily identify those critical events that are important to the organization. In addition to taking advantage of out-of-the-box content, Certified Sumo Power Users can build dashboards and alerts for their custom apps, unlocking the power of Sumo Logic to analyze, measure and monitor the overall health of their environments.

Sumo Power Admin: Sumo Power Admins possess deep technical knowledge on how to set up, manage and optimize their Sumo Logic solution. In addition to securing and managing their Sumo Logic environment, Certified Sumo Power Admins can design and deploy a data collection strategy that fits their infrastructure. Keeping an eye on the pulse, Sumo Power Admins can also optimize data querying to fit their searching patterns.

Sumo Security User: With security threats on the rise, users will learn how Sumo Logic’s threat intelligence capabilities can help them stay on top of their environment by matching IOCs like IP addresses, domain names, URLs, email addresses, MD5 hashes and more, to increase the velocity and accuracy of threat detection and strengthen overall security posture.

“The threat landscape is only growing bigger by the day, and organizations are looking for disruptive security analytics platforms like Sumo Logic that provide unique cloud-native solutions, which converge detection and investigation workflows across silos in the typical defense,” said Dean Thomas, vice president of customer success, Sumo Logic. “We’re very excited to launch our new Sumo Security User certification as it will give our users the hands-on knowledge to adapt and accelerate the cloud, application and digital transformation transitions that characterize modernizing IT.”

For more information on the Sumo Security User certification program, or for a demo of our cloud security analytics and threat detection, investigation and correlation capabilities, stop by our booth (2009) at Black Hat this week from Aug. 8-9, 2018 at the Mandalay Bay in Las Vegas.

          From Agile to Serverless and Beyond

The microservices architecture was born as a technological answer to the iterative Agile development methodology. In the early days of microservices, many companies were doing some form of Agile (XP, Scrum, Kanban, or a broken mixture of these) during development, but the existing software architectures didn't allow an incremental way of design and deployment. As a result, features were developed in fortnight iterations but deployed in six- to twelve-month cycles. Microservices came as a panacea at the right time, promising to address all these challenges. Architects and developers strangled the monoliths into tens of services, which enabled them to touch and change different parts of the system without breaking the rest (in theory).

Microservices on their own shone a light on the existing challenges of distributed systems and created new ones as well. Creating tens of new services didn't mean they were ready to deploy into production and use. The process of releasing them and handing them over to Ops teams had to be improved. While some fell for the extreme of "You Build It, You Run It," others joined the DevOps movement. DevOps meant better CI/CD pipelines, better interaction between Devs and Ops, and everything it takes. But a practice without the enabling tools wasn't a leap, and burning large VMs per service didn't last for very long. That led to containers with the Docker format, which took over the IT industry overnight. Containers came as a technical solution to the pain of the microservices architecture and the DevOps practice. With containers, applications could be packaged and run in a format that the Devs and Ops would both understand and use. Even in the very early days, it was clear that managing tens or hundreds of containers would require automation, and Kubernetes came from the heavens and swept all the competition with a swing.

          Episode 12: MargieMap/Mad Russian Scientist/Serverless Server (LIVE AT NEJS)
On this episode of TalkScript Neil and Bryan make the rounds at NEJS 2018. Carmen Bourlon introduces us to the world of service workers and how support for intermittent connectivity and offline access is one of the areas of web development that has the potential to impact everyone. Next on is Andrey Sitnik who, inspired […]
          AWS Announces General Availability of Amazon Aurora Serverless
Amazon Aurora Serverless makes it easy and cost-effective to run applications with intermittent or cyclical usage by auto-scaling database capacity with per-second billing. Thousands of customers, including NTT DOCOMO, Cognizant, Pagely, CB Insights, California Polytechnic State University, Currencycloud, and CourseStorm took part in the preview, saving time and reducing the cost of managing and operating database servers. SEATTLE--(BUSINESS WIRE)--Aug. 9, 2018-- Today, Amazon Web Services, Inc. (AWS), an Amazon.com company (NASDAQ: AMZN), announced general availability of Amazon Aurora Serverless. Aurora Serverl...
          AWS Serverless Architecture Project
Create a multi-tenant project to monitor AWS account security best practices using python scripts as backend and frontend using Angular / react. Most of the AWS SAM architecture technologies like dynamodb,... (Budget: ₹75000 - ₹150000 INR, Jobs: Amazon Web Services, node.js, NoSQL Couch & Mongo, Python)
          Serverless Computing Services Market : Popular Trends & Technological advancements to Watch Out for Near Future (2018-2025)
Qyresearchreports has added the new market research report “Global Serverless Computing Services Market Size, Status and Forecast 2018-2025” to its huge collection of research reports. The key players covered in this study: AWS, Google, Alibaba, Huawei, Dell Boomi, IBM Cloud, Microsoft, Joyent, Salesforce. Market segment by application: Personal, Small Enterprises, Middle Enterprises, Large Enterprises.

          Episode 143: Serverless now just means “programming”
After some rumination, Coté thinks that the people backing “serverless” are just wangling to make it mean “doing programming with containers on clouds.” That is, just programming. At some point, it meant an event-based system hosted in public clouds (AWS Lambda). Also, we discuss Cisco buying Duo, potential EBITDA problems from Broadcom buying CA, and robot pizza. Of course, with Coté having just moved to Amsterdam, there’s some Amsterdam talk.

Sponsored by Datadog: This episode is sponsored by Datadog, and this week Datadog wants you to know about Watchdog. Watchdog automatically detects performance problems in your applications without any manual setup or configuration. By continuously examining application performance data, it identifies anomalies, like a sudden spike in hit rate, that could otherwise have remained invisible. Once an anomaly is detected, Watchdog provides you with all the relevant information you need to get to the root cause faster, such as stack traces, error messages, and related issues from the same timeframe. Sign up for a free trial today.

Relevant to your interests:
- Everyone’s favorite Outlook feature, now in G Suite.
- Do we know what “serverless” is yet? Someone named that got some funding.
- Related, Istio 1.0: “It is aiming to be a control plane, similar to the Kubernetes control plane, for configuring a series of proxy servers that get injected between application components. It will actually look at HTTP response codes and if an app component starts throwing more than a number of 500 errors, it can redirect the traffic.” MUST BE THIS HIGH TO RIDE!
- Follow-up: Brenon at 451 says Broadcom is gonna have to sell off some stuff to make its margin targets. The mainframe profits are too high, while distributed is low enough to throw the margins out of whack. So, sell off distributed to Micro Focus? To PE BMC? Or a bad analysis.
- Austin Regional Clinic is in Apple Health records. Pretty nifty that it sucks them all in... sort of.
- Robots make your pizza. Featured in that OKR book. For real.
- AWS: still makes lots of money, market-leader by revenue. See also Gartner on the topic: “The worldwide infrastructure as a service (IaaS) market grew 29.5 percent in 2017 to total $23.5 billion, up from $18.2 billion in 2016, according to Gartner, Inc. Amazon was the No. 1 vendor in the IaaS market in 2017, followed by Microsoft, Alibaba, Google and IBM.” Gartner estimates that AWS is ~4 times as big as the next, in 2017.
- Tibco might be sold off: “Vista took Tibco private in 2014 in a deal valued at about $4.3 billion including debt. The company, based in Palo Alto, California, makes software that clients use to collect and analyze data in industries from banking to transportation. It currently has about $2.9 billion of debt, according to data compiled by Bloomberg.”
- Cisco Announces Intent to Acquire Duo Security, $2.35bn.
- What’s this ABN e.dentifier thing?
- Apprenda shuts down. SASSY!

Conferences, et. al.:
- Sep 24th to 27th - SpringOne Platform, in DC/Maryland (crabs!): get $200 off registration with the code S1P200_Cote. Also, check out the Spring One Tour - coming to a city near you!
- DevOps Talks Sydney August 27-28 - John Willis, Nathen Harvey!
- Cloud Expo Asia October 10-11
- DevOps Days Singapore October 11-12
- DevOps Days Newcastle October 24-25
- DevOps Days Wellington November 5-6

Listener Feedback: Lindsay from London got a sticker and tells us: “Really enjoy the podcast, just the right level of humour, sarcasm and facts for a cynical Brit like me.”

SDT news & hype:
- Join us in Slack.
- Buy some t-shirts! DISCOUNT CODE: SDTFSG (40% off)
- Send your name and address and we will send you a sticker.
- Brandon built the Quick Concall iPhone App and he wants you to buy it for $0.99.

Recommendations:
- Brandon: Masters of Doom.
- Matt: Deadpool 2. If you liked the first, you’ll like the second.
- Coté: 1980’s Action Figure tumblr - now that I have fast Internet, tumblr is workable. Mask, Cops, sweet Dune figures, generic GI Joe figures. Dutch Internet, son! SHIT DOG!
          Deploy serverless application programmatically?

@gaupoit wrote:


Thanks to Serverless, I easily deployed an AWS Lambda function to my own AWS account. Now, I want to deploy my Lambda function to my customers’ AWS accounts automatically. I will provide them with a UI where they can enter their AWS credentials (Key + Secret). After that, it will call my API to automatically deploy my solution to their AWS accounts. May I ask how I can programmatically deploy a Lambda function using Serverless?
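One way to approach this (a sketch, not an official Serverless Framework feature): have the backend run the `serverless deploy` CLI itself, scoping each customer's credentials to a single subprocess environment. The function names here are made up for illustration:

```python
import os
import subprocess

def credential_env(access_key, secret_key, region="us-east-1"):
    """Build a child-process environment carrying one customer's AWS
    credentials, without mutating our own process environment."""
    env = dict(os.environ)
    env.update(AWS_ACCESS_KEY_ID=access_key,
               AWS_SECRET_ACCESS_KEY=secret_key,
               AWS_DEFAULT_REGION=region)
    return env

def deploy_for_customer(project_dir, access_key, secret_key, region="us-east-1"):
    """Run `serverless deploy` in the project directory with the
    customer's credentials visible only to that one subprocess."""
    return subprocess.run(["serverless", "deploy"],
                          cwd=project_dir,
                          env=credential_env(access_key, secret_key, region),
                          capture_output=True, text=True)
```

A caveat worth noting: collecting customers' long-lived access keys is risky; a cross-account IAM role that your account assumes via STS is the more common pattern for deploying into someone else's account.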

Posts: 1

Participants: 1

Read full topic

          Micro Services and node_modules

@tbkh91 wrote:

I’m in the process of porting a Firebase project to AWS and decided to use Serverless to manage the API and database. Due to the size of the project (and the 200 resource limit on CloudFormation), we are having to split the system into several smaller services. While that in itself is obviously an inconvenience, it poses a particular problem with regard to managing our node_modules. Our desired file structure is as follows:

---> serverless.yml
---> {...lambdaFunctionHandlers.js}
---> serverless.yml
---> {...lambdaFunctionHandlers.js}
---> serverless.yml
---> {...lambdaFunctionHandlers.js}

Using this approach we only have to manage our dependencies in one place rather than once for each service. As we anticipate 15+ services, this would definitely be the easier way for us to manage things, but I’m not entirely sure: is this approach achievable/feasible?

Posts: 1

Participants: 1

Read full topic

          Cannot deploy because: "State Machine Already Exists"

@vayias wrote:

Hey guys,

I have a serverless stack that I define in serverless.yml.
That includes Lambda functions, Step Functions, etc.

I have been able to deploy my entire stack every day for a month now.

Today, all of a sudden, my deploy fails because it says one of my state machines already exists ("State Machine Already Exists").

I understand that it fails because the name is the same. Aren’t resources supposed to be updated?

I’m really confused about how this used to work, and about how other teams handle deployments if this is something that happens. It’s not acceptable to delete the stack in a production service in order to redeploy.

Thank you in advance,
Happy to discuss this further

Posts: 2

Participants: 2

Read full topic

          Article: Serverless Still Requires Infrastructure Management

Serverless architectures employ a wider range of cloud services and make infrastructure stacks more heterogeneous. To effectively manage infrastructure in this era, practices and tools have to evolve.

By Rafal Gancarz
          Software Defined Talk Episode 143: Serverless now just means “programming”
Check out the new episode!
          Chrome Redesign, Homeland Security, & More…

Keeping you up to date with the latest news and most interesting stories in software development, welcome to another weekly installment of the dev digest. This week’s articles discuss security, UX, mobile development, IoT, and much more. Enjoy!

Top Stories
- A new type of Spectre attack allows for leaked secrets, even without having to run attacker code on a victim system
- Chrome is getting another design/UX revamp, this time making tabs simpler
- The Department of Homeland Security will provide cybersecurity support to America’s infrastructure agencies

Development Insights
- Create a cohesive user experience by breaking the silos and encouraging communication between web app and mobile app teams
- IoT is a major trend in many industries, but there are still obstacles to IoT adoption
- Use state patterns to keep your main class in Java focused on its designated job
- Here’s how to deploy a Node.js application to serverless AWS Lambda

Professional Advice
- Open office floorplans have been shown to decrease productivity, but Basecamp’s Jason Fried has library rules in place to combat that
- New to software development or still early in your career? Here’s how to set yourself up for career success. Early in your career, it makes sense to learn as much as you can and get exposure to a lot of different projects. Later in your career, though, you might consider choosing and mastering a niche to maximize your expertise and value.

Top Tips from Intertech
- Check out our new roundup of the top tutorials in Microservices
- Our founder, Tom Salonek, shares his insights on time management and leadership

Have a Laugh

          AWS Announces General Availability Of Amazon Aurora Serverless
Today, Amazon Web Services, Inc. (AWS), an Amazon.com company, announced general availability of Amazon Aurora Serverless.
          Serverless Architecture Market Worth $14.93 Billion by 2023

PUNE, India, August 10, 2018 /PRNewswire/ -- According to a new market research report "Serverless Architecture Market by Service type (Automation and Integration, Monitoring, API Management, Security, Support & Maintenance, and Training & Consulting), Deployment Model, Organization Size, ...


          Azure Cosmos DB

I started my sabbatical work with the Microsoft Azure Cosmos DB team recently. I have been in talks and collaboration with the Cosmos DB people, and specifically with Dharma Shukla, for over 3 years. I have been very impressed with what they were doing and decided that this would be the best place to spend my sabbatical year.

The travel and settling down took time. I will write about those later. I will also write about my impressions of the greater Seattle area as I discover more about it. This was a big change for me after having stayed in Buffalo for 13 years. I love the scenery: everywhere I look I see a gorgeous lake or hill/mountain scene. And, oh my God, there are blackberries everywhere! It looks like the Himalayan blackberry is the invasive species here, but I can't complain. As I go on an evening stroll with my family, we snack on blackberries growing along the sidewalk. It seems like we are the only ones doing so ---people in the US are not much used to eating fruit straight off trees/bushes.

Azure Cosmos DB

Ok, coming back to Cosmos DB... My first impressions of Cosmos DB --from an insider perspective this time-- are also very positive and overwhelming. It is hard not to get overwhelmed. Cosmos DB provides a global highly-available low-latency all-in-one database/storage/querying/analytics service to heavyweight demanding businesses. Cosmos DB is used ubiquitously within Microsoft systems/services, and is also one of the fastest-growing services used by Azure developers externally. It manages 100s of petabytes of indexed data, serves 100s of trillions of requests every day from thousands of customers worldwide, and enables customers to build highly-responsive mission-critical applications.

I find that there are a lot of things to learn before I can start contributing in a meaningful and significant way to the Cosmos DB team. So I will use this blog to facilitate, speed up, and capture my learning. The process of writing helps me detach, see the big picture, and internalize stuff better. Moreover my blog also serves as my augmented memory and I refer back to it for many things.

Here is my first attempt at an overview post. As I get to know Cosmos DB better, I hope to give you other more-in-depth overview posts.

What is Cosmos DB?

Cosmos DB is Azure's cloud-native database service.

(The term "cloud-native" is a loaded key term, and the team doesn't use it lightly. I will try to unpack some of it here, and I will revisit this in my later posts.)

It is a database that offers frictionless global distribution across any number of Azure regions ---50+ of them! It enables you to elastically scale throughput and storage worldwide on demand quickly, and you pay only for what you provision. It guarantees single-digit-millisecond latencies at the 99th percentile, supports multiple consistency models, and is backed by comprehensive service level agreements (SLAs).

I am most impressed with its all-in-one capability. Cosmos DB seamlessly supports many APIs, data formats, consistency levels, and needs across many regions. This alleviates data integration pain, which is a major problem for all businesses. The all-in-one capability also eliminates the developer effort wasted in keeping multiple systems with different-yet-aligned goals in sync with each other. I had written earlier about the Lambda versus Kappa architectures, and how the pendulum has swung all the way to Kappa. Cosmos DB's all-in-one capability gives you the Kappa benefits.

This all-in-one capability backed with global-scale distribution enables new computing models as well. The datacenter-as-a-computer paper from 2009 had talked about the vision of warehouse-scale machines. By providing a frictionless globe-scale replicated database, Cosmos DB opens the way to thinking about the globe-as-a-computer. One of the use cases I heard from some Cosmos DB customers amazed me. Some customers allocate a spare region (say Australia), where they have no read/write clients, as an analytics region. This spare region still gets consistent data replication, stays very up-to-date, and is employed for running analytics jobs without jeopardizing the access latencies of real read-write clients. Talk about disaggregated computation and storage! This is disaggregated storage, computing, analytics, and serverless across the globe. Under this model, the globe becomes your playground.

This disaggregated yet all-in-one computing model also manifests itself in how customers adopt and settle into Cosmos DB. Customers often come for the query-serving level, which provides high throughput and low latency via SSDs. Then they get interested and invest in the lower-throughput but higher-capacity, cheaper storage options to store terabytes and petabytes of data. They then diversify and enrich their portfolio further with the analytics, event-driven lambda, and real-time streaming capabilities provided in Cosmos DB.

There is a lot to discuss, but in this post I will only make a brief introduction to the issues/concepts, hoping to write more about them later. My interests are of course at the bottom of the stack at the core layer, so I will likely dedicate most of my coming posts to the core layer.

Azure Cosmos DB
Core layer

The core layer provides capabilities that the other layers build upon. These include global distribution, horizontally and independently scalable storage and throughput, guaranteed single-digit millisecond latency, tunable consistency levels, and comprehensive SLAs.

Resource governance is an important and pervasive component of the core layer. Request units (which allocate CPU, memory, and throughput) are the currency used to provision resources. Provisioning a desired level of throughput through dynamically changing access patterns and across a heterogeneous set of database operations presents many challenges. To meet the stringent SLA guarantees for throughput, latency, consistency, and availability, Cosmos DB automatically employs partition splitting and relocation. This is challenging to achieve because Cosmos DB also handles fine-grained multi-tenancy, with hundreds of tenants sharing a single machine and thousands of tenants sharing a single cluster, each with diverse workloads and isolated from the rest. Adding even more to the challenge, Cosmos DB supports scaling database throughput and storage independently, automatically, and swiftly to address customers' dynamically changing needs.
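The flavor of request-unit governance can be pictured with a toy token-bucket model (my own illustration, not the actual Cosmos DB resource governor): each second the container's provisioned RUs are replenished, and each operation either drains its RU charge or gets throttled (the real service responds with HTTP 429 in that case).

```javascript
// Toy token-bucket model of request-unit (RU) governance.
// Illustration only -- not Cosmos DB's actual implementation.
class RuBucket {
  constructor(provisionedRusPerSecond) {
    this.capacity = provisionedRusPerSecond;
    this.available = provisionedRusPerSecond;
  }

  // Called once per simulated second: refill to the provisioned level.
  tick() {
    this.available = this.capacity;
  }

  // Try to execute an operation costing `ruCharge` RUs.
  // Returns true if admitted, false if throttled (a 429 in the real service).
  tryConsume(ruCharge) {
    if (ruCharge <= this.available) {
      this.available -= ruCharge;
      return true;
    }
    return false;
  }
}

// Usage: a container provisioned at 400 RU/s.
const bucket = new RuBucket(400);
console.log(bucket.tryConsume(300)); // true: admitted
console.log(bucket.tryConsume(200)); // false: only 100 RUs left this second
bucket.tick();                       // next second: refilled
console.log(bucket.tryConsume(200)); // true: admitted again
```

The real governor is of course far more sophisticated (it spans partitions, machines, and heterogeneous operation types), but the admit-or-throttle-per-provisioned-budget shape is the core idea.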

To provide another important piece of functionality, global distribution, Cosmos DB lets you configure each region as a "read", "write", or "read/write" region. Using Azure Cosmos DB's multi-homing APIs, the app always knows where the nearest region is (even as you add and remove regions to/from your Cosmos DB database) and sends requests to the nearest datacenter. All reads are served from a quorum local to the closest region to provide low-latency access to data anywhere in the world.
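The routing behavior above can be sketched as picking the lowest-latency region that is enabled for the requested operation. The region names, roles, and latencies below are invented for illustration; in the real SDK this discovery happens automatically via the multi-homing APIs.

```javascript
// Toy multi-homing route table for an app running in Europe.
// Roles and latencies (ms) are made up for illustration.
const regions = [
  { name: "West US",        roles: ["read", "write"], latencyMs: 140 },
  { name: "North Europe",   roles: ["read"],          latencyMs: 15 },
  { name: "Southeast Asia", roles: ["read"],          latencyMs: 210 },
];

// Pick the nearest region that supports the requested operation.
function nearestRegion(op, regionList) {
  const eligible = regionList.filter((r) => r.roles.includes(op));
  return eligible.reduce((best, r) => (r.latencyMs < best.latencyMs ? r : best));
}

console.log(nearestRegion("read", regions).name);  // "North Europe": local read quorum
console.log(nearestRegion("write", regions).name); // "West US": the write region
```

Note how reads go to the closest read region while writes are routed to the (possibly farther) write region — exactly the split the "read" / "write" region configuration expresses.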

Azure Cosmos DB

Cosmos DB allows developers to choose among five well-defined consistency models along the consistency spectrum. (Yay, consistency levels!) You can configure the default consistency level on your Cosmos DB account (and later override the consistency on a specific read request). About 73% of Azure Cosmos DB tenants use session consistency and 20% prefer bounded staleness. Only 2% of Azure Cosmos DB tenants override consistency levels on a per-request basis. In Cosmos DB, reads served at session, consistent-prefix, and eventual consistency cost half as much as reads with strong or bounded-staleness consistency.
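Session consistency — the most popular level by far — can be pictured as the client carrying a session token (conceptually, the logical sequence number of its last write) and only accepting reads from replicas that have caught up to that token. This is my own toy model of the read-your-writes guarantee, not the service's actual protocol:

```javascript
// Toy model of session consistency ("read your writes").
// A replica can serve this session only if it has applied at least
// the client's last-write LSN (the session token).
class Replica {
  constructor(name, appliedLsn) {
    this.name = name;
    this.appliedLsn = appliedLsn;
  }
}

// Return some replica that has caught up to the session token, or null.
function readWithSessionToken(replicas, sessionLsn) {
  return replicas.find((r) => r.appliedLsn >= sessionLsn) || null;
}

const replicas = [
  new Replica("replica-a", 17), // lagging behind the session's writes
  new Replica("replica-b", 23), // caught up
];

const sessionToken = 20; // LSN returned by the client's last write
const chosen = readWithSessionToken(replicas, sessionToken);
console.log(chosen.name); // "replica-b": the read is guaranteed to see the write
```

Eventual-consistency reads would skip the token check entirely (any replica will do), which is one intuition for why they are cheaper.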

This lovely technical report explains the consistency models through the publishing of baseball scores via multiple channels. I will write a summary of this paper in the coming days. The paper concludes: "Even simple databases may have diverse users with different consistency needs. Clients should be able to choose their desired consistency. The system cannot possibly predict or determine the consistency that is required by a given application or client. The preferred consistency often depends on how the data is being used. Moreover, knowledge of who writes data or when data was last written can sometimes allow clients to perform a relaxed consistency read, and obtain the associated benefits, while reading up-to-date data."

Data layer

Cosmos DB supports and projects multiple data models (documents, graphs, key-value, table, etc.) over a minimalist type system and core data model: the atom-record-sequence (ARS) model.

A Cosmos DB resource container is a schema-agnostic container of arbitrary user-generated JSON items and JavaScript-based stored procedures, triggers, and user-defined functions (UDFs). Container and item resources are further projected as reified resource types for a specific type of API interface. For example, when document-oriented APIs are used, container and item resources are projected as collection and document resources, respectively. Similarly, for graph-oriented API access, the underlying container and item resources are projected as graph and node/edge resources, respectively.
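The projection idea can be illustrated with a toy function that reinterprets the same underlying JSON item either as a document or as a graph node. This is purely illustrative — the real ARS type system and its projections are far richer — and the field names here are my own invention:

```javascript
// One underlying JSON item, projected differently per API. Illustrative only.
const item = { id: "u42", _kind: "person", name: "Ada", worksAt: "c7" };

function projectAsDocument(raw) {
  // Document-oriented APIs see the item as-is: a JSON document in a collection.
  return { resourceType: "document", body: raw };
}

function projectAsGraphNode(raw) {
  // Graph-oriented APIs see the same bytes as a vertex with a label
  // and properties; reference-valued fields can surface as edges.
  const { id, _kind, ...props } = raw;
  return { resourceType: "node", id, label: _kind, properties: props };
}

console.log(projectAsDocument(item).resourceType);     // "document"
console.log(projectAsGraphNode(item).label);           // "person"
console.log(projectAsGraphNode(item).properties.name); // "Ada"
```

The point is that nothing about the stored item changes between APIs — only the reified view of it does.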

The overall resource model of an application using Cosmos DB is a hierarchical overlay of the resources rooted under the database account, and can be navigated using hyperlinks.

API layer

Cosmos DB supports three main classes of developers: (1) those who are familiar with relational databases and prefer the SQL language, (2) those who are familiar with dynamically typed modern programming languages (like JavaScript) and want a dynamically typed, efficiently queryable database, and (3) those who are already familiar with popular NoSQL databases and want to move their application to the Azure cloud without a rewrite.

In order to meet developers wherever they are, Cosmos DB supports the SQL, MongoDB, Cassandra, Gremlin, and Table APIs, with SDKs available in multiple languages.

The ambitious future

There is a palpable buzz in the air in the Cosmos DB offices due to the imminent multimaster general availability rollout (which I will also write about later). The team members keep to themselves, working intensely most of the time, but also have frequent meetings and occasional bursty standup discussions. This is my first deployment in a big team/company, so I am trying to take this in as well. (Probably a post on that is coming up as well.)

It looks like the Cosmos DB team has caught good momentum. The team wants to make Cosmos DB the prominent cloud database, and even the go-to all-in-one cloud middleware. Better analytics support and better OLAP/OLTP integration are in the works to support more demanding, more powerful next-generation applications.

Cosmos DB already has great traction in enterprise systems. I think it will be getting more love from independent developers as well, since it provides serverless computing and an all-in-one system with many APIs. It is possible to try it for free at . To keep up to date with the latest news and announcements, you can follow @AzureCosmosDB and #CosmosDB on Twitter.

My work at Cosmos DB

In the short term, as I learn more about Cosmos DB, I will write more posts like this one. I will also try to learn and write about customer use cases, workloads, and operations issues without revealing details. I think learning about real-world use cases and problems will be one of the most important benefits I can get from my sabbatical.

In the medium term, I will work on TLA+/PlusCal translations of the consistency levels provided by Cosmos DB and share them here. Cosmos DB uses TLA+/PlusCal to specify and reason about its protocols. This helps prevent concurrency bugs and race conditions, and helps with development efforts. TLA+ modeling has been instrumental in Cosmos DB's design, which integrated global distribution, consistency guarantees, and high availability from the ground up. (Here is an interview where Leslie Lamport shares his thoughts on the foundations of Azure Cosmos DB and his influence on its design.) This is very dear to my heart, as I have been employing TLA+ in my distributed systems classes for the past 5 years.

Finally, as I get a better mastery of Cosmos DB internals, I would like to contribute to protocols for multimaster multi-record transaction support. I would also like to learn more about, and contribute to, Cosmos DB's automatic failover support during one or more regional outages. Of course, these protocols will all be modeled and verified with TLA+.

MAD questions

1. What would you do with a frictionless cloud middleware? Which new applications can this enable?

Here is something that comes to my mind. Companies are already uploading IoT sensor data from cars to Azure Cosmos DB continuously. The next step would be to build more ambitious applications that make sense of correlated readings and use
          Orchestrate Your Multi-Cloud with a Cloud-Agnostic Workflow Engine

Just recently I wrote about why Extreme is Going Serverless and the multitude of benefits for developers in an environment with multiple clouds and on-premises solutions, as well as numerous apps. Serverless saves cost and resources, particularly in a multi-cloud environment. Last week at ServerlessConf in San Francisco we introduced our new workflow engine Orquesta […]

The post Orchestrate Your Multi-Cloud with a Cloud-Agnostic Workflow Engine appeared first on Extreme Networks.

          Issues with promises/callbacks/async/await

@ian wrote:

I gotta say, I’m having immense problems with Lambda promise/callbacks vs running node locally. Wondering if anyone else has had the weeks of headaches we’ve run into.

Running Node 8.10 with Sequelize + RDS Postgres. Locally I’ve got serverless-webpack and serverless-offline working beautifully, everything is great, no issues.

In production it’s just one multi-hour nightmarish debugging session after another. External API calls just never return, Sequelize DB calls fail to return, etc. I’m getting consistent timeouts on all sorts of callbacks, tried in every form: callback hell, wrapping in Promises, async/await. Every single one of them fails for no reason, just times out the Lambda session.

What is the deal with AWS Lambda stack? Is anyone else running into these issues? It’s a complete nightmare and I really wish I could get any of this to mimic how it runs for me locally.

Posts: 1

Participants: 1

Read full topic

          How Brings Serverless to Kubernetes

Serverless has grown to become more than just Amazon Lambda. Today, there are dozens of different serverless platforms, from those inside the public clouds, to those designed to run inside a data center, to those that run on Kubernetes. That last batch is where the most interest has concentrated over the past year, and now […]

The post How Brings Serverless to Kubernetes appeared first on The New Stack.

          Amazon Aurora Serverless is now GA, so I tried launching it

Aurora Serverless went GA [1], so I tried launching it right away. What is Aurora Serverless? Launching Aurora Serverless. Connecting. Cost. Impressions. Auror...
