
Serverless cloud computing: Don’t go overboard

There are lots of big cloud shows coming up, and the core themes will be containers, devops integration, and more serverless computing services, such as databases, middleware, and dev tools.

Why the focus on serverless computing? It’s a helpful concept, where you don’t have to think about the number of resources you need to attach to a public cloud service, such as storage and compute. You just use the service, and the back-end server instances are managed for you: “magically” provisioned, used, and deprovisioned.

The serverless cloud computing concept is now white-hot in the cloud computing world, and the cloud providers are looking to cash in. Who can blame them? At the same time, you can take things to a silly level. I suspect there’ll be a few serverless concepts that jump the shark the day they are announced.


Building Azure Functions: Part 3 – Coding Concerns

Originally posted on:


In this third part of my series on Azure Function development I will cover a number of development concepts and concerns.  These are just some of the basics.  You can look for more posts coming in the future that will cover specific topics in more detail.

General Development

One of the first things you will have to get used to is developing in a very stateless manner.  Any other .NET application type has a class at its base.  Functions, on the other hand, are just what the name says: a method that runs within its own context.  Because of this you don’t have anything resembling a global or class-level variable.  This means that if you need something like a logger in every method, you have to pass it in.

[Update 2016-02-13] The above information is not completely correct.  You can implement function global variables by defining them as private static.

You may find that it makes sense to create classes within your function either as DTOs or to make the code more manageable.  Start by adding a .csx file in the files view pane of your function.  The same coding techniques and standards apply as your Run.csx file, otherwise develop the class as you would any other .NET class.


In the previous post I showed how to create App Settings.  If you took the time to create them you are going to want to be able to retrieve them.  The GetEnvironmentVariable method of the Environment class gives you the same capability as using AppSettings from ConfigurationManager in traditional .NET applications.
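As a rough sketch of the same idea in the JavaScript flavor of Azure Functions (the post above is about C# script), App Settings surface as plain environment variables; the setting name here is hypothetical:

```javascript
// Reading an App Setting. In C# script this would be
// Environment.GetEnvironmentVariable("MySetting"); in JavaScript
// functions the same App Settings appear on process.env.
// "MySetting" is an illustrative key, not one from the article.
function getSetting(name) {
  const value = process.env[name];
  if (value === undefined) {
    throw new Error(`App Setting '${name}' is not defined`);
  }
  return value;
}

// Injected here only so the example is self-contained:
process.env.MySetting = "hello";
console.log(getSetting("MySetting")); // prints "hello"
```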


A critical coding practice for functions that use perishable resources such as queues is to make sure that if you catch and log an exception that you rethrow it so that your function fails.  This will cause the queue message to remain on the queue instead of dequeuing.
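A minimal sketch of that practice follows; the handler and message shape are illustrative, not Azure's exact API:

```javascript
// Catch, log, and RETHROW so the runtime marks the execution as failed
// and the queue message is not dequeued.
async function processQueueMessage(message, handler, log) {
  try {
    return await handler(message);
  } catch (err) {
    log(`Failed to process message: ${err.message}`);
    // Swallowing the error here would complete the function "successfully"
    // and the message would be lost; rethrowing keeps it on the queue.
    throw err;
  }
}
```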



It can be hard to read the log when the function is running at full speed, since instances run in parallel but report to the same log.  I would suggest that you add the process ID to your TraceWriter logging messages so that you can correlate them.
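One way to sketch that correlation in JavaScript; the logger shape is an assumption, not the TraceWriter API:

```javascript
// Prefix every log line with the process ID so parallel instances
// writing to the same log can be told apart.
function makeLogger(write = console.log) {
  return msg => write(`[pid:${process.pid}] ${msg}`);
}

const log = makeLogger();
log("processing item 42"); // e.g. "[pid:1234] processing item 42"
```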

Even more powerful is the ability to remote debug functions from Visual Studio.  To do this, open your Server Explorer and connect to your Azure subscription.  From there you can drill down to the Function App in App Services and then to the run.csx file in the individual function.  Once you have opened the code file and placed your breakpoints, right-click the function and select Attach Debugger.  From there it acts like any other Visual Studio debugging session.


Race Conditions

I wanted to place special attention on this subject.  As with any highly parallel/asynchronous processing environment, you will have to make sure that you take into account any race conditions that may occur.  If at all possible, keep the functionality that you create to non-related pieces of data.  If it is critical that items in a queue, blob container or table storage are processed in order, then Azure Functions are probably not the right tool for your solution.


Azure Functions are one of the most powerful units of code available.  Hopefully this series gives you a starting point for your adventure into serverless applications and you can discover how they can benefit your business.

Building Azure Functions: Part 1 – Creating and Binding

Originally posted on:


The latest buzzword is serverless applications.  Azure Functions are Microsoft’s offering in this space.  As with most products that are new on the cloud, Azure Functions are still evolving and therefore can be challenging to develop.  Documentation is still being worked on at the time I am writing this, so here are some things that I have learned while implementing them.

There is a lot to cover here so I am going to break this topic into a few posts:

  1. Creating and Binding
  2. Settings and References
  3. Coding Concerns

Creating A New Function

The first thing you are going to need to do is create a Function App.  This is an App Services product that serves as a container for your individual functions.  The easiest way I’ve found to start is to go to the main add (+) button on the Azure Portal and then do a search for Function App.


Click on Function App and then the Create button when the Function App blade comes up.  Fill in your app name, remembering that this is a container and not your actual function.  As with other Azure features you need to supply a subscription, resource group and location.  Additionally, for a Function App you need to supply a hosting plan and storage account.  If you want to take full benefit of Function App scaling and pricing, leave the default Consumption Plan.  This way you only pay for what you use.  If you choose App Service Plan, you will pay for it whether your function is actually processing or not.


Once you click Create the Function App will start to deploy.  At this point you will start to create your first function in the Function App.  Once you find your Function App in the list of App Services it will open the blade shown below.  It offers a quick start page, but I quickly found that didn’t give me options I needed beyond a simple “Hello World” function.  Instead press the New Function link at the left.  You will be offered a list of trigger based templates which I will cover in the next section.




Triggers define the event source that will cause your function to be executed.  While there are many different triggers and there are more being added every day, the most common ones are included under the core scenarios.  In my experience the most useful are timer, queue, and blob triggered functions.

Queues and blobs require a connection to a storage account be defined.  Fortunately this is created with a couple of clicks and can be shared between triggers and bindings as well as between functions.  Once you have that you simply enter the name of the queue or blob container and you are off to the races.

When it comes to timer-dependent functions, the main topic you will have to become familiar with is cron scheduling definitions.  If you come from a Unix background or have been working with more recent timer-based WebJobs this won’t be anything new.  Otherwise the simplest way to remember is that each time increment is defined by a division statement.
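For example, a timer trigger's function.json carries an NCRONTAB expression with six fields (second, minute, hour, day, month, day-of-week); this hypothetical sketch fires every five minutes:

```json
{
  "bindings": [
    {
      "type": "timerTrigger",
      "direction": "in",
      "name": "myTimer",
      "schedule": "0 */5 * * * *"
    }
  ]
}
```

Here "0 */5 * * * *" reads as "at second 0 of every fifth minute"; something like "0 30 9 * * 1-5" would mean 9:30:00 on weekdays.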


In the case of queue triggers the parameter that is automatically added to the Run method signature will be the contents of the queue message as a string.  Similarly most trigger types have a parameter that passes values from the triggering event.

Input and Output Bindings


Some of the function templates include an output binding.  If none of these fit your needs or you just prefer to have full control you can add a binding via the Integration tab.  The input and output binding definitions end up in the same function.json file as the trigger bindings. 

The one gripe I have with these bindings is that they connect to a specific entity at the beginning of your function.  I would find it preferable to bind to the parent container of whatever source you are binding to and have a set of standard commands available for normal CRUD operations.

Let’s say that you want to load an external configuration file from blob storage when your function starts.  The path shown below specifies the container and the blob name.  The default format shows a variable “name” as the blob name.  This needs to be a variable that is available and populated when the function starts or an exception will be thrown.  As for your storage account, specify it by clicking the “new” link next to the dropdown and picking the storage account from those that you have available.  If you specified a storage account while defining your trigger and it is the same as your binding, it can be reused.


The convenient thing about blob bindings is that they are bound as strings and so for most scenarios you don’t have to do anything else to leverage them in your function.  You will have to add a string parameter to the function’s Run method that matches the name in the blob parameter name text box.
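A hypothetical function.json blob input binding matching that description (the container, blob variable, and connection names are illustrative):

```json
{
  "type": "blob",
  "direction": "in",
  "name": "configFile",
  "path": "config-container/{name}",
  "connection": "MyStorageConnection"
}
```

The Run method would then take a string parameter named configFile, and the {name} variable must be available and populated when the function starts, as noted above.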


That should give you a starting point for getting the shell of your Azure Function created.  In the next two posts I will add settings, assembly references and some tips for coding your function.

GraphQL APIs Now Configurable through Stackery

Stackery, a serverless development toolkit provider, recently announced that developers can now configure and provision AWS AppSync GraphQL APIs with Stackery. Developers can connect GraphQL resolvers to backend data sources like DynamoDB tables, Lambda functions, or HTTP proxies.

Configurable properties include:

Solutions Architect - Amazon Web Services - Chicago, IL
DevOps, Big Data, Machine Learning, Serverless computing etc. High level of comfort communicating effectively across internal and external organizations....
From - Fri, 13 Jul 2018 07:54:23 GMT - View all Chicago, IL jobs
Solutions Architect - Amazon Web Services - San Francisco, CA
DevOps, Big Data, Machine Learning, Serverless computing etc. High level of comfort communicating effectively across internal and external organizations....
From - Sun, 02 Sep 2018 07:30:50 GMT - View all San Francisco, CA jobs
Template for a basic AWS Aurora Cluster

@samhratech wrote:

Wondering if someone can point me to a sls template/example to use AWS Aurora. I am not looking for Aurora Serverless, just the Aurora cluster.

There are too many examples for DynamoDB, but I could not manage to get my sls deploy to work with Aurora.

Appreciate any help here.

Posts: 1

Participants: 1

Read full topic

DynamoDB-CRI: DynamoDB model wrapper to enhance DynamoDB access

The problem

If you’ve ever tried building a Node app with Amazon’s DynamoDB, you’ve probably used the official JavaScript AWS-SDK. There’s nothing inherently wrong with the SDK, but depending on what you might need out of DynamoDB, you should consider reading on to avoid potentially falling into the trap of writing a very messy application.

Furthermore, if you want to implement an advanced pattern for putting your data in DynamoDB, the solution can be even messier and you may have to repeat a lot of code all over the application.

In my company, we wanted to implement the overloaded GSI pattern and we wanted it done in the most elegant and reusable way possible. So this is how DynamoDB-CRI was born.

This solution

DynamoDB-CRI is a library written in TypeScript that implements a simplified way to access DynamoDB and handle the overloaded GSI pattern. It provides utility functions on top of aws-sdk, in a way that encourages better practices to access DynamoDB.

So rather than dealing with aws-sdk and maintaining all the functions needed to access the database, this library aims to make this access pattern easier to use while still providing a range of functionality.

What the library offers is:

  • CRUD methods to handle entities in Dynamo.
  • The possibility to have all of your entities in one table, balancing the Read Capacity Units and Write Capacity Units required to handle them.
  • The ability to handle a tenant attribute that allows you to separate the entities of multiple users.
  • Options to track all the entities and have all the information updated.
  • An option to track changes via Lambda and DynamoDB streams.

conapps / dynamodb-cri

DynamoDB model wrapper to enhance DynamoDB access



There are many advanced design patterns to work with DynamoDB and not all of them are easy to implement using the AWS JavaScript SDK.

DynamoDB-CRI takes this into consideration by implementing one of the many advanced patterns and best practices detailed on the DynamoDB documentation site. It allows easy access and maintainability of multiple schemas on the same table.

The access pattern used to interact with DynamoDB through this library is called GSI overloading. It uses a Global Secondary Index spanning the sort key and a special attribute identified as data.

By crafting the sort-key in a specific way we obtain the following benefits:

  • Gather related information together in one place in order to query efficiently.
  • The composition of the sort key lets you define relationships between your data, where you can query for any level of specificity.

When we talk about GSI overloading, we are saying that a…
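To make the idea concrete, here is a hypothetical sketch of what overloaded items might look like; the attribute names are assumptions for illustration, not the library's exact schema:

```javascript
// Single-table, GSI-overloaded design: the entity type is encoded in
// the sort key ("sk"), and the "data" attribute holds a different,
// entity-specific value that the GSI indexes.
const items = [
  { pk: "user-1", sk: "customer|cust-9", data: "Jane Doe" },   // data = name
  { pk: "user-1", sk: "order|ord-17",    data: "2018-09-01" }, // data = order date
  { pk: "user-1", sk: "product|prod-3",  data: "electronics" } // data = category
];

// Because the GSI spans sk + data, the one index can answer a different
// question per entity type, e.g. "orders by date" or "products by category".
const orders = items.filter(i => i.sk.startsWith("order|"));
console.log(orders.length); // 1
```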

Practical Example

To show how easy using the library is, we will build on this example and show a similar implementation with the library.

Our model will be:


The express app for this example is hosted on GitHub so that you can play with it and try the library.

agusnavce / dynamodb-cri-express

An example API in express using DynamoDB-CRI

Example of using DynamoDB-CRI with express

You have two options:

Run it locally or deploy it to AWS.

First install the dependencies:

yarn install

or

npm install


You can use dynalite to have the DB locally.

Create the table:

 yarn createTable

Start the API:

yarn start


For this task we are using the Serverless Framework; install it:

npm install -g serverless

Just modify these variables in serverless.yml:


  serviceId: your_service_id

  region: aws_region

  lastestStreamARN: lastest_stream_arn

and do a:

sls deploy

There is a script to populate the DB, just do:

yarn createEntities

So let's explain a little how we are going to create the model. We're going to have four different entities. Each of them defines a partition key and a sort key that together form the primary key. The sort key was purposefully chosen so that we can make intelligent queries against the entities.

Our information is repeated three times: we have the main entity, and then copies of those entities that we call indices.

Then we have the GSI key, which we have chosen to be data intrinsic to the entity, thus overloading the GSI with different types. The last thing is the attributes, which can be anything we want.

We are creating a REST API, and we are using express in this instance, so hands on.

Creating the example

As we are using express we need to configure the app and the routes:

Here we have done the basic configuration of the express app and defined the four routers for the entities. We have also written a middleware to configure the library dynamically.

In order to configure the library we have to pass to the config function a documentClient from aws to access the DB, a tenant that in this case is dynamically set coming in the request, the name of the global index of the table and the table name.
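A hedged sketch of such a middleware follows; the exact DynamoDBCRI.config signature and the x-tenant header are assumptions based on the description above, not the library's documented API:

```javascript
// Build an express-style middleware that configures the library per
// request. `config` stands in for DynamoDBCRI.config; `documentClient`
// would be an AWS DocumentClient. Index and table names are illustrative.
function makeConfigMiddleware(config, documentClient) {
  return (req, res, next) => {
    config({
      documentClient,
      tenant: req.headers["x-tenant"], // tenant set dynamically from the request
      indexName: "gsik",
      tableName: "example-table"
    });
    next();
  };
}
```

You would then plug it in with something like app.use(makeConfigMiddleware(DynamoDBCRI.config, client)).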

Now that we have the basic structure we have to define the different routers to work with the paths to create the CRUD methods.

First we will build the customer router:

Here we have set up the routes for the basic CRUD methods. We can see that the middlewares that serve the queries are as simple as calling the model. So now we have to define the model using the library.

To define the model we have to set the name and the gsik for the entity. The trackDates variable adds two more attributes to the entities: createdAt and updatedAt.

Now let's create the order model first:

The only change made to this model is adding the secondary index. We have added to the indices a projection of the main entity's data, so you have more information about the entity when you search by employeeId. In this case we added the total and status of the order.

Now let's see how to query by this index:

Here we take advantage of having a composite key to get the category index. All we have to do is query by the index and set the key to be the id.

Finally, we will create the entity for employees. With this one we are going to play a little more and extend the model.

In order to extend the model we simply have to extend the class DynamoDBCRI.Model

Here we added three functions to manage the conf index. We did not define it the same way as the others (projecting information from the principal entity into the index); instead we added independent information to it, so it has to be managed this way.

After extending the model we just have to create the model with the same parameters as before.

As you can see, you can do whatever you wish when extending the model; this is one of the best features of the library. If something doesn’t fit your needs, simply extend the model and make it do what you need.

Finally, let's configure the router for employees:

As before we have the same routes, but now we have added the ones that will handle the updates and creation of the new index.

So that’s all you have to do to have an express application using DynamoDB-CRI.

Now we have all of our models and routes ready. As stated on the library site, there is a function available in the library that allows you to hook database updates to keep records updated for all entities in Dynamo. Let’s see how we can do this:

You only need to instantiate the models and pass them to the function; it then takes charge of processing the table updates, so you never have to worry about whether the indices are up to date.

That’s all there is to it. Pretty easy, right? This GitHub repository has the functional example for you to run. The main DynamoDBCRI repository also contains examples if you want to see the library in action, as well as a more detailed description of the library.

A big feature of this library is that it has utilities that abstract DynamoDB implementation details. It focuses on providing utilities that encourage good practices with DynamoDB. I hope that by using DynamoDB-CRI your access patterns to Dynamo become easier to understand and maintain.

Thanks for reading! Hope you enjoyed it!

Follow me if you want: Twitter

ML.NET 0.6 released, Microsoft's cross-platform machine learning framework for .NET

ML.NET 0.6 has been released. ML.NET is a cross-platform, open-source machine learning framework designed to help .NET developers get started with machine learning faster.

ML.NET allows .NET developers to develop their own models and inject custom ML into their applications. They need no expertise in developing or tuning machine learning models; everything can be done in .NET.

Highlights of ML.NET 0.6:

  • New APIs for building and using machine learning models

    This release delivers the first iteration of the new ML.NET APIs, aimed at making machine learning easier and more powerful. Details

  • Ability to score pre-trained ONNX models. Details

  • Improved model prediction performance

  • Other improvements:

    • improvements to ML.NET TensorFlow scoring

    • more consistency with the .NET type system

    • a model deployment suitable for serverless workloads like Azure Functions


What are Durable Functions?

Oh no! Not more jargon! What exactly does the term Durable Functions mean? Durable Functions have to do with serverless architectures. They're an extension of Azure Functions that allows you to write stateful executions in a serverless environment.

Think of it this way. There are a few big benefits that people tend to focus on when they talk about Serverless Functions:

  • They’re cheap
  • They scale with your needs (not necessarily, but that’s the default for many services)
  • They allow you to write event-driven code

Let’s talk about that last one for a minute. When you can write event-driven code, you can break your operational needs down into smaller functions that essentially say: when this request comes in, run this code. You don’t mess around with infrastructure, that’s taken care of for you. It’s a pretty compelling concept.

In this paradigm, you can break your workflow down into smaller, reusable pieces which, in turn, can make them easier to maintain. This also allows you to focus on your business logic because you’re boiling things down to the simplest code you need to run on your server.

So, here’s where Durable Functions come in. You can probably guess that you’re going to need more than one function to run as your application grows in size and has to maintain more state. And, in many cases, you’ll need to coordinate them and specify the order in which they should be run for them to be effective. It's worth mentioning at this point that Durable Functions are a pattern available only in Azure. Other services have variations on this theme. For example, the AWS version is called Step Functions. So, while we're talking about something specific to Azure, it applies more broadly as well.

Durable in action, some examples

Let’s say you’re selling airline tickets. You can imagine that as a person buys a ticket, we need to:

  • check for the availability of the ticket
  • make a request to get the seat map
  • get their mileage points if they’re a loyalty member
  • give them a mobile notification if the payment comes through and they have an app installed/have requested notifications

(There’s typically more, but we’re using this as a base example)

Sometimes these will all be run concurrently, sometimes not. For instance, let’s say they want to purchase the ticket with their mileage rewards. Then you’d have to first check the awards, and then the availability of the ticket. And then do some dark magic to make sure no customers, even data scientists, can actually understand the algorithm behind your rewards program.

Orchestrator functions

Whether you’re running these functions at the same moment, running them in order, or running them according to whether or not a condition is met, you probably want to use what’s called an orchestrator function. This is a special type of function that defines your workflows, doing, as you might expect, the orchestrating of the other functions. They automatically checkpoint their progress whenever a function awaits, which is extremely helpful for managing complex asynchronous code.

Without Durable Functions, you run into a problem of disorganization. Let’s say one function relies on another to fire. You could call the other function directly from the first, but whoever is maintaining the code would have to step into each individual function and keep in their mind how it’s being called while maintaining them separately if they need changes. It's pretty easy to get into something that resembles callback hell, and debugging can get really tricky.

Orchestrator functions, on the other hand, manage the state and timing of all the other functions. The orchestrator function will be kicked off by an orchestration trigger and supports both inputs and outputs. You can see how this would be quite handy! You’re managing the state in a comprehensive way all in one place. Plus, the serverless functions themselves can keep their jobs limited to what they need to execute, allowing them to be more reusable and less brittle.

Let’s go over some possible patterns. We’ll move beyond just chaining and talk about some other possibilities.

Pattern 1: Function chaining

This is the most straightforward implementation of all the patterns. It's literally one orchestrator controlling a few different steps. The orchestrator triggers a function, the function finishes, the orchestrator registers it, and then then next one fires, and so on. Here's a visualization of that in action:

See the Pen Durable Functions: Pattern #1 - Chaining by Sarah Drasner (@sdras) on CodePen.

Here's a simple example of that pattern with a generator.

const df = require("durable-functions")
module.exports = df(function*(ctx) {
  const x = yield ctx.df.callActivityAsync('fn1')
  const y = yield ctx.df.callActivityAsync('fn2', x)
  const z = yield ctx.df.callActivityAsync('fn3', y)
  return yield ctx.df.callActivityAsync('fn3', z)
})

I love generators! If you're not familiar with them, check out this great talk by Bodil on the subject.

Pattern 2: Fan-out/fan-in

If you have to execute multiple functions in parallel and need to fire one more function based on the results, a fan-out/fan-in pattern might be your jam. We'll accumulate results returned from the functions from the first group of functions to be used in the last function.

See the Pen Durable Functions: Pattern #2, Fan Out, Fan In by Sarah Drasner (@sdras) on CodePen.

const df = require('durable-functions')
module.exports = df(function*(ctx) {
  const tasks = []
  // items to process concurrently, added to an array
  const taskItems = yield ctx.df.callActivityAsync('fn1')
  taskItems.forEach(item => tasks.push(ctx.df.callActivityAsync('fn2', item)))
  yield ctx.df.task.all(tasks)
  // send results to last function for processing
  yield ctx.df.callActivityAsync('fn3', tasks)
})

Pattern 3: Async HTTP APIs

It's also pretty common that you'll need to make a request to an API for an unknown amount of time. Many things like the distance and amount of requests processed can make the amount of time unknowable. There are situations that require some of this work to be done first, asynchronously, but in tandem, and then another function to be fired when the first few API calls are completed. Async/await is perfect for this task.

See the Pen Durable Functions: Pattern #3, Async HTTP APIs by Sarah Drasner (@sdras) on CodePen.

const df = require('durable-functions')
module.exports = df(async ctx => {
  const fn1 = ctx.df.callActivityAsync('fn1')
  const fn2 = ctx.df.callActivityAsync('fn2')
  // the responses come in and wait for both to be resolved
  await fn1
  await fn2
  // then this one is called
  await ctx.df.callActivityAsync('fn3')
})

You can check out more patterns here! (Minus animations.)

Getting started

If you'd like to play around with Durable Functions and learn more, there's a great tutorial here, with corresponding repos to fork and work with. I'm also working with a coworker on another post that will dive into one of these patterns that will be out soon!

Alternative patterns

Azure offers a pretty unique thing in Logic Apps, which lets you design workflows visually. I'm usually a code-only-no-WYSIWYG lady myself, but one of the compelling things about Logic Apps is that they have readymade connectors with services like Twilio and SendGrid, so you don't have to write that slightly annoying, mostly boilerplate code. They can also integrate with your existing functions, so you can abstract away just the parts that connect to middle-tier systems and write the rest by hand, which can really help with productivity.

Invoking Lambda from AppSync

@SET001 wrote:

We have an API endpoint (served under AWS AppSync) which should query records from the DB and make some manipulations. This endpoint accepts four parameters, and the records queried from the DB should match all of those 4 parameters. As far as I understand, DynamoDB can only query records by indexes, using KeyConditionExpression. Since I have to match 4 fields in a row, I should additionally use FilterExpression. But as far as I understand, I will get performance issues, because it will select all records that match KeyConditionExpression and only then filter them. So my idea is to put an additional field in the record which would be the hash of those 4 fields that I should match while searching these records. Currently the records are saved using AppSync, so I only have the response and request VTL templates. I doubt I can put logic for generating a hash in a VTL template, so I have to do this in a Lambda function somehow. Here I have a few options:

  1. I can make a DynamoDB trigger – a Lambda function which will be triggered each time any record is added/updated. This function will then update those records with the hash.
  2. In serverless.yml, in mappingTemplates, I can set the dataSource to be of type AWS_LAMBDA instead of AMAZON_DYNAMODB and point it to a Lambda function which would calculate the record hash and put it in the database manually.
  3. I can put the hash calculation logic in a VTL template somehow (I have no idea how, because the calculation would require a crypto library, and it is not clear how to use one in VTL).
  4. Some kind of group index in DynamoDB?

What is the proper solution for such a task?

Posts: 1

Participants: 1

Read full topic

(USA-UT-American Fork) Manager – Enterprise Cloud Native Middleware
Sling TV L.L.C. provides an over-the-top (internet delivered) television experience on TVs, tablets, gaming consoles, computers, smartphones, smart TVs and other streaming devices. Distributed across a variety of strategic device partners, including Google, Amazon, Apple TV, Microsoft, Roku, Samsung, LG, Comcast, and many others, Sling TV offers two primary domestic streaming services that collectively include more than 100 channels of top content. Featured programmers include Disney/ESPN, Fox, NBC, HBO, AMC, A&E, EPIX, Cinemax, Starz, NFL Network, NBA TV, NHL Networks, Pac-12 Networks, Hallmark, Viacom, and more. For Spanish-speaking customers, Sling Latino offers a suite of standalone and extra Spanish-programming packages tailored to the U.S. Hispanic market. And for those seeking international content, Sling International currently provides more than 300 channels in 20 languages (available across multiple devices) to U.S. households. Sling TV is the #1 live TV streaming service: a next-generation service that meets the entertainment needs of today’s contemporary viewers. We are driven by curiosity, pride, adventure, and a desire to win – it’s in our DNA. We’re looking for people with boundless energy, intelligence, and an overwhelming need to achieve to join our team as we embark on the next chapter of our story. Opportunity is here. Our mission is to build the next generation, web scale platform for Sling TV.

Our environment is…

  • Complex
  • Highly elastic
  • Based on some of the latest and greatest cloud native technologies
  • Very fast paced

Your team will be…

  • Building the middleware for our client applications
  • Driving a customer centric, highly personalized approach to the evolution of our platform
  • Delivering microservices into a Kubernetes based, web scale environment
  • Delivering software in a SAFe based agile environment, continuously

In order to be successful in this role, you will need to be…

  • Highly motivated, driven, hardworking and open to learning new things
  • Not afraid to fail
  • Able to build your team and support your folks – our people are our biggest asset
  • Comfortable working in a TDD & CI/CD environment
  • Able to mentor & influence others
  • A team player. We have a great group of diverse folks working together in harmony. Big egos and “super heroes” need not apply.

A successful Manager – Enterprise Cloud Native Middleware will have:

  • Availability to work onsite out of our American Fork, UT or Englewood, CO offices
  • A 4-year college degree in Computer Science / Information Technology (master’s degree preferred) or equivalent professional experience
  • 8+ years of professional enterprise development experience, 5+ years leadership experience
  • Experience building and managing large, highly available enterprise grade applications
  • Experience working with Agile and the tools that support it

Technologies in our environment: here are some of the key technologies that make up our environment. While we do not expect you to have a detailed understanding of each, the more of these you are familiar with the better.

  • GoLang, Java, Python
  • Automated testing of applications & Continuous Integration / TDD / BDD
  • Confluent Stack / Kafka / ELK Stack / Couchbase / Cassandra / PostGreSQL / Elasticsearch
  • Cloud Native tools: Kubernetes / Docker / Consul / Vault / Jenkins / / Jaeger / gRPC /
  • CI / CD & DevOps Culture
  • 12 Factor Applications
  • Serverless / Function as a service concepts, implementations & patterns

#LI-SLING2 Vacancy Name: 2018-45716 External Company Name: DISH Purchasing Corporation Street: 796 East Utah
          Serverless may be fashionable, but even the hottest IT trend doesn't fit everywhere
Every cloud vendor is chorusing the benefits of the new operating model: going serverless. Serverless is indeed a good way to work if you are an application developer. Still, even the hottest hype topic doesn't fit everywhere.
          Serverless? Great... Now what about testing, security, observability?

Choosing your platform is just the first step

If you're considering moving to a serverless architecture you might think the first step is easy, but the real challenges come with ensuring enterprise-grade discipline once you move into production.…

          10 AWS Lambda use cases to start your serverless journey

It's often hard for me to wrap my head around a technology/service until I see some concrete use cases for it. In this article, Rohit does a good job of laying out a bunch of serverless use cases. Maybe one or two will compel you to dig deeper.

          Mapping Factorio with Leaflet

The following is a guest post by Jacob Hands, who is building a community site for the game Factorio centered around sharing user creations.

Factorio is a game about building and maintaining factories. Players mine resources, research new technology and automate production. Resources move along the production line through multiple means of transportation such as belts and trains. Once production starts getting up to speed, alien bugs start to attack the factory requiring strong defenses.

Image: A Factorio factory producing many different items.
Image: A Factorio military outpost fighting the alien bugs.
Image: A Factorio map view of a small factory that's still too big to easily share fully with screenshots.

I am building a place for the community of Factorio players to share their factories as interactive Leaflet maps. Due to the size and detail of the game, it can be difficult to share an entire factory through a few screenshots. A Leaflet map provides a Google Maps-like experience allowing viewers to pan and zoom throughout the map almost as if they are playing the game.


Leaflet maps contain thousands of small images for X/Y/Z coordinates. Amazon S3 and Google Cloud Storage are the obvious choices for low-latency object storage. However, after 3.5 months in operation, the site contains 17 million map images (>1TB). For this use-case, $0.05 per 10,000 upload API calls and $0.08 to $0.12/GB for egress would add up quickly. Backblaze B2 is a better fit because upload API calls are free, egress bandwidth is $0.00/GB to Cloudflare, and storage is 1/4th the price of the competition.
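A back-of-envelope comparison of those prices, using the rates quoted above plus an assumed mid-range $0.09/GB egress rate (the 17 million files and ~1TB are the figures from this paragraph):

```javascript
// Rough storage cost comparison for ~17M map tiles (~1TB).
function uploadApiCost(files, costPer10kCalls) {
  return (files / 10000) * costPer10kCalls;
}

function egressCost(gb, costPerGb) {
  return gb * costPerGb;
}

const files = 17000000;
const s3Uploads = uploadApiCost(files, 0.05); // ≈ $85 just to upload every tile once
const b2Uploads = uploadApiCost(files, 0);    // B2 upload API calls are free
const s3Egress = egressCost(1000, 0.09);      // ≈ $90 per full-TB read at an assumed $0.09/GB
const b2Egress = egressCost(1000, 0);         // B2 -> Cloudflare egress is $0.00/GB

console.log(s3Uploads, b2Uploads, s3Egress, b2Egress);
```

At this scale the per-request upload fees alone are material, which is why free upload calls matter as much as the per-GB storage price.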

Backblaze B2 requires a prefix of /file/bucketName on all public files, which I don’t want. To remove it, I added a VPS proxy to rewrite paths and add a few 301 redirects. Unfortunately, the latency from the user -> VPS -> B2 was sub-par averaging 800-1200ms in the US.

A Closer Look At Leaflet

Leaflet maps work by loading images at the user's X/Y/Z coordinates to render the current view. As a map is zoomed in, it requires 4x as many images to show the same area. That means 75% of a map's images are in the max rendered zoom level.

Image: A diagram of how each zoom level is 4x larger than the previous.
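The arithmetic behind that 75% figure is easy to check (a rough model assuming a square map fully tiled at every zoom level, which real maps only approximate):

```javascript
// Each zoom level z of a fully-tiled square map has 4^z tiles,
// so the deepest level always holds roughly 3/4 of all tiles.
function tilesAtZoom(z) {
  return 4 ** z;
}

function maxZoomShare(maxZoom) {
  let total = 0;
  for (let z = 0; z <= maxZoom; z++) total += tilesAtZoom(z);
  return tilesAtZoom(maxZoom) / total;
}

console.log(maxZoomShare(8).toFixed(4)); // approaches 0.75 as zoom depth grows
```

This is why caching everything except the last zoom level or two covers the majority of requests while storing only a small fraction of the images.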

Reducing Latency

With hosting working, it's time to start making the site faster. The majority of image requests come from the first few zoom levels, representing less than 25% of a given map's images. Adding a local SSD cache on the VPS containing all except the last 1-3 zoom levels for each map reduces latency for 66% of requests. The problem with SSD storage is it's difficult to scale with ever-increasing data and is still limited to the network and CPU performance of the server it occupies.

Going Serverless with Cloudflare Workers

Cloudflare Workers can run JavaScript using the Service Workers API which means the path rewrites and redirects the VPS was accomplishing could run on Cloudflare's edge.

While Google Cloud Storage is more expensive than B2, it has much lower latency to the US and worldwide destinations because of their network and multi-regional object storage. However, it's not time to move the whole site over to GCS just yet; the upload API calls alone would cost $85 for 17 million files.

Multi-Tier Object Storage

The first few zoom levels are stored in GCS, while the rest are in B2. Cloudflare Workers figure out where files are located by checking both sources simultaneously. By doing this, 66% of requested files come from GCS with a mean latency of <350ms, while only storing 24% of files on GCS. Another benefit to using B2 as the primary storage is if GCS becomes too expensive in the future, I can move all requests to B2.

// Race GCS and B2
let gcsReq = new Request('' + url.pathname, event.request);
let b2Req = new Request(getB2Url(request) + '/bucketName' + url.pathname, event.request);

// Fetch from GCS and B2 with Cloudflare caching enabled
let gcsPromise = fetch(gcsReq, cfSettings);
let b2Promise = fetch(b2Req, cfSettings);

let response = await Promise.race([gcsPromise, b2Promise]);
if (response.ok) {
    return response;
}

// If the winner was bad, find the one that is good (if any)
response = await gcsPromise;
if (response.ok) {
    return response;
}

response = await b2Promise;
if (response.ok) {
    return response;
}

// The request failed/doesn't exist
return response;

Tracking Subrequests

The Cloudflare Workers dashboard contains a few analytics for subrequests, but there is no way to see what responses came from B2 vs. GCS. Fortunately, it’s easy to send request stats to a 3rd party service like StatHat with a few lines of JavaScript.

// Fetch from GCS and B2 with caching
let reqStartTime =;
let gcsPromise = fetch(gcsReq, cfSettings);
let b2Promise = fetch(b2Req, cfSettings);

let response = await Promise.race([gcsPromise, b2Promise]);
if (response.ok) {
    event.waitUntil(logResponse(event, response, ( - reqStartTime)));
    return response;
}
The resulting stats prove that GCS is serving the majority of requests, and Cloudflare caches over 50% of those requests. The code for the logResponse function can be found here.  

Making B2 Faster with Argo

Tracking request time surfaced another issue. Requests to B2 from countries outside of North America are still quite slow. Cloudflare's Argo can reduce latency by over 50%, but is too expensive to enable for the whole site. Additionally, it would be redundant to smart-route content from GCS that Google already does an excellent job of keeping latency down. Cloudflare request headers include the country of origin, making it trivial to route this subset of requests through an Argo-enabled domain.

// Use CF Argo for non-US/CA users
function getB2Url(request) {
    let b2BackendUrl = '';
    let country = request.headers.get('CF-IPCountry');
    if (country === 'US' || country === 'CA') {
        b2BackendUrl = '';
    }
    return b2BackendUrl;
}

Cloudflare Workers are an excellent fit for my project; they enabled me to build a cost-effective solution for hosting Leaflet maps at scale. Check out the site for performant Leaflet maps, and if you play Factorio, submit your Factorio world to share with others!

          (IT) Dev Ops Engineer - AWS - Financial Services

Rate: C. £500 - £600 p/Day   Location: London   

Cloud Consulting's rapidly expanding client has an urgent requirement for 2 x Dev Ops Engineers to work on a migration project to AWS. This is an exciting greenfield project working on-site with a leading player in the financial services sector. To be suitable, you should have experience of: - Amazon Web Services (AWS) and APIs, including EC2, S3, VPC and IAM - Docker - Kubernetes, OpenShift or Mesos - Chef, Puppet, Packer, Jenkins - Terraform, Vault, - Serverless Architecture - Python, Java and/or Ruby - TDD If you are interested, then please forward your C.V ASAP.
Rate: C. £500 - £600 p/Day
Type: Contract
Location: London
Country: UK
Contact: Delivery Team
Advertiser: Cloud Consulting
Start Date: ASAP

          Dev Ops Engineer for Verisart - Upwork
Verisart is looking for a DevOps engineer to help with our current deployment architecture (Heroku) as well as help us move to a continuous integration pipeline for multiple environments (i.e. production, staging)

- Experience with Go
- Experience with Postgres/Neo4j
- Experience with Heroku, AWS or Google Cloud
- Experience with build automation, CI tools, monitoring and alerts systems
- Experience with containerisation tools (Docker)
- Experience with nginx, load balancers etc

Nice to Have
- Experience with configuration management tools (Ansible, Chef, Puppet)
- Experience with Blockchain and Bitcoin
- Knowledge of micro services / serverless architectures
- Experience with JavaScript/React
- Experience with JIRA
- Experience with TDD / unit testing

Posted On: October 10, 2018 15:41 UTC
Category: IT & Networking > Network & System Administration
Skills: Amazon Web Services, Golang, Network Security, Node.js
Country: United States
click to apply
          WoSC'17 [electronic resource]: Workshop on Serverless Computing
          Firefox to support WebP, plus Custom Elements coming to Edge

#361 — October 10, 2018

Read on the Web

Frontend Focus

Start Performance Budgeting — A review of performance budgeting, the metrics to track, trade-offs to consider, plus budget examples. “For success, embrace performance budgets and learn to live within them..”

Addy Osmani

Custom Elements Now 'In Development' on Microsoft Edge — Not a lot to see here, but Edge is the last major browser to get on board with custom elements. Shadow DOM is being worked on, too.


⚛️ New Course: Complete Intro to React, v4 — Learn to build real-world applications in React. Much more than an intro, you’ll start from the ground up all the way to using the latest features in React 16+ like Context and Portals. We also launched a follow up course, Intermediate React.

Frontend Masters sponsor

Use Cases for Flexbox — A look at some of the common uses for Flexbox. What should we use Flexbox for, and what it is not so good at, especially now that we have CSS Grid too?

Rachel Andrew

How I Remember CSS Grid Properties — A method to remember the most common CSS Grid properties. “This will help you use CSS Grid without googling like a maniac.”

Zell Liew

Firefox to Support Google's WebP Image Format — Now Apple’s Safari is the only major holdout, since Edge now supports it too. (Caution: CNet has annoying autoplaying ads.)


Understanding the Difference Between grid-template and grid-auto — It pays to understand the difference between implicit and explicit grids. grid-template properties adjust placement on an explicit grid, whereas grid-auto properties define an implicit grid’s properties.

Ire Aderinokun

💻 Jobs

Sr. Fullstack Engineer (Remote) — Sticker Mule is looking for passionate developers to join our remote team. Come help us become the Internet’s best place to shop and work.

Sticker Mule

Work on Uber's Open Source Design Language — We're developing Base UI, a new React component library for web applications at Uber and beyond. Join our team.


Join Our Career Marketplace & Get Matched With A Job You Love — Through Hired, software engineers have transparency into salary offers, competing opportunities, and job details.


📘 Articles & Tutorials

How to Use the Animation Inspector in Chrome Developer Tools — A rundown of which animation dev tools are available in Chrome, how to access them, and what they can do for you.

Kezz Bracey

How One Invalid Pseudo Selector Can Equal an Entire Ignored Selector — Did you know that “if any part of a selector is invalid, it invalidates the whole selector”? Thankfully things are beginning to change.

Chris Coyier

Create a Serverless Powered API in 10 Minutes

Cloudflare Workers sponsor

Moving Backgrounds Around According to Mouse Position

Chris Coyier

Adaptive Serving using JavaScript and the Network Information API — Serve content based on the user’s effective network connection type.

Addy Osmani

Writing a JavaScript Tweening Engine with Between.js — This developer decided to try their hand at writing their own tweening engine.

Alexander Buzin

The Ultimate Guide to Proper Use of Animation in UX

Taras Skytskyi

Let a MongoDB Master Explain Users and Roles

Studio 3T sponsor

Getting Started with WordPress's New Gutenberg Editor By Creating Your Own 'Block' — Gutenberg is a new content editor coming to WordPress 5.0.

Muhammad Muhsin

▶  Chrome 70: What’s New in DevTools

Kayce Basques (Google)

CSS Floated Labels with :placeholder-shown Pseudo Class

Nick Salloum

Bad Practices on Birthdate Form Fields

Anthony Tseng

🔧 Code and Tools

CSS Stats: A Web Tool to Visualize and Show Stats on the CSS Sites Use

Morse, Jackson and Otander

Automated Visual UI Testing — Replace time-consuming manual QA and catch visual UI bugs before your users do. Get started with our free 14-day trial.

Percy sponsor

Hover.css: A Collection of CSS3 Powered Hover Effects — For use on all sorts of page elements like links, logos, buttons, and more. Demos here.

Ian Lunn

a11y-dialog: A Very Lightweight and Flexible Accessible Modal Dialog


Baffle: A Library for Obfuscating then Revealing Text

Cam Wiegert

WorkerDOM: The DOM API, But For Inside Web Workers — Still a work-in-progress.

AMP Project

          Serverless? Great. Now what about testing, security, observability? Bag a ticket to find out...

Choosing your platform is just the first step

Event If you're considering moving to a serverless architecture you might think the first step is easy, but the real challenges come with ensuring enterprise-grade discipline once you move into production.…

          GraphQL APIs Now Configurable through Stackery

Stackery, serverless development toolkit provider, recently announced that developers can now configure and provision AWS AppSync GraphQL APIs with Stackery. Developers can connect GraphQL resolvers to backend data sources like DynamoDB tables, Lambda functions, or HTTP proxies.

Configurable properties include:

          Solutions Architect - Amazon Web Services - - Chicago, IL
DevOps, Big Data, Machine Learning, Serverless computing etc. High level of comfort communicating effectively across internal and external organizations....
From - Fri, 13 Jul 2018 07:54:23 GMT - View all Chicago, IL jobs
          Solutions Architect - Amazon Web Services - - San Francisco, CA
DevOps, Big Data, Machine Learning, Serverless computing etc. High level of comfort communicating effectively across internal and external organizations....
From - Sun, 02 Sep 2018 07:30:50 GMT - View all San Francisco, CA jobs
          A Tour Inside Cloudflare's G9 Servers

Cloudflare operates at a significant scale, handling nearly 10% of all Internet HTTP requests, which at peak is more than 25 trillion requests through our network every month. To ensure this is as efficient as possible, we own and operate all the equipment in our 154 locations around the world in order to process the volume of traffic that flows through our network. We spend a significant amount of time speccing and designing the servers that make up our network to meet our ever changing and growing demands. At regular intervals, we take everything we've learned about our last generation of hardware and refresh each component with the next generation…

If the above paragraph sounds familiar, it's a glance back to where we were 5 years ago, restated with today's numbers. We've made a lot of progress engineering and developing our tools with the latest tech through the years by pushing ourselves to get smarter in what we do.

Here though we’re going to blog about muscle.

Since the last time we blogged about our G4 servers, we've iterated one generation each of the past 5 years. Our latest generation is now the G9 server. From a G4 server comprising 12 Intel Sandybridge CPU cores, our G9 server has 192 Intel Skylake CPU cores ready to handle today's load across Cloudflare's network. This server is QCT's T42S-2U multi-node server with 4 nodes per chassis; each node therefore has 48 cores. Maximizing compute density is the primary goal since rental colocation space and power are costly. This 2U4N chassis form factor has served us well for the past 3 generations, so we're revisiting this option once more.


Exploded picture of the G9 server’s main components. 4 sleds represent 4 nodes each with 2 24-core Intel CPUs

Each high-level hardware component has gone through their own upgrade as well for a balanced scale up keeping our stack CPU bound, making this generation the most radical revision since we moved on from using HP 5 years ago. Let’s glance through each of those components.

Hardware changes


  • Previously: 2x 12 core Intel Xeon Silver 4116 2.1Ghz 85W
  • Now: 2x 24 core Intel custom off-roadmap 1.9Ghz 150W

The performance of our infrastructure is heavily directed by how much compute we can squeeze into a given physical space and power budget. In essence, requests per second (RPS) per Watt is a critical metric, and Qualcomm's 46-core ARM64 Falkor chip had a big advantage over Intel's Skylake 4116 on it. Embracing the value of optionality and market competition, we made some noise.

Intel proposed to co-innovate with us on an off-roadmap 24-core Xeon Gold CPU made specifically for our workload, offering considerable value in performance per Watt. For this generation we continue using Intel, as system solutions are widely available while we work on bringing ARM64's benefits to production. We expect this CPU to perform with better RPS per Watt right off the bat: doubling the number of cores roughly doubles the RPS, while CPU power consumption rises to about 176% as each CPU's TDP goes from 85W to 150W.


  • Previously: 6x Micron 1100 512G SATA
  • Now: 6x Intel S4500 480G SATA

With all the requests we foresee for G9 to process, we need to tame down the outlying and long-tail latencies we have seen in our previous SSDs. Lowering p99 and p999 latency has been a serious endeavor. To help save milliseconds in disk response time for 0.01% or even 0.001% of all the traffic we see isn’t a joke!
Datacenter grade SSDs in Intel S4500 will proliferate our fleet. These disks come with better endurance to last over the expected service life of our servers and better performance consistency with lower p95+ latency.


  • Previously: dual-port 10G Solarflare Flareon Ultra SFN8522
  • Now: dual-port 25G Mellanox ConnectX-4 Lx OCP

Our DDoS mitigation program is all done in userspace, so the network adapter model can be anything on the market as long as it supports XDP. We went with Mellanox for their solid reliability and their readily available 2x25G CX4 model. Upgrading to a 25G intra-rack ethernet network is easy future-proofing since the 10G SFP+ ethernet port shares the same physical form factor as the 25G's SFP28. Switch and NIC vendors offer models that can be configured as either 10G or 25G.

Another change is the card's form factor itself being OCP instead of the regular HHHL. That leaves the one x16 PCI slot free for the option to integrate something else; maybe for a high capacity NVMe? Maybe a GPU? We like that our server has the room for upgrades if needed.


Rear side of G9 chassis showing all 4 sled nodes, each leaving room to add on a PCI card


  • Previously: 192G (12x16G) DDR4 2400Mhz RDIMM
  • Now: 256G (8x32G) DDR4 2666Mhz RDIMM

Going from 192G (12x16G) to 256G (8x32G) made practical sense. The motherboard has 12 DIMM slots, which were all populated in the G8. We want the ability to upgrade just in case, while keeping the memory configuration balanced with optimal bandwidth capacity. 8x32G works well, leaving 4 slots open for future upgrades.

Physical stress test

Our software stack scales nicely enough that we can confidently assume we'll double the amount of requests having twice the number of CPU cores compared to G8. What we need to ensure before we ship any G9 servers out to our current 154 and future PoPs is that there won't be any design issues pertaining to thermal or power failures. In the extreme case that all of our cores run at 100% load, would that cause our server to run above operating temperature? How much power would a whole server with 192 cores totaling 1200W TDP consume? We set out to record both by applying a stress test to the whole system.

Temperature readings were recorded off of ipmitool sdr list, then graphed showing socket and motherboard temperature. For 2U4N being such a compact form factor, it’s worth monitoring that a server running hot isn’t literally running hot. The red lines represent the 4 nodes that compose the whole G9 server under test; blue lines represent G8 nodes (we didn’t stress the G8’s so their temperature readings are constant).


Both graphs are looking fine and not out of control, mostly thanks to the T42S-2U's four 80mm x 80mm fans capable of blowing over 90CFM each, which we ran up to their max spec RPM.

Recording the new system’s max power consumption is critical information we need to properly design our rack stack choosing the right Power Distribution Unit and ensuring we’re below the budgeted power while keeping adequate phase balancing. For example, a typical 3-phase US-rated 24-Amp PDU gives you a max power rating of 8.6 kilowatts. We wouldn’t be able to fit 9 servers powered by that same PDU if each were running at 1kW without any way to cap them.


The graph above shows our max power, the red line, reaching 1.9kW, or roughly 475W per node, which is excellent for a modern server. Notice the blue and yellow lines representing the G9's two power supplies, which sum to the total power. The yellow-line PSU appearing off is intentional, part of our testing procedure to show the PSU's resilience to abrupt power changes.

Stressing out all available CPU, IO, and memory while maxing out fan RPMs is a good indicator of the highest possible heat and power draw this server can produce. Hopefully we won't ever see such an extreme case in live production environments, and we expect much milder actual results (read: we don't think catastrophic failures are possible).

First impression in live production

We increased capacity to one of our most loaded PoPs by adding G9 servers. The following time graphs represent a 24 hour range with how G9 performance compares with G8 in a live PoP.


Looking great! They're doing about 2x the requests compared to G8 with about 20% less CPU usage. Note that all results here are based from non-optimized systems, so we could add more load on the G9 and have their CPU usage comparable to the G8. Additionally, they're doing that amount with better CPU processing time shown as NGINX execution time above. You can see the latency gap between generations widening as we go towards the 99.9th percentile:


Long-tail latencies for NGINX CPU processing time (lower is better)

Talking about latency, let’s check how our new SSDs are doing on that front:


Cache disk IOPS and latency (lower is better)

The trend still holds that G9 is doing better. It's a good thing that the G9's SSDs aren't seeing as many IOPS, since it means we're not hitting our cache disks as often and are able to store and process more in CPU and memory. We've cut the read cache hits and latency by nearly half. Fewer writes result in better performance consistency and longevity.

Another metric where G9 does more is power consumption, drawing about 55% more than the G8. While it's not a piece of information to brag about, it is expected when moving from CPUs rated at 85W TDP to ones with 150W TDP, and it is justified when considering how much work the G9 servers do:


G9 is actually 1.5x more efficient than G8. Temperature readings were checked as well. Inlet and outlet chassis temps, as well as CPU temps, are well within operating temperatures.
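That efficiency gain follows from the earlier observations: roughly 2x the requests at about 20% less CPU usage for about 55% more power. A back-of-envelope perf-per-watt calculation (the input ratios are approximations read off the graphs, not exact measurements):

```javascript
// Perf-per-watt gain: normalize throughput by CPU utilization, divide by the power ratio.
function efficiencyGain(rpsRatio, cpuUtilRatio, powerRatio) {
  return (rpsRatio / cpuUtilRatio) / powerRatio;
}

console.log(efficiencyGain(2.0, 0.8, 1.55).toFixed(2)); // ~1.61
```

which lands in the same ballpark as the measured ~1.5x figure.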


Now that’s muscle! In other words for every 3 G8 servers, just 2 of those G9's would take on the same workload. If one of our racks would normally have 9 G8 servers, than only 4 G9's are needed. Inversely, planning to turn up a cage of 10 G9 racks would be the same as if we did 15 G8 racks!

And we've got big plans to cover our entire network with G9 servers, with most of them planned for the existing cities your site most likely uses. By 2019, you'll benefit from increased bandwidth and lower wait times. And we'll benefit in expanding and turning up datacenters quicker and more efficiently.

What's next?

Server X? Right now is an exciting time at Cloudflare. Many teams and engineers are testing, porting, and implementing new stuff that can help us lower operating costs, explore new products and possibilities, and improve Quality of Service. We're tackling problems and taking on projects that are unique in the industry.

Serverless computing like Cloudflare Workers and beyond will ask for new challenges to our infrastructure as all of our customers can program their own features on Cloudflare’s edge network.

The network architecture that was conventionally made up of routers, switches, and servers has been merged into 3-in-1 box solutions allowing Cloudflare services to be set up into locations that weren’t possible before.

The advent of NVMe and persistent memory, as well as the possibility of turning SSDs into DRAM, is redefining how we design cache servers and handle tiered caching. SSDs and memory aren’t treated as separate entities like they used to.

Hardware brings the company together like a rug in a living room. See how many links I mentioned above to show you how we're one team dedicated to building a better Internet. Everything that we do here roots down to how we manage the tons of aluminum and silicon we've invested in. There's a lot of development happening on our hardware to help Cloudflare grow to where we envision ourselves to be. If you'd like to contribute, we'd love to hear from you.

          Multiple CognitoUserPoolClient

@guillaumecodet wrote:


When I create a new cognito user pool, I would like to create more than one user pool client, but I can’t find a way of doing it…

Here is my actual code:

  CognitoUserPool:
    Type: AWS::Cognito::UserPool
    Properties:
      # Generate a name based on the stage
      UserPoolName: ${self:provider.stage}-blablabla
      # Set email as an alias
      UsernameAttributes:
        - email
      AutoVerifiedAttributes:
        - email

  CognitoUserPoolClient:
    Type: AWS::Cognito::UserPoolClient
    Properties:
      # Generate an app client name based on the stage
      ClientName: Bla
      UserPoolId:
        Ref: CognitoUserPool
      GenerateSecret: true

If I try to duplicate the block “CognitoUserPoolClient”, I get an error YAMLException: duplicated mapping key

Thank you for your help.
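(For context, the YAMLException arises because each CloudFormation resource is a top-level YAML mapping key, so duplicating the block reuses the same key. A second client just needs its own logical ID; the resource names and client names below are illustrative:)

```yaml
  CognitoUserPoolClientWeb:
    Type: AWS::Cognito::UserPoolClient
    Properties:
      ClientName: BlaWeb
      UserPoolId:
        Ref: CognitoUserPool
      GenerateSecret: true

  CognitoUserPoolClientMobile:
    Type: AWS::Cognito::UserPoolClient
    Properties:
      ClientName: BlaMobile
      UserPoolId:
        Ref: CognitoUserPool
      GenerateSecret: false
```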

Posts: 2

Participants: 2

Read full topic

          Knative Roadmap

@lgothard wrote:

Wondering if/when Knative will be supported by Serverless Framework.

See …

Posts: 1

Participants: 1

Read full topic

          Fargate as an event source or integration with Fargate in general

@ewan wrote:

This is probably a more general architecture question based around serverless tech. I have a bit of a vision in my head and I'm not sure if it's a stupid idea, if I'm over-complicating things, or if I'm genuinely onto a real problem someone's already solved. So please bear with me!

Fargate is a huge game changer as well as Lambda. In my current position we’re dealing with a lot of machine learning and big data tasks. For the front-end we’re using Lambda to power those UI interactions and dealing with metadata etc. But bigger operations we’re using Spark and Redshift etc. So we’re bridging the gap between our datascientists machine learning code, with our wider platform using containers, our case in Fargate.

So users are interacting with the ecosystem via Lambda, which will be making calls to Fargate containers to perform larger tasks which are too… hefty for Lambda.

I guess my question is, are there any direct integrations with serverless and Fargate that anyone knows of? Are there plans to support this (if anyone from Serverless Inc is reading this :wave:) or does anyone have any examples, or architectural diagrams of this working some place.

On the surface it’s pretty simple, right? You call a lambda, that has an ARN or Task Definition it calls. You could even use SNS, SQS or Kinesis to communicate between those two layers. However, I haven’t found a nice way to bridge the gap. I’m just using the AWS SDK to make calls and setting ARN’s as environment variables, stuff like that, which doesn’t feel too clean.

In a ‘traditional’ microservice stack, you have things like service discovery, or service meshes. Ultimately, I’d love to be able to use service discovery to call a service, and the client doesn’t know or care if that’s a Lambda or a Fargate container or not. But I’m not sure if that’s actually valuable or not, ultimately. I guess I’m just after a unified approach to using both Lambda and something like Fargate within AWS. Any suggestions or ideas greatly welcomed.

Posts: 1

Participants: 1

Read full topic

          Best practice for integrating OpenAPI spec with Serverless

@saintberry wrote:

Hi all,

I’ve been doing some research to see if Serverless Framework is the best approach for my team vs rolling our own tooling and integrating with APIG directly. It seems like there are pros and cons (like anything).

One issue that seems to be up in the air is OpenAPI spec integration. The community seems to agree that it makes sense to define the API spec along with schemas and validation in one location and keep that one spec up-to-date. OpenAPI spec seems to be becoming the de-facto standard for describing APIs and is getting wider support from AWS. As interfaces generally last longer than their implementations I’m sure we should be describing our APIs with OpenAPI spec.

I’ve come across this GitHub issue ( which outlines an approach of using some Serverless plugins to allow documenting the Gateway with swagger and making use of validators / schemas.

Can anyone offer any further advice or point me to any reference documentation on best practice?

Essentially I want to be able to describe my APIs with OpenAPI/Swagger so the interface is adhering to a standard and somewhat future-proof and make use of the time saving features of Serverless.

I’m very much in the research phase so looking for docs/case studies/experiences.

I should note that the applications we will be working on will be made up of various APIs that use various technologies to deliver their functionality. That is to say not all endpoints will be implemented in APIG/Serverless.
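For what it’s worth, recent versions of the Serverless Framework can attach JSON Schema request validators to API Gateway endpoints natively, so a single schema file can back both the OpenAPI document and runtime validation. A sketch, assuming a hypothetical `createUser` endpoint and schema path (older setups rely on plugins such as serverless-aws-documentation for the same effect):

```yaml
# serverless.yml (fragment) -- hypothetical service
functions:
  createUser:
    handler: handler.create
    events:
      - http:
          path: users
          method: post
          request:
            schemas:
              application/json: ${file(schemas/create-user.json)}
```

API Gateway then rejects non-conforming requests before they reach the Lambda, and the same `create-user.json` schema can be referenced from the OpenAPI spec’s `components/schemas` section.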


Posts: 1

Participants: 1

Read full topic
