
          Pulumi: SaaS multicloud app deployment service
Pulumi seeks to “Program the Cloud,” providing infrastructure for serverless containers. For an introductory overview of Pulumi, see this report from GeekWire: It didn’t take long for the ex-Microsoft engineers behind Pulumi to prove their software development platform for multicloud computing was worthy of additional investment. Pulumi launched its SaaS multicloud app deployment service, which could help businesses […]
          Stackery CEO on the AWS Serverless Application Model

On this newest episode of The New Stack Makers podcast, TNS founder Alex Williams is joined by Stackery CEO Nate Taggart, at ServerlessConf San Francisco, to discuss the makings of Stackery, and how it has standardized on top of the AWS Serverless Application Model to the benefit of both its developers and enterprise customers. Prior […]

The post Stackery CEO on the AWS Serverless Application Model appeared first on The New Stack.


          Comment on Creating a coin recognizer with Watson's Visual Recognition and OpenCV in Python3 by René Meyer
This is a nice example. The only limitation I see is that it is really hard for a machine to distinguish a €1 cent coin from a €2 cent coin, especially from the back side only. In the second picture you can see that the 1 cent coin is detected as a 2 cent coin. Maybe you need to add some other information, like the relative/absolute size, to weight the result of Watson VR in a final step? I'm working on a similar application for iOS with a serverless Python backend and am facing that problem now.
          AWS Apps Developer
TX-Dallas, We’re hiring an AWS Apps Dev to build serverless architecture platforms. Azure experience can be a substitute for AWS depending on your projects. Required Experience/Qualifications: Bachelor’s or Graduate Degree in Computer Science, Engineering, or similar Agile IT environment experience 3+ years of designing AWS cloud based applications - Azure experience may be considered Serverless architecture
          This Company Wants to Make the Internet Load Faster

The internet went down on February 28, 2017. Or at least that's how it seemed to some users as sites and apps like Slack and Medium went offline or malfunctioned for four hours. What actually happened is that Amazon's enormously popular S3 cloud storage service experienced an outage, affecting everything that depended on it.

It was a reminder of the risks when too much of the internet relies on a single service. Amazon gives customers the option of storing their data in different "availability regions" around the world, and within those regions it has multiple data centers in case something goes wrong. But last year's outage knocked out S3 in the entire Northern Virginia region. Customers could of course use other regions, or other clouds, as backups, but that involves extra work, including possibly managing accounts with multiple cloud providers.

A San Francisco-based startup called Netlify wants to make it easier to avoid these sorts of outages by automatically distributing its customers’ content to multiple cloud computing providers. Users don't need accounts with Amazon, Microsoft Azure, Rackspace, or any other cloud company―Netlify maintains relationships with those services. You just sign up for Netlify, and it handles the rest.

You can think of the company's core service as a cross between traditional web hosting providers and content delivery networks, like Akamai, that cache content on servers around the world to speed up websites and apps. Netlify has already attracted some big tech names as customers, often to host websites related to open source projects. For example, Google uses Netlify for the website for its infrastructure management tool Kubernetes, and Facebook uses the service for its programming framework React. But Netlify founders Christian Bach and Mathias Biilmann don't want to just be middlemen for cloud hosting. They want to fundamentally change how web applications are built, and put Netlify at the center.

Traditionally, web applications have run mostly on servers. The applications run their code in the cloud, or in a company's own data center, assemble a web page based on the results, and send the result to your browser. But as browsers have grown more sophisticated, web developers have begun shifting computing workloads to the browser. Today, browser-based apps like Google Docs or Facebook feel like desktop applications. Netlify aims to make it easier to build, publish, and maintain these types of sites.

Back to the Static Future

Markus Seyfferth, the COO of Smashing Media, was converted to Netlify's vision when he saw Biilmann speak at a conference in 2016. Smashing Media, which publishes the web design and development publication Smashing Magazine and organizes the Smashing Conference, was looking to change the way it managed its roughly 3,200-page website.

Since its inception in 2006, Smashing Magazine had been powered by WordPress, the content management system that runs about 32 percent of the web according to technology survey outfit W3Techs, along with e-commerce tools to handle sales of books and conference tickets and a third application for managing its job listing site. Using three different systems was unwieldy, and the company's servers struggled to handle the site’s traffic, so Seyfferth was looking for a new approach.

When you write or edit a blog post in WordPress or similar applications, the software stores your content in a database. When someone visits your site, the server runs WordPress to pull the latest version from the database, along with any comments that have been posted, and assembles it into a page that it sends to the browser.

Building pages on the fly like this ensures that users always see the most recent version of a page, but it's slower than serving prebuilt "static" pages that have been generated in advance. And when lots of people are trying to visit a site at the same time, servers can bog down trying to build pages on the fly for each visitor, which can lead to outages. That leads companies to buy more servers than they typically need; what’s more, servers can still be overloaded at times.

"When we had a new product on the shop, it needed only a couple hundred orders in one hour and the shop would go down," Seyfferth says.

WordPress and similar applications try to make things faster and more efficient by "caching" content to reduce how often the software has to query the database, but it's still not as fast as serving static content.

Static content is also more secure. Using WordPress or similar content managers exposes at least two "attack surfaces" for hackers: the server itself, and the content management software. By removing the content management layer, and simply serving static content, the overall "attack surface" shrinks, meaning hackers have fewer ways to exploit software.

The security and performance advantages of static websites have made them increasingly popular with software developers in recent years, first for personal blogs and now for the websites for popular open source projects.

In a way, these static sites are a throwback to the early days of the web, when practically all content was static. Web developers updated pages manually and uploaded pre-built pages to the web. But the rise of blogs and other interactive websites in the early 2000s popularized server-side applications that made it possible for non-technical users to add or edit content, without special software. The same software also allowed readers to add comments or contribute content directly to a site.

At Smashing Media, Seyfferth didn't initially think static was an option. The company needed interactive features, to accept comments, process credit cards, and allow users to post job listings. So Netlify built several new features into its platform to make a primarily static approach more viable for Smashing Media.

The Glue in the Cloud

Biilmann, a native of Denmark, spotted the trend back to static sites while running a content management startup in San Francisco, and started a predecessor to Netlify called BitBalloon in 2013. He invited Bach, his childhood best friend, who was then working as an executive at a creative services agency in Denmark, to join him in 2015, and Netlify was born.

Initially, Netlify focused on hosting static sites. The company quickly attracted high-profile open source users, but Biilmann and Bach wanted Netlify to be more than just another web-hosting company; they sought to make static sites viable for interactive websites.

Open source programming frameworks have made it easier to build sophisticated applications in the browser. And there's a growing ecosystem of services like Stripe for payments, Auth0 for user authentication, and Amazon Lambda for running small chunks of custom code, that make it possible to outsource many interactive features to the cloud. But these types of services can be hard to use with static sites, because some sort of server-side application is often needed to act as a middleman between the cloud and the browser.

Biilmann and Bach want Netlify to be that middleman, or as they put it, the "glue" between disparate cloud computing services. For example, they built an e-commerce feature for Smashing Media, now available to all Netlify customers, that integrates with Stripe. It also offers tools for managing code that runs on Lambda.

Smashing Media switched to Netlify about a year ago, and Seyfferth says it's been a success. It's much cheaper and more stable than traditional web application hosting. "Now the site pretty much always stays up no matter how many users," he says. "We'd never want to look back to what we were using before."

There are still some downsides. WordPress makes it easy for non-technical users to add, edit, and manage content. Static site software tends to be less sophisticated and harder to use. Netlify is trying to address that with its own open source static content management interface called Netlify CMS. But it's still rough.

Seyfferth says for many publications, it makes more sense to stick with WordPress for now because Netlify can still be challenging for non-technical users.

And while Netlify is a developer darling today, it's possible that major cloud providers could replicate some of its features. Google already offers a service called Firebase Hosting that offers some similar functionality.

For now, though, Bach and Biilmann say they're just focused on making their serverless vision practical for more companies. The more people who come around to this new approach, the more opportunities there are not just for Netlify, but for the entire new ecosystem.

          flask-serverless added to PyPI
AWS Lambda easy integration with Flask web framework.
          Building Azure Functions: Part 3 – Coding Concerns

Originally posted on: http://bobgoedkoop.nl/archive/2017/02/02/building-azure-functions-part-3-ndash-coding-concerns.aspx

[Image: Azure Functions logo]

In this third part of my series on Azure Function development I will cover a number of development concepts and concerns.  These are just some of the basics.  You can look for more posts coming in the future that will cover specific topics in more detail.

General Development

One of the first things you will have to get used to is developing in a very stateless manner.  Any other .NET application type has a class at its base.  Functions, on the other hand, are just what they say: a method that runs within its own context.  Because of this you don’t have anything resembling a global or class-level variable.  This means that if you need something like a logger in every method, you have to pass it in.

[Update 2017-02-13] The above information is not completely correct.  You can implement function-global variables by defining them as private static.

You may find that it makes sense to create classes within your function, either as DTOs or to make the code more manageable.  Start by adding a .csx file in the files view pane of your function.  The same coding techniques and standards apply as in your Run.csx file; otherwise, develop the class as you would any other .NET class.
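
As a minimal sketch (the file and type names here are mine, not from the original post), a DTO in its own .csx file can be referenced from Run.csx with the #load directive:

// Order.csx -- a simple DTO kept alongside Run.csx
public class Order
{
    public string Id { get; set; }
    public decimal Total { get; set; }
}

// Run.csx -- pull the class file in with #load
#r "Newtonsoft.Json"
#load "Order.csx"

using Newtonsoft.Json;

public static void Run(string myQueueItem, TraceWriter log)
{
    // deserialize the incoming queue message into the DTO
    Order order = JsonConvert.DeserializeObject<Order>(myQueueItem);
    log.Info($"Order {order.Id} totals {order.Total}");
}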


In the previous post I showed how to create App Settings.  If you took the time to create them you are going to want to be able to retrieve them.  The GetEnvironmentVariable method of the Environment class gives you the same capability as using AppSettings from ConfigurationManager in traditional .NET applications.

System.Environment.GetEnvironmentVariable("YourSettingKey")

A critical coding practice for functions that use perishable resources such as queues is to make sure that if you catch and log an exception, you rethrow it so that your function fails.  This will cause the queue message to remain on the queue instead of being dequeued.
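
A minimal sketch of that pattern in a queue-triggered Run.csx (the processing call is a placeholder):

public static void Run(string myQueueItem, TraceWriter log)
{
    try
    {
        ProcessMessage(myQueueItem); // placeholder for the real work
    }
    catch (Exception ex)
    {
        log.Error($"Failed to process: {myQueueItem}", ex);
        throw; // rethrow so the function fails and the message stays on the queue
    }
}

private static void ProcessMessage(string message)
{
    // real business logic would go here
}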

Debugging


It can be hard to read the log when the function is running at full speed, since instances run in parallel but report to the same log.  I would suggest that you add the process ID to your TraceWriter logging messages so that you can correlate them.
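
For example (a sketch; the message format is arbitrary):

using System.Diagnostics;

public static void Run(string myQueueItem, TraceWriter log)
{
    // prefix each entry with the process ID so parallel instances can be told apart
    int pid = Process.GetCurrentProcess().Id;
    log.Info($"[{pid}] Processing: {myQueueItem}");
}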

Even more powerful is the ability to remote debug functions from Visual Studio.  To do this, open your Server Explorer and connect to your Azure subscription.  From there you can drill down to the Function App in App Services and then to the run.csx file in the individual function.  Once you have opened the code file and placed your breakpoints, right-click the function and select Attach Debugger.  From there it acts like any other Visual Studio debugging session.


Race Conditions

I wanted to place special attention on this subject.  As with any highly parallel/asynchronous processing environment, you will have to make sure that you take into account any race conditions that may occur.  If at all possible, keep the functionality that you create to non-related pieces of data.  If it is critical that items in a queue, blob container or table storage are processed in order, then Azure Functions are probably not the right tool for your solution.

Summary

Azure Functions are one of the most powerful units of code available.  Hopefully this series gives you a starting point for your adventure into serverless applications and you can discover how they can benefit your business.


          Building Azure Functions: Part 1–Creating and Binding

Originally posted on: http://bobgoedkoop.nl/archive/2017/01/31/building-azure-functions-part-1ndashcreating-and-binding.aspx

[Image: Azure Functions logo]

The latest buzzword is serverless applications.  Azure Functions are Microsoft’s offering in this space.  As with most products that are new on the cloud, Azure Functions are still evolving and can therefore be challenging to develop.  Documentation is still being worked on at the time I am writing this, so here are some things that I have learned while implementing them.

There is a lot to cover here so I am going to break this topic into a few posts:

  1. Creating and Binding
  2. Settings and References
  3. Coding Concerns

Creating A New Function

The first thing you are going to need to do is create a Function App.  This is an App Services product that serves as a container for your individual functions.  The easiest way I’ve found to start is to go to the main add (+) button on the Azure Portal and then do a search for Function App.


Click on Function App and then the Create button when the Function App blade comes up.  Fill in your app name, remembering that this is a container and not your actual function.  As with other Azure features you need to supply a subscription, resource group and location.  Additionally, for a Function App you need to supply a hosting plan and storage account.  If you want to take full advantage of Function App scaling and pricing, leave the default Consumption Plan.  This way you only pay for what you use.  If you choose App Service Plan, you will pay for it whether your function is actually processing or not.


Once you click Create the Function App will start to deploy.  At this point you can start to create your first function in the Function App.  Once you find your Function App in the list of App Services, it will open the blade shown below.  It offers a quick start page, but I quickly found that it didn’t give me options I needed beyond a simple “Hello World” function.  Instead press the New Function link at the left.  You will be offered a list of trigger-based templates, which I will cover in the next section.


Triggers


Triggers define the event source that will cause your function to be executed.  While there are many different triggers and there are more being added every day, the most common ones are included under the core scenarios.  In my experience the most useful are timer, queue, and blob triggered functions.

Queues and blobs require that a connection to a storage account be defined.  Fortunately this is created with a couple of clicks and can be shared between triggers and bindings, as well as between functions.  Once you have that, you simply enter the name of the queue or blob container and you are off to the races.

When it comes to timer-dependent functions, the main topic you will have to become familiar with is cron scheduling definitions.  If you come from a Unix background or have been working with the more recent timer-based WebJobs, this won’t be anything new.  Otherwise, the simplest way to remember it is that each time increment is defined by a division statement.
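
For instance, a timer trigger’s function.json takes a six-field cron expression with a seconds field first; this sketch fires every five minutes:

{
  "bindings": [
    {
      "name": "myTimer",
      "type": "timerTrigger",
      "direction": "in",
      "schedule": "0 */5 * * * *"
    }
  ]
}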


In the case of queue triggers the parameter that is automatically added to the Run method signature will be the contents of the queue message as a string.  Similarly most trigger types have a parameter that passes values from the triggering event.
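
A sketch of the corresponding queue trigger binding in function.json (the queue name and connection setting name are illustrative):

{
  "bindings": [
    {
      "name": "myQueueItem",
      "type": "queueTrigger",
      "direction": "in",
      "queueName": "orders",
      "connection": "MyStorageConnection"
    }
  ]
}

The name value is what ties the binding to the matching parameter in the Run method signature.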

Input and Output Bindings


Some of the function templates include an output binding.  If none of these fit your needs or you just prefer to have full control you can add a binding via the Integration tab.  The input and output binding definitions end up in the same function.json file as the trigger bindings. 

The one gripe I have with these bindings is that they connect to a specific entity at the beginning of your function.  I would find it preferable to bind to the parent container of whatever source you are binding to and have a set of standard commands available for normal CRUD operations.

Let’s say that you want to load an external configuration file from blob storage when your function starts.  The path shown below specifies the container and the blob name.  The default format shows a variable “name” as the blob name.  This needs to be a variable that is available and populated when the function starts, or an exception will be thrown.  As for your storage account, specify it by clicking the “new” link next to the dropdown and picking the storage account from those that you have available.  If you specified a storage account while defining your trigger and it is the same as your binding, it can be reused.


The convenient thing about blob bindings is that they are bound as strings, so for most scenarios you don’t have to do anything else to leverage them in your function.  You will have to add a string parameter to the function’s Run method that matches the name in the blob parameter name text box.
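
Putting the pieces together, a sketch of a blob input binding and the matching Run signature might look like this (the path, setting and parameter names are illustrative, and a fixed blob path is used instead of the default {name} variable):

{
  "bindings": [
    {
      "name": "configFile",
      "type": "blob",
      "direction": "in",
      "path": "config/settings.json",
      "connection": "MyStorageConnection"
    }
  ]
}

public static void Run(string myQueueItem, string configFile, TraceWriter log)
{
    // configFile arrives with the blob contents already read into a string
    log.Info($"Loaded {configFile.Length} characters of configuration");
}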

Summary

That should give you a starting point for getting the shell of your Azure Function created.  In the next two posts I will add settings, assembly references and some tips for coding your function.


          Back-end developer - Mylo - Montréal, QC
Cloudformation, EC2, ECS, Serverless, ELB, S3, VPC, IAM, CloudWatch) to develop and maintain an AWS based cloud solution, with an emphasis on best practice...
From Mylo - Tue, 30 Oct 2018 05:03:31 GMT - View all Montréal, QC jobs
          Hybrid cloud complexity pushes organizations to look for more security tools

As more organizations embrace hybrid cloud – with more than 50 percent claiming a hybrid cloud setup – and serverless, now used by close to a third of organizations, they lack the tools and specialization to keep up, according to Alcide. 75 percent of respondents expect to see an increase in the number of security tools they rely on in the next year, while over half say they still manually configure security policies. The resulting complexity … More

The post Hybrid cloud complexity pushes organizations to look for more security tools appeared first on Help Net Security.


          The world belongs to containers, and to microservices, but in the end it belongs to serverless

The subtitle reads: “Hyper, Fargate, and Serverless infrastructure”.

There are two kinds of infrastructure in the world: the kind you take off the shelf, and the kind you control yourself.
Forgive me for riding a hype wave that has already been doused and never really caught fire. What we like is the off-the-shelf kind: good enough is enough, we neither want nor need too much control, and we don't want too much trouble either; in other words, fully managed.
The idea for this article came from a Microsoft Azure blog post I read a few days ago, The Future of Kubernetes Is Serverless, after which I also revisited Changing the calculus of containers in the cloud, written by the CTO of AWS. You might suspect both articles of being advertisements, since each promotes its own public cloud services, but reading every sentence carefully I found hardly a wasted word; they are very much to the point, and you can click through to read the originals.
One premise needs stating up front: the "serverless" here means serverless infrastructure, not the functions-as-a-service (FaaS) technologies we usually hear about, such as AWS Lambda, Microsoft Azure Functions or Google Cloud Functions. To keep them apart, we will call those FaaS offerings serverless compute, which is not the same thing as the serverless infrastructure this article introduces.

IaaS: the beginning of the change

Speaking of infrastructure, let us start with the first to appear, IaaS, or infrastructure as a service. IaaS did away with most hardware provisioning work; nobody has to care about racks, power or servers anymore, making operations faster and easier. It felt like a liberation for many people and set everyone on the road to riches.
Of course, this generation of cloud computing services is about far more than being able to start a virtual machine in a few minutes.
Besides VMs, IaaS vendors offer many other infrastructure and middleware services. These components are called building blocks: the old trio of networking and firewalls, databases and caches, and lately a great many business-scenario services such as big data, machine learning and algorithms, and IoT. It looks like a department store: using cloud computing is like shopping, an architecture design is a shopping list, and every component in the architecture can be bought in the store.
With the infrastructure assembled from an IaaS provider's services, application development can focus more on the business. This brings many benefits:
  • Concentrate your effort on the core business
  • Get to market faster
  • Improve availability
  • Scale out and in quickly
  • No need to care about the infrastructure underneath the middleware
  • No more tedious installation, configuration, backup and security management work
After AWS became the industry standard, the big software companies, newcomers and incumbents alike, set out to build their own clouds: Microsoft, Google and IBM abroad; in China, the BAT companies each have their own cloud, e-commerce players such as JD and Meituan have their own cloud products, independent vendors such as UCloud and QingCloud are doing well, and even a restaurant operator has come out to join the fun. Meanwhile the open-source OpenStack, and the startups and products built on it, keep emerging.
Everyone does cloud.

Containers: cloud computing wins everyone over

Then, in 2013, container technology began to spread to the masses. Before LXC, containers were hardly in the same dimension as ordinary developers, or even IT practitioners: obscure terminology and a maze of commands that only specialists could master, and most people had never used them. With Docker, the barrier to entry fell and containers became commonplace in the software industry. After several years of development, container technology is by now quite mature and has become standard equipment for development.
As containers matured and spread, application architecture changed as well; the evolution of software and infrastructure go hand in hand. People increasingly recognized that layering and decoupling the technology stack matters: the boundaries of technology, responsibility and ownership between layers become clear, much like the loose-coupling principle in software design.
With clearly delineated layers, everyone can more easily focus on their own concerns. Developers pay more attention to the application itself; as Docker took off, the app-centric idea appeared. CoreOS even named its standard to rival OCI/runc "appc". Of course today's Docker is no longer the old Docker either: it is componentized, and many components, the runtime especially, can be replaced by other runtimes.
The counterpart of application-centric is the traditional infrastructure-centric approach: the infrastructure exists first, and architecture, development and deployment revolve around it, constrained by it at every turn. With the rise of services like IaaS, infrastructure has become simpler and more approachable, and it exposes programmatic interfaces, so developers can manage infrastructure themselves with great convenience. You could say the arrival of cloud computing let developers take over part of the operations people's jobs (sadly the trend is on too much of a high to stop...).
By now, of course, the application-centric idea has won everyone over. FaaS especially, evolution taken to the extreme: you write a few lines of code, and the platform takes care of everything else.

Orchestration: the ground every army fights for

Containers solved code portability and made new patterns of infrastructure use possible in the cloud. A consistent, immutable deployment artifact such as an image frees us from complex server deployments and can be deployed to different runtime environments with ease (portability).
But containers also created new work. To run our code in containers we need a container management system: after writing the code and packaging it into a container, we have to choose a suitable runtime environment, set up the right scaling configuration, the network connections, security policies and access control, plus monitoring, logging and distributed tracing.
Orchestration systems appeared because one machine is no longer enough: we prepare many machines and run containers on them, and we do not care which machine a container lands on; that is the scheduler's job. At some level, orchestration systems gradually de-emphasize the notion of a host: what we face is a resource pool, a group of machines, so many CPUs and so much memory of compute resources.
The outcome of the rkt vs Docker war could be foreseen from the start, but in orchestration/cluster management the "war" held far more uncertainty.
Mesos (DC/OS) appeared earliest, with Twitter and others as showcases, and was the standard for early container scheduling; Swarm, with its first-party pedigree, simplicity and affinity with Docker, fought for its share of territory; but the winner now looks to be K8s. Kubernetes has Google behind it, Google's years of scheduling experience, the backing of anti-Docker companies like Red Hat and CoreOS, and a flourishing community. In short, it won.
Reportedly 4,300 people attended this year's KubeCon in Copenhagen. Then again, DockerCon once commanded that kind of attention, and its influence is no longer what it was, a bit of yesterday's news. Who knows how Kubernetes will fare in a few years, or whether some new product or service will shake its position? Not necessarily, but we can look forward to it.

Serverless infrastructure: the result of evolution

But de-emphasizing the host is only de-emphasis; the concept of a host has not disappeared. We merely face hosts directly less often: we no longer deploy straight onto them, and no longer assign dedicated hosts to certain departments. A host that breaks still has to be restarted, and when resources run out new hosts still have to be added. The management work never fully went away.
Moreover, running a cluster brings great complexity, which contradicts the original intent of using cloud computing and can hardly be called cloud native.
Looking again from the user's perspective exposes a question we ignored for a long time: if all we want is to run containers, why must we first buy a VM and install Docker on it, or build a Kubernetes cluster ourselves, or use a service like EKS, tirelessly configuring and debugging, paying fixed asset costs while piling on operations work that adds no value whatsoever?
If we found manually deploying containers across many hosts too troublesome and handed it to a cluster management and scheduling system, then can the equally tedious work of maintaining the scheduling system also be handed to someone else, outsourced?
By lean thinking, any process unrelated to the core business goal that brings no customer value is waste and should be eliminated.
At this point, serverless infrastructure services appeared: the earliest was hyper.sh in China (GA in August 2016), followed last year by AWS Fargate (December 2017) and Microsoft's ACI (Azure Container Instances, July 2017).
Take hyper.sh as an example. It is used much like Docker, and you can carry your local Docker experience to the cloud unchanged:
$ brew install hyper 
$ hyper pull mysql
$ hyper run mysql
MySQL is running...
$ hyper run --link mysql wordpress
WordPress is running...
$ hyper fip attach 22.33.44.55 wordpress
22.33.44.55
$ open 22.33.44.55
For most commands you just swap docker for hyper, and the experience is exactly like using Docker. Seeing an application like this for the first time is no less striking than Docker was at its debut.
With serverless infrastructure, we no longer have to worry about any of the following:
  • No more agonizing over VM instance types, or how many CPUs and how much memory we need
  • No more worrying about which versions of Docker and cluster management software to use
  • No more middleware security holes inside the VM to patch
  • No more worrying about low cluster utilization
  • Paying for running containers instead of paying for a resource pool
  • Completely immutable infrastructure
  • No more feeling queasy at all the boring agents you see when you run ps
All we have to do is write our business applications in peace, build our images, pick a suitable container size, and pay the cloud vendors to build the platform well, and watch their stock climb.

Fargate (you may substitute ACI here): the big players take a stand

Although AWS is not as “enthusiastic” about containers as GCP, it has long offered ECS (Elastic Container Service).
AWS Fargate, released last year, is a serverless container service. Fargate is the technology that abstracts away the infrastructure needed to run containers for AWS's container services, such as ECS (Elastic Container Service) and EKS (Elastic Kubernetes Service), and today ECS can already use Fargate directly.
Unlike EC2, which provides virtual machines, Fargate provides container runtime instances. The container is the basic unit of compute, with no need to worry about managing the underlying host instances: you simply build a container image, specify the CPU and memory it needs, and set the appropriate network and IAM (identity management) policies.
To the question raised earlier, AWS's answer is: we will fill in the infrastructure potholes, you just concentrate on writing your applications; don't worry about how many resources to launch, we will handle capacity management for you, and you pay only for what you use.
You could say that Fargate, Lambda and similar products were all born of this philosophy.
At last we can concentrate on the CRUD we write best. Happy, happy.

Serverless infrastructure vs Serverless compute

A few more words, mainly to help distinguish two different serverless architectures: serverless compute and serverless infrastructure.
Honestly, going straight from EC2 to Lambda is a rather big step to take.
FaaS products such as Lambda are simpler, but they come with plenty of drawbacks:
  • Use cases: Lambda suits user-driven or event-driven work, not daemon services, batch processing and the like
  • Flexibility: fixed kernels, AMIs and so on, with no way to customize
  • Resource limits: many limits on the filesystem, memory, process and thread counts, request body size, execution time and more
  • Programming language restrictions
  • Service governance is difficult
  • Debugging and testing are inconvenient
Compared with containers, Lambda's biggest advantages are that the operations work is smaller still, essentially none, and billing is more precise, so you do not pay for wasted compute; Lambda also responds faster and scales out more efficiently.
Container instances like Fargate can be seen as products that combine the strengths of EC2 instances and Lambda: as lightweight as Lambda and focused on the core application, yet with much of EC2's flexibility and control.
Cloud native gives users more control while asking less investment and burden of them.
Serverless infrastructure makes containers more cloud native.

Fully managed: the way things are going

So-called fully managed means that at very little cost, users get the product or service they want, with an appropriate degree of control over it.
In the last couple of days, Alibaba Cloud released Serverless Kubernetes. Serverless Kubernetes is fully compatible with native Kubernetes: you can deploy and manage applications with the standard API and CLI, keep using all your existing assets, and still get enterprise-grade availability and security. Perhaps soon we will not even install Kubernetes ourselves, and most people will only need to master the kubectl command.
The arrival of IaaS let us discard our provisioning tools, while configuration management tools sprang up like mushrooms after rain; containers made us toss the barely opened Chef/Puppet primers and bibles and scramble to learn Kubernetes; with serverless infrastructure, we can pretty much say goodbye to the orchestration tools too.
Whether you are moving from a monolith to microservices, or moving legacy systems to the cloud and to containers, I suspect everyone will come to love fully managed services. Everyone does ops, and much of the operations work can be shared. Of course, some operations engineers will flee with their faces buried in their hands.

          (USA-NY-New York) Senior Software Engineer - REACT, NodeJS, React Native
Senior Software Engineer - REACT, NodeJS, React Native Senior Software Engineer - REACT, NodeJS, React Native - Skills Required - REACT, React Native, IOS, Android, AWS, NodeJS, Serverless, JavaScript If you are a Senior Software Engineer with experience, please read on! Based in the Big Apple, we are bringing health and wellness to the modern day user! With our recent Series A funding, we are doubling our team size and will continue to grow our business with potential new launches in the coming year! Our product is meant to have a positive impact on our user's lives and is deployed worldwide! **What You Will Be Doing** You will be working closely with our product, data, and design teams while leading our mid-level developers to build out our future products. **What You Need for this Position** More Than 4 Years of Experience and Knowledge of: - JavaScript - REACT - NodeJS - AWS Nice to Have: - React Native - Android - IOS - Serverless **What's In It for You** - Competitive Compensation of $130-180k DOE - Unlimited Vacation - 100% Medical, Dental, Vision - Standing desks so you aren't sitting all day! - Take a break with company game nights! - Need a pick me up? Free snacks in the office! So, if you are a Senior Software Engineer with experience, please apply today! Applicants must be authorized to work in the U.S. **CyberCoders, Inc is proud to be an Equal Opportunity Employer** All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, disability, protected veteran status, or any other characteristic protected by law. **Your Right to Work** – In compliance with federal law, all persons hired will be required to verify identity and eligibility to work in the United States and to complete the required employment eligibility verification document form upon hire. *Senior Software Engineer - REACT, NodeJS, React Native* *NY-New York* *MT1-1493010*
          (USA-CA-San Francisco) Go Lang Developer - National TELECOMM Company
Go Lang Developer - National TELECOMM Company Go Lang Developer - National TELECOMM Company - Skills Required - Go, .NET, JavaScript, C#, NODE, NodeJS, Node.js, Software engineers, Software Developer, API If you are a Go Lang Developer with experience, please read on! Title: Go Lang Developer Location: Downtown San Francisco Salary: Negotiable | Depending on experience Based in our downtown SF, CA, we are a growing TELECOMM company making a huge impact into our industry. You will be responsible for maintaining and enhancing our suite of websites and APIs, utilizing a wide-range of technologies and methodologies supporting monolithic, micro-service, and serverless based solutions. **What You Will Be Doing** - Design, develop and unit test web-based software - Follow best practice coding standards - Produce quality code with unit tests and documentation - Design activities, pull request reviews, code reviews, demos to other engineers - Communicate technical concepts including architec **What You Need for this Position** - Go - JavaScript/Node.js - REST API **What's In It for You** - Competitive base salary and overall compensation package - Full benefits: Medical, Dental, Vision - 401 (K) with generous company match - Generous Paid time off (PTO) - Vacation, sick, and paid holidays - Life Insurance coverage 1. Apply directly to this job opening here! Or 2. E-mail directly for more information to James@CyberCoders.com Applicants must be authorized to work in the U.S. **CyberCoders, Inc is proud to be an Equal Opportunity Employer** All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, disability, protected veteran status, or any other characteristic protected by law. **Your Right to Work** – In compliance with federal law, all persons hired will be required to verify identity and eligibility to work in the United States and to complete the required employment eligibility verification document form upon hire. *Go Lang Developer - National TELECOMM Company* *CA-San Francisco* *JT7-1492834*
          (USA-CO-Golden) Go Lang Developer - National TELECOMM Company
Go Lang Developer - National TELECOMM Company Go Lang Developer - National TELECOMM Company - Skills Required - Go, .NET, JavaScript, C#, NODE, NodeJS, Node.js, Software engineers, Software Developer, API If you are a Go Lang Developer with experience, please read on! Title: Go Lang Developer Location: Golden, CO Salary: Negotiable | Depending on experience Based in our Golden, CO, we are a growing TELECOMM company making a huge impact into our industry. You will be responsible for maintaining and enhancing our suite of websites and APIs, utilizing a wide-range of technologies and methodologies supporting monolithic, micro-service, and serverless based solutions. **What You Will Be Doing** - Design, develop and unit test web-based software - Follow best practice coding standards - Produce quality code with unit tests and documentation - Design activities, pull request reviews, code reviews, demos to other engineers - Communicate technical concepts including architec **What You Need for this Position** - Go - JavaScript/Node.js - REST API **What's In It for You** - Competitive base salary and overall compensation package - Full benefits: Medical, Dental, Vision - 401 (K) with generous company match - Generous Paid time off (PTO) - Vacation, sick, and paid holidays - Life Insurance coverage 1. Apply directly to this job opening here! Or 2. E-mail directly for more information to James@CyberCoders.com Applicants must be authorized to work in the U.S. **CyberCoders, Inc is proud to be an Equal Opportunity Employer** All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, disability, protected veteran status, or any other characteristic protected by law. **Your Right to Work** – In compliance with federal law, all persons hired will be required to verify identity and eligibility to work in the United States and to complete the required employment eligibility verification document form upon hire. *Go Lang Developer - National TELECOMM Company* *CO-Golden* *JT7-1492918*
          (USA-CA-Palo Alto) Application Developer
Application Developer (AWS/JAVA) Application Developer (AWS/JAVA) - Skills Required - AWS, Amazon, Java, Lambda If you are a Application Developer with (AWS/JAVA) experience, please read on! **What You Will Be Doing** Design and build new backend services on AWS to support our platform 100% hands-on coding Design and build new front-end features API design and development Maintain and improve the performance of existing software Write tests for existing and created code to ensure compatibility and stability **What You Need for this Position** More Than 5 Years of experience and knowledge of: Bachelor's Degree in Computer Science or equivalent experience Solid understanding of computer science fundamentals, data structure, algorithm distributed systems, and asynchronous or event-driven architectures 4+ years of current JAVA coding experience Experience coding and testing applications that use AWS services components such as EC2, API Gateway, Lambda, S3, EBS, RDS, SQS Experience with microservice architectures, asynchronous frameworks, caching and server side concepts Ability to multi-task easily and juggle priorities in a fast-paced environment Familiarity with source control, Git and working with complex branching Ability to rapidly design, prototype, and iterate to solve problems and fix bugs Desired qualifications Coding applications on AWS using JAVA a MUST AWS Lambda, Serverless on Java and/or node.js AWS Developer Certifications is a big plus Experience working with large file processing 10 GB to 100GB type range Exposure to scientific software or biological research is a bonus Nice to have: front-end web development experience using a javascript framework, Ruby on Rails or similar, etc. So, if you are a Application Developer with (AWS/JAVA) experience, please apply today! Applicants must be authorized to work in the U.S. **CyberCoders, Inc is proud to be an Equal Opportunity Employer** All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, disability, protected veteran status, or any other characteristic protected by law. **Your Right to Work** – In compliance with federal law, all persons hired will be required to verify identity and eligibility to work in the United States and to complete the required employment eligibility verification document form upon hire. *Application Developer* *CA-Palo Alto* *HT1-1492959*
          Alcide Report Finds 75% Will Increase the Number of Cloud Security Tools They Re ...

As Hybrid Cloud and Serverless Continue to Gain Ground, Organizations Rush to Keep Up; Fewer than Half Have Dedicated Cloud Security Teams

Tel Aviv, November 6, 2018. Alcide, provider of the most comprehensive full-stack cloud-native security platform, today released the findings of a new industry report, 2018 Report: The State of Securing Cloud Workload, based on responses from close to 350 security, DevOps and IT leaders. The report reveals that as more organizations embrace hybrid cloud (more than 50 percent claim a hybrid cloud setup) and serverless (now used by close to a third of organizations), they lack the tools and specialization to keep up. 75 percent of respondents expect to see an increase in the number of security tools they rely on in the next year, while over half share that they still manually configure security policies. The resulting complexity has the potential to slow critical business functions in the absence of an integrated security approach to distributed cloud environments.

According to a recent report from 451 Research:

The pace of innovation in cloud-native environments places a significant burden on traditional security practices. Not only is there a need to support new technology options quickly (moving from traditional virtual machines to containers, serverless, and newer constructs such as service mesh), but there is also a difference in how security and DevOps teams consider their needs and workflows.

Alcide’s report, conducted in August 2018 in conjunction with Informa Engage, reinforces the idea that new practices and technologies are disrupting traditional security practices, with findings including:

  • Cloud complexity increasing, with hybrid cloud as the new infrastructure normal: While virtual machines (VMs) remain the most common cloud computing environment (83%), containers (37%), serverless (28%), and service mesh (21%) are gaining traction. Hybrid and multi-cloud approaches now make up more than three-quarters of all configurations (77%).
  • Serverless running in production; most bullish about its security: Despite some security concerns, the majority (57%) of serverless users are currently running it in both production and development. The majority currently using serverless have a high degree of confidence in its security, while one third (32%) express a lack of confidence in the security of their environments.
  • As cloud infrastructure complexity grows, security becomes a shared responsibility with DevOps: Fewer than half of organizations (45%) now have a dedicated security team responsible for the cloud, with 35% of all organizations now using either a DevOps team or a dedicated DevSecOps team for security.
  • Hybrid cloud complexity pushes Dev, Sec and Ops teams to look for more tools to secure their distributed environments: Over two-thirds (75%) expect to increase the number of tools in use over the next twelve months, with no one expecting to retire any tools in use. One third of organizations report using more than five tools for cloud security.
  • Proliferation of cloud security tools leaves the enterprise vulnerable, pointing to the need for intelligent policy automation: More than half (60%) of organizations rely on manual configuration of security policies for their apps, while almost all organizations (90%) rely on multiple individuals to configure and set policy rules.

“Our report validates what we’ve seen with our own customers: modern organizations are striving for a consolidated security approach that will support business velocity and tackle the challenges associated with the overhead of multiple tools in use,” said Karine Regev, VP Marketing of Alcide. “Modern teams can’t assume that emerging technologies like serverless are secure, and need practical and uniform enforcement and management of security policies to control disparate cloud-native services, infrastructure and environments.”

Access the full report here for additional insight.

About Alcide:

Alcide’s Cloud Native Security Platform protects any combination of container, serverless, VM and bare metal. Offering real-time, aerial visibility, threat protection and security policies enforcement, Alcide secures the cloud infrastructure, workloads and service mesh against cyber-attacks, including malicious internal activity, lateral movement and data exfiltration. For more information, please visit www.alcide.io



          (IT) Software Developer (DevOps)

Location: Melbourne   

Bluefin are recruiting a number of Software Developers for a large consultancy here in Melbourne. The Developers will be based on site at one of the large enterprise organisations. The project is about enhancing/rebuilding an existing Oracle system that is monolithic in nature and has extreme constraints around performance, capability and agility (time to market). The idea is to build external components to integrate with it in AWS using serverless technology. Part of the project will be to migrate the existing relational database into AWS and possibly move it to a NoSQL structure, if it makes sense to do so. Software Developers are required for this, preferably with some DevOps experience. Some of the tech involved is as follows:
AWS:
  • Lambda
  • DynamoDB
  • CloudFormation
  • CloudWatch
  • SNS
Other:
  • Kafka
  • Splunk
  • NodeJS
  • Java
  • API (RESTful) development
  • NoSQL
  • Spring Boot
 
Type: Contract
Location: Melbourne
Country: Australia
Contact: Bluefin Resources
Advertiser: Bluefin Resources
Reference: JSBBBH33899_153984320850111/567360217

          Reducing Azure Functions Cold Start Time
You can host a serverless function in Azure in two different modes: Consumption plan and Azure App Service plan. The Consumption plan automatically allocates compute power when your code is running. Your app is scaled out when needed to handle load, and scaled down when code is not running. You don’t have to pay for ... Read more: Reducing Azure Functions Cold Start Time
          Digital Solution Architect - NTT DATA Services - Montréal, QC
Fluent in contemporary Digital technologies and platforms, including AWS services, PaaS, FaaS / Serverless, data lake concept, IoT, API design, mobile app dev...
From NTT Data - Thu, 18 Oct 2018 14:13:36 GMT - View all Montréal, QC jobs
          How to make a plugin with aws-sdk pass corporate proxy

@rubentrancoso wrote:

Hi,

I just made a new plugin

https://www.npmjs.com/package/aws-cognito-idp-userpool-domain (it’s on the line to be published)

but I noticed that when I’m behind the corporate proxy, it only works when I add the lines below to the code:

var proxy = require('proxy-agent');
var AWS = require('aws-sdk');

// Route all AWS SDK requests through the corporate proxy
AWS.config.update({
  httpOptions: { agent: proxy('http://username:password@internal.proxy.com') }
});

How can I reuse the proxy setting from the Serverless framework so that I do not need to add this code, and above all the username and password?

I have noticed that even without credentials in the environment variables (https_proxy), serverless works fine. Why does the aws-sdk inside a plugin require credentials?

thanks!



          Custom error message

@kurogami wrote:

I’m looking for ways to return error messages from the API with lambda integration (not lambda-proxy integration), and I want to eliminate the errorStackTrace when using:

callback(new Error('[400] Something is wrong here'))

and want to return something like the response below and get a 400 Bad Request header in Postman:

{
   "message": "Something is wrong here"
}

How do you guys do it?



          C. Desales - Writing and deploying serverless Python applications
          D. Makogon - Applying serverless architecture pattern to distributed data processing
          D. Scardi - Serverless SQL queries from Python to AWS Athena... or power to Data Scientists!
          F. Caboni - Serverless Computing with Python and AWS: Redux
          Oldie but Goldie: BASTA! Spring in the New Year
In 2019, BASTA! Spring once again invites developers to exciting talks, keynotes and workshops from February 25 to March 1, 2019, covering numerous topics: .NET Framework, Microservices & APIs, User Development, Cloud and Serverless are just a few

          Google launches Cloud Scheduler, a cron service for running scheduled jobs in the cloud

Google Cloud Platform has announced a new feature, Cloud Scheduler, a serverless cron service whose underlying infrastructure is managed by GCP.

Google notes that schedulers are normally an essential tool for developers, letting tasks run on a schedule without being kicked off by hand. The big problem with today's job automation systems, though, is that you still have to manage the infrastructure yourself, which is a considerable hassle, and Cloud Scheduler is meant to take care of exactly that.

Cloud Scheduler is a serverless offering: GCP handles the infrastructure, and users only create the job schedule; Cloud Scheduler takes care of the rest. Its core capabilities are the same as cron's, and since Google manages the system, Cloud Scheduler can be trusted to be reliable and fault-tolerant.

Cloud Scheduler is managed through the same channels as other GCP services: you can use the UI, the CLI, or the API, and the schedule format is the same as cron on Unix, which developers who have used it should find familiar. Cloud Scheduler can trigger a Pub/Sub topic, Google App Engine, or any HTTP/S endpoint, so it can be put to many uses, such as updating a database, running a CI/CD pipeline, uploading images, or invoking Cloud Functions through Cloud Pub/Sub.
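
As a rough sketch of the CLI route (the job name, schedule and URL are illustrative; at the time of writing the commands live under the beta component):

$ gcloud beta scheduler jobs create http my-nightly-job \
    --schedule="0 3 * * *" \
    --uri="https://example.com/tasks/cleanup" \
    --http-method=POST

This would invoke the given endpoint every day at 03:00 in the job's time zone.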

Cloud Scheduler is now available in beta. Pricing is $0.10 per job per month, with a free tier of 3 jobs per month per account. For instructions on using Cloud Scheduler, see the Google Cloud Documentation.

Source: Google Cloud Blog

Image from Google


          Announcing the general availability of Azure Event Hubs for Apache Kafka®

In today’s business environment, with the rapidly increasing volume of data and the growing pressure to respond to events in real-time, organizations need data-driven strategies to gain valuable insights faster and increase their competitive advantage. To meet these big data challenges, you need a massively scalable distributed streaming platform that supports multiple producers and consumers, connecting data streams across your organization. Apache Kafka and Azure Event Hubs provide such distributed platforms.

How is Azure Event Hubs different from Apache Kafka?

Apache Kafka and Azure Event Hubs are both designed to handle large-scale, real-time stream ingestion. Conceptually, both are distributed, partitioned, and replicated commit log services. Both use partitioned consumer models with a client-side cursor concept that provides horizontal scalability for demanding workloads.

Apache Kafka is an open-source streaming platform which is installed and run as software. Event Hubs is a fully managed service in the cloud. While Kafka has a rapidly growing, broad ecosystem and has a strong presence both on-premises and in the cloud, Event Hubs is a cloud-native, serverless solution that gives you the freedom of not having to manage servers or networks, or worry about configuring brokers.

Announcing Azure Event Hubs for Apache Kafka

We are excited to announce the general availability of Azure Event Hubs for Apache Kafka. With Azure Event Hubs for Apache Kafka, you get the best of both worlds—the ecosystem and tools of Kafka, along with Azure’s security and global scale.

This powerful new capability enables you to start streaming events from applications using the Kafka protocol directly in to Event Hubs, simply by changing a connection string. Enable your existing Kafka applications, frameworks, and tools to talk to Event Hubs and benefit from the ease of a platform-as-a-service solution; you don’t need to run Zookeeper, manage, or configure your clusters.
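
As an illustration of how small that change is, a Kafka client's properties might look like the following (a hedged sketch: the namespace is made up, and the full connection string comes from the Event Hubs portal):

bootstrap.servers=mynamespace.servicebus.windows.net:9093
security.protocol=SASL_SSL
sasl.mechanism=PLAIN
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="$ConnectionString" password="Endpoint=sb://mynamespace.servicebus.windows.net/;SharedAccessKeyName=...;SharedAccessKey=...";

Everything else in the producer or consumer code stays the same; only the endpoint and credentials change.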

Event Hubs for Kafka also allows you to easily unlock the capabilities of the Kafka ecosystem. Use Kafka Connect or MirrorMaker to talk to Event Hubs without changing a line of code. Find the sample tutorials on our GitHub.

This integration not only allows you to talk to Azure Event Hubs without changing your Kafka applications, you can also leverage the powerful and unique features of Event Hubs. For example, seamlessly send data to Blob storage or Data Lake Storage for long-term retention or micro-batch processing with Event Hubs Capture. Easily scale from streaming megabytes of data to terabytes while keeping control over when and how much to scale with Auto-Inflate. Event Hubs also supports Geo Disaster-Recovery. Event Hubs is deeply-integrated with other Azure services like Azure Databricks, Azure Stream Analytics, and Azure Functions so you can unlock further analytics and processing.

Event Hubs for Kafka supports Apache Kafka 1.0 and later through the Apache Kafka Protocol which we have mapped to our native AMQP 1.0 protocol. In addition to providing compatibility with Apache Kafka, this protocol translation allows other AMQP 1.0 based applications to communicate with Kafka applications. JMS based applications can use Apache Qpid™ to send data to Kafka based consumers.

Open, interoperable, and fully managed: Azure Event Hubs for Apache Kafka.

Next steps

Get up and running in just a few clicks and integrate Event Hubs with other Azure services to unlock further analytics.

Enjoyed this blog? Follow us as we update the features list. Leave us your feedback, questions, or comments below.

Happy streaming!


          Tales From a DevOps Transformation

Show Number: 370

Overview: Aaron talks with Lee Eason (@leejeason; Director of DevOps at Ipreo and the co-founder of Tekata.io) at All Things Open about his DevOps transformation for all of the organization’s 30+ products and 65+ scrum teams leading to a dramatic reduction in manual work and an increase in quality and customer satisfaction across the board.

Show Links:

Show Sponsor Links:

Show Notes

  • Aaron and Lee talk about Lee's talk "Tales from a DevOps Journey", from the All Things Open event in Raleigh, NC - October 22nd, 2018.

Feedback?

