
Mylo - Lead Software Engineer - Mylo - Montréal, QC
CloudFormation, EC2, ECS, Serverless, ELB, S3, VPC, IAM, CloudWatch) to develop and maintain an AWS-based cloud solution, with an emphasis on best practice...
From Mylo - Wed, 29 Aug 2018 20:55:18 GMT - View all Montréal, QC jobs

Solutions Architect - Amazon Web Services - Amazon.com - San Francisco, CA
DevOps, Big Data, Machine Learning, Serverless computing etc. High level of comfort communicating effectively across internal and external organizations....
From Amazon.com - Sun, 02 Sep 2018 07:30:50 GMT - View all San Francisco, CA jobs

Sr. Solutions Architect - AWS - Amazon.com - San Francisco, CA
DevOps, Big Data, Machine Learning, Serverless computing etc. High level of comfort communicating effectively across internal and external organizations....
From Amazon.com - Fri, 25 May 2018 19:20:02 GMT - View all San Francisco, CA jobs
How to Mock AWS Services in TypeScript

I've recently been working on a new project that automatically converts blog posts to audio and consists of a couple of different serverless microservices.

Each microservice is initialized using the Serverless Framework and typically consists of one or more Lambda functions and either an SNS topic, SQS queue, DynamoDB table, or API Gateway endpoint.

I decided to implement each serverless microservice using TypeScript rather than my default of traditional Node. I chose this path because I wanted to have static type checking for a more resilient code base.

TypeScript is a very cool language and has grown in popularity in recent years. It provides benefits beyond static type checking, such as declaration files (.d.ts) that let the compiler understand the shapes of existing JavaScript libraries.

In building out these services I ran into a problem that stopped me dead in my tracks for a few hours. I was writing unit tests for a new adapter inside of one of my services that leverages DynamoDB for storage.

To effectively test the adapter I needed to mock the responses from the DynamoDB client inside the aws-sdk. This proved to be a bit more challenging than I originally thought.

Therefore I thought it would be worthwhile to put together a quick blog post on the two methods I found for mocking the aws-sdk.

Mocking in TypeScript with TypeMoq

There are a lot of fantastic tools for mocking TypeScript. The one I chose for my work was TypeMoq. It is very similar to the Moq library in the .NET framework.

It is fairly simple to get started. Let's say we have a piece of code like the one shown below.

export class MessageProcessor {
  conversionAdapter: ConversionAdapter;

  constructor(conversionAdapter: ConversionAdapter) {
    this.conversionAdapter = conversionAdapter;
  }

  processJobs(records: IConversionModel[]): Promise<boolean> {
    return new Promise((resolve, reject) => {
      // Convert each record; resolve once every conversion succeeds.
      Promise.all(records.map((record) => this.conversionAdapter.convert(record)))
        .then(() => {
          resolve(true);
        }).catch((err) => {
          reject(err);
        });
    });
  }
}

This code calls the convert method on a class named ConversionAdapter for each record. From the code, we can see that the method returns a Promise. We use the result of that promise to resolve our outer promise, or reject it if there is an error.

We can set up a test for the processJobs method by using TypeMoq to mock the result of the convert method on ConversionAdapter.

describe("MessageProcessor", function () { before(function () => { this.model = new ConversionModel(); this.model.url = "url"; this.model.content = "content"; this.model.title = "title"; this.model.userId = "1234-userid"; }); it("processJobs success", function (done) => { var mockAdapterFactory = TypeMoq.Mock.ofType<IConversionAdapter>(ConversionAdapter); mockAdapterFactory.setup(m => m.convert(this.model)).returns(() => Promise.resolve(1)); var processor = new MessageProcessor(mockAdapterFactory.object); var pass = [ this.model ]; processor.processJobs(pass).then((result) => { expect(result).to.eql(true); done(); }); }); });

The key here is our setup of mockAdapterFactory. We are using TypeMoq.Mock.ofType<IConversionAdapter>(ConversionAdapter) to create a mock of our adapter. We then set up the convert method to return Promise.resolve(1).

From there it's just a matter of passing the actual mock, mockAdapterFactory.object, into our MessageProcessor class.

Now we can wait for our promise to resolve and verify that the result we get back is true.
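
TypeMoq can also assert that the mocked method was actually invoked. A minimal sketch, reusing the mocks and model from the test above:

// Assert that convert() was invoked exactly once with our model.
mockAdapterFactory.verify(m => m.convert(this.model), TypeMoq.Times.once());

This kind of verification is handy when the method under test swallows the result and you only care that the collaboration happened.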

OK, now we have the basics of mocking in TypeScript. Let's move on to how we can mock AWS services like SNS, SQS, or DynamoDB.

Two ways to mock AWS services in TypeScript

To be clear, there are more than these two ways to mock the aws-sdk in TypeScript. These are just the two I got working, and I wanted to share my experience in the hope of saving someone time in the future.

Let's give ourselves some context first. Here we have a class, StorageAdapter, that is using DynamoDB internally.

export class StorageAdapter implements IStorageAdapter {
  dynamoClient: DynamoDB;
  tableName: string;

  constructor(dynamoClient: DynamoDB, tableName: string) {
    this.tableName = tableName;
    this.dynamoClient = dynamoClient;
  }

  readItem(id: string): Promise<DynamoDB.GetItemOutput> {
    return new Promise((resolve, reject) => {
      let itemInput: DynamoDB.Types.GetItemInput = {
        TableName: this.tableName,
        Key: this.setKey(id)
      };

      this.dynamoClient.getItem(itemInput).promise()
        .then((data: PromiseResult<DynamoDB.GetItemOutput, AWSError>) => {
          if (data.$response && data.$response.error != null) {
            reject(data.$response.error);
          } else {
            resolve(data);
          }
        });
    });
  }

  private setKey(id: string): Key {
    let keyStruct: Key = {
      "ConversionId": { S: id }
    };
    return keyStruct;
  }
}

Here in readItem, we are calling getItem on the DynamoDB client from the aws-sdk. Additionally, we tell the SDK that we want a promise returned to us by appending .promise() at the end of our call.

We then take the result of that call and reject or resolve our outer promise depending on whether there is an error.

Now, let's dive into two ways we can mock getItem on the DynamoDB client in order to properly test our code.

Method 1: Using TypeMoq

Our first option is to use TypeMoq.

describe("StorageAdapter", function () => { before(function () => { }); it("readItem correctly", function (done) => { var expectedPr: PromiseResult<DynamoDB.GetItemOutput, AWSError> = { Item: { "ConversionId": { S: "fooid" }, "Email": { S: "kyle@test.com" }, "Status": { S: "Converting" }, "Characters": { N: `${123}` }, "Url": { S: "https://cloudfront.com" } }, $response: null }; var mockAdapterFactory = TypeMoq.Mock.ofType<DynamoDB>(DynamoDB); var mockRequestFactory = TypeMoq.Mock.ofType<Request<DynamoDB.GetItemOutput, AWSError>>(Request); mockRequestFactory.setup(m => m.promise()).returns(() => Promise.resolve(expectedPr)); mockAdapterFactory.setup(m => m.getItem(TypeMoq.It.isAny())).returns(() => mockRequestFactory.object); var storage = new StorageAdapter(mockAdapterFactory.object, "testTable"); storage.readItem("fooid").then((result) => { console.log(result); }).catch((err) => { console.error(err); }); }); });

We first have to set up our DynamoDB mock, which is what we see with mockAdapterFactory. Then we have to create a mocked Request object for getItem to return, which is what we see with our mockRequestFactory variable.

Now we can configure our mocks to return what we need to test our code.

First, we configure promise() on mockRequestFactory to return a resolved promise with our expected DynamoDB.GetItemOutput declared in expectedPr.

Next, we configure getItem() on mockAdapterFactory to return the mocked request we just finished setting up.

From there we just pass the actual mock of our DynamoDB client, mockAdapterFactory.object, into our StorageAdapter.

If you're keeping track, we need four lines of setup code with TypeMoq to mock our getItem call on DynamoDB. This is verbose for a test, but it does provide a high level of visibility into which types are involved in making the getItem call complete successfully.

Let's explore how we can accomplish the same goal using a different mocking library.

Method 2: Using aws-sdk-mock

There is another mocking library, aws-sdk-mock, that is dedicated to mocking the aws-sdk.

We can actually reduce the four lines of code we have with TypeMoq to just one line of code.

describe("StorageAdapter", () => { before(() => { }); it("readItem correctly", () => { var expectedPr: PromiseResult<DynamoDB.GetItemOutput, AWSError> = { Item: { "ConversionId": { S: "fooid" }, "Email": { S: "kyle@test.com" }, "Status": { S: "Converting" }, "Characters": { N: `${123}` }, "Url": { S: "https://cloudfront.com" } }, $response: null }; AWSMock.mock('DynamoDB', 'getItem', Promise.resolve(expectedPr)); var storage = new StorageAdapter(new DynamoDB(), "testTable"); storage.readItem("fooid").then((result) => { console.log(result); }).catch((err) => { var x = 3; }); }); });

Instead of mocking our DynamoDB client, the request, and the promise associated with getItem, we can just mock getItem and return a resolved promise. There is a lot less setup required to create a valuable mock and accurately test our StorageAdapter class.

This has the benefit of being very concise and easy to follow. However, aws-sdk-mock currently does not have any TypeScript typings, which impacts our IntelliSense during development.
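
If the missing typings bother you, one common workaround is to add a small ambient declaration file yourself. A minimal sketch, under the assumption that you only need mock and restore (the file path is my choice, and the real module surface is larger):

// file: types/aws-sdk-mock.d.ts
// Hand-written, minimal typings for aws-sdk-mock; treat as approximate.
declare module 'aws-sdk-mock' {
  export function mock(service: string, method: string, replace: any): void;
  export function restore(service?: string, method?: string): void;
}

With that in place, TypeScript will at least type-check the calls you actually use, and you can tighten the signatures over time.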

Recapping mocking AWS in TypeScript

There have been quite a few articles that cover how to mock AWS services, but they often don't incorporate TypeScript. This isn't a problem, as anything written for plain JavaScript can be leveraged in our TypeScript world, just without any types. That is what we see with our use of aws-sdk-mock.

Testing is key to any production-ready service, even in your low-level adapters that interface with AWS services. However, it can be deceptively tricky to mock AWS services in order to accurately test our code. This is often because the types are more complex than they first appear; for example, .getItem().promise() actually returns the generic type PromiseResult<DynamoDB.GetItemOutput, AWSError>.

What we have demonstrated here are two approaches to mocking the aws-sdk. Our first approach, using TypeMoq, demonstrated a TypeScript style: using types and incrementally building up our mock, from the request all the way up to the resolved promise.

The second approach using aws-sdk-mock showed how we can mock a single service call with one line of code.

Method one gives us types and visibility, but sacrifices conciseness because we have to build up the whole mock ourselves. Method two gives us conciseness, but we lose our types and must already know the specific method calls and client declarations. There is no right answer here; both work well depending on your preference.

On another note, aws-sdk-mock is dedicated to mocking just the aws-sdk, whereas TypeMoq can be used universally throughout our TypeScript tests. In fact, I ended up using both throughout my services: TypeMoq to mock my classes and interfaces, and aws-sdk-mock for mocking out the AWS services.

Like most things in developing code, there isn't a single right answer. My hope is that my experience in mocking the aws-sdk in TypeScript has given you an example of how you can do the same in your own code.

As always if you have questions or comments related to AWS or this post, please leave a comment below.

Are you hungry to learn even more about Amazon Web Services?

If you are looking to begin your AWS journey but feel lost on where to start, consider checking out my course. We focus on hosting, securing, and deploying static websites on AWS, which lets us learn over six different AWS services as we use them. After you have mastered the basics, we dive into two bonus chapters that cover more advanced topics like Infrastructure as Code and Continuous Deployment.


Build a RESTful API using AWS Lambda, API Gateway, DynamoDB and the Serverless Framework

We are going to build a serverless RESTful API for getting contact information using Amazon Web Services (AWS)!

As a note, this article mainly focuses on getting everything working locally, so you can develop and test the API without a dependency on the internet and AWS. This article will also use pure ES6 that is supported by Node 8.10.x, with no transpiling.

The code for this article can be found here: https://github.com/vanister/contacts_api

Project setup

Install the following tools and frameworks:

- Node.js 8.10.x
- Visual Studio Code
- Java
- DynamoDB Local
- Postman

Next, create the project folder and initialize it using npm .

$ mkdir contacts_api
$ cd contacts_api/
$ npm init -y

Dependencies

Install the following packages to work with Serverless and AWS:

- aws-sdk ― The Amazon Web Services SDK is used for programmatic access to the various AWS services.
- dotenv ― A module for storing and setting environment variables.
- jest ― All-in-one JavaScript unit testing framework.
- serverless ― Framework for managing deployments on AWS.
- serverless-offline ― Plugin to emulate AWS API Gateway and Lambdas for local development.

$ npm install -g serverless
$ npm install --save-dev serverless-offline aws-sdk jest dotenv

Optionally, you can install the serverless-dynamodb-local plugin if you want to use serverless to manage DynamoDB locally.

Project structure

Before we start writing the handler code, we’re going to structure the project folder and configure our tools.

Create the following structure at the root level:

/contacts_api
|--/src
|----/handlers
|------/contacts
|--------contacts.serverless.yml
|--.env
|--.gitignore
|--jest.config.js
|--serverless.yml
| …
| …
|--package.json

Update the following config files with these settings:

The .env file will store our environment variables. Make sure to list it in the .gitignore file so that you don’t accidentally commit some of your private environment values, if you choose to use real AWS keys.

<strong># file: .gitignore</strong> # package directories node_modules/ # Serverless directories .serverless/ .dynamodb # files .DS_Store .env # tests coverage/ <strong># file: .env</strong> AWS_HOST=’http://localhost:8000' AWS_REGION=’localhost’ AWS_ACCESS_KEY_ID=’fake-access-key’ AWS_SECRET_ACCESS_KEY=’fake-secret-key’

The jest.config.js file is used to configure Jest outside of the CLI.

// file: jest.config.js
module.exports = {
  // per issue: https://github.com/jsdom/jsdom/issues/2304
  testURL: 'http://localhost/'
};

The serverless.yml and contacts.serverless.yml files are configurations for the Serverless framework to deploy and run our lambdas locally.

serverless.yml

contacts.serverless.yml

Repository and utilities

Before we can implement the handlers, we will need to write a repository and some utilities to help abstract the logic out of the handlers to make them testable.

We will be using the Repository pattern to create a layer of abstraction between DynamoDB and our Contact entity. The data we get back from the repository will need to be formatted, so we will create some helper utilities to aid with that.

Create a file called src/repositories/contact.repository.js for our contact repository class.

We will set up our repository to accept DynamoDB's DocumentClient as a dependency, which will act as the UnitOfWork.

contact.repository.js

We are using the DocumentClient and its promises to make working with DynamoDB simpler and leverage es6 and Node’s support for async/await.

The repository simply returns a promise with data returned from DocumentClient.
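
The embedded listing did not survive here, so as a rough illustration, here is a minimal sketch of what such a repository could look like (the class shape and method names are my assumptions, not the original gist):

// file: src/repositories/contact.repository.js
// Minimal sketch: DocumentClient is injected; the table name is configurable.
class ContactRepository {
  constructor(documentClient, tableName = 'contacts') {
    this.documentClient = documentClient;
    this.tableName = tableName;
  }

  async list() {
    const result = await this.documentClient.scan({ TableName: this.tableName }).promise();
    return result.Items;
  }

  async get(id) {
    const result = await this.documentClient.get({ TableName: this.tableName, Key: { id } }).promise();
    return result.Item;
  }
}

module.exports = ContactRepository;

Because the DocumentClient is passed in rather than created inside the class, tests can hand it a stub and never touch a real database.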

Next, let’s create the request and response utilities so that we can use them to extract, parse and format data to be used by our lambda functions.

Create a file called src/utils/request.util.js for our request utility.

For now, the request utility will contain a single function that will accept some parser function and return a new function that uses it to parse some text we pass to it.

request.util.js
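
The gist body is missing here as well; a plausible sketch matching the description above (the function name parseWith is my assumption):

// file: src/utils/request.util.js
// Higher-order helper: takes a parser, returns a function that applies it to text.
const parseWith = (parser) => (text) => parser(text);

module.exports = { parseWith };

// Usage: const parseJson = parseWith(JSON.parse); const body = parseJson(event.body);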

Create a file called src/utils/response.util.js for our response utility.

response.util.js

The withStatusCode response utility function is similar to the request utility, but instead of parsing text, it formats data into text. It also contains additional checks to make sure our status codes are within the range of allowable status codes.
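
As with the other gists, the original listing is not shown here; a hedged sketch of withStatusCode based on that description:

// file: src/utils/response.util.js
// Binds a status code and a formatter, returning a response-builder function.
const withStatusCode = (statusCode, format = String) => {
  if (statusCode < 100 || statusCode > 599) {
    throw new Error('status code out of range');
  }

  return (data) => ({
    statusCode,
    body: data !== undefined ? format(data) : undefined,
  });
};

module.exports = { withStatusCode };

// Usage: const ok = withStatusCode(200, JSON.stringify); return ok(contacts);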

Lastly, we will create a factory module for creating instances of DynamoDB.DocumentClient so we can keep our Lambda functions DRY.

Create a file called src/dynamodb.factory.js for our DynamoDB factory.

dynamodb.factory.js
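
The factory gist is likewise missing; here is a minimal sketch that wires in the .env values shown earlier:

// file: src/dynamodb.factory.js
// Builds DocumentClient instances pointed at DynamoDB Local via environment variables.
const { DynamoDB } = require('aws-sdk');

const makeDocumentClient = () =>
  new DynamoDB.DocumentClient({
    endpoint: process.env.AWS_HOST,  // e.g. http://localhost:8000
    region: process.env.AWS_REGION,
  });

module.exports = { makeDocumentClient };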

Lambda functions

The way to think about these functions is to treat them as if they are isolated from each other. We need to require all of the dependencies needed by a function so that it can execute independently.

Since we abstracted out our logic and AWS dependencies into a repository and utilities, our Lambda functions will simply call the repository and utilities and return data.

From the src/handlers/contacts folder, create the following Lambda function files:

- add.js
- delete.js
- get.js
- list.js
- update.js

Open each of the files and add the following code to them:

add.js

delete.js

get.js

list.js

update.js
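
The handler listings themselves were lost in extraction. To give a feel for the shape of one, here is a hedged sketch of get.js that assumes the repository, factory, and response utility sketched above (the CONTACTS_TABLE variable is also an assumption):

// file: src/handlers/contacts/get.js
require('dotenv/config');

const { makeDocumentClient } = require('../../dynamodb.factory');
const ContactRepository = require('../../repositories/contact.repository');
const { withStatusCode } = require('../../utils/response.util');

const repository = new ContactRepository(makeDocumentClient(), process.env.CONTACTS_TABLE);
const ok = withStatusCode(200, JSON.stringify);
const notFound = withStatusCode(404);

// Lambda entry point: look up a single contact by the id path parameter.
exports.handler = async (event) => {
  const { id } = event.pathParameters;
  const contact = await repository.get(id);
  return contact ? ok(contact) : notFound();
};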

Note that we are requiring the dotenv/c
What is serverless?
https://enterprisersproject.com/article/2018/9/what-serverless

Let’s examine serverless and Functions-as-a-Service (FaaS), how they fit together, and where they do and don’t make sense

You likely have heard the term serverless (and wondered why someone thought it didn’t use servers). You may have heard of Functions-as-a-Service (FaaS) – perhaps in the context of Lambda from Amazon Web Services, introduced in 2014. You’ve probably encountered event-driven programming in some form. How do all these things fit together and, more importantly, when might you consider using them? Read on.
Servers are still involved; developers just don’t need to think about them in a traditional way.
Let’s start with FaaS. With FaaS, you write code to accomplish some specific task and upload the code for your function to a FaaS provider. The public cloud provider or on-premises platform then does everything else necessary to provision, run, scale, and manage the code. As a developer, you don’t need to do anything other than write your code and wire it up to other functions and services. FaaS provides programmers with an abstraction that allows them to focus on writing code that takes action in response to events, rather than on interacting with the underlying server (whether bare metal, virtualized, or containerized).
[ Struggling to explain containers to non-techies? Read also: How to explain containers in plain English. ]
Now enter event-driven programming. Functions run in response to external events. It could be a call generated by a mouse click in a web app. But it could also be in response to some other action. For example, uploading a media file could trigger custom code that transcodes the file into a variety of formats.
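To make that concrete, here is a hedged sketch of such an event handler in TypeScript (the event shape and the transcode helper are simplified assumptions, not any provider's real API):

// Hypothetical FaaS handler: runs whenever a media file is uploaded.
interface UploadEvent {
  bucket: string;
  key: string;
}

declare function transcode(bucket: string, key: string, format: string): Promise<void>;

export async function onMediaUploaded(event: UploadEvent): Promise<void> {
  // Produce each output format we want to serve.
  for (const format of ['mp4', 'webm']) {
    await transcode(event.bucket, event.key, format);
  }
}

The platform, not your code, decides when this function runs and how many copies of it exist at any moment.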
Serverless then describes a set of architectural patterns that build on FaaS. Serverless combines custom FaaS code with common back-end services (such as databases and authentication) connected primarily through an event-driven execution model. From the perspective of a developer, these services are all managed by a third party (whether an ops team or an external provider). Of course, servers are still involved; developers just don’t need to think about them in a traditional way.

Why serverless?

Serverless is an emerging technology area. There’s a lot of interest in the technology and approach although it’s early on and has yet to appear on many enterprise IT radar screens. To understand the interest, it’s useful to consider serverless from both operations and developer perspectives.
PaaS and FaaS are best thought of as being on a continuum rather than being entirely discrete.
For operations teams, one of the initial selling points of FaaS on public clouds was its pricing model. By paying only for an ephemeral (typically stateless) function while it was executing, you “didn’t pay for idle.” In general, while this aspect of serverless is still important to some, it’s less emphasized today. As a broader concept that brings in a wide range of services of which FaaS is just one part, the FaaS pricing model by itself is less relevant.
However, pricing model aside, serverless also allows operations teams to provide developers with a self-service platform and then get out of the way. This is a concept that has been present in platforms like OpenShift from the beginning. Serverless effectively extends the approach for certain types of applications.
The arguably more important aspect of serverless is increased developer productivity. This has two different aspects.
The first is that, as noted earlier, FaaS abstracts away many of the housekeeping details associated with server provisioning and management that are often just overhead for developers. In practice, this may not appear all that different to developers than a Platform-as-a-Service (PaaS). FaaS can even use containers under the covers just like a PaaS typically does. PaaS and FaaS are best thought of as being on a continuum rather than being entirely discrete.
The second is that, by offering common managed services out of the box, developers don’t need to constantly recreate them for new applications.

Where does serverless fit?

Serverless targets specific architectural patterns. As described earlier, it’s more or less wedded to a programming model in which functions and services react to each other in an event-driven and largely asynchronous way. Functions themselves are generally expected to be stateless, handle single tasks, and finish quickly. The fact that the interactions between services and functions are all happening over the network also means that the application as a whole should be fairly tolerant of latencies in these interactions.
You can think of FaaS as both simplifying and limiting.
While there are overlaps between the technologies used by FaaS, microservices, and even coarser-grained architectural patterns, you can think of FaaS as both simplifying and limiting. FaaS requires you to be more prescriptive about how you write applications.
Although serverless was originally most associated with public cloud providers, that comes with a caveat. Serverless, as implemented on public clouds, has a high degree of lock-in to a specific cloud vendor. This is true to some degree even with FaaS, but serverless explicitly encourages bringing in a variety of cloud provider services that are incompatible to varying degrees with other providers and on-premise solutions.
As a result, there’s considerable interest in and work going into open source implementations of FaaS and serverless, such as Knative and OpenWhisk, so that users can write applications that are portable across different platforms.
[ What's next for portable apps? Read also: Disrupt or be disrupted: 3 trends enabling next-level IT agility. ]

The speedy road ahead

Building more modern applications is a top priority for IT executives as part of their digital transformation journeys; it’s seen as the key ingredient to moving faster. To that end, organizations across a broad swath of industries are seeking ways to create new applications more quickly. Doing so involves both making traditional developers more productive and seeking ways to lower the barriers to software development for a larger pool of employees.
Serverless is an important emerging service implementation architecture that will be a good fit for certain types of applications. It will coexist with, rather than replace, architecture alternatives such as microservices used with containers and even just virtual machines. All of these architectural choices support a general trend toward simplifying the developer experience and making developers more productive.

Why Containers are the Future

Software deployment has been a major problem for decades. On the client and the server.

On the client, the inability to deploy apps to devices without breaking other apps (or sometimes the client operating system (OS)) has pushed most business software development to relying entirely on the client's browser as a runtime. Or in some cases you may leverage the deployment models of per-platform "stores" from Apple, Google, or Microsoft.

On the server, all sorts of solutions have been attempted, including complex and costly server-side management/deployment software. Over the past many years the industry has mostly gravitated toward the use of virtual machines (VMs) to ease some of the pain, but the costly server-side management software remains critical.

At some point containers may revolutionize client deployment, but right now they are in the process of revolutionizing server deployment, and that's where I'll focus in the remainder of this post.

Fairly recently the concept of containers, most widely recognized with Docker, has gained rapid acceptance.

tl;dr

Containers offer numerous benefits over older IT models such as virtual machines. Containers integrate smoothly into DevOps; streamlining and stabilizing the move from source code to deployable assets. Containers also standardize the deployment and runtime model for applications and services in production (and test/staging). Containers are an enabling technology for microservice architecture and DevOps.


Virtual Machines to Containers

Containers are somewhat like virtual machines, except they are much lighter weight and thus offer major benefits. A VM virtualizes the hardware, allowing installation of the OS on "fake" hardware, and your software is installed and run on that OS. A container virtualizes the OS, allowing you to install and run your software on this "fake" OS.

In other words, containers virtualize at a higher level than VMs. This means that where a VM takes many seconds to literally boot up the OS, a container doesn't boot up at all; the OS is already there. It just loads and starts your application code, which takes fractions of a second.

Where a VM has a virtual hard drive that contains the entire OS, plus your application code, plus everything else the OS might possibly need, a container has an image file that contains your application code and any dependencies required by that app. As a result, the image files for a container are much smaller than a VM hard drive.

Container image files are stored in a repository so they can be easily managed and then downloaded to physical servers for execution. This is possible because they are so much smaller than a virtual hard drive, and the result is a much more flexible and powerful deployment model.

Containers vs PaaS/FaaS

Platform as a Service and Functions as a Service have become very popular ways to build and deploy software, especially in public clouds such as Microsoft Azure. Sometimes FaaS is also referred to as "serverless" computing, because your code only uses resources while running, and otherwise doesn't consume server resources; hence being "serverless".

The thing to keep in mind is that PaaS and FaaS are both really examples of container-based computing. Your cloud vendor creates a container that includes an OS and various other platform-level dependencies such as the .NET Framework, Node.js, Python, the JDK, etc. You install your code into that pre-built environment and it runs. This is true whether you are using PaaS to host a web site, or FaaS to host a function written in C#, JavaScript, or Java.

I always think of this as a spectrum. On one end are virtual machines, on the other is PaaS/FaaS, and in the middle are Docker containers.


VMs give you total control at the cost of you needing to manage everything. You are forced to manage machines at all levels, from OS updates and patches, to installation and management of platform dependencies like .NET and the JDK. Worse, there's no guarantee of consistency between instances of your VMs because each one is managed separately.

PaaS/FaaS give you essentially zero control. The vendor manages everything - you are forced to live within their runtime (container) model, upgrade when they say upgrade, and only use versions of the platform they currently support. You can't get ahead or fall behind the vendor.

Containers such as Docker give you some abstraction and some control. You get to pick a consistent base image and add in the dependencies your code requires. So there's consistency and maintainability that's far superior to a VM, but not as restrictive as PaaS/FaaS.

Another key aspect to keep in mind, is that PaaS/FaaS models are vendor specific. Containers are universally supported by all major cloud vendors, meaning that the code you host in your containers is entirely separated from anything specific to a given cloud vendor.

Containers and DevOps

DevOps has become the dominant way organizations think about the development, security, QA, deployment, and runtime monitoring of apps. When it comes to deployment, containers allow the image file to be the output of the build process.

With a VM model, the build process produces assets that must be then deployed into a VM. But with containers, the build process produces the actual image that will be loaded at runtime. No need to deploy the app or its dependencies, because they are already in the image itself.

This allows the DevOps pipeline to directly output a file, and that file is the unit of deployment!

No longer are IT professionals needed to deploy apps and dependencies onto the OS. Or even to configure the OS, because the app, dependencies, and configuration are all part of the DevOps process. In fact, all those definitions are source code, and so are subject to change tracking where you can see the history of all changes.

Servers and Orchestration

I'm not saying IT professionals aren't needed anymore. At the end of the day containers do run on actual servers, and those servers have their own OS plus the software to manage container execution. There are also some complexities around networking at the host OS and container levels. And there's the need to support load distribution, geographic distribution, failover, fault tolerance, and all the other things IT pros need to provide in any data center scenario.

With containers the industry is settling on a technology called Kubernetes (K8S) as the primary way to host and manage containers on servers.

Installing and configuring K8S is not trivial. You may choose to do your own K8S deployment in your data center, but increasingly organizations are choosing to rely on managed K8S services. Google, Microsoft, and Amazon all have managed Kubernetes offerings in their public clouds. If you can't use a public cloud, then you might consider using on-premises clouds such as Azure Stack or OpenStack, where you can also gain access to K8S without the need for manual installation and configuration.

Regardless of whether you use a managed public or private K8S cloud solution, or set up your own, the result of having K8S is that you have the tools to manage running container instances across multiple physical servers, and possibly geographic data centers.

Managed public and private clouds provide not only K8S, but also the hardware and managed host operating systems, meaning that your IT professionals can focus purely on managing network traffic, security, and other critical aspects. If you host your own K8S then your IT pro staff also own the management of hardware and the host OS on each server.

In any case, containers and K8S radically reduce the workload for IT pros in terms of managing the myriad VMs needed to host modern microservice-based apps, because those VMs are replaced by container images, managed via source code and the DevOps process.

Containers and Microservices

Microservice architecture is primarily about creating and running individual services that work together to provide rich functionality as an overall system.

A primary attribute (in my view the primary attribute) of services is that they are loosely coupled, sharing no dependencies between services. Each service should be deployed separately as well, allowing for independent versioning of each service without needing to deploy any other services in the system.

Because containers are a self-contained unit of deployment, they are a great match for a service-based architecture. If we consider that each service is a stand-alone, atomic application that must be independently deployed, then it is easy to see how each service belongs in its own container image.


This approach means that each service, along with its dependencies, become a deployable unit that can be orchestrated via K8S.

Services that change rapidly can be deployed frequently. Services that change rarely can be deployed only when necessary. So you can easily envision services that deploy hourly, daily, or weekly, while other services will deploy once and remain stable and unchanged for months or years.

Conclusion

Clearly I am very positive about the potential of containers to benefit software development and deployment. I think this technology provides a nice compromise between virtual machines and PaaS, while providing a vendor-neutral model for hosting apps and services.


The Observability Pipeline
The rise of cloud and containers has led to systems that are much more distributed and dynamic in nature. Highly elastic microservice and serverless architectures mean containers spin up on demand and scale to zero when that demand goes away. In this world, servers are very much cattle, not pets. This shift has exposed deficiencies …

Serverless and FaaS: What They Are and How They Work

In today's article we want to cover a very interesting topic: serverless Function-as-a-Service (FaaS) systems. This category of cloud computing service provides a platform of services for managing the development and distribution of services and applications.

In practice, these cloud platforms take care of executing functionality in response to certain events. The service operator can set up rules that are triggered and executed following specific events, whether external or derived from user behavior such as mouse clicks. It is therefore possible to create rules and actions, test them, connect them with other actions …

The post Serverless e FaaS: cosa sono e come funzionano appeared first on Edit.


The Cloudcast #362 - Security & Service Meshes
In a joint show between The Cloudcast and PodCTL, Brian and Tyler Britten talk with John Morello (@morellonet, CTO at @TwistlockTeam) about how Service Mesh technologies, such as Istio, can be used for more advanced security of containerized applications and Kubernetes environments.

Show Links:

Show Notes
  • Topic 1 - Welcome to the show. Tell us about your background, and introduce us to Twistlock for anyone that isn’t familiar with the company.
  • Topic 2 - One of the most popular concepts in the world of containers and Kubernetes is “Service Mesh” (projects like Istio). Let’s talk about the basics of what a service mesh does.
  • Topic 3 - Service mesh provides routing capabilities, so let’s talk about where security comes into the picture.
  • Topic 4 - Service mesh introduces a concept in Kubernetes where you deploy multiple containers in a pod, one the application and one the service-mesh proxy. Does security introduce yet another container/agent into a pod?
  • Topic 5 - What sort of tools are available today for security professionals as service meshes are introduced into a container environment?
Feedback?

The Changelog 314: Kubernetes brings all the Cloud Natives to the yard

We talk with Dan Kohn, the Executive Director of the Cloud Native Computing Foundation to catch up with all things cloud native, the CNCF, and the world of Kubernetes.

Dan updated us on the growth of KubeCon / CloudNativeCon, the state of Cloud Native and where innovation is happening, serverless being on the rise, and Kubernetes dominating the enterprise.

Sponsors

  • Hired –  Salary and benefits upfront? Yes please. Our listeners get a double hiring bonus of $600! Or, refer a friend and get a check for $1,337 when they accept a job. On Hired companies send you offers with salary, benefits, and even equity upfront. You are in full control of the process. Learn more at hired.com/changelog.
  • DigitalOcean –  DigitalOcean is simplicity at scale. Whether your business is running one virtual machine or ten thousand, DigitalOcean gets out of your way so your team can build, deploy, and scale faster and more efficiently. New accounts get $100 in credit to use in your first 60 days.
  • Algolia –  Our search partner. Algolia's full suite search APIs enable teams to develop unique search and discovery experiences across all platforms and devices. We're using Algolia to power our site search here at Changelog.com. Get started for free and learn more at algolia.com.
  • GoCD –  GoCD is an on-premise open source continuous delivery server created by ThoughtWorks that lets you automate and streamline your build-test-release cycle for reliable, continuous delivery of your product.

Featuring

Notes and Links


Amazon Aurora Serverless MySQL Becomes Generally Available

A new capability of Amazon Aurora, AWS's custom-built MySQL- and PostgreSQL-compatible database, has become generally available: Aurora Serverless MySQL. Amazon first previewed this serverless feature at AWS re:Invent last year.

By Steef-Jan Wiggers, translated by 阪田 浩一
Serverless Image Handler - Setting subfolder as root
Hi, I got the Serverless Image Handler up and running, all good.
...
Containers vs. Serverless vs. Virtual Machines: What are the security differences ...

Just over 30 years ago, virtualization was only available to those with mainframes and large minicomputers, while security concerns were purely physical. Twenty years ago, VMware was releasing its first product, and network perimeter security was in its infancy, relying on firewalls. Twelve years ago, AWS launched, and network security became a concern. Five years ago, containers went mainstream thanks to Docker, and host security came into focus. Today, with the growth of serverless, application-level security has finally come under the full scrutiny that the compute and network layers have been living with for years.

With application, compute, and network security all being audited, there is increased visibility of security concerns to both management and clients through reports like SOC type 2. With this increased transparency to clients, security professionals are the key to making sure the assets being deployed to production have a solid security profile. The size of this profile can increase drastically based on the type of deployment that is being used.

That’s why it’s important to understand the security nuances between different types of emerging deployment technologies, namely containers, serverless computing, and virtual machines. Below, we compare and contrast the security aspects of each:

Serverless Security

First up, let’s address serverless security, since a serverless app is typically purely code that executes a single function ― hence the name function-as-a-service. The platform you deploy on makes little difference to the most common security problems that occur within a serverless app.

Besides following secure coding best practices ― like only returning the data that is absolutely required to process the request and having the app use service accounts which only have the access required to allow it to do its job ― any vulnerability that is discovered will lead to data being leaked that can go far beyond the scope of the serverless app ― which can lead to a publicity nightmare.

The other main area of concern is any third-party libraries that are included inside the app to provide enhanced functionality and save the development team development time. Examples of third-party libraries are everything from libraries used to validate a phone number or postal code to client libraries like JDBC drivers, which are needed to connect to an external PostgreSQL database. Without using a scanning tool that self-updates and routinely scans your built artifacts, it is a huge manual effort to constantly stay on top of all the third-party libraries that are used within an organization and to watch all the various vulnerability announcement lists.

Container Security

Since, in essence, a serverless application is often running in containers behind the scenes, it makes sense that containers will carry all the same concerns as serverless, plus new concerns around the additional functionality that containers offer to a developer.

Container-specific security concerns can be reduced to two distinct areas: the trustworthiness of the source for the container on which you are basing your deployment, and the level of access the container has to the host operating system.

When running a container on any host, whether Windows or Linux, the container should not be run with root or administrator privileges. Using features like namespaces and volumes instead of raw disk access allows these container daemons to share storage for persistent data between one or more containers without needing the container itself to have escalated permissions. There are even projects, such as Google's gVisor, which go a step further and hide all but the exact system calls a container needs to run.

The larger concern with containers is the trustworthiness of the layers on which the container is built. There are multiple ways to address this. They include pointing to a specific version that you have tested and are sure of, instead of relying on the latest tag. You can also expand the scope of any scanning you have in place for third-party libraries in your serverless apps in order to scan entire containers for known vulnerabilities. This scanning can be either performed ahead of time in the source registry, or during the build process as you use them as a base to build on.

Virtual Machine Security

Virtual machines are yet another superset of concerns that need to be addressed. There are books and best practices guides that go back decades on how to secure an operating system. The U.S. National Institute of Standards and Technology (NIST) maintains a series of checklists for application and OS security. These are reasonable security profiles but they can always be improved.

One way to improve them is to limit running services to what is absolutely required. For example, a default HTTP server is nice for viewing logs, but is it required when your app is running in Java, and there are products available that can connect via SSH, and consolidate logs centrally?

Another option is to apply patches as soon as possible after their release. Some patches are released monthly. There is also Microsoft's “Patch Tuesday,” while other, more critical patches are released the day there is a fix available (these are referred to as out-of-band patches). Unlike containers and serverless, the odds of needing to apply any given patch are much higher on a full virtual machine, as there are far more packages required and installed.

Conclusion

By knowing what type of computing environment on which you and your development teams are deploying applications, you have the best chance to apply all security best practices. Ideally, each application in your portfolio can and will be assessed, and you’ll be encouraged to use the most appropriate and streamlined deployment option available. Moving more applications to containers, and going serverless where appropriate, will enable production-like security practices to be enforced much earlier in the development cycle and will ultimately improve your overall security profile.


Weave AWS CodePipeline into serverless app deployment
Estimating serverless consumption costs

One of the oft-touted virtues of serverless infrastructure is metered pricing. Like, super-metered pricing down to function invocations and memory use. That's awesome, but also harder to predict than flat-rate (or at least flatter-rate) pricing. In this article, The New Stack goes deep into the weeds trying to estimate actual serverless costs across providers.


9 Early-Stage Serverless Computing Startups To Watch
Serverless is an emerging trend in cloud computing, allowing developers to quickly build and deploy applications without having to provision or manage cloud infrastructure. With serverless computing, cloud providers charge for the resources consumed by an application, rather than for pre-defined, …
A Proposal to Get Rid of 'node_modules'

#255 — September 13, 2018

Read on the Web

Node Weekly

Next Generation Package Management with Crux — crux is a new, experimental JavaScript package manager from the folks at npm, Inc., that aims to provoke new thoughts on how package management should be handled.

The npm Blog

Node v10.10.0 (Current) Released — npm moves up to version 6.4.1, native code coverage information can now be saved to disk, the http2 module is no longer experimental, and much more. Node 8.12.0 (LTS) is also out which also updates npm, libuv, and makes n-api non-experimental.

Node.js Foundation

Burn Your Logs — Use Sentry's open source error tracking to get to the root cause of issues. Setup only takes 5 minutes.

Sentry sponsor

Debugging A Node.js Application Using ndb — ndb provides an improved debugging experience for Node.js, enabled by Chrome DevTools, and this is an easily understood walkthrough.

Nitay Neeman

A Proposal to Get Rid of 'node_modules' — It’s early days for this discussion but there’s a lot of chatter about this right now (such as on Hacker News). Full PDF of the proposal.

Yarn

NLP.js: Natural Language Utilities for Node — An NLP library that can guess the language of a phrase, do stemming/tokenization, sentiment analysis, and more.

AXA

💻 Jobs

Senior Engineer, LA — At SG, Senior Engineers build both customer-facing solutions to drive engagement and internal tools to support restaurant operations.

sweetgreen

Join Our Career Marketplace & Get Matched With a Job You Love — Through Hired, software engineers have transparency into salary offers, competing opportunities and job details.

Hired

📘 Articles & Tutorials

8 Steps to Building A Serverless GraphQL API using AWS Amplify

Nader Dabit

How to Prevent Unsafe HTTP Redirects in Node

Joe Pelletier

Build a Netflix Style Video Platform - Node API Client — Play videos at the same quality and speed as Netflix & YouTube. API clients for all major languages.

Bitmovin sponsor

Add 2FA to a Nuxt Application with Nexmo Verify — Nuxt.js is a framework for building universal Vue.js apps.

Martyn Davies

Defining Roles-based Security ACLs and Supporting Multitenancy in the Strongloop Loopback Framework

Steve Drucker

Generating Random User Agents with Google Analytics and CircleCI — user-agents is a Node.js package for producing random, up-to-date user agents, but this is also the tale of how such data is being obtained.

Evan Sangaline

▶  Building a Real-Time Translation App From Scratch — Re-watch this livestream and code along, making a real-time translation app from scratch using Node and Tensorflow.js.

Siraj Raval

CPU Profiling in Production Node.js Applications

StackImpact sponsor

Why Should Your Node App Not Handle Log Routing?

Corey Cleary

🔧 Code and Tools

User Agents: A Library for Generating Random, But Real-Looking, User Agents

Intoli

Taiko: A Library and REPL to Automate Chrome/Chromium — Includes a REPL mode and is designed to work with a visible, rather than headless, browser instance.

Gauge

Express.js Boilerplate for Building RESTful APIs — A starter project to build a REST-based API service with Node.js that uses MongoDB for storage.

Daniel Sousa

Drome: Yet Another JavaScript Task Runner

Konrad Przydział

Puppeteer 1.8.0 Released: The Headless Chrome Node API — The latest release operates at Chromium 71 standards and browser permissions can now be managed with browserContext.overridePermissions.

Google Chrome Team


How to Deploy Your Secure Vue.js App to AWS

This article was originally published on the Okta developer blog. Thank you for supporting the partners who make SitePoint possible.

Writing a Vue app is intuitive, straightforward, and fast. With low barriers to entry, a component-based approach, and built-in features like hot reloading and webpack, Vue allows you to focus on developing your application rather than worrying about your dev environment and build processes. But, what happens when you are ready to deploy your app into production? The choices can be endless and sometimes unintuitive.

As an AWS Certified Solutions Architect, I am frequently asked how to deploy Vue apps to AWS. In this tutorial, I will walk you through building a small, secure Vue app and deploying it to Amazon Web Services (AWS). If you’ve never used AWS, don’t worry! I’ll walk you through each step of the way starting with creating an AWS account.

About AWS

Amazon Web Services (AWS) is a cloud platform that provides numerous on-demand cloud services. These services include cloud computing, file storage, relational databases, a content distribution network, and many, many more. AWS came into existence not as a retail offering, but rather as Amazon's internal answer to the growing complexity of the infrastructure that was responsible for powering Amazon.com and their e-commerce operations. Amazon quickly realized their cloud-based infrastructure was a compelling, cost-effective solution and opened it to the public in 2006.

At the time of writing this article, AWS is worth an estimated $250B (yes, that’s a B for BILLION) and used by thousands of companies and developers worldwide.

AWS Products

What You Will Build

I feel the best way to learn is by doing. I’ll walk you through building a small, Vue app with an Express REST server. You will secure your app using Okta’s OpenID Connect (OIDC) which enables user authentication and authorization with just a few lines of code.

You will begin by building the Vue frontend and deploy it to Amazon S3. Then you will leverage Amazon CloudFront to distribute your Vue frontend to edge servers all around the world. Lastly, you will create an Express API server and deploy it with Serverless. This API server will contain a method to fetch “secure data” (just some dummy data) which requires a valid access token from the client to retrieve.

The goal of this article is to show you how to leverage multiple AWS services rather than just spinning up a single EC2 instance to serve your app. With this services-based approach, you get limitless scale, zero maintenance, and a cost-effective way to deploy apps in the cloud.

What is Okta?

Okta is a cloud service that allows developers to manage user authentication and connect them with one or multiple applications. The Okta API enables you to authenticate and authorize your users, store data about them, support password-based and social login, and secure your applications with multi-factor authentication.

Register for a free developer account, and when you're done, come on back so we can learn more about deploying a Vue app to AWS.

Bootstrap Frontend

You are going to build the Vue frontend to your secure app first and deploy it to Amazon S3 and Amazon CloudFront. Amazon S3 (Simple Storage Service) is a highly redundant, object-based file store that is both powerful and featureful. In the scope of this article, we will focus on one of the best features S3 provides: Static website hosting.

To get started quickly, you can use the scaffolding functionality from vue-cli to get your app up and running quickly. For this article, you can use the webpack template that includes hot reloading, CSS extraction, linting, and integrated build tools.

To install vue-cli run:

npm install -g vue-cli@2.9.6

Next up is to initialize your project. When you run the following vue init command, accept all the default values.

vue init webpack secure-app-client
cd ./secure-app-client
npm run dev

The init method should also install your app’s dependencies. If for some reason it doesn’t, you can install them via npm install. Finally, open your favorite browser and navigate to http://localhost:8080. You should see the frontend come alive!

Welcome to Your Vue.js App

About Single Page Applications

When you create an application with Vue, you are developing a Single Page Application (or “SPA”). SPAs have numerous advantages over traditional multi-page, server-rendered apps. It’s important to understand the difference between SPAs and multi-page web applications — especially when it comes to deploying.

A SPA app is often referred as a “static app” or “static website.” Static, in this context, means that your application compiles all its code to static assets (HTML, JS, and CSS). With these static assets, there is no specialized web server required to serve the application to your users.

Traditional web applications require a specialized web server to render every request to a client. For each of these requests, the entire payload of a page (including static assets) is transferred.

Conversely, within an SPA there is only an initial request for the static files, and then JavaScript dynamically rewrites the current page. As your users are navigating your app, requests to subsequent pages are resolved locally and don’t require an HTTP call to a server.

SPA versus Traditional Web Server

Vue-router and Creating Additional Routes

The component of an SPA that rewrites the current page dynamically is commonly referred to as a “router”. The router programmatically calculates which parts of the page should mutate based on the path in the URL.

Vue has an official router that is aptly named vue-router. Since you used the vue-cli bootstrap, your app has this dependency and a router file defined (./src/router/index.js). Before you can define additional routes, you need to create the pages (or components) that you want the router to render. Create the following files in your project:

Homepage: ./src/components/home.vue

<template>
  <div>
    <h1>Home</h1>
    <div>
      <router-link to="/secure">Go to secure page</router-link>
    </div>
  </div>
</template>

Secure Page (not secured… yet!) ./src/components/secure.vue

<template>
  <div>
    <h1>Secure Page</h1>
    <div>
      <router-link to="/">Go back</router-link>
    </div>
  </div>
</template>

Using vue-router, you can inform the application to render each page based on the path.

Modify ./src/router/index.js to match the following code snippet:

import Vue from 'vue'
import Router from 'vue-router'
import Home from '@/components/home'
import Secure from '@/components/secure'

Vue.use(Router)

let router = new Router({
  routes: [
    {
      path: '/',
      name: 'Home',
      component: Home
    },
    {
      path: '/secure',
      name: 'Secure',
      component: Secure
    }
  ]
})

export default router

Try it out! Tab back to your browser, and you should see the new home screen. If you click on the “Go to secure page” link you will notice the page (and URL) change, but no request was sent to a server!

Understand Hash History

As you navigated between the two pages above, you might have noticed that the URL looks different than expected (did you notice the “#/” at the beginning of the path?)

http://localhost:8080/#/ and http://localhost:8080/#/secure

The reason the URL looks like this is that vue-router’s default mode is hash mode. Hash mode simulates a new URL change without instructing the browser to reload the page. This behavior is what allows SPAs to navigate pages without forcing your browser to make any additional HTTP requests. Vue-router listens for changes in the hash portion of the URL (everything after the “#”) and responds accordingly based on the routes configured.
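To make that concrete, here is a minimal sketch of hash-based routing in plain JavaScript. This is purely illustrative, not vue-router’s actual implementation:

// Minimal sketch of hash-based routing (illustrative only, not vue-router's code).
window.addEventListener('hashchange', function () {
  // window.location.hash is everything from the '#' on, e.g. '#/secure'
  var path = window.location.hash.slice(1) || '/'
  console.log('Render the component registered for', path)
})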

You can change the mode of vue-router to leverage history mode, which will give your app “pretty URLs” like:

http://localhost:8080/secure

But, this comes with a significant drawback, especially when you are deploying. Since your SPA compiles to static assets, there is just one entry point: index.html. If you try to access a page directly that is not index.html (e.g., http://localhost:8080/secure), the web server will return a 404 error. Why? The browser sends a GET /secure request to the server, which tries to resolve “/secure” against the filesystem, and that file doesn’t exist. It does work when you navigate to /secure from the homepage because vue-router prevents the browser’s default behavior and handles the route change itself.
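History mode builds on the HTML5 History API instead. Here is a minimal sketch of the underlying browser calls (again illustrative, not vue-router’s implementation):

// Illustrative sketch of the History API calls that history mode builds on.
document.addEventListener('click', function (event) {
  var link = event.target.closest('a')
  if (!link) return
  event.preventDefault() // stop the browser from requesting the page
  history.pushState({}, '', link.getAttribute('href')) // update the URL bar, no reload
  console.log('Render the component for', window.location.pathname)
})

// React to the browser's back/forward buttons.
window.addEventListener('popstate', function () {
  console.log('Render the component for', window.location.pathname)
})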

By using history mode, you have to take additional steps to make sure page refreshes work correctly. You can read more about HTML5 History Mode. To keep things easy, I will show you a simple trick to ensure your refreshing works with AWS CloudFront.

Enable history mode by modifying ./src/router/index.js with the following setting.

let router = new Router({
  mode: 'history',
  routes: [ /* ...the same routes as before... */ ]
})

Note: The dev server (npm run dev) automatically rewrites the URL to index.html for you. So the behavior you see locally is how it should work in production.
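If you are curious what that rewrite amounts to, here is a rough sketch of a history fallback using Express. This is illustrative only; the webpack dev server uses its own middleware for this:

// Rough sketch of a history-mode fallback server (illustrative only).
const express = require('express')
const path = require('path')

const app = express()

// Serve the compiled static assets first.
app.use(express.static(path.join(__dirname, 'dist')))

// Any other request falls back to index.html so vue-router can
// resolve the path on the client.
app.get('*', function (req, res) {
  res.sendFile(path.join(__dirname, 'dist', 'index.html'))
})

app.listen(8080)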

Building Your Single Page Application

Now that you have a simple, two-page frontend working locally, it’s time to build your app and get it deployed to AWS!

Because you used vue-cli scaffolding, a single call to the included build script is all you need. From your project root, run npm run build and webpack will build your application into the target ./dist directory. If the dev server is still running in your console, you can press CTRL+C.

If you open the ./dist folder, you should see the results of the build process:

  • ./index.html - This is the entry point of your SPA. It’s a minified HTML document with links to the app’s CSS and JS.
  • ./static - This folder contains all your compiled static assets (JS and CSS).

During the build, you might have noticed the following notification: Tip: built files are meant to be served over an HTTP server. Opening index.html over file:// won’t work. If you want to test your newly compiled application locally, you can use serve (install via npm install -g serve). Run serve ./dist and it will output a URL for you to load into your browser.

This also gives you hands-on experience with the major caveat of history mode in vue-router. After running serve ./dist, click on the “Go to secure page” link and then refresh the page. You should see a 404 error.

404 Error

Getting Started with AWS

You will need an AWS account to continue beyond this point. If you already have an AWS account, you can skip ahead. If you don’t, it’s a simple process that only takes a few minutes.

  • Navigate to the Amazon Web Services home page
  • Click Sign Up (or if you have signed into AWS recently choose Sign In to the Console)
  • If prompted, you can select “Personal” for account type
  • Complete the required information, add a payment method, and verify your phone number
  • After your account is created, you should receive a confirmation email
  • Log in!

Note: Amazon requires you to enter a payment method before you can create your account. All the services discussed in this article are covered under AWS Free Tier which gives you 12 months FREE.

Host Your App on Amazon S3

Since your SPA consists of only static assets, we can leverage Amazon S3 (Simple Storage Service) to store and serve your files.

To get started, you will need to create a bucket. Buckets are a logical unit of storage within S3, and you can have up to 100 buckets per AWS account by default (if you are studying for the AWS Certified Solutions Architect exam, you should know this!). Each bucket can have its own configuration and contain unlimited files and nested folders.

After you log in to your AWS Console, navigate to the S3 console (you can find it by searching for “S3” under AWS services).

  • Click “Create Bucket” and enter a Bucket name. Important: Bucket names are unique across the entire AWS platform. I chose bparise-secure-app-client for this article, but you might need to be creative with your naming!
  • Click “Create” in the bottom left.

Create S3 Bucket

You should now see your bucket listed. Next, let’s configure it for static website hosting.

  • Click your Bucket name and then choose the “Properties” tab.
  • Click on the “Static website hosting” box.
  • Choose “Use this bucket to host a website” and add “index.html” as the index document. Click “Save”.
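If you prefer the command line, the same configuration can be applied with the aws-cli (installed later in this article). Replace the bucket name with your own:

aws s3 website s3://YOUR-BUCKET-NAME/ --index-document index.html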

Static website hosting

At the top of the Static website hosting box, you should see a URL for “Endpoint”. This is the publicly accessible URL to view your static website. Open the link in a new browser window, and you should see this:

403 Forbidden

Access Denied and S3 Bucket Policies

Yes, you should see a 403 Forbidden error! By default, S3 bucket permissions deny all access. To access your bucket’s contents, you must explicitly define who can access your bucket. These bucket permissions are called a Bucket Policy.

To add a Bucket Policy, click on the “Permissions” tab and then click the “Bucket Policy” button at the top. The following policy allows anyone to read any file in your bucket. Make sure to replace “YOUR-BUCKET-NAME” with your actual bucket name.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "PublicReadAccess",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::YOUR-BUCKET-NAME/*"
        }
    ]
}

Bucket Policies can be quite complex and powerful. But, the main parts of the policy that you should be aware of are:

  • "Effect": "Allow"
  • "Principal": "*" - Who the policy covers (“*” implies everyone)
  • "Action": "s3:GetObject" - The action allowed (s3:GetObject allows read-only access to all objects in your bucket)
  • "Resource": "arn:aws:s3:::YOUR-BUCKET-NAME/*" - Which bucket and objects the policy is about.

Click “Save” on the Bucket Policy editor. You should notice a new error is displayed if you set up the policy correctly:

This bucket has public access

This warning is good advice and a rule of thumb for all S3 buckets. But, since our bucket is exclusively used to host a static website, we don’t have to worry about anyone accessing a file within the bucket they shouldn’t.

Tab back to your browser and refresh the endpoint. You should now see a 404 Not Found error. This error is much easier to resolve because you don’t have any files in your bucket yet.

404 index.html not found

Deploy to AWS with aws-cli

Now that you have a bucket created and permissions correctly set, it’s time to upload your static assets. Although you can do this manually through the interface by using the “Upload” button, I feel using the aws-cli is more efficient.

Installing aws-cli differs based on your OS. Choose one:
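Whichever installation route you take, once the CLI is configured with your credentials the upload itself is a single command. A sync like the following (using the placeholder bucket name from earlier) copies your build output into the bucket:

aws s3 sync ./dist s3://YOUR-BUCKET-NAME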

The post How to Deploy Your Secure Vue.js App to AWS appeared first on SitePoint.


          Houdini, Mastering Modular JavaScript, Performance, JavaScript timers & Chrome 🕵️‍♀️ — Pony Foo Weekly

We’re glad you could make it this week!

With your help, we can make Pony Foo Weekly even more awesome: send tips about cool resources.

A Mixed Bag


          Serverless create fails with: ENOENT: no such file or directory

@venkatrr wrote:

Hi All,

I am trying to follow the instructions from: https://github.com/adieuadieu/serverless-chrome/tree/master/examples/serverless-framework/aws#credentials

I was able to install serverless using npm:

npm install serverless -g

But when I run this command:
serverless create -u

It fails with the following error:
Error: ENOENT: no such file or directory, link ‘C:\dist’ -> ‘C:\Users\delegate\AppData\Local\Temp\serverless-chrome\packages\lambda\integration-test\dist’

It works perfectly on Mac.

I see an open bug for the same issue: https://github.com/adieuadieu/serverless-chrome/issues/96

Any ideas or workarounds would be appreciated.




