Build a Java REST API with Java EE and OIDC

Java EE allows you to build Java REST APIs quickly and easily with JAX-RS and JPA. Java EE is an umbrella standards specification that describes a number of Java technologies, including EJB, JPA, JAX-RS, and many others. It was originally designed to allow portability between Java application servers, and flourished in the early 2000s. Back then, application servers were all the rage and provided by many well-known companies such as IBM, BEA, and Sun. JBoss was a startup that disrupted the status quo and showed it was possible to develop a Java EE application server as an open source project, and give it away for free. JBoss was acquired by Red Hat in 2006.

In the early 2000s, Java developers used servlets and EJBs to develop their server applications. Hibernate and Spring came along in 2002 and 2004, respectively. Both technologies had a huge impact on Java developers everywhere, showing them it was possible to write distributed, robust applications without EJBs. Hibernate’s POJO model was eventually adopted as the JPA standard and heavily influenced EJB as well.

Fast forward to 2018, and Java EE certainly doesn’t look like it used to! Now, it’s mostly POJOs and annotations and far simpler to use.

Why Build a Java REST API with Java EE and Not Spring Boot?

Spring Boot is one of my favorite technologies in the Java ecosystem. It’s drastically reduced the configuration necessary in a Spring application and made it possible to whip up REST APIs in just a few lines of code. However, I’ve had a lot of API security questions lately from developers that aren’t using Spring Boot. Some of them aren’t even using Spring!

For this reason, I thought it’d be fun to build a Java REST API (using Java EE) that’s the same as a Spring Boot REST API I developed in the past. Namely, the “good-beers” API from my Bootiful Angular and Bootiful React posts.

Use Java EE to Build Your Java REST API

To begin, I asked my network on Twitter if any quickstarts existed for Java EE. I received a few suggestions and started doing some research. David Blevins recommended I look at tomee-jaxrs-starter-project, so I started there. I also looked into the TomEE Maven Archetype, as recommended by Roberto Cortez.

I liked the jaxrs-starter project because it showed how to create a REST API with JAX-RS. The TomEE Maven archetype was helpful too, especially since it showed how to use JPA, H2, and JSF. I combined the two to create my own minimal starter that you can use to implement secure Java EE APIs on TomEE. You don’t have to use TomEE for these examples, but I haven’t tested them on other implementations.

If you get these examples working on other app servers, please let me know and I’ll update this blog post.

In these examples, I'll be using Java 8 and Java EE 7 with TomEE 7.1.0. TomEE 7.x is the EE 7-compatible version; a TomEE 8.x branch exists for EE 8 compatibility work, but there are no releases yet. You'll also need Apache Maven installed.

To begin, clone our Java EE REST API repository to your hard drive, and run it:

git clone javaee-rest-api
cd javaee-rest-api
mvn package tomee:run

Navigate to http://localhost:8080 and add a new beer.

Add beer

Click Add and you should see a success message.

Add beer success

Click View beers present to see the full list of beers.

Beers present

You can also view the list of good beers in the system at http://localhost:8080/good-beers. Below is the output when using HTTPie.

$ http :8080/good-beers
HTTP/1.1 200
Content-Type: application/json
Date: Wed, 29 Aug 2018 21:58:23 GMT
Server: Apache TomEE
Transfer-Encoding: chunked

[
    {
        "id": 101,
        "name": "Kentucky Brunch Brand Stout"
    },
    {
        "id": 102,
        "name": "Marshmallow Handjee"
    },
    {
        "id": 103,
        "name": "Barrel-Aged Abraxas"
    },
    {
        "id": 104,
        "name": "Heady Topper"
    },
    {
        "id": 108,
        "name": "White Rascal"
    }
]

Build a REST API with Java EE

I showed you what this application can do, but I haven’t talked about how it’s built. It has a few XML configuration files, but I’m going to skip over most of those. Here’s what the directory structure looks like:

$ tree .
├── pom.xml
└── src
    ├── main
    │   ├── java
    │   │   └── com
    │   │       └── okta
    │   │           └── developer
    │   │               ├── Beer.java
    │   │               ├── BeerBean.java
    │   │               ├── BeerResource.java
    │   │               ├── BeerService.java
    │   │               └── StartupBean.java
    │   ├── resources
    │   │   └── META-INF
    │   │       └── persistence.xml
    │   └── webapp
    │       ├── WEB-INF
    │       │   ├── beans.xml
    │       │   └── faces-config.xml
    │       ├── beer.xhtml
    │       ├── index.jsp
    │       └── result.xhtml
    └── test
        └── resources
            └── arquillian.xml

12 directories, 16 files

The most important XML file is pom.xml, which defines the dependencies and configures the TomEE Maven Plugin. It's pretty short and sweet, with only one dependency and one plugin.

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">

    <name>Java EE Webapp with JAX-RS API</name>

    <dependencies>
        <dependency>
            <groupId>javax</groupId>
            <artifactId>javaee-api</artifactId>
            <version>7.0</version>
            <scope>provided</scope>
        </dependency>
    </dependencies>
    <!-- plus the org.apache.tomee.maven:tomee-maven-plugin in the <build> section -->
</project>

The main entity is Beer.java:

package com.okta.developer;

import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.GenerationType;
import javax.persistence.Id;

@Entity
public class Beer {

    @Id
    @GeneratedValue(strategy = GenerationType.AUTO)
    private int id;
    private String name;

    public Beer() {}

    public Beer(String name) {
        this.name = name;
    }

    public int getId() {
        return id;
    }

    public void setId(int id) {
        this.id = id;
    }

    public String getName() {
        return name;
    }

    public void setName(String beerName) {
        name = beerName;
    }

    @Override
    public String toString() {
        return "Beer{" +
                "id=" + id +
                ", name='" + name + '\'' +
                '}';
    }
}
The database (a.k.a., datasource) is configured in src/main/resources/META-INF/persistence.xml.

<?xml version="1.0" encoding="UTF-8"?>
<persistence version="2.0" xmlns="http://java.sun.com/xml/ns/persistence">
    <persistence-unit name="beer-pu" transaction-type="JTA">
        <properties>
            <property name="openjpa.jdbc.SynchronizeMappings" value="buildSchema(ForeignKeys=true)"/>
        </properties>
    </persistence-unit>
</persistence>

The BeerService class handles reading and saving this entity to the database using JPA's EntityManager.

package com.okta.developer;

import javax.ejb.Stateless;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;
import javax.persistence.Query;
import javax.persistence.criteria.CriteriaQuery;
import java.util.List;

@Stateless
public class BeerService {

    @PersistenceContext(unitName = "beer-pu")
    private EntityManager entityManager;

    public void addBeer(Beer beer) {
        entityManager.persist(beer);
    }

    public List<Beer> getAllBeers() {
        CriteriaQuery<Beer> cq = entityManager.getCriteriaBuilder().createQuery(Beer.class);
        cq.select(cq.from(Beer.class));
        return entityManager.createQuery(cq).getResultList();
    }

    public void clear() {
        Query removeAll = entityManager.createQuery("delete from Beer");
        removeAll.executeUpdate();
    }
}
There’s a that handles populating the database on startup, and clearing it on shutdown.

package com.okta.developer;

import javax.annotation.PostConstruct;
import javax.annotation.PreDestroy;
import javax.ejb.Singleton;
import javax.ejb.Startup;
import javax.inject.Inject;
import java.util.stream.Stream;

@Singleton
@Startup
public class StartupBean {

    private final BeerService beerService;

    @Inject
    public StartupBean(BeerService beerService) {
        this.beerService = beerService;
    }

    @PostConstruct
    private void startup() {
        // Top beers from
        Stream.of("Kentucky Brunch Brand Stout", "Marshmallow Handjee",
                "Barrel-Aged Abraxas", "Heady Topper",
                "Budweiser", "Coors Light", "PBR").forEach(name ->
                beerService.addBeer(new Beer(name))
        );
    }

    @PreDestroy
    private void shutdown() {
        beerService.clear();
    }
}
These three classes make up the foundation of the app, plus there's a BeerResource class that uses JAX-RS to expose the /good-beers endpoint.

package com.okta.developer;

import javax.ejb.Lock;
import javax.ejb.Singleton;
import javax.inject.Inject;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import java.util.List;
import java.util.stream.Collectors;

import static javax.ejb.LockType.READ;
import static javax.ws.rs.core.MediaType.APPLICATION_JSON;

@Lock(READ)
@Singleton
@Path("/good-beers")
public class BeerResource {

    private final BeerService beerService;

    @Inject
    public BeerResource(BeerService beerService) {
        this.beerService = beerService;
    }

    @GET
    @Produces(APPLICATION_JSON)
    public List<Beer> getGoodBeers() {
        return beerService.getAllBeers().stream()
                .filter(this::isGreat)
                .collect(Collectors.toList());
    }

    private boolean isGreat(Beer beer) {
        return !beer.getName().equals("Budweiser") &&
                !beer.getName().equals("Coors Light") &&
                !beer.getName().equals("PBR");
    }
}
Lastly, there’s a class is used as a managed bean for JSF.

package com.okta.developer;

import javax.enterprise.context.RequestScoped;
import javax.inject.Inject;
import javax.inject.Named;
import java.util.List;

@Named
@RequestScoped
public class BeerBean {

    @Inject
    private BeerService beerService;
    private List<Beer> beersAvailable;
    private String name;

    public String getName() {
        return name;
    }

    public void setName(String name) {
        this.name = name;
    }

    public List<Beer> getBeersAvailable() {
        return beersAvailable;
    }

    public void setBeersAvailable(List<Beer> beersAvailable) {
        this.beersAvailable = beersAvailable;
    }

    public String fetchBeers() {
        beersAvailable = beerService.getAllBeers();
        return "success";
    }

    public String add() {
        Beer beer = new Beer();
        beer.setName(name);
        beerService.addBeer(beer);
        return "success";
    }
}
You now have a REST API built with Java EE! However, it’s not secure. In the following sections, I’ll show you how to secure it using Okta’s JWT Verifier for Java, Spring Security, and Pac4j.

Add OIDC Security with Okta to Your Java REST API

You will need to create an OIDC Application in Okta to verify that the security configurations you're about to implement work. To make this effortless, you can use Okta's API for OIDC. At Okta, our goal is to make identity management a lot easier, more secure, and more scalable than what you're used to. Okta is a cloud service that allows developers to create, edit, and securely store user accounts and user account data, and connect them with one or multiple applications. Our API enables you to do all of this and more.

Are you sold? Register for a forever-free developer account today! When you’re finished, complete the steps below to create an OIDC app.

  1. Log in to your developer account.
  2. Navigate to Applications and click on Add Application.
  3. Select Web and click Next.
  4. Give the application a name (e.g., Java EE Secure API) and add the following as Login redirect URIs:
    • http://localhost:3000/implicit/callback
    • http://localhost:8080/login/oauth2/code/okta
    • http://localhost:8080/callback?client_name=OidcClient
  5. Click Done, then edit the project and enable “Implicit (Hybrid)” as a grant type (allow ID and access tokens) and click Save.

Protect Your Java REST API with JWT Verifier

To validate JWTs from Okta, you’ll need to add Okta JWT Verifier for Java to your pom.xml.
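
The dependency looks roughly like this (the coordinates are com.okta.jwt:okta-jwt-verifier; check Maven Central for the current version, as the one below is an assumption):

```xml
<dependency>
    <groupId>com.okta.jwt</groupId>
    <artifactId>okta-jwt-verifier</artifactId>
    <version>0.3.0</version>
</dependency>
```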



Then create a JwtFilter.java (in the src/main/java/com/okta/developer directory). This filter looks for an authorization header with an access token in it. If one exists, the filter validates it and prints out the user's sub, a.k.a. their email address. If it doesn't exist, or is invalid, an access denied status is returned.

Make sure to replace {yourOktaDomain} and {clientId} with the settings from the app you created.

package com.okta.developer;

import com.nimbusds.oauth2.sdk.ParseException;
import com.okta.jwt.JoseException;
import com.okta.jwt.Jwt;
import com.okta.jwt.JwtHelper;
import com.okta.jwt.JwtVerifier;

import javax.servlet.*;
import javax.servlet.annotation.WebFilter;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import java.io.IOException;

@WebFilter(filterName = "jwtFilter", urlPatterns = "/*")
public class JwtFilter implements Filter {

    private JwtVerifier jwtVerifier;

    @Override
    public void init(FilterConfig filterConfig) {
        try {
            jwtVerifier = new JwtHelper()
                    .setIssuerUrl("https://{yourOktaDomain}/oauth2/default")
                    .setClientId("{clientId}")
                    .build();
        } catch (IOException | ParseException e) {
            System.err.print("Configuring JWT Verifier failed!");
            e.printStackTrace();
        }
    }

    @Override
    public void doFilter(ServletRequest servletRequest, ServletResponse servletResponse,
                         FilterChain chain) throws IOException, ServletException {
        HttpServletRequest request = (HttpServletRequest) servletRequest;
        HttpServletResponse response = (HttpServletResponse) servletResponse;
        System.out.println("In JwtFilter, path: " + request.getRequestURI());

        // Get access token from authorization header
        String authHeader = request.getHeader("authorization");
        if (authHeader == null) {
            response.sendError(HttpServletResponse.SC_UNAUTHORIZED, "Access denied.");
        } else {
            String accessToken = authHeader.substring(authHeader.indexOf("Bearer ") + 7);
            try {
                Jwt jwt = jwtVerifier.decodeAccessToken(accessToken);
                System.out.println("Hello, " + jwt.getClaims().get("sub"));
            } catch (JoseException e) {
                response.sendError(HttpServletResponse.SC_UNAUTHORIZED, "Access denied.");
            }
        }

        chain.doFilter(request, response);
    }

    @Override
    public void destroy() {
    }
}
To ensure this filter is working, restart your app and run:

mvn package tomee:run

If you navigate to http://localhost:8080/good-beers in your browser, you’ll see an access denied error.


To prove it works with a valid JWT, you can clone my Bootiful React project, and run its UI:

git clone -b okta bootiful-react
cd bootiful-react/client
npm install

Edit this project’s client/src/App.tsx file and change the issuer and clientId to match your application.

const config = {
  issuer: 'https://{yourOktaDomain}/oauth2/default',
  redirectUri: window.location.origin + '/implicit/callback',
  clientId: '{yourClientId}'
};
Then start it:

npm start

You should then be able to log in at http://localhost:3000 with the credentials you used to create your account. However, you won't be able to load any beers from the API because of a CORS error (visible in your browser's developer console).

Failed to load http://localhost:8080/good-beers: Response to preflight request doesn't pass access control check: No 'Access-Control-Allow-Origin' header is present on the requested resource. Origin 'http://localhost:3000' is therefore not allowed access.

TIP: If you see a 401 and no CORS error, it likely means your client IDs don’t match.

To fix this CORS error, add a CorsFilter class alongside your JwtFilter class. The filter below allows OPTIONS requests and sends back access-control headers that permit requests from http://localhost:3000, GET methods, and any headers. I recommend making these settings more specific in production.

package com.okta.developer;

import javax.servlet.*;
import javax.servlet.annotation.WebFilter;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import java.io.IOException;

@WebFilter(filterName = "corsFilter")
public class CorsFilter implements Filter {

    @Override
    public void doFilter(ServletRequest servletRequest, ServletResponse servletResponse, FilterChain chain)
            throws IOException, ServletException {
        HttpServletRequest request = (HttpServletRequest) servletRequest;
        HttpServletResponse response = (HttpServletResponse) servletResponse;
        System.out.println("In CorsFilter, method: " + request.getMethod());

        // Authorize (allow) the React client's origin to consume the content
        response.addHeader("Access-Control-Allow-Origin", "http://localhost:3000");
        response.addHeader("Access-Control-Allow-Methods", "GET");
        response.addHeader("Access-Control-Allow-Headers", "*");

        // For the HTTP OPTIONS verb/method reply with an ACCEPTED status code -- per CORS handshake
        if (request.getMethod().equals("OPTIONS")) {
            response.setStatus(HttpServletResponse.SC_ACCEPTED);
            return;
        }

        // pass the request along the filter chain
        chain.doFilter(request, response);
    }

    @Override
    public void init(FilterConfig config) {
    }

    @Override
    public void destroy() {
    }
}

Both of the filters you've added use @WebFilter to register themselves. This is a convenient annotation, but it doesn't provide any filter ordering capabilities. To work around this missing feature, modify JwtFilter so it doesn't have a urlPattern in its @WebFilter annotation.

@WebFilter(filterName = "jwtFilter")

Then create a src/main/webapp/WEB-INF/web.xml file and populate it with the following XML. These filter mappings ensure the CorsFilter is processed first.

<?xml version="1.0" encoding="UTF-8"?>
<web-app version="3.1" xmlns="http://xmlns.jcp.org/xml/ns/javaee">
    <filter-mapping>
        <filter-name>corsFilter</filter-name>
        <url-pattern>/*</url-pattern>
    </filter-mapping>
    <filter-mapping>
        <filter-name>jwtFilter</filter-name>
        <url-pattern>/*</url-pattern>
    </filter-mapping>
</web-app>

Restart your Java API and now everything should work!

Beer List in React UI

In your console, you should see messages similar to mine:

In CorsFilter, method: OPTIONS
In CorsFilter, method: GET
In JwtFilter, path: /good-beers

Using a filter with Okta’s JWT Verifier is an easy way to implement a resource server (in OAuth 2.0 nomenclature). However, it doesn’t provide you with any information about the user. The JwtVerifier interface does have a decodeIdToken(String idToken, String nonce) method, but you’d have to pass the ID token in from your client to use it.

In the next two sections, I’ll show you how you can use Spring Security and Pac4j to implement similar security. As a bonus, I’ll show you how to prompt the user to login (when they try to access the API directly) and get the user’s information.

Secure Your Java REST API with Spring Security

Spring Security is one of my favorite frameworks in Javaland. Most of the examples on this blog use Spring Boot when showing how to use Spring Security. I’m going to use the latest version – 5.1.0.RC2 – so this tutorial stays up to date for a few months.

Revert your changes to add JWT Verifier, or simply delete web.xml to continue.

Modify your pom.xml to have the necessary dependencies for Spring Security. You’ll also need to add Spring’s snapshot repositories to get the release candidate.



MonoGame - Why MonoGame?


Why MonoGame?

You’re thinking about getting into game development, and you’re trying to decide how to get started. There are a number of great reasons to go with MonoGame.

  • Maybe you found Unity to be confusing and even a bit overwhelming.
  • Maybe you prefer to “live in the code.”
  • Maybe you’ve used XNA in the past, and want to work with something similar.
  • Maybe you want to create a game that can run on Macs, Windows PCs, Android phones or tablets, iPhones and iPads, or even Xbox & Playstation… with minimal alterations or rewrites to your existing code base.

MonoGame offers game developers an opportunity to write their game once, and target MANY different platforms.

MonoGame is the open source “spiritual successor” to XNA, which was a great game development framework that is no longer supported by Microsoft.

There have been a number of quite successful games created in XNA and MonoGame. You can see a few here.

In the next post, I’ll cover what you will need, and where to find it. If you came directly to this page, you can find the complete list of articles here.

Are you using cryptocurrencies? Request for info.


Hey everyone,

I'm working on an open source library involving Bitcoin and I was wondering how many (if any) of you are currently working with cryptocurrencies in your apps & games?
Whether you are buying/selling them, or just accepting them as a form of payment, I'd like to get some idea of what you're doing, what APIs you're hitting, what you think of it overall, and how I can (possibly?) make things like microtransactions and in-app purchases easier for you.
Feel free to leave a comment on this post, or message me if you don't want to talk about it publicly.

Rabobank: Enhancing the Online Banking Experience with Elasticsearch

With nearly 900 branches spanning 50 countries, Rabobank is a leading global financial provider. And like many other big banks, they’ve built a powerful financial transaction system on top of mainframes. But storing that data is only the beginning. Rabobank needs to be able to find and serve up years’ worth of financial data in real time to all their customers, all while maintaining PSD2 compliance. While great for transactional operations, mainframes aren’t ideal for handling complicated search requirements. And with over 10 million customers — some with thousands of accounts — solving this problem with a mainframe can be very costly.

It was around this need for a powerful search solution that Team Fortress was created. Forming in late 2016, Team Fortress is a four person team, led by René Bouw. Their objective was to improve the delivery of payment and savings transactions for Rabobank’s customers. They did so by dramatically increasing the amount of transaction history data that customers could view and save. Before the project began, Rabobank customers were only able to retrieve transaction data from one account at a time, and only as far back as 16 months. For anything older than 16 months, customers had to go to their local branch to request an access to the data, which incurred additional charges. For the project to be considered a success, Rabobank needed customers to be able to search through all their accounts at once, as far back as 8 years. That’s a big ask for a mainframe, but not for Elasticsearch.

After running a POC using the open source Elastic Stack, Rabobank decided to purchase a commercial subscription. Indeed, while the powerful search functionality that Rabobank needed was available in open source, the security features available with a commercial subscription — encryption (TLS), authentication, and audit logging — were essential for compliance with banking industry security requirements.

They developed an architecture that allowed them to pull transactional information from their mainframes into both a data store and Elasticsearch, with the Elasticsearch data being limited to 8 years of retention. This also allowed Rabobank to maintain the transactional speed of their mainframe system while also making dramatic search improvements for the users. Speed changes were noticed immediately.

“The speed is impressive according to all users we ask, including myself. Some technical people even double check the results, because 'that can't be true, that fast and accurate'. But it is.” — René Bouw, Product owner and solution architect of Team Fortress, Rabobank.

Not only is Rabobank searching faster than ever, they’re searching through more data than ever. With over 23 billion transactions spanning 80TB of data, Rabobank sees upwards of 200 events per second — over 10 million per day. And each query can span thousands of accounts, with corporate customers having over 5,000 accounts that they can now query at once. And being able to do all of this without adding any extra operations to their costly mainframes  has helped save them millions of euros per year. Today, all front-end applications use Elasticsearch for search or for aggregating payment and saving transactions.
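
A query of that shape — thousands of accounts at once, up to eight years back — can be sketched as an Elasticsearch bool query (the index and field names below are illustrative assumptions, not Rabobank's actual schema):

```
GET /transactions/_search
{
  "query": {
    "bool": {
      "filter": [
        { "terms": { "account_id": ["NL01RABO0123456789", "NL02RABO0987654321"] } },
        { "range": { "booking_date": { "gte": "now-8y/d" } } }
      ]
    }
  },
  "sort": [{ "booking_date": "desc" }]
}
```

Both clauses sit in the filter context, so they are cacheable and skip relevance scoring, which is what makes this kind of lookup fast at that scale.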

Knowing that the Elastic Stack is also great for logging and metrics, Team Fortress also developed a monitoring solution using Elasticsearch, Logstash, and Kibana to monitor all of the components of their data store solution. Using the Elastic Stack, they can monitor the CPU, memory usage, logs, error messages, and more from all of the applications in their solution as well as the virtual machines running the applications. They use customised dashboards for the machines and drill-down dashboards for the applications. They also use the Elastic Stack alerting functionality to let them know about problems the moment they pop up. Said Bouw, “We have never missed an incident yet. We chose Elastic because of its resilience.”

European Parliament Passes Controversial Copyright Bill
The European Parliament passed a controversial bill featuring a "link tax" and an "upload filter." The article was approved in June, struck down by the parliament in July, and further delayed before finally passing another vote today with 438 votes for and 226 against. The bill is supposed to protect creators by allowing them to request payment or easily send takedown requests. But multiple organizations, including the EFF, believe the system is ripe for abuse. The text also specifies that uploading to online encyclopaedias in a non-commercial way, such as Wikipedia, or open source software platforms, such as GitHub, will automatically be excluded from the requirement to comply with copyright rules. Discussion
LXer: How to install School tool on Ubuntu 18.04 LTS
SchoolTool is a free and open source suite of administrative software for schools that can be used to create a simple turnkey student information system, including demographics, grade book, attendance, calendaring and reporting for primary and secondary schools.
Bahrain 5th MEB Command Element Operations Center Safety Support
Job Extract: 3 years of demonstrated management, intelligence, open source, and media analysis/effects experience. These positions support the MARCENT Command Element.
Principal IoT Architect
GA-Atlanta, Principal IoT Architect Skills/Experience: Bachelor’s Degree and deep understanding of IoT technology; 5 years of experience leading teams of developers and solutions architects on IoT products and solutions; 5 years of experience with cloud platforms such as Azure or AWS; 5 years of experience with open source tools such as RabbitMQ, Kafka, or InfluxDB. Scope: Assist in the delivery of multi-layer sy
Jazkarta Blog: Announcing collective.siteimprove

Screenshot of collective.siteimprove UI, expanded

As I reported back in May, at our last Jazkarta sprint Witek and Alec began work on a Plone add-on, collective.siteimprove, that provides integration with the Siteimprove content quality checking service. I’m pleased to announce that the add-on has now been thoroughly vetted by the folks at Siteimprove and the resulting version is available from Pypi!

What Is Siteimprove

Siteimprove is a respected service for maintaining and improving web content quality. Customers who sign up for the service get automated scans of their websites which check for content quality, accessibility compliance, SEO, data privacy, and performance. Site rollups and per-page reports are available via email and from a customizable dashboard.

Siteimprove also provides an API which allows for the development of CMS plugins – integrations of the Siteimprove service within content management systems. This allows content editors to get immediate feedback on pages that they publish. This is great because it lets editors see problems while they are in the process of editing a page, instead of getting a report after the fact and needing to click through links to fix things.

Graphic explaining how Siteimprove works


Why Siteimprove for Plone

Plone, the premier Python-based open source content management system, is an enterprise scale CMS that is widely used by large organizations with large websites. These are just the types of organizations that can benefit from a tool like Siteimprove, which has a reputation for being an excellent service for maintaining and improving website content.

The Jazkarta team was delighted to be able to contribute to the Plone community by creating an add-on that integrates Siteimprove’s CMS plugin features into the Plone editing process. Now anyone with a Plone website can easily integrate with Siteimprove simply by installing an add-on – all the integration work has been done.


After collective.siteimprove is installed on a Plone site, there will be a new control panel where Siteimprove customers can request and save a token that registers the domain with Siteimprove. After that, authorized users will see an overlaid Siteimprove button on each page that shows the number of issues found.

Screenshot of the collective.siteimprove UI, collapsed


When clicked, the overlay expands to show a summary report of page errors and an overall score, as shown in the image at the top of this post. After an edit, users can click a button on the overlay to request that Siteimprove recheck the page. They can also follow a link to the full page report.

Plone+Siteimprove FTW

Now anyone who has a Plone website can easily integrate with the Siteimprove service and take advantage of all of Siteimprove’s enterprise-scale features while they are working on their content!

Compensation Consultant - Elastic - Seattle, WA
At Elastic, we have a simple goal: to take on the world's data problems with products that delight and inspire. As the company behind the popular open source...
From Elastic - Mon, 23 Jul 2018 18:41:03 GMT - View all Seattle, WA jobs
GitHub Foreshadows Big Open Source Announcements at GitHub Universe
GitHub foreshadows major open source related announcements at upcoming GitHub Universe conference.
network-extra/syncthing-0.14.50-1-x86_64
Open Source Continuous Replication / Cluster Synchronization Thing
Metamask customization
Hello, I need a custom version of metamask for my private ethereum network. The final result should be a downloadable chrome, firefox, safari and opera addon based on the current open source version. Please... (Budget: €30 - €250 EUR, Jobs: Firefox, Google Chrome, Javascript, node.js, Software Architecture)
Comment on Bixel, An Open Source 16×16 Interactive LED Array by Uriel Guy
Well, if I had more data I'd share it. Unfortunately I'm not very good at documenting my work. At least not the physical side of it.
LXer: How to Install Nagios Core on CentOS 7
Published at LXer: Nagios (also known as Nagios Core) is a free and open source application which can be used for monitoring Linux or Windows servers, network infrastructures and applications. When...
LXer: How To Install and Configure Nextcloud with Apache on Ubuntu 18.04
Published at LXer: Nextcloud is an open source, self-hosted file share and collaboration platform, similar to Dropbox. In this tutorial we'll show you how to install and configure Nextcloud with...
Open Source: Smile Acquires Adyax and Its Drupal Experts

The merger of Smile and Adyax extends the services offered by the new French group from Drupal expertise to IT managed services.
Tragedy of the Commons Clause – tecosystems
For those who would develop software and sell it, open source is something of a conundrum. On the one hand, as Mike Olson has noted, you can’t win with a closed platform. On the other, it substantially complicates the sales process. Before open source software entered the mainstream for enterprise technology usage, the commercialization of […]
          Open Source Software Developer - IBM - Markham, ON
Through several active collaborative academic research projects with professors and graduate students from a number of Canadian and foreign universities....
From IBM - Wed, 18 Jul 2018 10:49:17 GMT - View all Markham, ON jobs
          SAP Hybris Manager/Lead - Deloitte - Ottawa, ON
Other major ecommerce and related technology vendors an asset - such as Oracle ATG, Endeca, Magento, IBM WebSphere, open source solution providers, or other...
From Deloitte - Tue, 11 Sep 2018 01:29:06 GMT - View all Ottawa, ON jobs
          The Biggest Sin of Commercial Open Source?

Redis is a popular open source database. Its proprietor, Redis Labs, recently announced that some add-on modules will no longer be open source. The resulting outcry led to a defense and explanation of this decision that is telling. I have two comments and a lesson about product management of commercial open source.

The two comments are about messaging, both ways: What Redis Labs is telling the world and what the open source world is telling Redis Labs and the rest of the world.

Firstly, by restricting usage of these add-on modules, Redis Labs is admitting that someone else is successfully competing with them at their own game. The bogeyman is Amazon, in this case, as called out in the defense of the Redis Labs decision. Usually, the original proprietor of some open source software is in a prime position to profit from it, being the most trustworthy choice for customers and being able to charge a premium. Amazon has to white-label the software (remove any of Redis Labs' trademarks) and still is a serious threat. Admitting this is as embarrassing as it gets.

Secondly, rather than taking any future development of the add-on modules private, Redis Labs decided to fiddle with the license by adding a rider that makes it non-open-source but still permits many open-source-like behaviors. The intent may have been to stay as close to open source as possible, but this behavior has nevertheless annoyed many open source enthusiasts, leading to the aforementioned outcry. Thus, the open source world is telling Redis Labs that of the many sins you can commit as a commercial open source company, the worst is to go back on your legal (read: license) open source promise.

You may like reading this older article about how the single-vendor commercial open source business model works. Also, there are other ways of curtailing your competitors, if you must: read more about the kitchen cabinet of open source poisons.

Read on about the main product management challenge that all commercial open source companies are facing.

          Running Apache Cassandra on Kubernetes

As Kubernetes becomes the de facto solution for container orchestration, more and more developers (and enterprises) want to run Apache Cassandra databases on Kubernetes. It's easy to get started―especially given the capabilities that Kubernetes' StatefulSets bring to the table. Kubernetes, though, certainly has room to improve when it comes to storing data in-state and understanding how different databases work.

For example, Kubernetes doesn't know if you're writing to a leader or a follower database, or to a multi-sharded leader infrastructure, or to a single database instance. StatefulSets―workload API objects used to manage stateful applications―offer the building blocks required for stable, unique network identifiers; stable persistent storage; ordered and smooth deployment and scaling, deletion, and termination; and automated rolling updates. However, while getting started with Cassandra on Kubernetes might be easy, it can still be a challenge to run and manage.
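To make the StatefulSet building blocks mentioned above concrete, a minimal Cassandra StatefulSet might look like the manifest below. This is an illustrative sketch only (image tag, storage size, and labels are assumptions), not a production-ready configuration and not the one the operator generates:

```yaml
# Minimal illustrative StatefulSet for Cassandra -- not production-ready.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: cassandra
spec:
  serviceName: cassandra      # headless service gives each pod a stable DNS identity
  replicas: 3                 # pods are created and removed in order: cassandra-0, -1, -2
  selector:
    matchLabels:
      app: cassandra
  template:
    metadata:
      labels:
        app: cassandra
    spec:
      containers:
        - name: cassandra
          image: cassandra:3.11
  volumeClaimTemplates:       # stable persistent storage, one volume per pod
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 100Gi
```

The `serviceName` and `volumeClaimTemplates` fields are what provide the stable network identifiers and per-pod persistent storage the article refers to.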

To overcome some of these hurdles, we decided to build an open source Cassandra operator that runs and operates Cassandra within Kubernetes; you can think of it as Cassandra-as-a-Service on top of Kubernetes. We've made this Cassandra operator open source and freely available on GitHub. It remains a work in progress by our Instaclustr team and our partner contributors―but it is functional and ready for use. The Cassandra operator supports Docker images, which are open source and also available from the project's GitHub repository.


While it's possible for developers to build scripts for managing and running Cassandra on Kubernetes, the Cassandra operator offers the advantage of providing the same consistent, reproducible environment, as well as the same consistent, reproducible set of operations through different production clusters. (This is true across development, staging, and QA environments.) Also, because best practices are already built into the operator, development teams are spared operational concerns and can focus on their core capabilities.

What is a Kubernetes operator?

A Kubernetes operator consists of two components: a controller and a custom resource definition (CRD). The CRD allows devs to create Cassandra objects in Kubernetes. It's an extension of Kubernetes that lets us define custom objects or resources, which our controller can then watch for any changes to the resource definition. Devs can define an object in Kubernetes that contains configuration options for Cassandra, such as cluster name, node count, JVM tuning options, etc.―all the information you want to give Kubernetes about how to deploy Cassandra.

You can isolate the Cassandra operator to a specific Kubernetes namespace, define what kinds of persistent volumes it should use, and more. The Cassandra operator's controller listens to state changes on the Cassandra CRD and will create its own StatefulSets to match those requirements. It will also manage those operations and can ensure repairs, backups, and safe scaling as specified via the CRD. In this way, it leverages the Kubernetes concept of building controllers upon other controllers in order to achieve intelligent and helpful behaviors.
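As a sketch, a Cassandra custom resource of the kind described above might look like the following. The `apiVersion`, `kind`, and field names here are hypothetical, chosen to illustrate the idea rather than copied from the Instaclustr operator's actual schema:

```yaml
# Hypothetical Cassandra custom resource -- field names are illustrative.
apiVersion: cassandraoperator.example.com/v1alpha1
kind: CassandraDataCenter
metadata:
  name: example-cdc
  namespace: cassandra        # the operator can be confined to a namespace
spec:
  nodes: 6                    # desired cluster size; lowering this triggers decommission
  cassandraImage: cassandra:3.11
  jvmOptions:                 # JVM tuning passed through to each node
    - -Xms512m
    - -Xmx512m
  dataVolumeClaimSpec:        # what kind of persistent volumes the pods should use
    storageClassName: standard
    resources:
      requests:
        storage: 100Gi
```

The controller watches objects of this kind and creates or adjusts StatefulSets to match the declared `spec`.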

So, how does it work?

[Figure: Cassandra operator architecture]

Architecturally, the Cassandra controller connects to the Kubernetes Master. It listens to state changes and manipulates pod definitions and CRDs. It then deploys them, waits for changes to occur, and repeats until all necessary changes complete fully.

The Cassandra controller can, of course, perform operations within the Cassandra cluster. For example, want to scale down your Cassandra cluster? Instead of manipulating the StatefulSet to handle this task, the controller will see the CRD change. The node count will change to a lower number (say from six to five). The controller will get that state change, and it will first run a decommission operation on the Cassandra node that will be removed. This ensures that the Cassandra node stops gracefully and redistributes and rebalances the data it holds across the remaining nodes. Once the Cassandra controller sees this has happened successfully, it will modify that StatefulSet definition to allow Kubernetes to decommission that pod. Thus, the Cassandra controller brings needed intelligence to the Kubernetes environment to run Cassandra properly and ensure smoother operations.
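The decommission-then-shrink ordering described above can be sketched as a small piece of reconcile logic. This is an illustrative Python sketch of the sequencing the controller enforces; the function and operation names are hypothetical, not the operator's actual code:

```python
# Illustrative sketch of the scale-down ordering described in the article.
# Names are hypothetical; the real operator is more involved.

def plan_scale_down(current_nodes, desired_count):
    """Return the ordered operations needed to shrink a Cassandra cluster.

    Each removed node must be decommissioned *before* its pod is deleted,
    so Cassandra rebalances its data across the remaining nodes first.
    """
    ops = []
    nodes = list(current_nodes)
    while len(nodes) > desired_count:
        victim = nodes.pop()  # drop the highest-ordinal pod, as a StatefulSet would
        ops.append(("decommission", victim))            # graceful Cassandra decommission
        ops.append(("shrink_statefulset", len(nodes)))  # then let Kubernetes remove the pod
    return ops

# Scaling from six nodes to five, as in the article's example:
nodes = ["cassandra-0", "cassandra-1", "cassandra-2",
         "cassandra-3", "cassandra-4", "cassandra-5"]
ops = plan_scale_down(nodes, 5)
# ops == [("decommission", "cassandra-5"), ("shrink_statefulset", 5)]
```

The key point the sketch captures is that the StatefulSet is only shrunk after the decommission step has completed for the node being removed.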

As we continue this project and iterate on the Cassandra operator, our goal is to add new components that will continue to expand the tool's features and value. A good example is Cassandra SideCar (shown in the diagram above), which can take responsibility for tasks like backups and repairs. Current and future features of the project can be viewed on GitHub. Our goal for the Cassandra operator is to give devs a powerful, open source option for running Cassandra on Kubernetes with a simplicity and grace that has not yet been all that easy to achieve.



About the author
Ben Bromhead

Ben Bromhead is Chief Technology Officer and Co-Founder at Instaclustr, an open source-as-a-service company. Ben is located in Instaclustr's California office and is active in the Apache Cassandra community. Prior to Instaclustr, Ben worked as an independent consultant developing NoSQL solutions for enterprises, and he ran a high-tech cryptographic and cyber security formal testing laboratory at BAE Systems and Stratsec.

          Commons Clause is a Legal Minefield and a Very Bad Idea


I recently received a question from Christine Cardoza, journalist at SD Times:

I am writing a story on all the controversy in the industry going on with the Common Clause license. I know it is not an OSI approved license, but I was wondering if you had a statement you could provide on this. Some of the sources I have talked to say they do believe this still promotes open source and that the OSI needs to change some of its rules, so I would really love to include OSI’s take on this if possible.

Her article will be out soon, but she gave me permission to publish my response now:

Hi, Christine. I believe there were plans for OSI to make an official statement on this kerfuffle, but I don’t know where that is in the process. What follows is my professional opinion as an open source strategy and policy consultant for businesses and should not be construed as an official OSI statement.

First of all, terminology: Commons Clause is not a license. It’s a clause that can be applied to licenses. Specifically, it’s intended to be applied to OSI-approved licenses. By restricting people from making money from a project where it is applied, the Commons Clause directly violates Item 6 in the Open Source Definition. As the Open Source Definition is no longer applicable to those projects, they―quite literally by definition―are no longer open source. At best they can be called “source available.”

The Commons Clause FAQ states that “[t]he Commons Clause was intended, in practice, to have virtually no effect other than force a negotiation with those who take predatory commercial advantage of open source development,” however at no point have I seen a statement from either the Commons Clause creators or any project(s) applying it that they attempted other approaches to encourage collaboration from those they see to be taking “predatory commercial advantage” of their projects. For instance, Redis recently applied the Clause to a few of their modules due to “…today’s cloud providers…taking advantage of successful open source projects and repackaging them into competitive, proprietary service offerings.” Their statement on the change does not say, “We approached the cloud providers and asked them to collaborate, but they refused,” or even “We approached the cloud providers and asked why they do not collaborate and how we can improve this experience for them.” From the outside looking in, it does not appear as though Redis tried to encourage collaboration before throwing a tantrum and relicensing to this proprietary license. That’s not how to be a good free and open source citizen.

As far as that is concerned, their stated reason (“taking advantage of open source projects”) does not square with their actions: instead of relicensing the project that is allegedly being taken advantage of (Redis), they relicensed a few modules. While this could be seen as testing the waters prior to a larger relicensing of the main project, the core developer has stated that this will never happen. Therefore it does not seem as though the relicensing will gain Redis much beyond some major damage to their reputation. It feels as though they―an open core company―looked at that open core and decided it was a little too large and they could be a stronger business by shrinking it by a few modules. That’s a completely valid business decision which I would not fault, but it’s not the stated reason for the relicensing. The entire situation is quite perplexing.

And then there’s the Commons Clause itself. Heather Meeker helped to write it. She’s no fool and is a very accomplished and well-respected intellectual property lawyer who has a career of experience working with free and open source licenses. Despite this skilled help, the Commons Clause team came up with a restriction against selling that is so broadly stated that it puts an impressive number of people at risk of license violation should they come into professional contact with any project that is under a license tainted by the Commons Clause:

“…practicing any or all of the rights granted to you under the License to provide to third parties, for a fee or other consideration (including without limitation fees for hosting or consulting/support services related to the Software), a product or service whose value derives, entirely or substantially, from the functionality of the Software…”

According to this restriction, I as a freelancer would violate the license if I helped a client who used Redis and happened to use one of these modules with the tainted proprietary license. I would probably be held liable even if I were unaware of the use of those modules. I’m far from the only freelancer who looked on the Commons Clause with horror. For instance, Jim Salter, a freelance system administrator, network specialist, and author, says he now feels he can’t work on any project that involves Redis lest he put himself at risk. He’s only one of many freelancers I’ve heard express these thoughts and state outright that they will not work on projects involving Redis. They simply can’t afford to lose their livelihoods due to a legally vague license.

As to the question that open source needs to change, I’ll simply direct you to my blog post on the subject and summarise it as: Open source does not need to change; people simply need to learn how to do business. Stop using open source and expecting it to do the hard work of creating a successful business. It’s a tool, and it’s a great one, but it’s only a tool.

          The Data Day: August 31, 2018

AWS and VMware announce Amazon RDS on VMware. And more.

For @451Research clients: On the Yellowbrick road: data-warehousing vendor emerges with funding and flash-based EDW By @jmscrts

― Matt Aslett’s The Data Day (@thedataday) August 31, 2018

For @451Research clients: Automated analytics: the role of the machine in corporate decision-making By Krishna Roy

― Matt Aslett’s The Data Day (@thedataday) August 28, 2018

For @451Research clients: @prophix does cloud and on-premises CPM, with machine learning up next By Krishna Roy

― Matt Aslett’s The Data Day (@thedataday) August 28, 2018

AWS and VMware have announced Amazon Relational Database Service on VMware, supporting Microsoft SQL Server, Oracle, PostgreSQL, MySQL, and MariaDB.

― Matt Aslett’s The Data Day (@thedataday) August 27, 2018

Cloudera has launched Cloudera Data Warehouse (previously Cloudera Analytic DB) as well as Cloudera Altus Data Warehouse as-a-service and also Cloudera Workload XM, an intelligent workload experience management cloud service

― Matt Aslett’s The Data Day (@thedataday) August 30, 2018

Alteryx has announced version 2018.3 of the Alteryx analytics platform, including Visualytics for real-time, interactive visualizations

― Matt Aslett’s The Data Day (@thedataday) August 28, 2018

Informatica has updated its Master Data Management, Intelligent Cloud Services and Data Privacy and Protection products with a focus on hybrid, multi-cloud and on-premises environments.

― Matt Aslett’s The Data Day (@thedataday) August 29, 2018

SnapLogic has announced the general availability of SnapLogic eXtreme, providing data transformation support for big data architectures in the cloud.

― Matt Aslett’s The Data Day (@thedataday) August 28, 2018

VoltDB has enhanced its open source VoltDB Community Edition to support real-time data snapshots, advanced clustering technology, exporter services, manual scale-out on commodity servers and access to the VoltDB Management Console.

― Matt Aslett’s The Data Day (@thedataday) August 30, 2018

ODPi has announced the Egeria project for the open sharing, exchange and governance of metadata

― Matt Aslett’s The Data Day (@thedataday) August 28, 2018

And that’s the data day

          Center Surround Sound (NodeMCU)

When settling speakers into place, you are idealizing that you will always sit dead center .. yet in our more lethargic reality, we lounge off-center. As a consequence, you cease being the center of the speaker array .. until now ..

NodeMCU ESP8266 ESP-12E V2 (2)
X9C104 Digital Potentiometer Module (2)
5V Power Supply (2)
4 Dip Switch (2)
4 Slot Terminals (2)



  1. Intended to create an app with MIT App Inventor .. how-ever .. per the previous HaD FP (Voice Kitchen Faucet) .. it was decided to employ an alternative favored open source (-ish) voice command software purely as demonstration.
  2. Adding additional speakers merely requires copy-paste and if-statement modifications .. tho ideally a more advanced 'find' function upon installation would be coded.
  3. Variances of % may depend on wattage (See Number 4)
  4. Higher voltage systems would subsequently require HV digital potentiometers (AD7376)
  5. Why is this .. not already a thing?!


          QGIS (3.2.2)
A Free and Open Source Geographic Information System

          Sysdig Closes $68.5 Million in Series D Funding to Enable Enterprises to Secure ...
Investment fuels company leadership in delivering converged security and monitoring solutions for enterprises adopting containers and microservices.

Sysdig, the cloud-native intelligence company, today announced it raised $68.5 million in series D funding, led by Insight Venture Partners, with participation from previous investors, Bain Capital Ventures and Accel. This round of funding brings Sysdig’s total funding to date to $121.5 million. Sysdig will use the funds to extend its leadership in enabling enterprises to operate reliable and secure containerized infrastructure and cloud-native applications.

Suresh Vasudevan, CEO at Sysdig, authored a post with more information on Sysdig’s funding.

Sysdig offers enterprises the first unified approach to container security, monitoring, and forensics. Unlike traditional approaches, the Sysdig platform was built with an understanding of the modern DevSecOps workflow across Kubernetes, Docker and both private and public clouds. Sysdig’s open source forensics technology, Sysdig, and its open source security project, Falco, have a community of millions of users, and provide the foundation for a rich, commercial product set. Dozens of Global 2000 enterprises are Sysdig customers today, including many of the world’s largest financial institutions, media companies, cable companies, technology companies, and government agencies.

451 Research predicts the cloud-enabling technology market to grow to $39.6B through 2020, and containers are predicted to be the fastest growing segment of that market at 40%. Gartner predicts that “by 2020, more than 50% of global organizations will be running containerized applications in production, up from less than 20% today.”[1]

“As enterprises accelerate their move to cloud-native applications, they recognize the need for a new breed of solutions that will enable them to meet performance, reliability, security, and compliance requirements,” said Richard Wells, Managing Director at Insight Venture Partners. “Sysdig’s novel approach of tapping an entirely new data source that provides both security and monitoring for container-based applications has proven more effective, more scalable, and higher ROI. That significant technological advantage combined with an experienced management team led by CEO Suresh Vasudevan gives Sysdig a strong position in the marketplace and we are excited to welcome them to the Insight portfolio.”

Sysdig launched the Sysdig Cloud-Native Intelligence Platform in October of 2017, cementing it as the only solution to offer unified security, monitoring, and forensics with native Kubernetes and Prometheus integration for containers and microservices. Last quarter, Sysdig expanded on the deep container visibility Sysdig provides to create Sysdig Secure 2.0, which brings vulnerability management and compliance checks to cloud-native environments and microservices for enterprise customers. Sysdig Secure is the first container security offering to take a data-driven approach across all aspects of the container lifecycle.

“Enterprises are adopting cloud-native technology for its speed of development, multi-cloud scaling capabilities, and lower total cost of ownership,” said Suresh Vasudevan, CEO at Sysdig. “But, they are hitting roadblocks with old school security and monitoring products. To be successful, these organizations need new solutions that are cloud-native. Sysdig has emerged as the only solution that delivers enterprises the complete set of capabilities needed to protect an environment, ensure that it is running smoothly, and meet compliance requirements. Sysdig delivers it all, both in the cloud and on-premise, in order to grow with companies as they undertake this journey.”

This round of funding further accelerates the strong momentum and rapid growth Sysdig has experienced over the last year. Downloads of Sysdig Falco have roughly tripled over the last 12 months and active users of the Sysdig SaaS offerings have grown nearly six times year over year. Last month, Sysdig opened a second headquarters in Raleigh, North Carolina to keep up with company growth while the company also continues to scale its offices in San Francisco, Belgrade, and London.

In addition to its global growth, Sysdig has expanded its executive team with new hires in April and last month. Sysdig continues to deliver on its commitment to leading the industry by collaborating with the Cloud-Native Computing Foundation, the PCI Security Council, and industry partners and by contributing to the development of security recommendations for modern infrastructure.

About Sysdig

Sysdig is the cloud-native intelligence company. We have created the only unified platform to deliver container security, monitoring, and forensics in a microservices-friendly architecture. Our open source technologies have attracted a community of over a million developers, administrators and other IT professionals looking for deep visibility into applications and containers. Our cloud-native intelligence platform monitors and secures millions of containers across hundreds of enterprises, including Fortune 500 companies and web-scale properties.

About Insight Venture Partners

Insight Venture Partners is a leading global venture capital and private equity firm investing in high-growth technology and software companies that are driving transformative change in their industries. Founded in 1995, Insight currently has over $20 billion of assets under management and has cumulatively invested in more than 300 companies worldwide. Our mission is to find, fund, and work successfully with visionary executives, providing them with practical, hands-on growth expertise to foster long-term success. Across our people and our portfolio, we encourage a culture around a core belief: growth equals opportunity. For more information on Insight and all its investments, follow us on Twitter @insightpartners.

About Bain Capital Ventures

Bain Capital Ventures partners with disruptive founders to accelerate their ideas to market. The firm invests from seed to growth in enterprise software, infrastructure software, crypto, and industries being transformed by data. Bain Capital Ventures has helped launch and commercialize more than 200 companies since 1984, including DocuSign, Kiva Systems, LinkedIn, Rapid7, Redis Labs, Rent the Runway, SendGrid, SurveyMonkey, SysDig, Taleo, and Turbonomic. Bain Capital Ventures has approximately $3.9 billion in assets under management with offices in San Francisco, Palo Alto, New York and Boston. Follow the firm via LinkedIn or Twitter.

About Accel

Accel partners with exceptional founders with unique insights, from inception through all phases of growth. Atlassian, Braintree, Cloudera, Crowdstrike, DJI, DocuSign, Dropbox, Etsy, Facebook, Flipkart, Jet, Pillpack, Qualtrics, Slack, Spotify, Supercell, Tenable, Venmo and Vox Media are among the companies the firm has backed over the past 35 years. The firm seeks to understand entrepreneurs as individuals, appreciate their originality and play to their strengths. Because greatness doesn’t have a stereotype.

[1] Gartner, Smarter with Gartner, “6 Best Practices for Creating a Container Platform Strategy,” October 31, 2017.

280blue, Inc.
Amanda McKinney
          Sysdig raises $68.5 million for container security solutions

It's not a story about containers unless there are shipping containers pictured. (Pixabay)

Written by Greg Otto | Sep 12, 2018 | CyberScoop

San Francisco-based Sysdig announced a $68.5 million Series D funding round Wednesday, doubling the amount of money the company has previously raised for its container monitoring and security offerings.

Launched in 2013, the company specializes in platforms that help developers handle vulnerability management, more than 200 compliance checks, and security analytics in containers and microservices used in enterprises.

Containers have been a big deal in the application development world for a while. The popular infrastructure tech gives developers a way to run applications in a consistent manner across a host of different environments, without wasting computing resources or accumulating large run costs. Among the best-known container technologies are Docker, rkt, and LXD.

451 Research believes application containers will be a $2.7 billion market by 2020, with an annual growth rate of 40 percent compared to other cloud-enabling technologies.

“Enterprises are adopting cloud-native technology for its speed of development, multi-cloud scaling capabilities, and lower total cost of ownership,” said Suresh Vasudevan, CEO of Sysdig. “But, they are hitting roadblocks with old school security and monitoring products. To be successful, these organizations need new solutions that are cloud-native. Sysdig has emerged as the only solution that delivers enterprises the complete set of capabilities needed to protect an environment, ensure that it is running smoothly, and meet compliance requirements.”

The company says downloads of its open source security project, Sysdig Falco, have tripled over the last 12 months. Additionally, it has formed partnerships with companies like Google, IBM and Red Hat.

The funding was led by Insight Venture Partners, with participation from previous investors Bain Capital Ventures and Accel.

This round of funding brings the company’s total funding to $121.5 million.


          Enterprises Still Struggle to Put the Sec in DevOps

Despite it being considered an essential practice, most organizations still find it difficult to implement security into their DevOps efforts. It's not that they don't want to (they say they do); it's that they just haven't provided their developers the tools, processes, or even training to get it done. These are the findings of a report recently released by application security vendor Checkmarx.

The report, conducted by FreeForm Dynamics along with The Register, includes responses from 183 respondents from around the world, the majority holding software development, IT, and security titles; they detail their biggest barriers to securing software within the DevOps cycle today.

While this report doesn't include input from many organizations, especially considering that the responses are geographically dispersed, the findings do match what I've been hearing from CIOs, CISOs, DevOps leaders, and IT executives in my reporting.

The report found a number of gaps between security as it should be and security as it is within DevOps organizations. For instance, 96 percent of respondents believe proper training in secure code development is “highly desirable” or “desirable,” yet only 11 percent of respondents said they train their developers “generally very well,” and 33 percent said results were mixed and could improve. Likewise, 94 percent of respondents find “shifting left” of security testing desirable, so that testing is an embedded part of development, yet only 11 percent say they are doing this well and 36 percent say they could do better.

There were similar results when it came to the integration of security into the entire DevOps process, operations security, and likewise security specialists knowing how to deal with rapid, iterative delivery.

When it came to DevOps toolsets, such as code scanning tools to identify potential security flaws early in the process, 89 percent agreed it was important, but only 12 percent are doing it well. More than half are doing it mixed to poorly. Similar results were found when it came to API security, open source security, and tools that help align security risks with business impact.

All of this is true despite 57 percent of respondents saying software security is a boardroom level discussion.

While that is an accurate reflection of the importance of the software security risks posed to the organization, it seems executive leadership and the boardroom don't see it that way. Discouragingly, 79 percent of respondents said that getting senior management to approve funding for security training is a challenge, and 76 percent said that their executives don't really care about how software is delivered quickly, frequently, and safely; they just want it done.

Hints at the breakdown in DevOps culture also surfaced in this report. Surprisingly, 83 percent said that the ability to define clear ownership and responsibility in relation to software security is a big challenge, and 72 percent agreed that different teams and disciplines within IT remain reluctant to trust each other.

These types of leadership, trust, and software-responsibility issues were some of the challenges DevOps was designed to fix within organizations. But it appears that, in many of them, DevOps driven by security best practices remains an elusive unicorn.

It’s not really acceptable today, as software has become central to every aspect of nearly every enterprise. As software use increases, and the speed of development increases, it’s imperative that software quality and security controls keep pace.

          MLflow On-Demand Webinar and FAQ Now Available!

On August 30th, our team hosted a live webinar—Introducing MLflow: Infrastructure for a complete Machine Learning lifecycle—with Matei Zaharia, Co-Founder and Chief Technologist at Databricks. In this webinar, we walked you through MLflow, a new open source project from Databricks that aims to design an open ML platform where organizations can use any ML library […]

The post MLflow On-Demand Webinar and FAQ Now Available! appeared first on Databricks.

          ‘Straight out of the RT propaganda machine’: MP attacked for urging UK military restraint in Syria
RT | September 12, 2018 Labour’s Emily Thornberry has come under fire on social media for simply asking the UK government not to rely on “open source intelligence from terrorist groups” in the event of a reported chemical attack in Syria. Thornberry, Labour’s shadow foreign secretary, asked the government if they would consult Parliament before […]
          eBay open sources technology for controlling devices with the head
eBay has unveiled a new technology that allows people with disabilities to control their devices by moving their head. The technology, called HeadGaze, uses Apple’s ARKit and the iPhone X’s camera. HeadGaze tracks the movements of the user’s head, making it possible to […]
          1. MonoGame - Why MonoGame?

Why MonoGame?

You’re thinking about getting into game development, and you’re trying to decide how to get started. There are a number of great reasons to go with MonoGame.

  • Maybe you found Unity to be confusing and even a bit overwhelming.
  • Maybe you prefer to “live in the code.”
  • Maybe you’ve used XNA in the past, and want to work with something similar.
  • Maybe you want to create a game that can run on Macs, Windows PCs, Android phones or tablets, iPhones and iPads, or even Xbox & PlayStation… with minimal alterations or rewrites to your existing code base.

MonoGame offers game developers an opportunity to write their game once, and target MANY different platforms.

MonoGame is the open source “spiritual successor” to XNA, which was a great game development framework that is no longer supported by Microsoft.

There have been a number of quite successful games created in XNA and MonoGame. You can see a few here.

In the next post, I’ll cover what you will need, and where to find it. If you came directly to this page, you can find the complete list of articles here.

          Full Stack Engineer, Axon Records - Axon - Seattle, WA
You follow the latest in open source technologies and can intuit the fine line between a promising new practice and an overhyped fad....
From Axon - Mon, 06 Aug 2018 23:18:47 GMT - View all Seattle, WA jobs
          Senior Front End Engineer, Axon Dispatch - Axon - Seattle, WA
You follow the latest in open source technologies and can intuit the fine line between a promising new practice and an overhyped fad....
From Axon - Fri, 20 Jul 2018 23:17:40 GMT - View all Seattle, WA jobs
          Senior Back End Engineer, Axon Records - Axon - Seattle, WA
You follow the latest in open source technologies and can intuit the fine line between a promising new practice and an overhyped fad....
From Axon - Mon, 16 Jul 2018 05:18:50 GMT - View all Seattle, WA jobs
          Back End Engineer, Axon Records - Axon - Seattle, WA
You follow the latest in open source technologies and can intuit the fine line between a promising new practice and an overhyped fad....
From Axon - Tue, 12 Jun 2018 23:18:07 GMT - View all Seattle, WA jobs
          Software Engineering Manager, Axon Records - Axon - Seattle, WA
You follow the latest in open source technologies and can intuit the fine line between a promising new practice and an overhyped fad....
From Axon - Thu, 31 May 2018 23:18:10 GMT - View all Seattle, WA jobs
          Senior Full Stack Engineer, Axon Records - Axon - Seattle, WA
You follow the latest in open source technologies and can intuit the fine line between a promising new practice and an overhyped fad....
From Axon - Thu, 24 May 2018 23:18:05 GMT - View all Seattle, WA jobs
          Bixel, An Open Source 16×16 Interactive LED Array

The phrase “Go big or go home” is clearly not lost on [Adam Haile] and [Dan Ternes] of Maniacal Labs. For years they’ve been thinking of creating a giant LED matrix where each “pixel” doubled as a physical push button. Now that they’ve built up experience working on other LED projects, they finally decided it was time to take the plunge and create their masterpiece: the Bixel.

Creating the Bixel (a portmanteau of button and pixel) was no small feat. The epic build is documented in an exceptionally detailed write-up on the team’s site, in addition to a time-lapse video.

          Open Source Software Developer - IBM - Markham, ON
Through several active collaborative academic research projects with professors and graduate students from a number of Canadian and foreign universities....
From IBM - Wed, 18 Jul 2018 10:49:17 GMT - View all Markham, ON jobs
          3 open source log aggregation tools

Log aggregation systems can help with troubleshooting and other tasks. Here are three top options.

How is metrics aggregation different from log aggregation? Can’t logs include metrics? Can’t log aggregation systems do the same things as metrics aggregation systems?
These are questions I hear often. I’ve also seen vendors pitching their log aggregation system as the solution to all observability problems. Log aggregation is a valuable tool, but it isn’t normally a good tool for time-series data.
A couple of valuable features in a time-series metrics aggregation system are the regular interval and the storage system customized specifically for time-series data. The regular interval allows a user to derive real mathematical results consistently. If a log aggregation system is collecting metrics in a regular interval, it can potentially work the same way. However, the storage system isn’t optimized for the types of queries that are typical in a metrics aggregation system. These queries will take more resources and time to process using storage systems found in log aggregation tools.
So, we know a log aggregation system is likely not suitable for time-series data, but what is it good for? A log aggregation system is a great place for collecting event data. These are irregular activities that are significant. An example might be access logs for a web service. These are significant because we want to know what is accessing our systems and when. Another example would be an application error condition—because it is not a normal operating condition, it might be valuable during troubleshooting.
A handful of rules for logging:
  • DO include a timestamp
  • DO format in JSON
  • DON’T log insignificant events
  • DO log all application errors
  • MAYBE log warnings
  • DO turn on logging
  • DO write messages in a human-readable form
  • DON’T log informational data in production
  • DON’T log anything a human can’t read or react to
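The rules above can be sketched with Python's standard logging module and a minimal JSON formatter. This is an illustration, not a prescription: the formatter class and field names below are my own, not from any particular library.

```python
import json
import logging
from datetime import datetime, timezone

class JsonFormatter(logging.Formatter):
    """Render each log record as a single JSON object with a timestamp."""
    def format(self, record):
        payload = {
            "timestamp": datetime.now(timezone.utc).isoformat(),  # DO include a timestamp
            "level": record.levelname,
            "logger": record.name,
            "message": record.getMessage(),  # DO write human-readable messages
        }
        return json.dumps(payload)  # DO format in JSON

handler = logging.StreamHandler()
handler.setFormatter(JsonFormatter())
logger = logging.getLogger("webapp")
logger.addHandler(handler)
# WARNING and above: all errors get logged, informational noise does not
logger.setLevel(logging.WARNING)

logger.error("database connection failed")  # emitted
logger.info("cache warmed")                 # suppressed in production
```

The level threshold does the "DON'T log insignificant events" work for you: anything below WARNING never reaches the aggregator.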

Cloud costs

When investigating log aggregation tools, the cloud might seem like an attractive option. However, it can come with significant costs. Logs represent a lot of data when aggregated across hundreds or thousands of hosts and applications. The ingestion, storage, and retrieval of that data are expensive in cloud-based systems.
As a point of reference from a real system, a collection of around 500 nodes with a few hundred apps results in 200GB of log data per day. There’s probably room for improvement in that system, but even reducing it by half will cost nearly $10,000 per month in many SaaS offerings. This often includes retention of only 30 days, which isn’t very long if you want to look at trending data year-over-year.
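The arithmetic behind that estimate is easy to check. The per-gigabyte rate below is hypothetical, chosen only to show how quickly the figures from the example system add up; real SaaS pricing varies widely.

```python
gb_per_day = 200       # aggregate log volume from the example system
retention_days = 30

# data on disk at any given time under 30-day retention
stored_tb = gb_per_day * retention_days / 1000
print(stored_tb)  # 6.0 TB

# even at half the volume, a hypothetical $3.30/GB-ingested monthly rate
# lands near the $10,000/month ballpark cited above
monthly_gb = (gb_per_day / 2) * 30   # 3000 GB ingested per month
monthly_cost = monthly_gb * 3.30
print(round(monthly_cost))  # 9900
```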
This isn’t to discourage the use of these systems, as they can be very valuable—especially for smaller organizations. The purpose is to point out that there could be significant costs, and it can be discouraging when they are realized. The rest of this article will focus on open source and commercial solutions that are self-hosted.

Tool options

ELK
ELK, short for Elasticsearch, Logstash, and Kibana, is the most popular open source log aggregation tool on the market. It’s used by Netflix, Facebook, Microsoft, LinkedIn, and Cisco. The three components are all developed and maintained by Elastic. Elasticsearch is essentially a NoSQL, Lucene search engine implementation. Logstash is a log pipeline system that can ingest data, transform it, and load it into a store like Elasticsearch. Kibana is a visualization layer on top of Elasticsearch.
A few years ago, Beats were introduced. Beats are data collectors. They simplify the process of shipping data to Logstash. Instead of needing to understand the proper syntax of each type of log, a user can install a Beat that will export NGINX logs or Envoy proxy logs properly so they can be used effectively within Elasticsearch.
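As a sketch of what that looks like in practice, a minimal Filebeat configuration that tails NGINX access logs and ships them to Logstash might resemble the following. The paths and host are placeholders, not values from any real deployment:

```yaml
# filebeat.yml — ship NGINX access logs to a Logstash instance
filebeat.inputs:
  - type: log
    paths:
      - /var/log/nginx/access.log

output.logstash:
  hosts: ["logstash.example.com:5044"]
```

Filebeat also ships with modules (e.g., an NGINX module) that pre-configure parsing and dashboards, which is usually the easier route.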
When installing a production-level ELK stack, a few other pieces might be included, like Kafka, Redis, and NGINX. Also, it is common to replace Logstash with Fluentd, which we’ll discuss later. This system can be complex to operate, which in its early days led to a lot of problems and complaints. These have largely been fixed, but it’s still a complex system, so you might not want to try it if you’re a smaller operation.
That said, there are hosted services available so you don’t have to worry about that. A managed provider will run the stack for you, but list pricing can be a little steep if you have a lot of data. Of course, you’re probably smaller and may not have a lot of data. If the managed offerings are too expensive, you could look at something like AWS Elasticsearch Service (ES). ES is a service Amazon Web Services (AWS) offers that makes it very easy to get Elasticsearch working quickly. It also has tooling to get all AWS logs into ES using Lambda and S3. This is a much cheaper option, but there is some management required and there are a few limitations.
Elastic, the parent company of the stack, offers a more robust product that uses the open core model, which provides additional options around analytics tools, and reporting. It can also be hosted on Google Cloud Platform or AWS. This might be the best option, as this combination of tools and hosting platforms offers a cheaper solution than most SaaS options and still provides a lot of value. This system could effectively replace or give you the capability of a security information and event management (SIEM) system.
The ELK stack also offers great visualization tools through Kibana, but it lacks an alerting function. Elastic provides alerting functionality within the paid X-Pack add-on, but there is nothing built in for the open source system. Yelp has created a solution to this problem, called ElastAlert, and there are probably others. This additional piece of software is fairly robust, but it increases the complexity of an already complex system.

Graylog
Graylog has recently risen in popularity, but it got its start when Lennart Koopmann created it back in 2010. A company was born with the same name two years later. Despite its increasing use, it still lags far behind the ELK stack. This also means it has fewer community-developed features, but it can use the same Beats that the ELK stack uses. Graylog has gained praise in the Go community with the introduction of the Graylog Collector Sidecar written in Go.
Graylog uses Elasticsearch, MongoDB, and the Graylog Server under the hood. This makes it as complex to run as the ELK stack and maybe a little more. However, Graylog comes with alerting built into the open source version, as well as several other notable features like streaming, message rewriting, and geolocation.
The streaming feature allows for data to be routed to specific Streams in real time while they are being processed. With this feature, a user can see all database errors in a single Stream and web server errors in a different Stream. Alerts can even be based on these Streams as new items are added or when a threshold is exceeded. Latency is probably one of the biggest issues with log aggregation systems, and Streams eliminate that issue in Graylog. As soon as the log comes in, it can be routed to other systems through a Stream without being processed fully.
The message rewriting feature uses the open source rules engine Drools. This allows all incoming messages to be evaluated against a user-defined rules file enabling a message to be dropped (called Blacklisting), a field to be added or removed, or the message to be modified.
The coolest feature might be Graylog’s geolocation capability, which supports plotting IP addresses on a map. This is a fairly common feature and is available in Kibana as well, but it adds a lot of value—especially if you want to use this as your SIEM system. The geolocation functionality is provided in the open source version of the system.
Graylog, the company, charges for support on the open source version if you want it. It also offers an open core model for its Enterprise version that offers archiving, audit logging, and additional support. There aren’t many other options for support or hosting, so you’ll likely be on your own if you don’t use Graylog (the company).

Fluentd
Fluentd was developed at Treasure Data, and the CNCF has adopted it as an Incubating project. It was written in C and Ruby and is recommended by AWS and Google Cloud. Fluentd has become a common replacement for Logstash in many installations. It acts as a local aggregator to collect all node logs and send them off to central storage systems. It is not a log aggregation system.
It uses a robust plugin system to provide quick and easy integrations with different data sources and data outputs. Since there are over 500 plugins available, most of your use cases should be covered. If they aren’t, this sounds like an opportunity to contribute back to the open source community.
Fluentd is a common choice in Kubernetes environments due to its low memory requirements (just tens of megabytes) and its high throughput. In an environment like Kubernetes, where each pod has a Fluentd sidecar, memory consumption increases linearly with each new pod created, so a lightweight collector drastically reduces your system utilization. This is a growing problem for tools developed in Java, which were intended to run one per node, where the memory overhead wasn’t a major issue.
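To give a feel for Fluentd's configuration style, here is a minimal pipeline that tails a JSON log file and forwards it to Elasticsearch. The paths and host are placeholders, and the elasticsearch output assumes the fluent-plugin-elasticsearch plugin is installed:

```
# fluent.conf — tail an application log and forward to Elasticsearch
<source>
  @type tail
  path /var/log/app/app.log
  pos_file /var/log/fluentd/app.log.pos
  tag app.logs
  <parse>
    @type json
  </parse>
</source>

<match app.**>
  @type elasticsearch
  host elasticsearch.example.com
  port 9200
  logstash_format true
</match>
```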

          Difference between Docker swarm and Kubernetes

Learn the difference between Docker Swarm and Kubernetes, with a comparison of the two container orchestration platforms in tabular form.


When you are on the learning curve of application containerization, you will reach a stage where you come across orchestration tools for containers. If you started your learning with Docker, then Docker Swarm is the first cluster management tool you will have learned, followed by Kubernetes. So it’s time to compare Docker Swarm and Kubernetes. In this article, we will quickly see what Docker Swarm is, what Kubernetes is, and then compare the two.

What is Docker swarm?

Docker Swarm is Docker’s native tool for cluster management of Docker containers. It enables you to build a cluster of multiple nodes (VMs or physical machines) running Docker Engine, so that you can run containers across multiple machines for a highly available, fault-tolerant environment. It is pretty simple to set up and native to Docker.
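As a sketch, a replicated service on a swarm can be described in a Compose file and deployed with `docker stack deploy`. The image, service name, and ports below are placeholders for illustration:

```yaml
# stack.yml — deploy with: docker stack deploy -c stack.yml web
version: "3.7"
services:
  web:
    image: nginx:alpine
    deploy:
      replicas: 3              # swarm keeps three copies running
      restart_policy:
        condition: on-failure
    ports:
      - "80:80"
```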

What is Kubernetes?

Kubernetes is a platform for managing containerized applications, i.e., containers in a cluster environment, along with automation. It does almost the same job that swarm mode does, but in a different and enhanced way. It was developed by Google in the first place, and the project was later handed over to the CNCF. It works with container runtimes such as Docker and rkt. Kubernetes installation is a bit more complex than Swarm’s.
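For comparison, a minimal Kubernetes Deployment manifest that runs three replicas of a container might look like this. The names and image are placeholders for illustration:

```yaml
# deployment.yml — apply with: kubectl apply -f deployment.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3                  # Kubernetes keeps three pods running
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:alpine
          ports:
            - containerPort: 80
```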

Compare Docker and Kubernetes

If someone asks you for a comparison between Docker and Kubernetes, that’s not a valid question in the first place. You cannot differentiate between Docker and Kubernetes: Docker is an engine that runs containers, while Kubernetes is an orchestration platform that manages Docker containers in a cluster environment. So the two cannot be compared directly.

Difference between Docker Swarm and Kubernetes

I have added a comparison of Swarm and Kubernetes in the table below for easy readability.

          6 open source tools for making your own VPN

Want to try your hand at building your own VPN but aren’t sure where to start?

If you want to try your hand at building your own VPN but aren’t sure where to start, you’ve come to the right place. I’ll compare six of the best free and open source tools to set up and use a VPN on your own server. These VPNs work whether you want to set up a site-to-site VPN for your business or just create a remote access proxy to unblock websites and hide your internet traffic from ISPs.
Which is best depends on your needs and limitations, so take into consideration your own technical expertise, environment, and what you want to achieve with your VPN. In particular, consider the following factors:
  • VPN protocol
  • Number of clients and types of devices
  • Server distro compatibility
  • Technical expertise required

Algo
Algo was designed from the bottom up to create VPNs for corporate travelers who need a secure proxy to the internet. It “includes only the minimal software you need,” meaning you sacrifice extensibility for simplicity. Algo is based on StrongSwan but cuts out all the things that you don’t need, which has the added benefit of removing security holes that a novice might otherwise not notice.
As an added bonus, it even blocks ads! Algo supports only the IKEv2 protocol and WireGuard. Because IKEv2 support is built into most devices these days, it doesn’t require a client app like OpenVPN. Algo can be deployed using Ansible on Ubuntu (the preferred option), Windows, Red Hat, CentOS, and FreeBSD. Setup is automated using Ansible, which configures the server based on your answers to a short set of questions. It’s also very easy to tear down and re-deploy on demand.
Algo is probably the easiest and fastest VPN to set up and deploy on this list. It’s extremely tidy and well thought out. If you don’t need any of the more advanced features offered by other tools and just need a secure proxy, it’s a great option. Note that Algo explicitly states it’s not meant for geo-unblocking or evading censorship, and was primarily designed for confidentiality.

Streisand
Streisand can be installed on any Ubuntu 16.04 server using a single command; the process takes about 10 minutes. It supports L2TP, OpenConnect, OpenSSH, OpenVPN, Shadowsocks, Stunnel, Tor bridge, and WireGuard. Depending on which protocol you choose, you may need to install a client app.
In many ways, Streisand is similar to Algo, but it offers more protocols and customization. This takes a bit more effort to manage and secure but is also more flexible. Note Streisand does not support IKEv2. I would say Streisand is more effective for bypassing censorship in places like China and Turkey due to its versatility, but Algo is easier and faster to set up.
The setup is automated using Ansible, so there’s not much technical expertise required. You can easily add more users by sending them custom-generated connection instructions, which include an embedded copy of the server’s SSL certificate.
Tearing down Streisand is a quick and painless process, and you can re-deploy on demand.

OpenVPN
OpenVPN requires both client and server applications to set up VPN connections using the protocol of the same name. OpenVPN can be tweaked and customized to fit your needs, but it also requires the most technical expertise of the tools covered here. Both remote access and site-to-site configurations are supported; the former is what you’ll need if you plan on using your VPN as a proxy to the internet. Because client apps are required to use OpenVPN on most devices, the end user must keep them updated.
Server-side, you can opt to deploy in the cloud or on your Linux server. Compatible distros include CentOS, Ubuntu, Debian, and openSUSE. Client apps are available for Windows, MacOS, iOS, and Android, and there are unofficial apps for other devices. Enterprises can opt to set up an OpenVPN Access Server, but that’s probably overkill for individuals, who will want the Community Edition.
OpenVPN is relatively easy to configure with static key encryption, but it isn’t all that secure. Instead, I recommend setting it up with easy-rsa, a key management package you can use to set up a public key infrastructure. This allows you to connect multiple devices at a time and protect them with perfect forward secrecy, among other benefits. OpenVPN uses SSL/TLS for encryption, and you can specify DNS servers in your configuration.
OpenVPN can traverse firewalls and NAT firewalls, which means you can use it to bypass gateways and firewalls that might otherwise block the connection. It supports both TCP and UDP transports.

strongSwan
You might have come across a few different VPN tools with “Swan” in the name. FreeS/WAN, OpenSwan, LibreSwan, and strongSwan are all forks of the same project, and the lattermost is my personal favorite. Server-side, strongSwan runs on Linux 2.6, 3.x, and 4.x kernels, Android, FreeBSD, macOS, iOS, and Windows.
StrongSwan uses the IKEv2 protocol and IPSec. Compared to OpenVPN, IKEv2 connects much faster while offering comparable speed and security. This is useful if you prefer a protocol that doesn’t require installing an additional app on the client, as most newer devices manufactured today natively support IKEv2, including Windows, MacOS, iOS, and Android.
StrongSwan is not particularly easy to use, and despite decent documentation, it uses a different vocabulary than most other tools, which can be confusing. Its modular design makes it great for enterprises, but that also means it’s not the most streamlined. It’s certainly not as straightforward as Algo or Streisand.
Access control can be based on group memberships using X.509 attribute certificates, a feature unique to strongSwan. It supports EAP authentication methods for integration into other environments like Windows Active Directory. StrongSwan can traverse NAT firewalls.

SoftEther
SoftEther started out as a project by a graduate student at the University of Tsukuba in Japan. SoftEther VPN Server and VPN Bridge run on Windows, Linux, OSX, FreeBSD, and Solaris, while the client app works on Windows, Linux, and MacOS. VPN Bridge is mainly for enterprises that need to set up site-to-site VPNs, so individual users will just need the server and client programs to set up remote access.
SoftEther supports the OpenVPN, L2TP, SSTP, and EtherIP protocols, but its own SoftEther protocol claims to be able to be immunized against deep packet inspection thanks to “Ethernet over HTTPS” camouflage. SoftEther also makes a few tweaks to reduce latency and increase throughput. Additionally, SoftEther includes a clone function that allows you to easily transition from OpenVPN to SoftEther.
SoftEther can traverse NAT firewalls and bypass firewalls. On restricted networks that permit only ICMP and DNS packets, you can utilize SoftEther’s VPN over ICMP or VPN over DNS options to penetrate the firewall. SoftEther works with both IPv4 and IPv6.
SoftEther is easier to set up than OpenVPN and strongSwan but is a bit more complicated than Streisand and Algo.

WireGuard
WireGuard is the newest tool on this list; it's so new that it’s not even finished yet. That being said, it offers a fast and easy way to deploy a VPN. It aims to improve on IPSec by making it simpler and leaner like SSH.
Like OpenVPN, WireGuard is both a protocol and a software tool used to deploy a VPN that uses said protocol. A key feature is “crypto key routing,” which associates public keys with a list of IP addresses allowed inside the tunnel.
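Crypto key routing shows up directly in a WireGuard config file: each peer’s public key is bound to the set of addresses allowed through the tunnel for that peer. The keys and addresses below are placeholders for illustration:

```ini
; /etc/wireguard/wg0.conf (server side) — bring up with: wg-quick up wg0
[Interface]
Address = 10.0.0.1/24
ListenPort = 51820
PrivateKey = <server-private-key>

[Peer]
; laptop: only 10.0.0.2 is accepted from, and routed to, this key
PublicKey = <laptop-public-key>
AllowedIPs = 10.0.0.2/32
```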
WireGuard is available for Ubuntu, Debian, Fedora, CentOS, MacOS, Windows, and Android. WireGuard works on both IPv4 and IPv6.
WireGuard is much lighter than most other VPN protocols, and it transmits packets only when data needs to be sent.
The developers say WireGuard should not yet be trusted because it hasn’t been fully audited yet, but you’re welcome to give it a spin. It could be the next big thing!

Homemade VPN vs. commercial VPN

Making your own VPN adds a layer of privacy and security to your internet connection, but if you’re the only one using it, then it would be relatively easy for a well-equipped third party, such as a government agency, to trace activity back to you.
Furthermore, if you plan to use your VPN to unblock geo-locked content, a homemade VPN may not be the best option. Since you’ll only be connecting from a single IP address, your VPN server is fairly easy to block.
Good commercial VPNs don’t have these issues. With a provider like ExpressVPN, you share the server’s IP address with dozens or even hundreds of other users, making it nigh-impossible to track a single user’s activity. You also get a huge range of hundreds or thousands of servers to choose from, so if one has been blacklisted, you can just switch to another.
The tradeoff of a commercial VPN, however, is that you must trust the provider not to snoop on your internet traffic. Be sure to choose a reputable provider with a clear no-logs policy.

          Turn your vim editor into a productivity powerhouse

These 20+ useful commands will enhance your experience using the vim editor.

Editor's note: The headline and article originally referred to the "vi editor." It has been updated to the correct name of the editor: "vim."
A versatile and powerful editor, vim includes a rich set of potent commands that make it a popular choice for many users. This article specifically looks at commands that are not enabled by default in vim but are nevertheless useful. The commands recommended here are expected to be set in a vim configuration file. Though it is possible to enable commands individually from each vim session, the purpose of this article is to create a highly productive environment out of the box.

Before you begin

The commands or configurations discussed here go into the vim startup configuration file, vimrc, located in the user home directory. Follow the instructions below to set the commands in vimrc:
(Note: The vimrc file is also used for system-wide configurations in Linux, such as /etc/vimrc or /etc/vim/vimrc. In this article, we'll consider only user-specific vimrc, present in user home folder.)
In Linux:
  • Open the file with vi $HOME/.vimrc
  • Type or copy/paste the commands in the cheat sheet at the end of this article
  • Save and close (:wq)
In Windows:
  • First, install gvim
  • Open gvim
  • Click Edit --> Startup settings, which opens the _vimrc file
  • Type or copy/paste the commands in the cheat sheet at the end of this article
  • Click File --> Save
Let's delve into the individual vi productivity commands. These commands are classified into the following categories:
  1. Indentation & Tabs
  2. Display & Format
  3. Search
  4. Browse & Scroll
  5. Spell
  6. Miscellaneous

1. Indentation & Tabs

To automatically align the indentation of a line in a file:
set autoindent
Smart Indent uses the code syntax and style to align:
set smartindent
Tip: vim is language-aware and provides a default setting that works efficiently based on the programming language used in your file. There are many default configuration commands, including cindent, cinoptions, indentexpr, etc., which are not explained here. syn is a helpful command that shows or sets the file syntax.
To set the number of spaces to display for a tab:
set tabstop=4
To set the number of spaces to display for a “shift operation” (such as ‘>>’ or ‘<<’):
set shiftwidth=4
If you prefer to use spaces instead of tabs, this option inserts spaces when the Tab key is pressed. This may cause problems for file types, such as Makefiles, that rely on literal tab characters. In such cases, you may set this option based on the file type (see autocmd).
set expandtab
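One way to scope expandtab by file type, as suggested above, is an autocmd in your vimrc. This is a sketch; adjust the file types to taste:

```vim
" spaces for most files...
set expandtab
" ...but keep literal tabs where the file type demands them
autocmd FileType make setlocal noexpandtab
```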

2. Display & Format

To show line numbers:
set number
To wrap text when it crosses the maximum line width:
set textwidth=80
To wrap text based on a number of columns from the right side:
set wrapmargin=2
To identify open and close brace positions when you traverse through the file:
set showmatch

3. Search

To highlight the searched term in a file:
set hlsearch
To perform incremental searches as you type:
set incsearch
To search ignoring case (many users prefer not to use this command; set it only if you think it will be useful):
set ignorecase
To search without considering ignorecase when both ignorecase and smartcase are set and the search pattern contains uppercase:
set smartcase
For example, suppose the file contains:
test
Test
When both ignorecase and smartcase are set, a search for “test” finds and highlights both lines. A search for “Test” finds and highlights only the second line.

4. Browse & Scroll

For a better visual experience, you may prefer to have the cursor somewhere in the middle rather than on the first line. The following option sets the cursor position to the 5th row.
set scrolloff=5
With scrolloff=0, the cursor can reach the very first or last visible line; with scrolloff=5, at least five lines of context remain visible above and below the cursor.
Tip: set sidescrolloff is useful if you also set nowrap.
To display a permanent status bar at the bottom of the vim screen showing the filename, row number, column number, etc.:
set laststatus=2

5. Spell

vim has a built-in spell-checker that is quite useful for text editing as well as coding. vim recognizes the file type and checks the spelling of comments only in code. Use the following command to turn on spell-check for the English language:
set spell spelllang=en_us

6. Miscellaneous

Disable creating backup file: When this option is on, vim creates a backup of the previous edit. If you do not want this feature, disable it as shown below. Backup files are named with a tilde (~) at the end of the filename.
set nobackup
Disable creating a swap file: When this option is on, vim creates a swap file that exists until you stop editing the file. The swap file is used to recover a file in the event of a crash or an editing conflict. Swap files are hidden files that begin with . and end with .swp.
set noswapfile
Suppose you need to edit multiple files in the same vim session and switch between them. An annoying feature that's not readily apparent is that the working directory is the one from which you opened the first file. Often it is useful to automatically switch the working directory to that of the file being edited. To enable this option:
set autochdir
vim maintains an undo history that lets you undo changes. By default, this history is active only until the file is closed. vim includes a nifty feature that maintains the undo history even after the file is closed, which means you may undo your changes even after the file is saved, closed, and reopened. The undo file is a hidden file saved with the .un~ extension.
set undofile
To sound an audible alert bell when an error occurs (for example, when you try to move past the beginning or end of a line):
set errorbells
If you prefer, you may set visual alert bells:
set visualbell


vim provides long-format as well as short-format commands. Either format can be used to set or unset the configuration.
Long format for the autoindent command:
set autoindent
Short format for the autoindent command:
set ai
To see the current configuration setting of a command without changing its current value, use ? at the end:
set autoindent?
To unset or turn off a command, most commands take no as a prefix:
set noautoindent
It is possible to set a command for one file but not for the global configuration. To do this, open the file and type :, followed by the set command. This configuration is effective only for the current file editing session.
For help on a command:
:help autoindent
Note: The commands listed here were tested on Linux with Vim version 7.4 (2013 Aug 10) and Windows with Vim 8.0 (2016 Sep 12).
These useful commands are sure to enhance your vim experience. Which other commands do you recommend?

Cheat sheet

Copy/paste this list of commands in your vimrc file:

" Indentation & Tabs

set autoindent

set smartindent

set tabstop=4

set shiftwidth=4

set expandtab

set smarttab

" Display & format

set number

set textwidth=80

set wrapmargin=2

set showmatch

" Search

set hlsearch

set incsearch

set ignorecase

set smartcase

" Browse & Scroll

set scrolloff=5

set laststatus=2

" Spell

set spell spelllang=en_us

" Miscellaneous

set nobackup

set noswapfile

set autochdir

set undofile

set visualbell

set errorbells

Understanding the State of Container Networking

Containers have revolutionized the way applications are developed and deployed, but what about the network?

By Sean Michael Kerner | Posted Sep 4, 2018
Container networking is a fast-moving space with many different pieces. In a session at the Open Source Summit, Frederick Kautz, principal software engineer at Red Hat, outlined the state of container networking today and where it is headed in the future.

Containers have become increasingly popular in recent years, particularly the use of Docker containers, but what exactly are containers?

Kautz explained that containers make use of the Linux kernel's ability to create multiple isolated user-space areas. The isolation is enabled by two core elements: control groups (cgroups), which limit and isolate the resource usage of process groups, and namespaces, which partition key kernel structures for process IDs, hostnames, users, and network functions.

Container Networking Types

While there are different container technologies and orchestration systems, when it comes to networking, Kautz said there are really just four core networking primitives:

Bridge mode hooks a container's networking into a specific Linux bridge; every container attached to the bridge sees its traffic.

Kautz explained that Host mode is where the container uses the same networking space as the host. As such, whatever IP addresses the host has are shared with the containers.

In an Overlay networking approach, a virtual networking model sits on top of the underlay and the physical networking hardware.

The Underlay approach makes use of the core fabric and hardware network directly.

To make matters somewhat more confusing, Kautz said that multiple container networking models are often used together, for example a bridge together with an overlay.

Network Connections

Additionally, container networking models can benefit from MACVLAN and IPVLAN, which tie containers to specific MAC or IP addresses for additional isolation.

 Kautz added that SR-IOV is a hardware mechanism that ties a physical Network Interface Card (NIC) to containers providing direct access.
Container Networking


On top of the different container networking models are different approaches to Software-Defined Networking (SDN). For the management plane, there are functionally two core approaches at this point: the Container Network Interface (CNI), which is used by Kubernetes, and the libnetwork interface, which is used by Docker.

Kautz noted that with Docker recently announcing support for Kubernetes, it's likely that CNI support will be following as well.

Among the different technologies for container networking today are:

Contiv - backed by Cisco; provides a VXLAN overlay model

Flannel/Calico - backed by Tigera; provides an overlay network between each host and allocates a separate subnet per host

Weave - backed by Weaveworks; uses a standard port number for containers

Contrail - backed by Juniper Networks and open sourced as the TungstenFabric project; provides policy support and gateway services

OpenDaylight - an open source effort that integrates with OpenStack Kuryr

OVN - an open source effort that creates logical switches and routers

Upcoming Efforts

While there are already multiple production-grade solutions for container networking, the technology continues to evolve. Among the newer approaches is using eBPF (extended Berkeley Packet Filter) for networking control, which is used by the Cilium open source project.

Additionally, there is an effort to use shared memory, rather than physical NICs, to help enable networking. Kautz also highlighted the emerging area of service mesh technology, in particular the Istio project, which is backed by Google. With a service mesh, networking is offloaded to the mesh, which provides load balancing, failure recovery, and service discovery, among other capabilities.

Organizations today typically choose a single SDN approach that will connect into a Kubernetes CNI, but that could change in the future thanks to the Multus CNI effort. With Multus CNI multiple CNI plugins can be used, enabling multiple SDN technologies to run in a Kubernetes cluster.

Sean Michael Kerner is a senior editor at EnterpriseNetworkingPlanet. Follow him on Twitter @TechJournalist.

8 Linux commands for effective process management

Manage your applications throughout their lifecycles with these key commands.

Generally, an application process' lifecycle has three main states: start, run, and stop. Each state can and should be managed carefully if we want to be competent administrators. These eight commands can be used to manage processes through their lifecycles.

Starting a process

The easiest way to start a process is to type its name at the command line and press Enter. If you want to start an Nginx web server, type nginx. Perhaps you just want to check the version.

alan@workstation:~$ nginx

alan@workstation:~$ nginx -v

nginx version: nginx/1.14.0

Viewing your executable path

The above demonstration of starting a process assumes the executable file is located in your executable path. Understanding this path is key to reliably starting and managing a process. Administrators often customize this path for their desired purpose. You can view your executable path using echo $PATH.

alan@workstation:~$ echo $PATH



Use the which command to view the full path of an executable file.

alan@workstation:~$ which nginx                                                    


I will use the popular web server software Nginx for my examples. Let's assume that Nginx is installed. If the command which nginx returns nothing, then Nginx was not found, because which searches only your defined executable path. There are three ways to remedy a situation where a process cannot be started simply by name. The first is to type the full path, although I'd rather not have to type all of that. Would you?

alan@workstation:~$ /home/alan/web/prod/nginx/sbin/nginx -v

nginx version: nginx/1.14.0

The second solution would be to install the application in a directory in your executable's path. However, this may not be possible, particularly if you don't have root privileges. The third solution is to update your executable path environment variable to include the directory where the specific application you want to use is installed. This solution is shell-dependent. For example, Bash users would need to edit the PATH= line in their .bashrc file.
Now, repeat your echo and which commands or try to check the version. Much easier!

alan@workstation:~$ echo $PATH


alan@workstation:~$ which nginx

/home/alan/web/prod/nginx/sbin/nginx

alan@workstation:~$ nginx -v                                                

nginx version: nginx/1.14.0
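The lookup that the shell and the which command perform can be sketched with Python's standard library, which searches the same executable path. This is an illustrative aside, not part of the original article; "sh" is used because it exists on virtually every system, and you could substitute "nginx" to reproduce the example above.

```python
import shutil

# shutil.which performs the same search of the executable PATH that the
# shell (and the `which` command) performs when you type a program name.
print(shutil.which("sh"))                 # e.g. /bin/sh
print(shutil.which("no-such-command-x"))  # None: not on the PATH
```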

Keeping a process running


A process may not continue to run when you log out or close your terminal. This special case can be avoided by preceding the command you want to run with the nohup command. Also, appending an ampersand (&) will send the process to the background and allow you to continue using the terminal. For example, suppose you want to run nginx in the background:
nohup nginx &
One nice thing nohup does is return the running process's PID. I'll talk more about the PID next.
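The same detach-and-background behavior can be sketched in Python. This is an illustrative equivalent rather than the article's own example, with sleep standing in for a long-running server:

```python
import subprocess

# Launch a long-running command detached from the terminal session,
# roughly what `nohup <command> &` does: start_new_session gives the
# child its own session, so closing the terminal won't SIGHUP it.
# `sleep 300` stands in for a real daemon such as nginx.
proc = subprocess.Popen(
    ["sleep", "300"],
    stdout=subprocess.DEVNULL,
    stderr=subprocess.DEVNULL,
    start_new_session=True,
)
print(proc.pid)   # like nohup, we get a PID back for later management
proc.terminate()  # clean up the example process
```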

Manage a running process

Each process is given a unique process identification number (PID). This number is what we use to manage each process. We can also use the process name, as I'll demonstrate below. There are several commands that can check the status of a running process. Let's take a quick look at these.


The most common is ps. The default output of ps is a simple list of the processes running in your current terminal. As you can see below, the first column contains the PID.

alan@workstation:~$ ps

  PID TTY          TIME CMD

23989 pts/0    00:00:00 bash

24148 pts/0    00:00:00 ps

I'd like to view the Nginx process I started earlier. To do this, I tell ps to show me every running process (-e) and a full listing (-f).

alan@workstation:~$ ps -ef

UID        PID  PPID  C STIME TTY          TIME CMD

root         1     0  0 Aug18 ?        00:00:10 /sbin/init splash

root         2     0  0 Aug18 ?        00:00:00 [kthreadd]

root         4     2  0 Aug18 ?        00:00:00 [kworker/0:0H]

root         6     2  0 Aug18 ?        00:00:00 [mm_percpu_wq]

root         7     2  0 Aug18 ?        00:00:00 [ksoftirqd/0]

root         8     2  0 Aug18 ?        00:00:20 [rcu_sched]

root         9     2  0 Aug18 ?        00:00:00 [rcu_bh]

root        10     2  0 Aug18 ?        00:00:00 [migration/0]

root        11     2  0 Aug18 ?        00:00:00 [watchdog/0]

root        12     2  0 Aug18 ?        00:00:00 [cpuhp/0]

root        13     2  0 Aug18 ?        00:00:00 [cpuhp/1]

root        14     2  0 Aug18 ?        00:00:00 [watchdog/1]

root        15     2  0 Aug18 ?        00:00:00 [migration/1]

root        16     2  0 Aug18 ?        00:00:00 [ksoftirqd/1]

alan     20506 20496  0 10:39 pts/0    00:00:00 bash

alan     20520  1454  0 10:39 ?        00:00:00 nginx: master process nginx

alan     20521 20520  0 10:39 ?        00:00:00 nginx: worker process

alan     20526 20506  0 10:39 pts/0    00:00:00 man ps

alan     20536 20526  0 10:39 pts/0    00:00:00 pager

alan     20564 20496  0 10:40 pts/1    00:00:00 bash

You can see the Nginx processes in the output of the ps command above. The command displayed almost 300 lines, but I shortened it for this illustration. As you can imagine, trying to handle 300 lines of process information is a bit messy. We can pipe this output to grep to filter out nginx.

alan@workstation:~$ ps -ef |grep nginx

alan     20520  1454  0 10:39 ?        00:00:00 nginx: master process nginx

alan     20521 20520  0 10:39 ?        00:00:00 nginx: worker process

That's better. We can quickly see that Nginx has PIDs of 20520 and 20521.


The pgrep command was created to further simplify things by removing the need to call grep separately.

alan@workstation:~$ pgrep nginx

20520

20521

Suppose you are in a hosting environment where multiple users are running several different instances of Nginx. You can exclude others from the output with the -u option.

alan@workstation:~$ pgrep -u alan nginx

20520

20521


Another nifty one is pidof. This command will check the PID of a specific binary even if another process with the same name is running. To set up an example, I copied my Nginx to a second directory and started it with the prefix set accordingly. In real life, this instance could be in a different location, such as a directory owned by a different user. If I run both Nginx instances, the ps -ef output shows all their processes.

alan@workstation:~$ ps -ef |grep nginx

alan     20881  1454  0 11:18 ?        00:00:00 nginx: master process ./nginx -p /home/alan/web/prod/nginxsec

alan     20882 20881  0 11:18 ?        00:00:00 nginx: worker process

alan     20895  1454  0 11:19 ?        00:00:00 nginx: master process nginx

alan     20896 20895  0 11:19 ?        00:00:00 nginx: worker process

Using grep or pgrep will show PID numbers, but we may not be able to discern which instance is which.

alan@workstation:~$ pgrep nginx

20881

20882

20895

20896

The pidof command can be used to determine the PID of each specific Nginx instance.

alan@workstation:~$ pidof /home/alan/web/prod/nginxsec/sbin/nginx

20882 20881

alan@workstation:~$ pidof /home/alan/web/prod/nginx/sbin/nginx

20896 20895


The top command has been around a long time and is very useful for viewing details of running processes and quickly identifying issues such as memory hogs. Its default view is shown below.

top - 11:56:28 up 1 day, 13:37,  1 user,  load average: 0.09, 0.04, 0.03

Tasks: 292 total,   3 running, 225 sleeping,   0 stopped,   0 zombie

%Cpu(s):  0.1 us,  0.2 sy,  0.0 ni, 99.7 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st

KiB Mem : 16387132 total, 10854648 free,  1859036 used,  3673448 buff/cache

KiB Swap:        0 total,        0 free,        0 used. 14176540 avail Mem


  PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND

17270 alan      20   0 3930764 247288  98992 R   0.7  1.5   5:58.22 gnome-shell

20496 alan      20   0  816144  45416  29844 S   0.5  0.3   0:22.16 gnome-terminal-

21110 alan      20   0   41940   3988   3188 R   0.1  0.0   0:00.17 top

    1 root      20   0  225564   9416   6768 S   0.0  0.1   0:10.72 systemd

    2 root      20   0       0      0      0 S   0.0  0.0   0:00.01 kthreadd

    4 root       0 -20       0      0      0 I   0.0  0.0   0:00.00 kworker/0:0H

    6 root       0 -20       0      0      0 I   0.0  0.0   0:00.00 mm_percpu_wq

    7 root      20   0       0      0      0 S   0.0  0.0   0:00.08 ksoftirqd/0

The update interval can be changed by typing the letter s followed by the number of seconds you prefer for updates. To make it easier to monitor our example Nginx processes, we can call top and pass the PID(s) using the -p option. This output is much cleaner.

alan@workstation:~$ top -p20881 -p20882 -p20895 -p20896

Tasks:   4 total,   0 running,   4 sleeping,   0 stopped,   0 zombie

%Cpu(s):  2.8 us,  1.3 sy,  0.0 ni, 95.9 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st

KiB Mem : 16387132 total, 10856008 free,  1857648 used,  3673476 buff/cache

KiB Swap:        0 total,        0 free,        0 used. 14177928 avail Mem


  PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND

20881 alan      20   0   12016    348      0 S   0.0  0.0   0:00.00 nginx

20882 alan      20   0   12460   1644    932 S   0.0  0.0   0:00.00 nginx

20895 alan      20   0   12016    352      0 S   0.0  0.0   0:00.00 nginx

20896 alan      20   0   12460   1628    912 S   0.0  0.0   0:00.00 nginx

It is important to correctly determine the PID when managing processes, particularly stopping one. Also, if using top in this manner, any time one of these processes is stopped or a new one is started, top will need to be informed of the new ones.

Stopping a process


Interestingly, there is no stop command. In Linux, there is the kill command, which is used to send a signal to a process. The most commonly used signals are "terminate" (SIGTERM) and "kill" (SIGKILL). However, there are many more. Below are some examples. The full list can be shown with kill -L.

 1) SIGHUP       2) SIGINT       3) SIGQUIT      4) SIGILL       5) SIGTRAP

 6) SIGABRT      7) SIGBUS       8) SIGFPE       9) SIGKILL     10) SIGUSR1

11) SIGSEGV     12) SIGUSR2     13) SIGPIPE     14) SIGALRM     15) SIGTERM

Notice signal number nine is SIGKILL. Usually, we issue a command such as kill -9 20896. The default signal is 15, which is SIGTERM. Keep in mind that many applications have their own method for stopping. Nginx uses a -s option for passing a signal such as "stop" or "reload." Generally, I prefer to use an application's specific method to stop an operation. However, I'll demonstrate the kill command to stop Nginx process 20896 and then confirm it is stopped with pgrep. The PID 20896 no longer appears.

alan@workstation:~$ kill -9 20896


alan@workstation:~$ pgrep nginx

20881

20882

20895


The command pkill is similar to pgrep in that it can search by name. This means you have to be very careful when using pkill. In my example with Nginx, I might not choose to use it if I only want to kill one Nginx instance. I can pass the Nginx option -s stop to a specific instance to kill it, or I need to use grep to filter on the full ps output.

/home/alan/web/prod/nginx/sbin/nginx -s stop

/home/alan/web/prod/nginxsec/sbin/nginx -s stop

If I want to use pkill, I can include the -f option to ask pkill to filter across the full command line argument. This of course also applies to pgrep. So, first I can check with pgrep -a before issuing the pkill -f.

alan@workstation:~$ pgrep -a nginx

20881 nginx: master process ./nginx -p /home/alan/web/prod/nginxsec

20882 nginx: worker process

20895 nginx: master process nginx

20896 nginx: worker process

I can also narrow down my result with pgrep -f. The same argument used with pkill stops the process.

alan@workstation:~$ pgrep -f nginxsec

20881

alan@workstation:~$ pkill -f nginxsec

The key thing to remember with pgrep (and especially pkill) is that you must always be sure that your search result is accurate so you aren't unintentionally affecting the wrong processes.
Most of these commands have many command line options, so I always recommend reading the man page on each one. While most of these exist across platforms such as Linux, Solaris, and BSD, there are a few differences. Always test and be ready to correct as needed when working at the command line or writing scripts.
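The stop-and-verify flow described above can also be sketched in a script. This is an illustrative Python equivalent using only the standard library, with sleep standing in for the server process:

```python
import os
import signal
import subprocess

# Start a throwaway child to stand in for a server such as nginx.
proc = subprocess.Popen(["sleep", "300"])

# Send SIGTERM (signal 15, the default used by `kill <pid>`).
os.kill(proc.pid, signal.SIGTERM)

# Wait for it to exit, then confirm it is gone, as pgrep would.
proc.wait(timeout=5)
print(proc.poll() is not None)  # True: the process has exited
```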

3 top open source JavaScript chart libraries

Charts and other visualizations make it easier to convey information from your data.

Charts and graphs are important for visualizing data and making websites appealing. Visual presentations make it easier to analyze big chunks of data and convey information. JavaScript chart libraries enable you to visualize data in a stunning, easy to comprehend, and interactive manner and improve your website's design.
In this article, learn about three top open source JavaScript chart libraries.

1. Chart.js

Chart.js is an open source JavaScript library that allows you to create animated, beautiful, and interactive charts on your application. It's available under the MIT License.
With Chart.js, you can create various impressive charts and graphs, including bar charts, line charts, area charts, linear scale, and scatter charts. It is completely responsive across various devices and utilizes the HTML5 Canvas element for rendering.
Here is example code that draws a bar chart using the library. We'll include it in this example using the Chart.js content delivery network (CDN). Note that the data used is for illustration purposes only.

<!DOCTYPE html>
<html>
<head>
  <!-- Chart.js from its CDN; the exact URL was stripped from this page -->
  <script src="https://cdn.jsdelivr.net/npm/chart.js"></script>
</head>
<body>
  <canvas id="bar-chart" width="300" height="150"></canvas>
  <script>
    // Labels, colors, and data values pair up by array index.
    new Chart(document.getElementById("bar-chart"), {
      type: "bar",
      data: {
        labels: ["Africa", "Latin America", "Asia", "Europe"],
        datasets: [{
          backgroundColor: ["red", "blue", "green", "orange"],
          data: [3, 4, 6, 2]
        }]
      }
    });
  </script>
</body>
</html>
As you can see from this code, bar charts are constructed by setting type to bar. You can change the direction of the bar to other types—such as setting type to horizontalBar.
The bars' colors are set by providing the type of color in the backgroundColor array parameter.
The colors are allocated to the label and data that share the same index in their corresponding array. For example, "Latin America," the second label, will be set to "blue" (the second color) and 4 (the second number in the data).
Here is the output of this code.

2. Chartist.js

Chartist.js is a simple JavaScript animation library that allows you to create customizable and beautiful responsive charts and other designs. The open source library is available under the WTFPL or MIT License.
The library was developed by a group of developers who were dissatisfied with existing charting tools, so it offers wonderful functionalities to designers and developers.
After including the Chartist.js library and its CSS files in your project, you can use them to create various types of charts, including animations, bar charts, and line charts. It utilizes SVG to render the charts dynamically.
Here is an example of code that draws a pie chart using the library.

<!DOCTYPE html>
<html>
<head>
    <!-- Chartist CSS from its CDN; the exact URL was stripped from this page -->
    <link href="https://cdn.jsdelivr.net/chartist.js/latest/chartist.min.css" rel="stylesheet" type="text/css" />
    <style>
        .ct-series-a .ct-slice-pie {
            fill: hsl(100, 20%, 50%); /* filling pie slices */
            stroke: white; /* giving pie slices an outline */
            stroke-width: 5px;  /* outline width */
        }
        .ct-series-b .ct-slice-pie {
            fill: hsl(10, 40%, 60%);
            stroke: white;
            stroke-width: 5px;
        }
        .ct-series-c .ct-slice-pie {
            fill: hsl(120, 30%, 80%);
            stroke: white;
            stroke-width: 5px;
        }
        .ct-series-d .ct-slice-pie {
            fill: hsl(90, 70%, 30%);
            stroke: white;
            stroke-width: 5px;
        }
        .ct-series-e .ct-slice-pie {
            fill: hsl(60, 140%, 20%);
            stroke: white;
            stroke-width: 5px;
        }
    </style>
</head>
<body>
    <div class="ct-chart ct-golden-section"></div>
    <script src="https://cdn.jsdelivr.net/chartist.js/latest/chartist.min.js"></script>
    <script>
      var data = {
            series: [45, 35, 20]
      };
      var sum = function(a, b) { return a + b };
      new Chartist.Pie('.ct-chart', data, {
        labelInterpolationFnc: function(value) {
          return Math.round(value / data.series.reduce(sum) * 100) + '%';
        }
      });
    </script>
</body>
</html>

Instead of specifying various style-related components of your project, the Chartist JavaScript library allows you to use various pre-built CSS styles. You can use them to control the appearance of the created charts.
For example, the pre-created CSS class .ct-chart is used to build the container for the pie chart. And, the .ct-golden-section class is used to get the aspect ratios, which scale with responsive designs and saves you the hassle of calculating fixed dimensions. Chartist also provides other classes of container ratios you can utilize in your project. For styling the various pie slices, you can use the default .ct-series-a class. The letter a is iterated with every series count (a, b, c, etc.) such that it corresponds with the slice to be styled.
The Chartist.Pie method is used for creating a pie chart. To create another type of chart, such as a line chart, use Chartist.Line.
Here is the output of the code.

3. D3.js

D3.js is another great open source JavaScript chart library. It's available under the BSD license. D3 (Data-Driven Documents) is mainly used for manipulating documents and adding interactivity based on the provided data.
You can use this amazing library to visualize your data using HTML5, SVG, and CSS and make your website appealing. Essentially, D3 enables you to bind data to the Document Object Model (DOM) and then use data-driven functions to make changes to the document.
Here is example code that draws a simple bar chart using the library.

<!DOCTYPE html>
<html>
<head>
    <style>
    .chart div {
      font: 15px sans-serif;
      background-color: lightblue;
      text-align: right;
      color: white;
      font-weight: bold;
    }
    </style>
</head>
<body>
    <div class="chart"></div>
    <!-- D3 from its CDN; the exact URL was stripped from this page -->
    <script src="https://d3js.org/d3.v5.min.js"></script>
    <script>
      var data = [342, 222, 169, 259, 173];
      d3.select(".chart")
        .selectAll("div")
        .data(data)
        .enter()
        .append("div")
          .style("width", function(d) { return d + "px"; })
          .text(function(d) { return d; });
    </script>
</body>
</html>


The main concept in using the D3 library is to first apply CSS-style selections to point to the DOM nodes and then apply operators to manipulate them—just like in other DOM frameworks like jQuery.
After the data is bound to a document, the .enter() function is invoked to build new nodes for incoming data. All the methods invoked after the .enter() function will be called for every item in the data.
Here is the output of the code.

Wrapping up

JavaScript charting libraries provide you with powerful tools for implementing data visualization on your web properties. With these three open source libraries, you can enhance the beauty and interactivity of your websites.
Do you know of another powerful frontend library for creating JavaScript animation effects? Please let us know in the comment section below.
Two open source alternatives to Flash Player

Adobe will end support for Flash Media Player in 2020, but there are still a lot of Flash videos out there that need to be watched. Here are two open source alternatives that are trying to help.

In July 2017, Adobe sounded the death knell for its Flash Media Player, announcing it would end support for the once-ubiquitous online video player in 2020. In truth, however, Flash has been on the decline for the past eight years following a rash of zero-day attacks that damaged its reputation. Its future dimmed after Apple announced in 2010 it would not support the technology, and its demise accelerated in 2016 after Google stopped enabling Flash by default (in favor of HTML5) in the Chrome browser.
Even so, Adobe is still issuing monthly updates for the software, which has slipped from being used on 28.5% of all websites in 2011 to only 4.4% as of August 2018. More evidence of Flash’s decline: Google director of engineering Parisa Tabriz said the number of Chrome users who access Flash content via the browser has declined from 80% in 2014 to under eight percent in 2018. Although few* video creators are publishing in Flash format today, there are still a lot of Flash videos out there that people will want to access for years to come. Given that the official application’s days are numbered, open source software creators have a great opportunity to step in with alternatives to Adobe Flash Media Player. Two of those applications are Lightspark and GNU Gnash. Neither is a perfect substitute, but help from willing contributors could make them viable alternatives.

Lightspark


Lightspark is a Flash Player alternative for Linux machines. While it’s still in alpha, development has accelerated since Adobe announced it would sunset Flash in 2017. According to its website, Lightspark implements about 60% of the Flash APIs and works on many leading websites including BBC News, Google Play Music, and Amazon Music.
Lightspark is written in C++/C and licensed under LGPLv3. The project lists 41 contributors and is actively soliciting bug reports and other contributions. For more information, check out its GitHub repository.

GNU Gnash

GNU Gnash is a Flash Player for GNU/Linux operating systems including Ubuntu, Fedora, and Debian. It works as standalone software and as a plugin for the Firefox and Konqueror browsers.
Gnash’s main drawback is that it doesn’t support the latest versions of Flash files—it supports most Flash SWF v7 features, some v8 and v9 features, and offers no support for v10 files. It’s in beta release, and since it’s licensed under the GNU GPLv3 or later, you can help contribute to modernizing it. Access its project page for more information.

Want to create Flash?

*Just because most people aren't publishing Flash videos these days, that doesn't mean there will never, ever be a need to create SWF files. If you find yourself in that position, these two open source tools might help:
  • Motion-Twin ActionScript 2 Compiler (MTASC): A command-line compiler that can generate SWF files without Adobe Animate (the current iteration of Adobe's video-creator software).
  • Ming: A library written in C that can generate SWF files. It also contains some utilities you can use to work with Flash files. 

Clearly, there’s an opening for open source software to take Flash Player’s place in the broader market. If you know of another open source Flash alternative that’s worth a closer look (or needs contributors), please share it in the comments. Or even better, check out the great Flash-free open source tools for working with animation.

5 Tips for Managing Privileged Access

Access to applications, servers and network resources is the cornerstone of enterprise IT, which is all about enabling connectivity. Not every account should have full access to everything in an enterprise, however, which is where super user or privileged accounts come into play.
With a privileged account, a user has administrative access to enterprise resources, a capability that should be closely guarded. As fans of Marvel Comics know well, with great power comes great responsibility. Privileged access management (PAM) is a way to limit access to those critical assets and prevent data breaches.
PAM and identity and access management (IAM) are similar security technologies, but the difference between what the two protect is night and day: IAM gives general users access to front-end systems, while PAM gives admins and other privileged users access to back-end systems. Think of it this way: A front-end user might be able to change or add data in a database; a back-end user has access to the entire database, thus the need for greater security.
So how should an organization protect its privileged accounts? That's a question that Paul Lanzi, co-founder and COO at Remediant, tackled in a session at the Black Hat USA conference in August. Lanzi outlined five steps that organizations can take to secure privileged access, based on experience deploying PAM across over 500,000 endpoints.

1. Beware local accounts

Once a user gets administrative rights for a system, more often than not, the user will create a secondary or local account that still has full access but isn't properly identified in a directory system like Active Directory.
"Discovering all the local accounts is often the most surprising thing for security teams because they assume all the accounts listed in Active Directory are domain accounts," Lanzi said. "In fact, the way that Active Directory works, you can have local accounts, and that's often where little pockets of privileged access hide out."
Lesson: Monitor for local admin accounts.

2. Stay tuned

Administrative rights are always changing. Lanzi said that every one of the enterprises he has worked with has at some point done an Active Directory cleanup project. What typically happens, however, is even after a directory cleanup, there tends to be a reversion, with old accounts coming back.
"Over time, admins tend to accrete more and more privileged access; it never really goes away," Lanzi said.
Lesson: Continuously monitor privileged accounts.

3. Session recording is not a panacea

While continuous monitoring of privileged access is important, some organizations take it a step further and record every session a privileged account opens.
Few if any enterprises actually look at those recordings, however. In Lanzi's experience, the session recording feature mostly ends up slowing down some types of operations.
He likened it to a home DVR (digital video recorder): no one really watches what they record. Attackers can also generally bypass session recording with a variety of techniques.
Lesson: Session recording has marginal utility.

4. Focus on access, not credentials

There is a movement in IT toward using fewer passwords in favor of using additional forms of strong authentication.
As such, password vault solutions are of limited utility, as simple credentials are not the only way that access is being granted.
Lesson: Focus on access instead of just credentials, which are going to get compromised.

5. Watch for lateral movement

One of the most common things attackers do when breaching an organization is to compromise one set of credentials and then move laterally.
"Privileged access should be the bulwark against lateral movement in the enterprise," Lanzi said.
Lesson: Use PAM solutions to control account access and limit the risk of lateral movement.

          What is serverless?

Let’s examine serverless and Functions-as-a-Service (FaaS), how they fit together, and where they do and don’t make sense

You likely have heard the term serverless (and wondered why someone thought it didn’t use servers). You may have heard of Functions-as-a-Service (FaaS) – perhaps in the context of Lambda from Amazon Web Services, introduced in 2014. You’ve probably encountered event-driven programming in some form. How do all these things fit together and, more importantly, when might you consider using them? Read on.
Let’s start with FaaS. With FaaS, you write code to accomplish some specific task and upload the code for your function to a FaaS provider. The public cloud provider or on-premise platform then does everything else necessary to provision, run, scale, and manage the code. As a developer, you don’t need to do anything other than write your code and wire it up to other functions and services. FaaS provides programmers with an abstraction that allows them to focus on just writing code that takes action in response to events rather than interacting with the underlying server (whether bare metal, virtualized, or containerized).
Now enter event-driven programming. Functions run in response to external events. It could be a call generated by a mouse click in a web app. But it could also be in response to some other action. For example, uploading a media file could trigger custom code that transcodes the file into a variety of formats.
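To make the model concrete, here is a minimal sketch of what such an event-driven function might look like, written in Python with an AWS-Lambda-style handler signature. The event shape, the "key" field, and the format list are hypothetical, just enough to show the pattern: the platform invokes the function with an event, and the function reacts.

```python
# Hypothetical event-driven function: an object-storage upload event comes in,
# and the function decides whether the file needs transcoding. The event
# fields and formats are illustrative, not a real provider API.
def handler(event, context=None):
    key = event["key"]
    if key.endswith((".mov", ".mp4")):
        # A real function might enqueue transcoding jobs here.
        return {"action": "transcode", "source": key,
                "formats": ["480p", "720p", "1080p"]}
    return {"action": "ignore", "source": key}
```

The platform, not your code, decides when this runs: it wires the upload event to the function, scales instances up under load, and tears them down when traffic stops.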
Serverless then describes a set of architectural patterns that build on FaaS. Serverless combines custom FaaS code with common back-end services (such as databases and authentication) connected primarily through an event-driven execution model. From the perspective of a developer, these services are all managed by a third-party (whether an ops team or an external provider). Of course, servers are still involved; developers just don’t need to think about them in a traditional way.

Why serverless?

Serverless is an emerging technology area. There’s a lot of interest in the technology and approach although it’s early on and has yet to appear on many enterprise IT radar screens. To understand the interest, it’s useful to consider serverless from both operations and developer perspectives.
For operations teams, one of the initial selling points of FaaS on public clouds was its pricing model. By paying only for an ephemeral (typically stateless) function while it was executing, you “didn’t pay for idle.” In general, while this aspect of serverless is still important to some, it’s less emphasized today. As a broader concept that brings in a wide range of services of which FaaS is just one part, the FaaS pricing model by itself is less relevant.
However, pricing model aside, serverless also allows operations teams to provide developers with a self-service platform and then get out of the way. This is a concept that has been present in platforms like OpenShift from the beginning. Serverless effectively extends the approach for certain types of applications.
The arguably more important aspect of serverless is increased developer productivity. This has two different aspects.
The first is that, as noted earlier, FaaS abstracts away many of the housekeeping details associated with server provisioning and management that are often just overhead for developers. In practice, this may not appear all that different to developers than a Platform-as-a-Service (PaaS). FaaS can even use containers under the covers just like a PaaS typically does. PaaS and FaaS are best thought of as being on a continuum rather than being entirely discrete.
The second is that, by offering common managed services out of the box, developers don’t need to constantly recreate them for new applications.

Where does serverless fit?

Serverless targets specific architectural patterns. As described earlier, it’s more or less wedded to a programming model in which functions and services react to each other in an event-driven and largely asynchronous way. Functions themselves are generally expected to be stateless, handle single tasks, and finish quickly. The fact that the interactions between services and functions are all happening over the network also means that the application as a whole should be fairly tolerant of latencies in these interactions.
While there are overlaps between the technologies used by FaaS, microservices, and even coarser-grained architectural patterns, you can think of FaaS as both simplifying and limiting. FaaS requires you to be more prescriptive about how you write applications.
Although serverless was originally most associated with public cloud providers, that comes with a caveat. Serverless, as implemented on public clouds, has a high degree of lock-in to a specific cloud vendor. This is true to some degree even with FaaS, but serverless explicitly encourages bringing in a variety of cloud provider services that are incompatible to varying degrees with other providers and on-premise solutions.
As a result, there’s considerable interest in and work going into open source implementations of FaaS and serverless, such as Knative and OpenWhisk, so that users can write applications that are portable across different platforms.

The speedy road ahead

Building more modern applications is a top priority for IT executives as part of their digital transformation journeys; it’s seen as the key ingredient to moving faster. To that end, organizations across a broad swath of industries are seeking ways to create new applications more quickly. Doing so involves both making traditional developers more productive and seeking ways to lower the barriers to software development for a larger pool of employees.
Serverless is an important emerging service implementation architecture that will be a good fit for certain types of applications. It will coexist with, rather than replace, architecture alternatives such as microservices used with containers and even just virtual machines. All of these architectural choices support a general trend toward simplifying the developer experience and making developers more productive.

          Hollywood goes open source

Out of 200 of the most popular movies of all time, the top 137 were either visual-effects driven or animated. What did many of these blockbusters have in common? They were made with open-source software.

That was the message David Morin, chairman of the Joint Technology Committee on Virtual Production, brought to The Linux Foundation's Open Source Summit in Vancouver, Canada. To help movie makers bring rhyme and reason to open-source film-making, The Linux Foundation had joined forces with The Academy of Motion Picture Arts and Sciences to form the Academy Software Foundation.

The foundation is meant to be a neutral forum for open-source developers in both the motion picture and broader media industries to share resources and collaborate on technologies for image creation, visual effects, animation, and sound. The founding members include Blue Sky Studios, Cisco, DreamWorks Animation, Epic Games, Google Cloud, Intel, Walt Disney Studios, and Weta Digital. It's a true marriage of technology and media-driven businesses.
You know those names. You probably don't know the names of the open-source special-effects programs, such as Alembic, OpenColorIO, or Ptex, but Morin said, "they're very instrumental in the making of movies".

And they're more important than you think. "The last Fast and the Furious movie, for instance, while it looks like a live-action movie, when you know how it was made, it's really by-and-large a computer generated movie," Morin said. "When Paul Walker passed away in the middle of production, he had to be recreated for the duration of the movie."

The Academy of Motion Picture Arts and Sciences, which you know best from the Oscars, started looking into organizing the use of open-source in the movies in 2016. The group did so because while open-source software was being used more and more, it came with problems. These included:
  • Versionitis: As more libraries were being used it became harder to coordinate software components. A production pipeline, which had been perfected for a 2016 movie, is likely to have out-of-date components for a 2018 film.
  • Organization: While volunteers tried to track these changes, they didn't have the funding or resources needed to go beyond recording changes.
  • Funding: Many open-source programs had lost their maintainers, who took jobs elsewhere, or languished for lack of funding.
  • Licensing: As all open-source developers know, sooner or later licensing becomes an issue. That's especially true in the motion-picture industry, which is hyper aware of copyright and other intellectual property (IP) issues.
So, the overall mission is to increase the quality and quantity of open-source contributions by developing a governance model, legal framework, and community infrastructure that makes it easier to both develop and use open-source software.
In more detail, the goals are:
  • Provide a neutral forum to coordinate cross-project efforts, establish best practices, and share resources across the motion picture and broader media industries.
  • Develop an open continuous integration (CI) and build infrastructure to enable reference builds from the community and alleviate issues caused by siloed development.
  • Provide individuals and organizations with a clear path for participation and code contribution.
  • Streamline development for build and runtime environments through the sharing of open-source build configurations, scripts, and recipes.
  • Provide better, more consistent licensing through a shared licensing template.
Developers interested in learning more or contributing can join the Academy Software Foundation mailing list.
Morin added, "In the last 25 years, software engineers have played an increasing role in the most successful movies of our time. The Academy Software Foundation is set to provide funding, structure, and infrastructure for the open-source community, so that engineers can continue to collaborate and accelerate software development for movie making and other media for the next 25 years."
Rob Bredow, SVP, executive creative director, and head of Industrial Light & Magic, said, "Developers and engineers across the industry are constantly working to find new ways to bring images to life, and open source enables them to start with a solid foundation while focusing on solving unique, creative challenges rather than reinventing the wheel."
If you'd like to get into the movie business, now's your chance. "We're welcoming all the help we can get to set up the foundation," Morin concluded. "Writing code today is perhaps the most powerful activity that you can do to make movies. If you're interested, don't hesitate to join us."

          5 tips to improve productivity with zsh

The zsh shell offers countless options and features. Here are 5 ways to boost your efficiency from the command line.

The Z shell, known as zsh, is a shell for Linux/Unix-like operating systems. It has similarities to other shells in the sh (Bourne shell) family, such as bash and ksh, but it provides many advanced features and powerful command line editing options, such as enhanced Tab completion.
It would be impossible to cover all the options of zsh here; there are literally hundreds of pages documenting its many features. In this article, I'll present five tips to make you more productive using the command line with zsh.

1. Themes and plugins

Through the years, the open source community has developed countless themes and plugins for zsh. A theme is a predefined prompt configuration, while a plugin is a set of useful aliases and functions that make it easier to use a specific command or programming language.
The quickest way to get started using themes and plugins is to use a zsh configuration framework. There are many available, but the most popular is Oh My Zsh. By default, it enables some sensible zsh configuration options and it comes loaded with hundreds of themes and plugins.
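As a rough sketch (the theme and plugin names below are examples, not requirements), enabling a theme and a few plugins with Oh My Zsh comes down to a handful of lines in ~/.zshrc:

```shell
# ~/.zshrc sketch for Oh My Zsh; adjust the path, theme, and plugins to taste.
export ZSH="$HOME/.oh-my-zsh"

# Pick any installed theme; "robbyrussell" is the default.
ZSH_THEME="powerlevel9k/powerlevel9k"

# Each plugin adds aliases and functions for its tool.
plugins=(git docker sudo vi-mode)

source "$ZSH/oh-my-zsh.sh"
```

After editing the file, start a new shell (or run source ~/.zshrc) for the changes to take effect.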
A theme makes you more productive as it adds useful information to your prompt, such as the status of your Git repository or Python virtualenv in use. Having this information at a glance saves you from typing the equivalent commands to obtain it, and it's a cool look. Here's an example of Powerlevel9k, my theme of choice:

The Powerlevel9k theme for zsh
In addition to themes, Oh My Zsh bundles tons of useful plugins for zsh. For example, enabling the Git plugin gives you access to a number of useful aliases, such as:

$ alias | grep -i git | sort -R | head -10


ga='git add'

gapa='git add --patch'

gap='git apply'

gdt='git diff-tree --no-commit-id --name-only -r'

gau='git add --update'

gstp='git stash pop'

gbda='git branch --no-color --merged | command grep -vE "^(\*|\s*(master|develop|dev)\s*$)" | command xargs -n 1 git branch -d'

gcs='git commit -S'

glg='git log --stat'

There are plugins available for many programming languages, packaging systems, and other tools you commonly use on the command line. Here's a list of plugins I use in my Fedora workstation:
git golang fedora docker oc sudo vi-mode virtualenvwrapper

2. Clever aliases

Aliases are very useful in zsh. Defining aliases for your most-used commands saves you a lot of typing. Oh My Zsh configures several useful aliases by default, including aliases to navigate directories and replacements for common commands with additional options such as:

ls='ls --color=tty'

grep='grep  --color=auto --exclude-dir={.bzr,CVS,.git,.hg,.svn}'

In addition to command aliases, zsh enables two additional useful alias types: the suffix alias and the global alias.
A suffix alias allows you to open the file you type in the command line using the specified program based on the file extension. For example, to open YAML files using vim, define the following alias:
alias -s {yml,yaml}=vim
Now if you type any file name ending with yml or yaml in the command line, zsh opens that file using vim:

$ playbook.yml

# Opens file playbook.yml using vim

A global alias enables you to create an alias that is expanded anywhere in the command line, not just at the beginning. This is very useful to replace common filenames or piped commands. For example:
alias -g G='| grep -i'
To use this alias, type G anywhere you would type the piped command:

$ ls -l G do

drwxr-xr-x.  5 rgerardi rgerardi 4096 Aug  7 14:08 Documents

drwxr-xr-x.  6 rgerardi rgerardi 4096 Aug 24 14:51 Downloads

Next, let's see how zsh helps to navigate the filesystem.

3. Easy directory navigation

When you're using the command line, navigating across different directories is one of the most common tasks. Zsh makes this easier by providing some useful directory navigation features. These features are enabled by default with Oh My Zsh; if you're not using it, you can enable them with this command:
setopt autocd autopushd pushdignoredups
With these options set, you don't need to type cd to change directories. Just type the directory name, and zsh switches to it:

$ pwd


$ /tmp

$ pwd


To move back, type -:
Zsh keeps the history of directories you visited so you can quickly switch to any of them. To see the list, type dirs -v:

$ dirs -v

0       ~

1       /var/log

2       /var/opt

3       /usr/bin

4       /usr/local

5       /usr/lib

6       /tmp

7       ~/Projects/

8       ~/Projects

9       ~/Projects/ansible

10      ~/Documents

Switch to any directory in this list by typing ~# where # is the number of the directory in the list. For example:

$ pwd


$ ~4

$ pwd


Combine these with aliases to make it even easier to navigate:

d='dirs -v | head -10'

1='cd -'

2='cd -2'

3='cd -3'

4='cd -4'

5='cd -5'

6='cd -6'

7='cd -7'

8='cd -8'

9='cd -9'

Now you can type d to see the first ten items in the list and the number to switch to it:

$ d

0       /usr/local

1       ~

2       /var/log

3       /var/opt

4       /usr/bin

5       /usr/lib

6       /tmp

7       ~/Projects/

8       ~/Projects

9       ~/Projects/ansible

$ pwd


$ 6


$ pwd


Finally, zsh automatically expands directory names with Tab completion. Type the first letters of each directory name and press TAB to expand them:

$ pwd


$ p/o/z (TAB)

$ Projects/

This is just one of the features enabled by zsh's powerful Tab completion system. Let's look at some more.

4. Advanced Tab completion

Zsh's powerful completion system is one of its hallmarks. For simplification, I call it Tab completion, but under the hood, more than one thing is happening. There's usually expansion and command completion. I'll discuss them together here. For details, check this User's Guide.
Command completion is enabled by default with Oh My Zsh. If you're not using it, you can enable it by adding the following lines to your .zshrc file:

autoload -U compinit

compinit
Zsh's completion system is smart. It tries to suggest only items that can be used in certain contexts—for example, if you type cd and TAB, zsh suggests only directory names as it knows cd does not work with anything else.
Conversely, it suggests usernames when running user-related commands or hostnames when using ssh or ping, for example.
It has a vast completion library and understands many different commands. For example, if you're using the tar command, you can press Tab to see a list of files available in the package as candidates for extraction:

$ tar -xzvf test1.tar.gz test1/file1 (TAB)

file1 file2

Here's a more advanced example, using git. In this example, when typing TAB, zsh automatically completes the name of the only file in the repository that can be staged:

$ ls

original  plan.txt  zsh_theme_small.png

$ git status

On branch master

Your branch is up to date with 'origin/master'.

Changes not staged for commit:

  (use "git add <file>..." to update what will be committed)

  (use "git checkout -- <file>..." to discard changes in working directory)


no changes added to commit (use "git add" and/or "git commit -a")

$ git add (TAB)

$ git add

It also understands command line options and suggests only the ones that are relevant to the subcommand selected:

$ git commit - (TAB)

--all                  -a       -- stage all modified and deleted paths

--allow-empty                   -- allow recording an empty commit

--allow-empty-message           -- allow recording a commit with an empty message

--amend                         -- amend the tip of the current branch

--author                        -- override the author name used in the commit

--branch                        -- show branch information

--cleanup                       -- specify how the commit message should be cleaned up

--date                          -- override the author date used in the commit

--dry-run                       -- only show the list of paths that are to be committed or not, and any untracked

--edit                 -e       -- edit the commit message before committing

--file                 -F       -- read commit message from given file

--gpg-sign             -S       -- GPG-sign the commit

--include              -i       -- update the given files and commit the whole index

--interactive                   -- interactively update paths in the index file

--message              -m       -- use the given message as the commit message


After typing TAB, you can use the arrow keys to navigate the options list and select the one you need. Now you don't need to memorize all those Git options.
There are many options available. The best way to find what is most helpful to you is by using it.

5. Command line editing and history

Zsh's command line editing capabilities are also useful. By default, it emulates emacs. If, like me, you prefer vi/vim, enable vi bindings with the following command:
$ bindkey -v
If you're using Oh My Zsh, the vi-mode plugin enables additional bindings and a mode indicator on your prompt—very useful.
After enabling vi bindings, you can edit the command line using vi commands. For example, press ESC+/ to search the command line history. While searching, pressing n brings the next matching line, and N the previous one. Most common vi commands work after pressing ESC, such as 0 to jump to the start of the line, $ to jump to the end, i to insert, a to append, etc. Even commands followed by a motion work, such as cw to change a word.
In addition to command line editing, zsh provides several useful command line history features if you want to fix or re-execute previous used commands. For example, if you made a mistake, typing fc brings the last command in your favorite editor to fix it. It respects the $EDITOR variable and by default uses vi.
Another useful command is r, which re-executes the last command, and r <WORD>, which executes the last command that contains the string WORD.
Finally, typing double bangs (!!) brings back the last command anywhere in the line. This is useful, for instance, if you forgot to type sudo to execute commands that require elevated privileges:

$ less /var/log/dnf.log

/var/log/dnf.log: Permission denied

$ sudo !!

$ sudo less /var/log/dnf.log

These features make it easier to find and re-use previously typed commands.

Where to go from here?

These are just a few of the zsh features that can make you more productive; there are many more. For additional information, consult the following resources:
An Introduction to the Z Shell
A User's Guide to ZSH
Archlinux Wiki
Do you have any zsh productivity tips to share? I would love to hear about them in the comments below.

          Episode 85: Does This Make FOSS Better or Worse

Does This Make FOSS Better or Worse | Ask Noah Show 85

Does the "Commons Clause" help the commons? The Commons Clause was announced recently, along with several projects moving portions of their code base under it. It's an additional restriction intended to be applied to existing open source licenses, with the effect of preventing the work from being sold. We play devil's advocate and tell you why this might not be such a bad thing. As always, your calls go to the front of the line, and we give you the details on how you can win free stuff in the Telegram group!

-- The Cliff Notes --

For links to the articles and material referenced in this week's episode, check out this week's page from our podcast dashboard. Phone systems for Ask Noah provided by Voxtelesys.

-- Stay In Touch --

Find all the resources for this show on the Ask Noah Dashboard. Need more help than a radio show can offer? Altispeed provides commercial IT services, and they're excited to offer you a great deal for listening to the Ask Noah Show. Call today and ask about the discount for listeners of the Ask Noah Show! Contact Noah: asknoah [at]

-- Twitter --

Noah - Kernellinux | Ask Noah Show | Altispeed Technologies | Jupiter Broadcasting
          Full Stack Engineer, Axon Records - Axon - Seattle, WA
You follow the latest in open source technologies and can intuit the fine line between a promising new practice and an overhyped fad....
From Axon - Mon, 06 Aug 2018 23:18:47 GMT - View all Seattle, WA jobs
          Senior Front End Engineer, Axon Dispatch - Axon - Seattle, WA
You follow the latest in open source technologies and can intuit the fine line between a promising new practice and an overhyped fad....
From Axon - Fri, 20 Jul 2018 23:17:40 GMT - View all Seattle, WA jobs
          Senior Back End Engineer, Axon Records - Axon - Seattle, WA
You follow the latest in open source technologies and can intuit the fine line between a promising new practice and an overhyped fad....
From Axon - Mon, 16 Jul 2018 05:18:50 GMT - View all Seattle, WA jobs
          8 great Python libraries for side projects

These Python libraries make it easy to scratch that personal project itch.

We have a saying in the Python/Django world: We came for the language and stayed for the community. That is true for most of us, but something else that has kept us in the Python world is how easy it is to have an idea and quickly work through it over lunch or in a few hours at night.
This month we're diving into Python libraries we love to use to quickly scratch those side-project or lunchtime itches.

To save data in a database on the fly: Dataset

Dataset is our go-to library when we quickly want to collect data and save it into a database before we know what our final database tables will look like. Dataset has a simple, yet powerful API that makes it easy to put data in and sort it out later.
Dataset is built on top of SQLAlchemy, so extending it will feel familiar. The underlying database models are a breeze to import into Django using Django's built-in inspectdb management command. This makes working with existing databases pretty painless.

To scrape data from web pages: Beautiful Soup

Beautiful Soup (BS4 as of this writing) makes extracting information out of HTML pages easy. It's our go-to anytime we need to turn unstructured or loosely structured HTML into structured data. It's also great for working with XML data that might otherwise not be readable.
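For instance, here is a small sketch of pulling structured data out of an HTML fragment with Beautiful Soup. The markup and class names are invented for the example, and the stdlib html.parser backend is used so no extra parser is needed:

```python
from bs4 import BeautifulSoup

# Hypothetical markup, as you might scrape it from a page.
html = """
<html><body>
  <ul id="beers">
    <li class="beer">Stout</li>
    <li class="beer">Porter</li>
  </ul>
</body></html>
"""

soup = BeautifulSoup(html, "html.parser")
# CSS selectors turn loosely structured HTML into a plain Python list.
names = [li.get_text() for li in soup.select("li.beer")]
```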

To work with HTTP content: Requests

Requests is arguably one of the gold standard libraries for working with HTTP content. Anytime we need to consume an HTML page or even an API, Requests has us covered. It's also very well documented.

To write command-line utilities: Click

When we need to write a native Python script, Click is our favorite library for writing command-line utilities. The API is straightforward, well thought out, and there are only a few patterns to remember. The docs are great, which makes looking up advanced features easy.
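A minimal Click command might look like the sketch below (the command name and option are invented for illustration). Click turns the decorated function into a full CLI with --help, argument parsing, and type conversion, and its CliRunner lets you exercise the command in-process:

```python
import click
from click.testing import CliRunner

@click.command()
@click.option("--count", default=1, help="Number of greetings.")
@click.argument("name")
def hello(count, name):
    """Greet NAME COUNT times."""
    for _ in range(count):
        click.echo(f"Hello, {name}!")

# Invoke the command in-process, as you would in a test,
# instead of installing it as a console script.
result = CliRunner().invoke(hello, ["--count", "2", "World"])
```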

To name things: Python Slugify

As we all know, naming things is hard. Python Slugify is a useful library for turning a title or description into a unique(ish) identifier. If you are working on a web project and you want to use SEO-friendly URLs, Python Slugify makes this easier.

To work with plugins: Pluggy

Pluggy is relatively new, but it's also one of the best and easiest ways to add a plugin system to your existing application. If you have ever worked with pytest, you have used pluggy without knowing it.
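Here is a rough sketch of pluggy's core pieces (the project name "sideproject" and the hook itself are invented): a hookspec declares a hook, plugins provide hookimpls, and the plugin manager calls every registered implementation:

```python
import pluggy

hookspec = pluggy.HookspecMarker("sideproject")
hookimpl = pluggy.HookimplMarker("sideproject")

class Spec:
    @hookspec
    def process(self, value):
        """Each plugin gets a chance to transform the value."""

class AddOne:
    @hookimpl
    def process(self, value):
        return value + 1

class Double:
    @hookimpl
    def process(self, value):
        return value * 2

pm = pluggy.PluginManager("sideproject")
pm.add_hookspecs(Spec)
pm.register(AddOne())
pm.register(Double())

# Calling the hook runs every registered implementation and
# collects the non-None results into a list.
results = pm.hook.process(value=10)
```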

To convert CSV files into APIs: Datasette

Datasette, not to be confused with Dataset, is an amazing tool for easily turning CSV files into full-featured read-only REST JSON APIs. Datasette has tons of features, including charting and geo (for creating interactive maps), and it's easy to deploy via a container or third-party web host.

To handle environment variables and more: Envparse

If you need to parse environment variables because you don't want to save API keys, database credentials, or other sensitive information in your source code, then envparse is one of your best bets. Envparse handles environment variables, ENV files, variable types, and even pre- and post-processors (in case you want to ensure that a variable is always upper or lower case, for instance).

Do you have a favorite Python library for side projects that's not on this list? Please share it in the comments.

          Back End Engineer, Axon Records - Axon - Seattle, WA
You follow the latest in open source technologies and can intuit the fine line between a promising new practice and an overhyped fad....
From Axon - Tue, 12 Jun 2018 23:18:07 GMT - View all Seattle, WA jobs
          Software Engineering Manager, Axon Records - Axon - Seattle, WA
You follow the latest in open source technologies and can intuit the fine line between a promising new practice and an overhyped fad....
From Axon - Thu, 31 May 2018 23:18:10 GMT - View all Seattle, WA jobs
          Senior Full Stack Engineer, Axon Records - Axon - Seattle, WA
You follow the latest in open source technologies and can intuit the fine line between a promising new practice and an overhyped fad....
From Axon - Thu, 24 May 2018 23:18:05 GMT - View all Seattle, WA jobs
          Re: Shelter is an open source sandboxing app to isolate apps from your data


          Mozilla identifies 10 open source personas: What you need to know

This new report describes the best ways to work with different types of open communities.

          Running Apache Cassandra on Kubernetes

Cassandra operator offers a powerful, open source option for running Cassandra on Kubernetes with simplicity and grace.

          5 Best Free and Open Source Time Tracking Software
          New Copyright Powers, New "Terrorist Content" Regulations: A Grim Day For Digital Rights in Europe

Despite waves of calls and emails from European Internet users, the European Parliament today voted to accept the principle of a universal pre-emptive copyright filter for content-sharing sites, as well as the idea that news publishers should have the right to sue others for quoting news items online – or even using their titles as links to articles. Out of all of the potential amendments offered that would fix or ameliorate the damage caused by these proposals, they voted for the worst on offer.

There are still opportunities, at the EU level, at the national level, and ultimately in Europe’s courts, to limit the damage. But make no mistake, this is a serious setback for the Internet and digital rights in Europe.

It also comes at a trepidatious moment for pro-Internet voices in the heart of the EU. On the same day as the vote on these articles, another branch of the European Union’s government, the Commission, announced plans to introduce a new regulation on “preventing the dissemination of terrorist content online”. Doubling down on speedy unchecked censorship, the proposals will create a new “removal order”, which will oblige hosting service providers to remove content within one hour of being ordered to do so. Echoing the language of the copyright directive, the Terrorist Regulation “aims at ensuring smooth functioning of the digital single market in an open and democratic society, by preventing the misuse of hosting services for terrorist purposes”; it encourages the use of “proactive measures, including the use of automated tools.”

Not content with handing copyright law enforcement to algorithms and tech companies, the EU now wants to expand that to defining the limits of political speech too.

And as bad as all this sounds, it could get even worse. Elections are coming up in the European Parliament next May. Many of the key parliamentarians who have worked on digital rights in Brussels will not be standing. Marietje Schaake, author of some of the better amendments for the directive, announced this week that she would not be running again. Julia Reda, the German Pirate Party representative, is moving on; Jan Philipp Albrecht, the MEP behind the GDPR, has already left Parliament to take up a position in domestic German politics. The European Parliament’s reserves of digital rights expertise, never that full to begin with, are emptying.

The best that can be said about the Copyright in the Digital Single Market Directive, as it stands, is that it is so ridiculously extreme that it looks set to shock a new generation of Internet activists into action – just as the DMCA, SOPA/PIPA and ACTA did before it.

If you’ve ever considered stepping up to play a bigger role in European politics or activism, whether at the national level, or in Brussels, now would be the time.

It’s not enough to hope that these laws will lose momentum or fall apart from their own internal incoherence, or that those who don’t understand the Internet will refrain from breaking it. Keep reading and supporting EFF, and join Europe’s powerful partnership of digital rights groups, from Brussels-based EDRi to your local national digital rights organization. Speak up for your digital business, open source project, for your hobby or fandom, and as a contributor to the global Internet commons.

This was a bad day for the Internet and for the European Union: but we can make sure there are better days to come.

Update: Keybase - Crypto for Everyone (Social Networking)

Keybase - Crypto for Everyone 2.6.0

Device: iOS iPhone
Category: Social Networking
Price: Free, Version: 2.5.0 -> 2.6.0 (iTunes)


Keybase is a messaging platform where:

• you can write securely to any twitter, reddit, facebook, github, and hacker news user
• you don't need to know someone's phone number or email address
• all messages are secure, end-to-end encrypted
• multi-device: your messages survive and transfer with encryption to new phones & computers

Keybase is so much more. It is:

• free for everyone, and free of ads
• open source
• multi-platform, w/apps for macOS, Linux, and Windows

By using the Keybase app you agree to the following terms:

• you'll be a nice Internet person

Keybase for mobile is brand new and we yearn for feedback. Inside the app, click the gear icon and then choose "feedback" to send us a summary of your experience.

What's New

• Inline video in chat
• Better reconnection handling
• More visual polish
• Better explanation of Facebook proof process

• Better handling of errors uploading attachments
• Some git notifications in chat could show incorrectly
• Squashed some EOF errors
• Better selection of a different conversation when leaving the selected one
• Show in finder sometimes wouldn't work from chat
• Some teams that weren't subteams could show in the list
• The statusbar could disappear after viewing a video

Keybase - Crypto for Everyone

Python Application and Platform Developer - Plotly - Montréal, QC
Engage our Open Source communities and help take stewardship of our OSS projects. Plotly is hiring Pythonistas!...
From Plotly - Thu, 06 Sep 2018 15:03:02 GMT - View all Montréal, QC jobs
Ubuntu: SchoolTool, Lubuntu Development Newsletter, and Patches
  • How to install School tool on Ubuntu 18.04 LTS

    SchoolTool is a free and open source suite of administrative software for schools that can be used to create a simple turnkey student information system, including demographics, gradebook, attendance, calendaring, and reporting for primary and secondary schools. You can easily build customized applications and configurations for individual schools or states using SchoolTool. It is a web-based student information system specially designed for schools in the developing world, with support for localization, translation, and automated deployment and updates via the Ubuntu repository.

  • Lubuntu Development Newsletter #11

    We have swapped out SMPlayer for VLC and Nomacs for LXImage-Qt, and moved to the KDE 5 LibreOffice frontend from the older KDE 4 frontend. We are working on installer slideshow updates to reflect these changes.

    Walter Lapchynski is working on packaging Trojitá; that will be done soon.

    Lastly, we fixed a bug in the daily images where the GTK 3 theme was not properly applied if no GTK theme had been configured before.

  • The First Beta of the /e/ OS to Be Released Soon, Canonical's Security Patch for Ubuntu 18.04 LTS, Parrot 4.2.2 Now Available, Open Jam 2018 Announced and Lightbend's Fast Data Platform Now on Kubernetes

    Canonical yesterday released a Linux kernel security patch for Ubuntu 18.04 LTS that addresses two recently discovered vulnerabilities.

read more

RK3399 based 96Boards SBC starts at $99

Vamrs has begun shipping the “Rock960” — the first 96Boards SBC based on the hexa-core Rockchip RK3399. The community-backed SBC sells for $99 (2GB/16GB) or $139 (4GB/32GB).

Shortly before Shenzhen-based Vamrs Limited launched a Rockchip RK3399 Sapphire SBC in Nov. 2017, the company announced a similarly open-spec Rock960 SBC that uses the same Rockchip RK3399 SoC, but instead adopts the smaller, 85 x 55mm 96Boards CE form factor. The Rock960 was showcased in March along with other AI-enabled boards as part of Linaro’s initiative announcement.

Read more

Also: Bixel, An Open Source 16×16 Interactive LED Array

LXer: 5 examples of Prometheus monitoring success
Published at LXer: Prometheus is an open source monitoring and alerting toolkit for containers and microservices. The project is a hit with lots of different organizations regardless of their size...
LXer: How to Setup Apache Subversion with HTTPS Letsencrypt on CentOS 7
Published at LXer: Apache Subversion or SVN is open source versioning and revision control software. In this article, we show you how to set up Apache Subversion on the latest CentOS 7 server. We...
eBay HeadGaze app lets physically impaired control iPhone X with head movements

eBay's open source HeadGaze software uses the iPhone X's FaceID system to translate head movements into screen actions, hopefully opening tech up for many people with physical impairments.

The post eBay HeadGaze app lets physically impaired control iPhone X with head movements appeared first on Digital Trends.

Git 2.19.0-r1
[Name] Git
[Summary] Git is a popular version control system designed to handle very large projects with speed and efficiency; it is used mainly for various open source projects, most notably the Linux kernel.
[Description] Git is a distributed version control system focused on speed, effectiveness, and real-world usability on large projects. Its highlights include:
Strong support for non-linear development.
Distributed development.
Efficient handling of large projects.
Cryptographic authentication of history.
Toolkit design.

Arduino For Dummies, 2nd Edition


Bring your ideas to life with the latest Arduino hardware and software

Arduino is an affordable and readily available hardware development platform based around an open source, programmable circuit board. You can combine this programmable chip with a variety of sensors and actuators to sense the environment around you and control lights, motors, and sound. This flexible and easy-to-use combination of hardware and software can be used to create interactive


Senior DevOps Engineer - Long Term Contract - Ignite Technical Resources - Burnaby, BC
Integrate, manage and support a diverse range of Open Source and commercial middleware, tools, platforms and frameworks to enable continuous product delivery....
From Ignite Technical Resources - Thu, 21 Jun 2018 08:15:40 GMT - View all Burnaby, BC jobs
VirtualBox 5.1.14 Build 112924 Final Portable + Extension Pack 180913

VirtualBox 5.1.14 Build 112924 Final Portable + Extension Pack | 115 MB

VirtualBox is a general-purpose full virtualizer for hardware. Targeted at server, desktop and embedded use, it is now the only professional quality virtualization solution that is also Open Source Software. VirtualBox is a powerful virtualization product for enterprise as well as home use. VirtualBox is useful in several scenarios: Running multiple operating systems simultaneously. VirtualBox allows you to run more than one operating system at a time. This way, you can run software written for one operating system on another (for example, Windows software on Linux or a Mac) without having to reboot to use it.

Since you can configure what kinds of "virtual" hardware should be presented to each such operating system, you can install an old operating system such as DOS or OS/2 even if your real computer's hardware is no longer supported by that operating system.
Software vendors can use virtual machines to ship entire software configurations. For example, installing a complete mail server solution on a real machine can be a tedious task. With VirtualBox, such a complex setup (then often called an "appliance") can be packed into a virtual machine. Installing and running a mail server becomes as easy as importing such an appliance into VirtualBox.

Testing and disaster recovery. Once installed, a virtual machine and its virtual hard disks can be considered a "container" that can be arbitrarily frozen, woken up, copied, backed up, and transported between hosts. On top of that, with the use of another VirtualBox feature called "snapshots", one can save a particular state of a virtual machine and revert back to that state, if necessary. This way, one can freely experiment with a computing environment. If something goes wrong (e.g. after installing misbehaving software or infecting the guest with a virus), one can easily switch back to a previous snapshot and avoid the need of frequent backups and restores. Any number of snapshots can be created, allowing you to travel back and forward in virtual machine time. You can delete snapshots while a VM is running to reclaim disk space.

Infrastructure consolidation. Virtualization can significantly reduce hardware and electricity costs. Most of the time, computers today only use a fraction of their potential power and run with low average system loads. A lot of hardware resources as well as electricity is thereby wasted. So, instead of running many such physical computers that are only partially used, one can pack many virtual machines onto a few powerful hosts and balance the loads between them.

What's New:
VirtualBox 5.1.14 (released 2017-01-17)
This is a maintenance release. The following items were fixed and/or added:

VMM: fixed emulation of certain instructions for 64-bit guests on 32-bit hosts
VMM: properly handle certain MSRs for 64-bit guests on ancient CPUs without VT-x support for MSR bitmaps (bug #13886)
GUI: fixed a crash with multimonitor setups under certain conditions
GUI: allow cloning of snapshots when the VM is running
NVMe: fixed compatibility with the Storage Performance Development Kit (SPDK, bug #16368)
VBoxSVC: fixed a crash under rare circumstances
VBoxManage: added a sanity check to modifymedium --resize to prevent users from resizing their hard disk from 1GB to 1PB (bug #16311)
Windows hosts: another fix for recent Windows 10 hosts
Linux hosts: Linux 4.10 fixes
Linux Additions: fixed protocol error during certain operations on shared folders (bug #8463)


Android Apps Riskier Than Ever: Report
Widespread use of unpatched open source code in the most popular Android apps distributed by Google Play has caused significant security vulnerabilities, suggests an American Consumer Institute report. Thirty-two percent -- or 105 apps out of 330 of the most popular apps in 16 categories sampled -- averaged 19 vulnerabilities per app, according to the report. Researchers found critical vulnerabilities in many common applications.
TuxMachines: Linux Foundation and Kernel Events, Developments
  • Top 10 Reasons to Join the Premier European Open Source Event of the Year [Ed: LF advertises this event where Microsoft is Diamond sponsor (highest level). LF is thoroughly compromised, controlled by Linux's opposition.]
  • AT&T Spark conference panel highlights open source road map and needs [Ed: Linux Foundation working for/with a surveillance company]

    The telecommunications industry has been around for 141 years, but the past five have been the most disruptive, according to the Linux Foundation's Arpit Joshipura.

    Joshipura, general manager, networking and orchestration, said on a panel during Monday's AT&T Spark conference in San Francisco that the next five years will be marked by deployment phases across open source communities and the industry as a whole.

    "Its (telecommunications) been disrupted in just the last five years and the speed of innovation has skyrocketed in just the last five years since open source came out," Joshipura said.

  • A Hitchhiker’s Guide to Deploying Hyperledger Fabric on Kubernetes

    Deploying a multi-component system like Hyperledger Fabric to production is challenging. Join us Wednesday, September 26, 2018 9:00 a.m. Pacific for an introductory webinar, presented by Alejandro (Sasha) Vicente Grabovetsky and Nicola Paoli of AID:Tech.

  • IDA: simplifying the complex task of allocating integers

    It is common for kernel code to generate unique integers for identifiers. When one plugs in a flash drive, it will show up as /dev/sdN; that N (a letter derived from a number) must be generated in the kernel, and it should not already be in use for another drive or unpleasant things will happen. One might think that generating such numbers would not be a difficult task, but that turns out not to be the case, especially in situations where many numbers must be tracked. The IDA (for "ID allocator", perhaps) API exists to handle this specialized task. In past kernels, it has managed to make the process of getting an unused number surprisingly complex; the 4.19 kernel has a new IDA API that simplifies things considerably.

    Why would the management of unique integer IDs be complex? It comes down to the usual problems of scalability and concurrency. The IDA code must be able to track potentially large numbers of identifiers in an efficient way; in particular, it must be able to find a free identifier within a given range quickly. In practice, that means using a radix tree (or, soon, an XArray) to track allocations. Managing such a data structure requires allocating memory, which may be difficult to do in the context where the ID is required. Concurrency must also be managed, in that two threads allocating or freeing IDs in the same space should not step on each other's toes.
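    As a concrete illustration of the semantics such an allocator provides, here is a toy user-space sketch in Python (the class and method names are hypothetical, and a simple set with a linear scan stands in for the kernel's radix-tree bookkeeping, memory preallocation, and locking): alloc() hands out the lowest unused ID in a range, and free() returns an ID to the pool.

```python
class IdAllocator:
    """Toy illustration of IDA-style ID allocation (not the kernel implementation)."""

    def __init__(self, lo=0, hi=2**31 - 1):
        self.lo, self.hi = lo, hi
        self.used = set()  # the kernel tracks this with a radix tree for scalability

    def alloc(self):
        """Return the lowest unused ID in [lo, hi]."""
        n = self.lo
        while n in self.used:  # linear probe; the real code walks a bitmap instead
            n += 1
        if n > self.hi:
            raise MemoryError("ID space exhausted")
        self.used.add(n)
        return n

    def free(self, n):
        """Release an ID so it can be handed out again."""
        self.used.discard(n)
```

    This is why a freed /dev/sdN slot can be reused by the next device that appears: the allocator always hands back the lowest free number.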

read more

Falco 0.12.1
Sysdig Falco is an open source behavioral activity monitoring agent with native support for containers. Falco lets you define highly granular rules to check for activities involving file and network activity, process execution, IPC, and much more, using a flexible syntax. Falco will notify you when these rules are violated. You can think of Falco as a mix of snort, ossec, and strace.
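As a sketch of what such a rule looks like, here is a hypothetical example in Falco's YAML rule syntax (the rule name and exact fields here are illustrative, not taken from the release):

```yaml
- rule: Shell Spawned in Container
  desc: Detect an interactive shell starting inside a container
  condition: container.id != host and proc.name in (bash, sh, zsh)
  output: "Shell spawned in container (user=%user.name container=%container.id)"
  priority: WARNING
```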
Post Status: WordPress and Blockchain

WordPress is one of the driving forces and great success stories of the open web to date. As an open source platform, it’s become a dominant CMS used by 30% of the web to publish content — on websites large and small.

WordPress has grown up in an era of evolving challenges: ushering in web standards, adapting for publishing and viewing on all device sizes; building for accessibility by all; establishing its place in the era of expansive and centralized social media platforms; and more.

Today, we’re faced with a new generation of technologies coming down the pipe, ready to disrupt the current ecosystem. These technologies include blockchain, artificial intelligence, augmented reality, the internet of things, and more I’m sure. It’s the first of these that is the focus of this post and the following conversation.

I was approached by David Lockie of Pragmatic to discuss how WordPress and blockchain technology may fit together, and how they may not. David and I have both been interested in the cryptocurrency and blockchain space over the past couple of years, and have over time encountered a lot of projects that aim to disrupt or enhance various elements of the web: from DNS to CMS.

David gathered a group of people for an initial online, open, honest conversation about how WordPress could be impacted, disrupted or take advantage of distributed ledger and blockchain technologies.

Examples include:

  • Blockchain platforms impacting people’s choice to use WordPress e.g. Steemit
  • Blockchain projects impacting people already using WordPress e.g. Basic Attention Token or Civil
  • Cryptocurrencies’ impact on eCommerce and the wider ecosystem, e.g. the Coinbase Commerce merchant gateway
  • What we can learn from blockchain projects’ governance systems and lessons learned
  • Ideas for improving security, integrations, etc
  • Various identity-based projects
  • New environments which may be used to run WordPress, such as decentralized web technologies e.g. Substratum or MaidSafe
  • Impact on the talent pool for WordPress professionals
  • General threats and opportunities
  • Discussion of anything new, interesting and relevant in the blockchain/cryptocurrency space
  • All of the above as it relates to open source and the web generally, outside of WordPress

Our aim for the initial conversation, as well as future conversations, is not to advocate for any existing project or to endorse blockchain as something WordPress should integrate. It's to explore what's out there now, how it could impact WordPress today and in the future, and down the road perhaps how WordPress might take advantage of potential opportunities. We are approaching this like a discovery phase — not to get overly excited, but to be informed. And we welcome participants in this conversation.

This first conversation included the following participants:

As I attempted to reiterate on the call, I believe it's important to address this topic with a skeptic's hat on. By no means do any of us think it's a great idea to go head first into trying to integrate blockchain technology into WordPress. The jury is still very much out on where, how, and even if blockchain brings significant advantages to web applications.

If you are interested in future discussions, we welcome you! There is currently a channel (#blockchain) in Post Status Slack where people can discuss, and we’ll also announce future video-conference discussions. We may make a more independent place to discuss, blog, etc, in the future depending on how these early conversations go.

We don’t know exactly where this conversation will go. It may fizzle out, or could evolve into a much broader community effort. The first thing to do, if you are interested to continue this conversation, is just follow along with future conversations, which will be posted here. If you would like to be on the next video call, please contact David or myself.

Oracle forges a Java microservices framework

Oracle has introduced Project Helidon, an open source microservices framework for Java.

Helidon features a collection of Java libraries for writing microservices that will run on a web core powered by the Netty network application framework. The project also includes Helidon Reactive WebServer, which provides a functional programming model to run on Netty. Cloud application development is supported, along with health checks, metrics, tracing, and fault tolerance.

Oracle said that although it is already possible to build Java EE (Enterprise Edition) microservices, it is better to have a framework designed for this purpose. The intent has been to build lightweight libraries that do not require an application server and can be used in Java SE (Standard Edition).

To read this article in full, please click here

Nagios Core monitoring software: lots of plugins, steep learning curve
Nagios Core is open source, free, and has good documentation, but it could use a streamlined install process, an updated Web interface, and better configuration options.
Monitoring Specialist - Ubisoft - Montréal, QC
Good knowledge of open source monitoring technologies like time-series DBs, metrics dashboards, real-time graphing, graph editors, ELK stack and Vector...
From Ubisoft - Tue, 28 Aug 2018 22:04:20 GMT - View all Montréal, QC jobs
Digital Experience Delivery Manager - West Bend Mutual Insurance - West Bend, WI
Understanding of UX; Must be familiar with evolving trends in digital UX, open source software and best practices. Summary of Responsibilities....
From West Bend Mutual Insurance - Mon, 10 Sep 2018 23:23:43 GMT - View all West Bend, WI jobs
Episode 155: Boris Karloff Meets Guacamole

This week Dave and Gunnar talk about jellyfish without jelly, voices without humans, source code without support, and diversity without discrimination.

  • Google Home, week 3
  • Jellyfish Chips Might Be Your Next Snack Obsession
  • Eating a QR Code May Save Your Life Someday
  • ‘Deep Voice’ Software Can Clone Anyone's Voice With Just 3.7 Seconds of Audio
  • France Proposes Software Security Liability For Manufacturers, Open Source As Support Ends
  • New Lawsuit Exposes Google's Desperation to Improve Diversity
  • $270,000 to close the gender and race pay gap among 89% of Google’s workers for 2017: The cost to close Google’s pay gap was surprisingly cheap. The question is, why is this correction necessary?
  • Related: Brotopia: Breaking Up the Boys' Club of Silicon Valley

Cutting Room Floor

  • Meat-themed backgammon set
  • Boris Karloff’s guacamole recipe
  • I Would Switch to This '80s Parody of Siri for the Hilariously Awful Synthesized Voice
  • Vim Clutch
  • What it would be like to have a 3rd (prosthetic) thumb

We Give Thanks

  • The D&G Show Slack Clubhouse for the discussion topics!
Episode 148: Our Friend Adrian

This week Dave talks with Adrian Keward about Ansible use in the public sector and lessons learned from public sector customers around the globe.

  • Welcome Adrian Keward!
  • British Army Talks DevOps at AnsibleFest London 2017
  • Ansible helps the US Army with Hurricane Harvey relief
  • Network Rail
  • Red Hat Government Symposium 2017
  • Red Hat Forum Tokyo
  • Around the world .. but not in 80 Days – Government approaches to Open Source around the world – Part One
  • Around the world .. but not in 80 Days – Government approaches to Open Source around the world – Part Two

We Give Thanks

  • Adrian Keward for being our special guest star!

Special Guest: Adrian Keward.
Episode 144: Mark Thacker: Product Management Zelig

This week Dave and Gunnar talk with Mark Thacker about technical product management and getting things done in open source and proprietary organizations.

  • Gopher
  • Sun Microsystems
  • Quantum Corporation
  • Help Wanted: We’re hiring Senior Product Manager, Storage
  • Strategic Partner Product Manager, Business

We Give Thanks

  • Mark Thacker for being our special guest star!

Special Guest: Mark Thacker.
Episode 143: Ostensibly Helpful, But Actually Dangerous

This week Dave and Gunnar talk about things that are ostensibly helpful, but actually dangerous: robotic tutors, voice modulators, autocomplete, and the hellscape of Android VPN apps.

  • Creeper sauce is back!
  • Gunnar can’t wait for the delivery of his Tom Bihn Tristar
  • Human vs. robot ping pong
  • Hushme Lets You Talk On The Phone Privately While Pretending To Be Bane
  • Researchers Issue Security Warnings About Several Popular Android VPN Apps
  • The browser setting everyone should turn off now
  • Is The Future Of Television Watching on Fast-Forward?
  • Network Television Stations Speed Up TV Shows to Fit in More Ads
  • Couch to 5K, RunKeeper, and the value of chains

Cutting Room Floor

  • Recreating Asteroids with open source and a laser projector
  • We can now 3D print Slinkys
  • Robot Solves Sudoku on Paper
  • AI Move Poster Generator
  • Create Hilarious Fake Inspirational Messages With InspiroBot
  • New paint colors invented by neural network
  • Metal band names invented by neural network
  • Neural networks can name guinea pigs
  • Princeton students after a freshman vs. sophomores snowball fight, 1893
  • A Virtual Machine, in Google Sheets

We Give Thanks

  • The D&G Show Slack Clubhouse for the discussion topics!
Episode 141: Download This Podcast Or We Shoot This CloudPet

  • Two million recordings of families imperiled by cloud-connected toys' crappy MongoDB
  • CloudPets' woes worsen: Webpages can turn kids' stuffed toys into creepy audio bugs
  • Exploit code has been open sourced
  • Speaking of remote control… Watch: The first pro football team where fans called the plays. Here's what happened.
  • Satan enters roll-your-own ransomware game
  • Ransomware for Dummies: Anyone Can Do It
  • Dave got published 4x: How agencies can take a page out of industry's open playbooks; How to get up and running with sweet Orange Pi; Internet-enable your microcontroller projects for under $6 with ESP8266; Book review: Up to no good with 'Raspberry Pi for Secret Agents'
  • Faking TV Remote Control with Paper and a Lighter

Cutting Room Floor

  • KISS is selling air guitar strings!
  • 45 minutes of Paul Stanley’s stage banter
  • This Gallery Of Vintage Clowns Proves They’ve Always Been Scary (see all 3 pages!)
Episode 133: Walking on a Cloud

We Give Thanks

  • Corey Sanders for being our special guest star!

Special Guest: Corey Sanders.
Episode 131: #131: Send In the Clowns

Cutting Room Floor

We Give Thanks
  • The D&G Show Slack Clubhouse for the discussion topics!

Episode 126: #126: Defense In Depth 2016

This week Dave and Josh Bressers pregame Red Hat Defense in Depth 2016!


We Give Thanks

Special Guest: Josh Bressers.
Episode 125: #125: Third Time’s the Charm

This week Dave and Gunnar talk about: DEFCON, United Airlines security case study, and a chaser of meeting hygiene.


Cutting Room Floor

Episode 122: #122: You’d Better Recognize

This week Dave and Gunnar talk about recognition: facial recognition, keystroke recognition, Dothraki recognition.


Cutting Room Floor

#121: Open Sourcing with Open_Sourcing

This week Dave and Gunnar talk with Maha Shaikh about open source, the nature of community, and life as an open source academic.


Maha says:

In a nutshell my work is, firstly, around making sense of how companies choose communities, what criteria they use and how they evaluate them.

Secondly, I look in great detail into how companies are learning to find new mechanisms of control to manage organizational forms like communities where traditional forms of obligation and redress inscribed into contracts are no longer possible. This also involves how companies have forced themselves to become comfortable with ‘less’ control.

Thirdly, how we can theorize and learn from online communities like open source ones to make sense of how ‘serious work’ is carried out in rather loud online settings where many voices create a cacophony somewhat unhelpful for creative work like coding.

We Give Thanks

Maha Shaikh for being our special guest star!

Special Guest: Maha Shaikh.
#120: One Tough Hippo

This week Dave talks with Paul Laurence, co-founder of Crunchy Data about Crunchy Certified PostgreSQL, its Common Criteria certification, why it works great on OpenShift, and integration with SELinux!


We Give Thanks

Special Guest: Paul Laurence.
Episode 113: #113: Badge of Open Source Honor

This week, Gunnar talks with Dr. David A. Wheeler and Emily Ratliff about the Linux Foundation’s Core Infrastructure Initiative and their new Badge program.


The Changelog 314: Kubernetes brings all the Cloud Natives to the yard

We talk with Dan Kohn, the Executive Director of the Cloud Native Computing Foundation to catch up with all things cloud native, the CNCF, and the world of Kubernetes.

Dan updated us on the growth of KubeCon / CloudNativeCon, the state of Cloud Native and where innovation is happening, serverless being on the rise, and Kubernetes dominating the enterprise.


  • Hired –  Salary and benefits upfront? Yes please. Our listeners get a double hiring bonus of $600! Or, refer a friend and get a check for $1,337 when they accept a job. On Hired companies send you offers with salary, benefits, and even equity upfront. You are in full control of the process. Learn more at
  • DigitalOcean –  DigitalOcean is simplicity at scale. Whether your business is running one virtual machine or ten thousand, DigitalOcean gets out of your way so your team can build, deploy, and scale faster and more efficiently. New accounts get $100 in credit to use in your first 60 days.
  • Algolia –  Our search partner. Algolia's full suite search APIs enable teams to develop unique search and discovery experiences across all platforms and devices. We're using Algolia to power our site search here at Get started for free and learn more at
  • GoCD –  GoCD is an on-premise open source continuous delivery server created by ThoughtWorks that lets you automate and streamline your build-test-release cycle for reliable, continuous delivery of your product.


Notes and Links

          Episode 110: #110: I’ll Be At The Bar

This week Dave and Gunnar talk about: IoT hacks, cyborg insects, and Dave’s local crime report.

“Let’s just put crime tape around it until we figure it out.”


Cutting Room Floor


We Give Thanks

  • The D&G Show Slack Clubhouse for the discussion topics!
          #109: I Blame Open Source

This week, Gunnar talks to Josh Bressers, Security Strategist for Red Hat Enterprise Linux, about how product security teams work, the difference between engineering and product management, and how he became the change he wanted to see in the world.

Special Guest: Josh Bressers.
          Episode 105: #105: DIY Dystopia

This week Dave and Gunnar talk about: DIY LPRs, Crowdsourced Panopticon, and Universal Key Escrow is a thing we’re talking about now.



Cutting Room Floor

We Give Thanks

  • The D&G Show Slack Clubhouse for the discussion topics!
  • James Kirkland for the ALPR tip!
          #103: Please Engage with Our Brand

This week Dave and Gunnar talk about partnerships: D&G + Nextgov, Red Hat + Microsoft, Marriott + Starwood, New Haven police + your stuff.

please engage with our brand

Cutting Room Floor

We Give Thanks

  • Our Inception friends at Nextgov
  • The D&G Show Slack Clubhouse for the discussion topics!
          Android Apps Riskier Than Ever: Report
Widespread use of unpatched open source code in the most popular Android apps distributed by Google Play has caused significant security vulnerabilities, suggests an American Consumer Institute report. Thirty-two percent -- or 105 apps out of 330 of the most popular apps in 16 categories sampled -- averaged 19 vulnerabilities per app, according to the report. Researchers found critical vulnerabilities in many common applications.
          Episode 91: #91: The Truck Factor

This week Dave and Gunnar talk about: Kommisars in the board room, Akron Police Department’s compulsory feeling of safety, more warm fuzzies from OPM, and more Yahoo! news than you ever thought possible.



Cutting Room Floor


We Give Thanks

  • Thanks to Red Hat Middleware Solutions Architect David Murphy for finding our episode 89 Easter egg!
  • The whole dgshow Slack crew
          Episode 87: #87: Always Cat6, Never Cat5

This week Dave and Gunnar talk about: airlines meeting security researchers, Firefox meeting advertisers, FitBit meeting dogs, Roomba meeting Orwell.

Cutting Room Floor

We Give Thanks

  • Robin Price for the VIM tip!

          Episode 86: #86: Drone Tortoise

This week Dave and Gunnar talk about: optimizing experiences! Drones, dressing rooms, airplane seats, ads, and software licensing.


Cutting Room Floor

We Give Thanks

  • Coffee correspondent Uzoma Nwosu for the Keurig update!
  • Martin Preisler for the great SCAP work!
  • Mrs. Egts for the career conundrum!
          Episode 85: #85: In Control

This week Dave and Gunnar talk about: Controlling the mind, the body, and your drones.  Also: RHEL 6.7 beta, JBoss EAP 6.4, and controlling the enterprise with DevOpsDays Austin.


Cutting Room Floor

          Episode 81: #81: Key Exchange with Robot Vomit

This week Dave and Gunnar talk about the first round of Thunderdome, product design success and failure, RHEL 7.1, urban dashboards, and the Cumulative Threat.


Cutting Room Floor

We Give Thanks

          Episode 80: #80: Security Thunderdome

This week, Dave and Gunnar open the Security Thunderdome, and talk about red-teaming large bureaucracies.

In other news…
Death Guild Thunderdome

Cutting Room Floor

          Episode 78: #78: Project Jellyfish

This week Dave and Gunnar talk with Nirmal Mehta about Jellyfish, Project Jellyfish, and how to get a company to open source a project.


          Episode 74: #74: NEST 9000

This week, Dave and Gunnar talk about your singular uniqueness on the web, miniaturizing almost everything, and Dell IT using OpenShift.


Cutting Room Floor


We Give Thanks

          Episode 68: #68: Not my circus, not my monkey.

This week, Dave and Gunnar talk about: computers that think, computers that think they’re thinking, and people that think computers are people.


Cutting Room Floor

We Give Thanks

          Episode 67: #67: “Your encryption is useless, Charlie Brown”

This week Dave and Gunnar talk about encrypting everything, why encryption doesn’t matter, and why we are all Charlie Brown now.

Lucy, in the role of a cloud server provider. Or telco. Or regulator.

Cutting Room Floor

          Episode 66: #66: Old School

This week, Dave and Gunnar talk about old-school network security, old-school email addresses, and an old-school partner with a new-school cloud broker.



Cutting Room Floor

          Episode 64: #64: A Pig We’ll Miss

This week, Dave and Gunnar talk about FBI Whining (aka “Clipper II”), Malvertising, and Richard Branson’s alarming new PTO policy.


Cutting Room Floor

We Give Thanks

          Episode 61: #61: Monkey Business

This week, Dave and Gunnar talk about: monkey selfies, elephant-based coffee, and yak shaving with OpenStack.

Licensed under Public domain via Wikimedia Commons.

Cutting Room Floor

We Give Thanks

          Episode 60: #60: Faraday Pajamas

This week, Dave and Gunnar talk about vampire plants, spider oaks, and ultrasonic potatoes.

No radiation gets in, only creepy vibes get out.

Cutting Room Floor

  • Google Mesa is crazytown
    • From O’Reilly Radar’s Four short links: “Paper by Googlers on the database holding G’s ad data. Trillions of rows, petabytes of data, point queries with 99th percentile latency in the hundreds of milliseconds and overall query throughput of trillions of rows fetched per day, continuous updates on the order of millions of rows updated per second, strong consistency and repeatable query results even if a query involves multiple datacenters, and no SPOF. (via Greg Linden)”

We Give Thanks

          Episode 59: #59: Alphabet Soup

This week, Dave and Gunnar talk about: GSA USDS 18F OMB OSS LOL BBQ & RHEL SSG, RHEL VPAT, KVM, EAP GA, IBM+BRMS.


Cutting Room Floor

          Episode 58: #58: You Will Know Dave’s Vacation By the Trail of Destruction

This week, Dave and Gunnar talk about Barthelona, Gaudi, Toledo, Detroit, the Stasi, and why cloud providers can’t have nice things.


Cutting Room Floor

We Give Thanks

          Episode 56: #56: Surveillance and PSYOPs

This week, Dave and Gunnar talk about: Government Surveillance, Corporate Surveillance over land, sea and air, Personal Surveillance, and a vague but pervasive sense that you’re being monitored at this very moment.

RSS Icon Subscribe via RSS or iTunes.


Cutting Room Floor

We Give Thanks

          Episode 54: #54: Dockah Dockah Dockah

This week Dave and Gunnar talk about containers, Project Atomic, Containers, RHEL 7, Containers, RHEV 3.4, and DockerDockah.

RSS Icon Subscribe via RSS or iTunes.


Cutting Room Floor

We Give Thanks

          Episode 52: #52: Sixteen Tons

This week, Dave and Gunnar talk about Apple’s lock-in empire, autonomous vehicles, and improvisational theater as a way of life.

Ernie Ford

Cutting Room Floor

We Give Thanks

          Episode 51: #51: A Visit with the Doctor

This week Dave and Gunnar talk with special guest star and elder statesman of open source in security and government, Dr. David A. Wheeler about Heartbleed, security reviews, and why security vulnerabilities are like human organs.

Dr. David A. Wheeler

We Give Thanks

  • Dr. David A. Wheeler for guest starring and everything he’s done to advance the cause of open source in government.
  • Summer Maynard and Robin Price for giving us ideas to talk about
  • Paul Rotilie for helping with the FLOSS numbers database
          Red Hat Summit 2014: Docker and Cloud Brokers with Nirmal Mehta

Dave and Gunnar talk with one of their favorite partners: Nirmal Mehta of Booz Allen Hamilton, Red Hat Innovation Award winner in 2013. It’s a nerdfest. And a Dockerfest.


          Episode 47: #47: Out-of-Order Execution


This week Dave and Gunnar talk about home storage, open source 5th columnists at MSFT, the Amazon unicorn factory, Gunnar’s new job, new workflow, and Georgios Papanikolaou, a monthly visitor of guinea pigs.

RSS Icon Subscribe via RSS or iTunes.

Image courtesy of @feitclub

Cutting Room Floor

We Give Thanks

  • Dave Sirrine for letting us know about ScratchJr
  • Bob Kozdemba for helping spread the word about free OpenShift for non-profit, open source, and educational institutions
          Episode 49: #49: A Word from Our Sponsor

This week, Dave and Gunnar talk with Shawn Wells about the origins of the SCAP Security Guide, community building with the government and integrators, and future plans for the OpenStack Security Guide.

RSS Icon Subscribe via RSS or iTunes.


Breaking News: SCAP Security Guide is shipping in RHEL6.6!

We Give Thanks

  • Shawn Wells for leading the SSG community and stopping by to speak with us!
          Episode 48: #48: Tiny Circuits, Big Factory

This week Dave and Gunnar talk with special guest correspondent Lauren Egts about her interview with Ken Burns of TinyCircuits for Open Source Hardware Week!

RSS Icon Subscribe via RSS or iTunes.

Lauren Egts and Ken Burns

We Give Thanks

          Episode 46: #46: Prisencolinensinainciusol

This week, Dave and Gunnar talk about: backups, media players, Amazon GovCloud, new JBoss releases, and Gilligan’s Island.

RSS Icon Subscribe via RSS or iTunes.


Cutting Room Floor

We Give Thanks

  • Adam Clater for keeping us up to speed on Japanese culture.
          Episode 44: #44: Glad to be here

This week, Dave and Gunnar talk about affordances, partnerships, and a bunch of reasons Red Hat is a great place to work.

RSS Icon Subscribe via RSS or iTunes.

Sophisticated multi-touch interfaces in vehicles have consequences.

Cutting Room Floor

We Give Thanks

          Episode 42: #42: Topic Roulette

This week, Dave and Gunnar talk about: cleaning out the attic, transparency in companies, new RHEV release, and Packing for Mars by Mary Roach.

RSS Icon Subscribe via RSS or iTunes.

That’s Dave on the right.

Cutting Room Floor

We Give Thanks

          Episode 39: #39: A Scratch Programmer We Like

This week Dave and Gunnar celebrate Youth in Open Source Week, and talk with Dave’s favorite open source developer: his daughter, Lauren.

RSS Icon Subscribe via RSS or iTunes.


We Give Thanks

Lauren’s adorable Valentine to Dave.
          Episode 35: #35: Say My Name

This week, Dave and Gunnar talk about: US Government bitcoins, skeumorphic bitcoins, TSA coin-flips, twitter drops a dime on the US government, OSS payload this Federal IT award season, our $.02 on RHEL 6.5 and Fedora 20.

RSS Icon Subscribe via RSS or iTunes.


Cutting Room Floor

We Give Thanks