
          CIO - Senior Java Developer - Experis - Lowell, MA      Cache   Translate Page      
Hands-on experience with programming languages such as Java / Python etc. Familiarity with Microservices, AWS Web Services, Apigee and a range of open source...
From Experis - Thu, 01 Nov 2018 23:43:10 GMT - View all Lowell, MA jobs
          IBM-Red Hat: Get Open Source Training to Prepare for Acquisition      Cache   Translate Page      
With IBM's Red Hat acquisition announcement still fresh, now's the time for the channel to be sure its people are trained in Linux and open source.
          Remote Lead Kernel Engineer      Cache   Translate Page      
A cloud computing company is filling a position for a Remote Lead Kernel Engineer. Individual must be able to fulfill the following responsibilities:
• Acting as the hands-on technical leader for the kernel engineering team
• Developing and deploying changes and new features to the kernel
• Working with security to evaluate and mitigate new kernel-level threats
Required Skills:
• Experience developing for the Linux kernel, either professionally or through strong open source contributions
• Familiarity with the Linux kernel, including but not limited to the network stack, filesystems, scheduler, etc.
• Ability to work with stakeholders within engineering, security, and product
• Willingness to learn and tenacity (we have tracked down CPU bugs!)
• Strong proficiency in C and comfort with x86 architecture
• An eye for correctness and performance, in that order
          LibreOffice 6.1.3 and 6.0.7 are available for Windows, macOS and Linux: a look at the updates to the open source alternative to Microsoft Office      Cache   Translate Page      
LibreOffice 6.1.3 and 6.0.7 are available for Windows, macOS and Linux
A look at the updates to the open source alternative to Microsoft Office

"The Document Foundation" (TDF) recently announced the official release of LibreOffice 6.0.7 and LibreOffice 6.1.3, two new minor updates to the open source alternative to Microsoft Office that improve the quality and stability of the previous versions and include a security patch.


LibreOffice 6.1.3

This version of LibreOffice...
          It’s Rover Time!      Cache   Translate Page      
In this blog post I'm going to talk about a "mission" we are in the process of completing. We are building, drum roll please… the Mars Rover, yep you heard me. An open source build developed by @JPL (Jet Propulsion Laboratory Open Source Rover). I came across the build one day when I received a newsletter […]
          NGINX Open Source 1.14.1-0      Cache   Translate Page      
Released on Nov 06, 2018
          VMware to Acquire Heptio for Enterprise Kubernetes Expertise      Cache   Translate Page      

Augmenting its own growing practice in cloud native computing, VMware is in the process of acquiring Heptio, an enterprise-focused consulting firm founded by two of the initial developers of the open source Kubernetes container orchestration system. VMware announced the pending purchase at the company’s VMworld 2018 Europe conference Wednesday. Terms of the deal were not disclosed. VMware […]

The post VMware to Acquire Heptio for Enterprise Kubernetes Expertise appeared first on The New Stack.


          5 Steps To Resolve “Error Establishing a Database Connection” In WordPress      Cache   Translate Page      

WordPress is an easy-to-use, open source CMS. But there is a possibility that you may face some WordPress errors while using it. If you are getting an error establishing a database connection, you need to resolve it immediately, otherwise your website will not be visible online. In this blog, we will describe […]

The post 5 Steps To Resolve “Error Establishing a Database Connection” In WordPress appeared first on 2018.


          Happy 15th Birthday, Fedora Linux!      Cache   Translate Page      
Fedora is the best desktop Linux distribution for many reasons. Not only is it fast and reliable, but it is constantly kept up to date with fairly bleeding edge packages. Not to mention, it uses the greatest desktop environment, GNOME, by default. Most importantly, it respects and follows open source ideology. It is a pure Linux and FOSS experience that is an absolute joy to use. It's no wonder Linus Torvalds -- the father of Linux -- chooses it. With all of that said, Fedora didn't get great overnight. It took years of evolution to become the exceptional operating system… [Continue Reading]
          ReactOS 0.4.10 released      Cache   Translate Page      
The headline feature for 0.4.10 would have to be ReactOS' ability to now boot from a BTRFS formatted drive. The work enabling this was part of this year's Google Summer of Code with student developer Victor Perevertkin. While the actual filesystem driver itself is from the WinBtrfs project by Mark Harmstone, much of Victor's work was in filling out the bits and pieces of ReactOS that the driver expected to interact with. The filesystem stack in ReactOS is arguably one of the less mature components by simple dint of there being so few open source NT filesystem drivers to test against. Those that the project uses internally have all gone through enough iterations that gaps in ReactOS are worked around. WinBtrfs on the other hand came with no such baggage to its history and instead made full use of the documented NT filesystem driver API.

Seems like another solid release. While ReactOS always feels a bit like chasing an unobtainable goal, I'm still incredibly impressed by their work, and at this point, it does seem like it can serve quite a few basic needs through running actual Win32 applications.


          drivers/net/fddi/defza.h:238:1: warning: "/*" within comment      Cache   Translate Page      
kbuild test robot writes: (Summary)
    */
 239 #define FZA_RING_CMD_MASK	0x7fffffff
 240 #define FZA_RING_CMD_NOP	0x00000000	/* nop */
 241 #define FZA_RING_CMD_INIT	0x00000001	/* initialize */
 242 #define FZA_RING_CMD_MODCAM	0x00000002	/* modify CAM */
 243 #define FZA_RING_CMD_PARAM	0x00000003	/* set system parameters */
 244 #define FZA_RING_CMD_MODPROM	0x00000004	/* modify promiscuous mode */
 245 #define FZA_RING_CMD_SETCHAR	0x00000005	/* set link characteristics */
 246 #define FZA_RING_CMD_RDCNTR	0x00000006	/* read counters */
 247 #define FZA_RING_CMD_STATUS	0x00000007	/* get link status */
 248 #define FZA_RING_CMD_RDCAM	0x00000008	/* read CAM */
 249
---
0-DAY kernel test infrastructure            Open Source Technology Center
https://lists.01.org/pipermail/kbuild-all                   Intel Corporation
          Time for Net Giants to Pay Fairly for the Open Source on Which They Depend      Cache   Translate Page      

Net giants depend on open source: so where's the gratitude?

Licensing lies at the heart of open source. Arguably, free software began with the publication of the GNU GPL in 1989. And since then, open-source projects are defined as such by virtue of the licenses they adopt and whether the latter meet the Open Source Definition. The continuing importance of licensing is shown by the periodic flame wars that erupt in this area. Recently, there have been two such flarings of strong feelings, both of which raise important issues.

First, we had the incident with Lerna, "a tool for managing JavaScript projects with multiple packages". It came about as a result of the way the US Immigration and Customs Enforcement (ICE) has been separating families and holding children in cage-like cells. The Lerna core team was appalled by this behavior and wished to do something concrete in response. As a result, it added an extra clause to the MIT license, which forbade a list of companies, including Microsoft, Palantir, Amazon, Motorola and Dell, from being permitted to use the code:

For the companies that are known supporters of ICE: Lerna will no longer be licensed as MIT for you. You will receive no licensing rights and any use of Lerna will be considered theft. You will not be able to pay for a license, the only way that it is going to change is by you publicly tearing up your contracts with ICE.

Many sympathized with the feelings about the actions of ICE and the intent of the license change. However, many also pointed out that such a move went against the core principles of both free software and open source. Freedom 0 of the Free Software Definition is "The freedom to run the program as you wish, for any purpose." Similarly, the Open Source Definition requires "No Discrimination Against Persons or Groups" and "No Discrimination Against Fields of Endeavor". The situation is clear-cut, and it didn't take long for the Lerna team to realize its error and revert the change:


          SoftoRooM -> .NET Reflector 10.1      Cache   Translate Page      
PRYANIK:
Your software forum: SoftoRooM
Red-Gate .NET Reflector

Your software forum

description (ru): .NET Reflector is a utility for Microsoft .NET that combines a class browser, a static analyzer and a decompiler.
The .NET Reflector program can be used to navigate, search and analyze the contents of .NET components and assemblies, and to translate binary data into a human-readable form. Reflector can decompile .NET assemblies into C#, Visual Basic .NET and MSIL. Reflector also includes a Call Tree that can be used to drill down into IL methods to determine which methods they call. The program displays metadata, resources and XML documentation. .NET Reflector can be used by .NET developers to understand the inner workings of code libraries, to visualize the differences between two versions of an assembly, and to see how the various parts of a .NET application interact with each other.

.NET Reflector can be used to track down performance problems and to find bugs. It can also be used to find assembly dependencies. The program can be used to effectively convert code between C# and VB.NET.


description (en) .NET Reflector is a class browser, decompiler and static analyzer for software created with .NET Framework, originally written by Lutz Roeder. MSDN Magazine named it as one of the Ten Must-Have utilities for developers, and Scott Hanselman listed it as part of his "Big Ten Life and Work-Changing Utilities".
.NET Reflector was the first CLI assembly browser. It can be used to inspect, navigate, search, analyze, and browse the contents of a CLI component such as an assembly and translates the binary information to a human-readable form. By default Reflector allows decompilation of CLI assemblies into C#, Visual Basic .NET, C++/CLI and Common Intermediate Language and F# (alpha version). Reflector also includes a "Call Tree" that can be used to drill down into intermediate language methods to see what other methods they call. It will show the metadata, resources and XML documentation. .NET Reflector can be used by .NET developers to understand the inner workings of code libraries, to show the differences between two versions of the same assembly, and how the various parts of a CLI application interact with each other. There are a large number of add-ins for Reflector.

.NET Reflector keygen can be used to track down performance problems and bugs, browse classes, and maintain or help become familiar with code bases. It can also be used to find assembly dependencies, and even windows DLL dependencies, by using the Analyzer option. There is a call tree and inheritance-browser. It will pick up the same documentation or comments that are stored in xml files alongside their associated assemblies that are used to drive IntelliSense inside Visual Studio. It is even possible to cross-navigate related documentation (xmldoc), searching for specific types, members and references. It can be used to effectively convert source code between C# and Visual Basic.

.NET Reflector crack has been designed to host add-ins to extend its functionality, many of which are open source. Some of these add-ins provide other languages that can be disassembled too, such as PowerShell, Delphi and MC++. Others analyze assemblies in different ways, providing quality metrics, sequence diagrams, class diagrams, dependency structure matrices or dependency graphs. It is possible to use add-ins to search text, save disassembled code to disk, export an assembly to XMI/UML, compare different versions, or to search code. Other add-ins allow debugging processes. Some add-ins are designed to facilitate testing by creating stubs and wrappers.


Interface languages: En
OS: Windows 10/8/7 (32bit-64bit)
Homepage: www.red-gate.com
free download .NET Reflector 10.1 + crack (keygen) ~ 10 Mb
ver. 10.1.0.1125
Hidden text!
Details on the forum...

.NET Reflector 10.1
source: www.softoroom.net

          Machine learning Python hacks, creepy Linux commands, Thelio, Podman, and more      Cache   Translate Page      
I'm filling in again for this week's top 10 while Rikki Endsley is recovering from LISA 18 held last week in Nashville, Tennessee. We're starting to gather articles for our 4th annual Open Source ... - Source: opensource.com
          VMware Acquires Heptio, Mining Bitcoin Requires More Energy Than Mining Gold, Fedora Turns 15, Microsoft's New Linux Distros and ReactOS 0.4.10 Released      Cache   Translate Page      

News briefs for November 6, 2018.

VMware has acquired Heptio, which was founded by Joe Beda and Craig McLuckie, two of the creators of Kubernetes. TechCrunch reports that the terms of the deal aren't being disclosed and that "this is a signal of the big bet that VMware is taking on Kubernetes, and the belief that it will become an increasing cornerstone in how enterprises run their businesses." The post also notes that this acquisition is "also another endorsement of the ongoing rise of open source and its role in cloud architectures".

The energy needed to mine one dollar's worth of bitcoin is reported to be more than double the energy required to mine the same amount of gold, copper or platinum. The Guardian reports on recent research from the Oak Ridge Institute in Cincinnati, Ohio, that "one dollar's worth of bitcoin takes about 17 megajoules of energy to mine...compared with four, five and seven megajoules for copper, gold and platinum".
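A quick back-of-the-envelope check of those figures (a sketch in Python, using only the numbers quoted from the report):

```python
# Megajoules of energy per US dollar of value mined, as quoted from the study.
energy_mj_per_dollar = {"bitcoin": 17, "copper": 4, "gold": 5, "platinum": 7}

for metal in ("copper", "gold", "platinum"):
    ratio = energy_mj_per_dollar["bitcoin"] / energy_mj_per_dollar[metal]
    print(f"Bitcoin vs {metal}: {ratio:.1f}x the energy per dollar mined")
# Even against platinum, the most energy-hungry of the three metals, bitcoin comes out
# at roughly 2.4x, consistent with the "more than double" claim.
```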

Happy 15th birthday to Fedora! Fifteen years ago today, November 6, 2003, Fedora Core 1 was released. See Fedora Magazine's post for a look back at the Fedora Project's beginnings.

Microsoft announced the availability of two new Linux distros for Windows Subsystem for Linux, which will coincide with the Windows 10 1809 release. ZDNet reports that the Debian-based Linux distribution WLinux is available from the Microsoft Store for $9.99 currently (normally it's $19.99). Also, OpenSUSE 15 and SLES 15 are now available from the Microsoft Store as well.

ReactOS 0.4.10 was released today. The main new feature is "ReactOS' ability to now boot from a BTRFS formatted drive". See the official ChangeLog for more details.


          Open Source Machine Learning Tool Could Help Choose Cancer Drugs      Cache   Translate Page      
November 6, 2018, Atlanta, GA. A new decision support tool could help clinicians choose the right chemotherapy drugs.
          VMware acquires Heptio      Cache   Translate Page      

Heptio is the startup founded by two of the co-founders of Kubernetes. We had been working on getting some time planned with CEO Craig McLuckie and CTO Joe Beda, but both were "unavailable" to speak. This acquisition might be one of the reasons why.

From Ingrid Lunden’s coverage on TechCrunch:

VMware acquires Heptio — a startup out of Seattle that was co-founded by Joe Beda and Craig McLuckie (two of the three people who co-created Kubernetes back at Google in 2014)

Beda and McLuckie and their team will all be joining VMware in the transaction.

More details can be found on the Heptio blog announcement.

As for the terms of the deal, they “are not being disclosed.” For reference, when Heptio last raised money ($25M Series B in 2017) it was valued at $117M post-money. So, I’m estimating this deal to be in the $300M-$500M range.

To Craig and Joe — first, congrats. Second, we’re still interested in talking with you. Maybe now is a better time and the details you couldn’t share before can now be more freely shared. This is an open invite, to you both!

Congrats also to the team at Heptio for all the hard work you're doing to advance Kubernetes and cloud orchestration! What a ride the past few weeks have been for commercial open source in this recent wave of acquisitions.


          This Company Wants to Make the Internet Load Faster      Cache   Translate Page      

The internet went down on February 28, 2017. Or at least that's how it seemed to some users as sites and apps like Slack and Medium went offline or malfunctioned for four hours. What actually happened is that Amazon's enormously popular S3 cloud storage service experienced an outage, affecting everything that depended on it.

It was a reminder of the risks when too much of the internet relies on a single service. Amazon gives customers the option of storing their data in different "availability regions" around the world, and within those regions it has multiple data centers in case something goes wrong. But last year's outage knocked out S3 in the entire North Virginia region. Customers could of course use other regions, or other clouds, as backups, but that involves extra work, including possibly managing accounts with multiple cloud providers.

A San Francisco-based startup called Netlify wants to make it easier to avoid these sorts of outages by automatically distributing its customers’ content to multiple cloud computing providers. Users don't need accounts with Amazon, Microsoft Azure, Rackspace, or any other cloud company―Netlify maintains relationships with those services. You just sign up for Netlify, and it handles the rest.

You can think of the company's core service as a cross between traditional web hosting providers and content delivery networks, like Akamai, that cache content on servers around the world to speed up websites and apps. Netlify has already attracted some big tech names as customers, often to host websites related to open source projects. For example, Google uses Netlify for the website for its infrastructure management tool Kubernetes, and Facebook uses the service for its programming framework React. But Netlify founders Christian Bach and Mathias Biilmann don't want to just be middlemen for cloud hosting. They want to fundamentally change how web applications are built, and put Netlify at the center.

Traditionally, web applications have run mostly on servers. The applications run their code in the cloud, or in a company's own data center, assemble a web page based on the results, and send the result to your browser. But as browsers have grown more sophisticated, web developers have begun shifting computing workloads to the browser. Today, browser-based apps like Google Docs or Facebook feel like desktop applications. Netlify aims to make it easier to build, publish, and maintain these types of sites.

Back to the Static Future

Markus Seyfferth, the COO of Smashing Media, was converted to Netlify's vision when he saw Biilmann speak at a conference in 2016. Smashing Media, which publishes the web design and development publication Smashing Magazine and organizes the Smashing Conference, was looking to change the way it managed its roughly 3,200-page website.

Since its inception in 2006, Smashing Magazine had been powered by WordPress, the content management system that runs about 32 percent of the web according to technology survey outfit W3Techs, along with e-commerce tools to handle sales of books and conference tickets and a third application for managing its job listing site. Using three different systems was unwieldy, and the company's servers struggled to handle the site’s traffic, so Seyfferth was looking for a new approach.

When you write or edit a blog post in WordPress or similar applications, the software stores your content in a database. When someone visits your site, the server runs WordPress to pull the latest version from the database, along with any comments that have been posted, and assembles it into a page that it sends to the browser.

Building pages on the fly like this ensures that users always see the most recent version of a page, but it's slower than serving prebuilt "static" pages that have been generated in advance. And when lots of people are trying to visit a site at the same time, servers can bog down trying to build pages on the fly for each visitor, which can lead to outages. That leads companies to buy more servers than they typically need; what’s more, servers can still be overloaded at times.
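As a rough illustration of the prebuilt approach described above, here is a minimal sketch of a static site generator in Python (the file layout and template are hypothetical): every page is rendered once, ahead of time, so nothing has to be assembled per visitor.

```python
from pathlib import Path

# Hypothetical single-file template; a real generator would use a proper template engine.
TEMPLATE = "<html><head><title>{title}</title></head><body><h1>{title}</h1>{body}</body></html>"

def build_site(content_dir: str = "content", out_dir: str = "public") -> None:
    """Render every .txt post once, at build time, into plain HTML files."""
    out = Path(out_dir)
    out.mkdir(exist_ok=True)
    for post in Path(content_dir).glob("*.txt"):
        title = post.stem.replace("-", " ").title()
        body = "".join(f"<p>{line}</p>" for line in post.read_text().splitlines() if line.strip())
        (out / f"{post.stem}.html").write_text(TEMPLATE.format(title=title, body=body))

if __name__ == "__main__":
    build_site()  # any static file server or CDN can now serve public/ directly, no database involved
```

Serving the resulting files involves no database query and no per-request page assembly, which is exactly why static pages hold up better under traffic spikes.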

"When we had a new product on the shop, it needed only a couple hundred orders in one hour and the shop would go down," Seyfferth says.

WordPress and similar applications try to make things faster and more efficient by "caching" content to reduce how often the software has to query the database, but it's still not as fast as serving static content.

Static content is also more secure. Using WordPress or similar content managers exposes at least two "attack surfaces" for hackers: the server itself, and the content management software. By removing the content management layer, and simply serving static content, the overall "attack surface" shrinks, meaning hackers have fewer ways to exploit software.

The security and performance advantages of static websites have made them increasingly popular with software developers in recent years, first for personal blogs and now for the websites for popular open source projects.

In a way, these static sites are a throwback to the early days of the web, when practically all content was static. Web developers updated pages manually and uploaded pre-built pages to the web. But the rise of blogs and other interactive websites in the early 2000s popularized server-side applications that made it possible for non-technical users to add or edit content, without special software. The same software also allowed readers to add comments or contribute content directly to a site.

At Smashing Media, Seyfferth didn't initially think static was an option. The company needed interactive features, to accept comments, process credit cards, and allow users to post job listings. So Netlify built several new features into its platform to make a primarily static approach more viable for Smashing Media.

The Glue in the Cloud

Biilmann, a native of Denmark, spotted the trend back to static sites while running a content management startup in San Francisco, and started a predecessor to Netlify called Bit Balloon in 2013. He invited Bach, his childhood best friend who was then working as an executive at a creative services agency in Denmark, to join him in 2015 and Netlify was born.

Initially, Netlify focused on hosting static sites. The company quickly attracted high-profile open source users, but Biilmann and Bach wanted Netlify to be more than just another web-hosting company; they sought to make static sites viable for interactive websites.

Open source programming frameworks have made it easier to build sophisticated applications in the browser. And there's a growing ecosystem of services like Stripe for payments, Auth0 for user authentication, and Amazon Lambda for running small chunks of custom code that make it possible to outsource many interactive features to the cloud. But these types of services can be hard to use with static sites because some sort of server-side application is often needed to act as a middleman between the cloud and the browser.

Biilmann and Bach want Netlify to be that middleman, or as they put it, the "glue" between disparate cloud computing services. For example, they built an e-commerce feature for Smashing Media, now available to all Netlify customers, that integrates with Stripe. It also offers tools for managing code that runs on Lambda.
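Netlify's functions feature is built around code running on AWS Lambda (typically written in JavaScript), but the "glue" pattern itself is easy to sketch. The following hypothetical Lambda-style handler, written in Python purely for illustration (it assumes the official stripe package and a STRIPE_SECRET_KEY environment variable), shows the idea: a tiny cloud function stands between the static front end and the payment provider so the secret key never reaches the browser.

```python
import json
import os

import stripe  # assumes the official Stripe Python library is installed

stripe.api_key = os.environ["STRIPE_SECRET_KEY"]  # kept server-side, never shipped to the browser

def handler(event, context):
    """Lambda-style entry point: the static site POSTs a cart total, the function creates the payment."""
    body = json.loads(event.get("body") or "{}")
    intent = stripe.PaymentIntent.create(
        amount=int(body["amount"]),            # amount in the smallest currency unit, e.g. cents
        currency=body.get("currency", "usd"),
    )
    # The browser only ever receives the client secret it needs to confirm the payment.
    return {"statusCode": 200, "body": json.dumps({"client_secret": intent.client_secret})}
```

The static pages stay static; only this small, isolated function runs on demand.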

Smashing Media switched to Netlify about a year ago, and Seyfferth says it's been a success. It's much cheaper and more stable than traditional web application hosting. "Now the site pretty much always stays up no matter how many users," he says. "We'd never want to look back to what we were using before."

There are still some downsides. WordPress makes it easy for non-technical users to add, edit, and manage content. Static site software tends to be less sophisticated and harder to use. Netlify is trying to address that with its own open source static content management interface called Netlify CMS. But it's still rough.

Seyfferth says for many publications, it makes more sense to stick with WordPress for now because Netlify can still be challenging for non-technical users.

And while Netlify is a developer darling today, it's possible that major cloud providers could replicate some of its features. Google already offers a service called Firebase Hosting that offers some similar functionality.

For now, though, Bach and Biilmann say they're just focused on making their serverless vision practical for more companies. The more people who come around to this new approach, the more opportunities there are not just for Netlify, but for the entire new ecosystem.

          6 Reasons to Love Smaller CMS      Cache   Translate Page      

WordPress, Joomla, and Drupal: these are perhaps the three biggest names in the world of consumer CMS. They are known, they are loved, they are behemoths for a good reason. There's no escaping them. We are all WordPress, now.

No, but seriously, there’s almost nothing they can’t do, with the judicious application of plugins, themes, and coding knowledge. Yeah, believe it or not, I’m not here to rag on them. Kudos to their developers. I’m just going to sit over here and show some love to the smaller CMS options for a bit.

If I were given to fits of poetry, I might write a sonnet about them or something. Alas, Lord Byron I am not; I am not nearly so rich, I have no desire to invade Greece, I do not own a bear. Instead, you’ll be getting this article detailing some of the wonderful things about smaller CMS, such as:

1. Clear Direction and Purpose

The operating system known as Unix was designed on this principle: one program should do one thing, and do it well. This principle was later adopted by other systems, such as macOS, and has been the cornerstone of some of the best software design over the years.

In many small CMS options, we can see that principle at work. That’s not to say, though, that a single-purpose CMS can’t be sophisticated or even rather complex. It’s just that dedication to a single goal encourages a form of excellence that more generalist systems struggle to achieve.

Take Ghost, for example. As of its 2.0 release Ghost has become, in my opinion, one of the best platforms for a pure-blogging experience. But even though it’s a comparatively small CMS, I wouldn’t call what they’ve done with it “simple”.


2. Lightweight, Un-bloated Code

Smaller CMS are just that: smaller. There’s a very direct correlation between the amount of code in the software, and how quickly it runs. When you only need a simple site, there’s no sense in having a CMS with loads of extra features that you’ll never touch. That’s just wasted server space.

Even so, these small CMS can be very flexible. Here we look to Grav: its goal is to be a simple, developer-friendly, flat file CMS. Within that purpose, there is vast potential. Grav can be a blog, a knowledge base, or a simple brochure site, and compete with many of the big CMS options, while maintaining a (current) core download size of 4.8MB.


3. Uncomplicated Admin Experience (Usually)

Now, I’ve expounded on how a small CMS can be complex when it has to be, but it’s not always going to be complex. A truly single-purpose application will, largely due to its very nature, be rather simple to use. How many form fields do you really need to publish a page full of content, anyway?

Recently I had the pleasure of working with Bludit to create a small blog. I’ve worked with WordPress, as well as many other systems, for years, but Bludit was downright refreshing. While I did occasionally chafe at some of the small limitations in the system, I absolutely adore the writing and editing experience. Well, I really like the Markdown editing experience. It has TinyMCE, too, but I didn’t bother with that.

The clarity that comes with pushing all the other aspects of publishing out of your way until you need them… it’s just my jam. I’ll say that Ghost is pretty good at that, too, but Bludit’s more bare-bones approach is more my speed. But that’s the beauty of these small CMS options: there’s one out there for just about every use case and user preference.


4. Simple Templating and Theming

When a CMS just plain does less, it’s usually a bit easier to design and code templates for. This makes it a bit easier for those of us who are more visually-oriented to spend time doing what we love, rather than learning all the quirks of an endlessly complex system.

One thing a lot of small CMS do to make this even simpler is use a templating language like Twig. If that interests you, have a look at options like Pico, or if you need something for a larger and more complex publishing endeavor, try Bolt CMS.
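Twig itself is a PHP library, but its syntax is very close to Jinja2, so a short Python sketch (assuming the jinja2 package) gives a feel for the kind of template-driven theming these small CMS rely on:

```python
from jinja2 import Template  # Jinja2 syntax is close to Twig's {{ ... }} and {% ... %} tags

page = Template(
    "<h1>{{ title }}</h1>\n"
    "<ul>{% for post in posts %}<li>{{ post }}</li>{% endfor %}</ul>"
)
print(page.render(title="Recent Posts", posts=["Hello World", "Static Sites in 2018"]))
```

A theme is essentially a folder of templates like this, which is part of why designers can pick these systems up quickly.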


5. When Support is Available, it’s Nice and Personal

The organizations that develop larger CMS will often sell support to enterprise customers to fund development. That’s cool and all, but have you ever wanted to talk to the developers directly? Well, there’s always Twitter, but with many smaller CMS, developers often offer very personal attention to their users. You can contact some through GitHub, via email, or even in an IRC chatroom (or a more modern equivalent).

I'm reminded of the time I spent wrestling with October, a lovely back-to-basics CMS. I was trying to make it work with a cloud platform it really wasn't designed for, and that didn't work out at the time. Even so, the time I got to spend chatting with the developers and learning how they were putting things together was enlightening, and fun.


6. It’s Easy to get Involved

Technically, systems like WordPress and Drupal are open source. Anyone can contribute to their development and refinement. The truth is, of course, more complicated. Large and established open source projects have established communities, hierarchies, and very large codebases. Sure, they need all of that, but these things can make it hard to just dive in and flex your coding chops. It can be hard to get to the point where you can make real contributions.

Smaller open source projects are easier to get to know inside and out, and most of their developers (especially on the single-developer projects) would be happy to have someone, anyone, do some of the work for them. It’s easier to get in on the ground floor, and truly contribute. It’s more personal, and you get to be a part of that CMS’ history.


          A Comparative Measurement Study of Deep Learning as a Service Framework      Cache   Translate Page      
Big data powered Deep Learning (DL) and its applications have blossomed in recent years, fueled by three technological trends: a large amount of digitized data openly accessible, a growing number of DL software frameworks in open source and commercial markets, and a selection of affordable parallel computing hardware devices. However, no single DL framework, to […]
          Google's automated fuzz bot has found over 9,000 bugs in the past two years      Cache   Translate Page      
Google improves OSS-Fuzz service, plans to invite new open source projects to join.
           Comment on Managing Dependency Injection within Salesforce by Savio Jose       Cache   Translate Page      
Hi Andrew,
Firstly, thank you for this open source library; it's nice to have a consistent approach to solving DI problems in Salesforce. We have a specific requirement where I need to resolve the metadata bindings using additional filters on di_Binding__mdt. For instance, I can create a new picklist field to allow for an additional filter criterion on di_Binding__mdt, but for the force-di framework to respect this new field I need to make quite a few changes:
1. Update the Resolver class with a new method byNewCustomField
2. Extend di_module and use a custom version of CustomMetadataModule to query & set new fields
3. Extend di_binding to support this new field
An alternate option would be to use a Module binding and have that class look up another custom metadata record with the filter criteria I need, and use that to create the specific instance at run time. However, this would require us to manage 2 metadata types. Basically, what we are trying to achieve here is to have a base package that provides core functionality, allowing subscribers to modify some behaviour by letting them override certain functionality without having to touch any of the classes in the core base package. Just wanted to check if I am missing something: is there a way I can provide support for additional filter criteria on di_Binding__mdt by just extending a few classes, without making changes to the core force-di classes?
          Cryptomator Cloud Storage Encryption Tool 1.4.0 Released With FUSE / Dokany Support      Cache   Translate Page      
Cryptomator, a free and open source, cross-platform client-side encryption tool for cloud files, was updated to version 1.4.0. With this release, Cryptomator can use FUSE on Linux and Mac, and Dokany on Windows, to provide the virtual, unencrypted drive, which should vastly improve the integration into the system.
          GraphQL API Specification Moving Forward with Independent Foundation      Cache   Translate Page      
The open source GraphQL specification was originally developed by Facebook as an alternative to REST, providing organizations with a new way to enable more complex and dynamic data driven APIs.
          Red Hat - How An Open Source Software Company Became 34,000 Million Dollars Company      Cache   Translate Page      
In a historic milestone, Red Hat, the company that triumphed with Linux and open source, has been acquired by IBM for $34 billion, the largest transaction in history for a software company.
          Machine learning Python hacks, creepy Linux commands, Thelio, Podman, and more      Cache   Translate Page      
I'm filling in again for this week's top 10 while Rikki Endsley is recovering from LISA 18 held last week in Nashville, Tennessee. We're starting to gather articles for our 4th annual Open Source Yearbook; get your proposals in soon. Enjoy this week's top 10.
          Python or JavaScript Developer - Odoo - Grand-Rosière-Hottomont      Cache   Translate Page      
Join our smart team of Python and JavaScript developers, and work on an amazing Open Source product. Develop things people care about.
About the company: Odoo is a suite of business apps that covers all enterprise management needs: CRM, e-Commerce, Accounting, Project Management, Inventory, POS, etc. We disrupt the enterprise software market by making fully open source, super easy and full featured (3000+ apps) software accessible to SMEs at a very low cost.
Responsibilities: Develop Apps...
          The Incorporation of Open Source      Cache   Translate Page      
IBM is acquiring Red Hat, the standard bearer of open-source software. What kind of impact will this have in the embedded space?
          Blender Foundation joins Academy Software Foundation as first free/open source org      Cache   Translate Page      

This news got snowed under a bit as it was released around the Blender Conference, but it's super exciting! DreamWorks Animation's OpenVDB Adopted as the Foundation's First Project; Blender Foundation and Visual Effects Society Join as Associate Members. The Academy Software Foundation (ASWF), a neutral forum for open source software development in the motion picture and [...]

The post Blender Foundation joins Academy Software Foundation as first free/open source org appeared first on BlenderNation.


          Blender 3D tip — Realistic Specular value in Principled shader      Cache   Translate Page      

Metin Seven writes: I love Blender 3D. It's a deservedly popular and successful open source 3D editor. Together with ZBrush it's my most important tool for 3D creation. Since version 2.79 Blender includes the Principled shader, enabling you to create most material types using a single shader node. I'm happy to share a little tip [...]

The post Blender 3D tip — Realistic Specular value in Principled shader appeared first on BlenderNation.


          qemu 3.0.0-4 x86_64      Cache   Translate Page      
A generic and open source machine emulator and virtualizer
          weeklyosm - A summary of everything that happens in the OpenStreetMap world.       Cache   Translate Page      

A summary of everything that happens in the OpenStreetMap world. http://www.weeklyosm.eu/pt/

weeklyOSM 432

OpenStreetMap Foundation: OpenStreetMap Belgium, the newest chapter in the OSM family, introduced itself in a blog post. The OSM local chapter in the United Kingdom responded to the Geospatial Commission's call for evidence. OSM UK supports efforts to use public and private geospatial sector data more productively, and is convinced that OSM has an important role to play in this area. The full response is linked at the end of the blog post.

Events: Chetan Gowda announced that SotM Asia will take place in Bengaluru (also known as Bangalore), India, on 17 and 18 November. The community day at FOSS4G SotM Oceania 2018 will be held on Friday, 23 November; the program for the day has just been announced. Registration for FOSS4G Oceania (20 to 23 November 2018, Melbourne) is open. Geofabrik usually organizes two or three hack weekends a year at its headquarters. As Christine Karch explained in a blog post (de), the most recent event had to be moved to a larger venue at the University of Karlsruhe after 9 people from the SotM organization and 7 JOSM developers announced they would join. As part of the openminds.at project, the Austrian Open Source Award (de) will be presented for the second time on 9 November, with an Open Source Ball following the ceremony. The new Belgian OSMF chapter is planning its first official meetup during the week of 13 November.

Humanitarian OSM: The United Nations Population Fund reports on how Crowd2Map Tanzania is using OpenStreetMap to fill the gaps in maps of rural Tanzania in order to end female genital mutilation (FGM). A short summary presents HOT's preliminary findings (https://www.hotosm.org/updates/hacktoberfest-update-tasking-manager-improvements-and-more/) after 3 weeks of its Hacktoberfest. HOT thanks everyone who worked on issues related to the usability and user experience of its open source tools, as well as on their performance and bug fixing in general. There are several MissingMaps events all over the place in November; find one you can take part in.

Raphaelmirc - Recife/PE. http://guiaosmbr.webnode.com


          Network Security Specialist - InfoTeK - Pensacola, FL      Cache   Translate Page      
Monitor and understand emerging threats on open source, defined as those technical vulnerabilities and exploits that could present a threat to government...
From InfoTeK - Wed, 05 Sep 2018 23:46:11 GMT - View all Pensacola, FL jobs
          ShareX : A Free Screenshot And Image Sharing Tool For Windows      Cache   Translate Page      
ShareX is an open source screenshot tool for Windows that makes capturing screenshots, GIFs and screen recording easy. Download and install it from here. During installation, there will be a prompt...

Read more.

          Free Stocks Ticker 2.0.10      Cache   Translate Page      
Free, open source, customizable scrolling stock quotes or news headlines. 2018-11-06
          Open Knowledge Foundation: Announcing Rufus Pollock is Joining Viderum      Cache   Translate Page      

Open Knowledge International and Viderum are delighted to announce that Rufus Pollock is joining Viderum as President and CEO. Rufus is a pioneer and leader in the open data community, founder of Open Knowledge, and the original creator of CKAN. Rufus will also be acquiring a majority stake in Viderum and is committed to ongoing investment to accelerate its growth.

Viderum is a leading provider of data portals and open data solutions to the public sector. Open Knowledge created Viderum in 2015 to take forward their pioneering work in creating CKAN, the open source software that has powered many of the world’s leading open data portals including data.gov, data.gov.uk and many others. Rufus will be taking over leadership with transitional support from departing CEO Sebastian Moleski.

Rufus said: “I’m incredibly excited about this opportunity. I’m passionate about the potential for open data and data generally to transform government, enterprise and the non-profit sector. Thanks to the great work by Sebastian and the team, Viderum has grown steadily over the years. I’m committed to continuing and accelerating that work.”

Vicky Brock, Board Member of Open Knowledge International which set up Viderum in 2015 and remain active stewards and stakeholders in CKAN and Viderum, said: “We’re delighted to have Rufus back involved in this way. Rufus has a great vision both for the technology and for the broader data ecosystem, and we’re very excited to be part of this next step in developing and accelerating open data.”

Sebastian said: “The growth of Viderum over the past three years has shown that an open-source startup can flourish and succeed in the international civic tech space. I’m excited to see Rufus join Viderum and he will be able to capitalize and develop on its strengths to deliver access to open data to everyone worldwide.”

 

About Viderum
Viderum is an open data solutions provider. Founded as a separate company in 2015, Viderum creates, maintains, and deploys technology for governments, public sector enterprises, and non-profit organizations to manage their data and publish it as open data.

About Rufus Pollock
Dr Rufus Pollock is a researcher, technologist and entrepreneur. He has been a pioneer in the global Open Data movement, advising national governments, international organisations and industry on how to succeed in the digital world. He is the founder of Open Knowledge, a leading NGO with a presence in over 35 countries, empowering people and organizations with access to information so they can create insight and drive change. Formerly, he was the Mead Fellow in Economics at Emmanuel College, University of Cambridge. He has been the recipient of a $1m Shuttleworth Fellowship and is currently an Ashoka Fellow and Fellow of the RSA. He holds a PhD in Economics and a double first in Mathematics from the University of Cambridge.


          Open Source Machine Learning Tool Could Help Choose Cancer Drugs      Cache   Translate Page      
Using machine learning to analyze RNA expression tied to information about patient outcomes with specific drugs, the open source tool could help ...
          Comment on 6 Best Linux Distributions for Beginners in 2018 by someone      Cache   Translate Page      
I can agree with many points in your comment. And you are right about the lazy culture, BUT this generation does not have an option; they are trapped in a culture that tries to enslave them. When somebody gets the point of open source he will be prepared to learn and help others. Anyway, even if there are some lazy individuals, we mustn't lose faith in the others who don't yet know the open source ideas and options. All the best!! And thanks for the nice article!
          Eva Icons      Cache   Translate Page      
Eva Icons is a pack of more than 480 beautifully crafted Open Source icons for common actions and items. Download our set on the desktop to use them in your digital products for Web, iOS and Android.
          Comment on PS Vita 3.69 confirmed to patch h-encore, TheFlow to open source the exploit by ashim      Cache   Translate Page      
My Vita is on 3.69. I upgraded because I did not know about this. Can I do anything now? If you have any ideas, please tell me.
          Open Source Intelligence (OSINT) Analyst, JBLM WA Job - SAIC - Fort Lewis, WA      Cache   Translate Page      
For information on the benefits SAIC offers, see My SAIC Benefits. Headquartered in Reston, Virginia, SAIC has annual revenues of approximately $4.5 billion....
From SAIC - Fri, 05 Oct 2018 02:47:27 GMT - View all Fort Lewis, WA jobs
          ghc (8.6.2)      Cache   Translate Page      
GHC is a state-of-the-art, open source, compiler and interactive environment for the functional language Haskell.

          awscli (1.16.48)      Cache   Translate Page      
The AWS CLI is an open source tool built on top of the AWS SDK for Python (Boto) that provides commands for interacting with AWS services.

          libreoffice-still (6.0.7)      Cache   Translate Page      
LibreOffice is the free power-packed Open Source personal productivity suite for Windows, Macintosh and Linux, that gives you six feature-rich applications for all your document production and data processing needs.

          libreoffice-fresh (6.1.3)      Cache   Translate Page      
LibreOffice is the free power-packed Open Source personal productivity suite for Windows, Macintosh and Linux, that gives you six feature-rich applications for all your document production and data processing needs.

          git-lfs (2.6.0)      Cache   Translate Page      
An open source Git extension for versioning large files

           LF Commerce, an open source ecommerce dashboard. ReactJS + ExpressJS       Cache   Translate Page      
          Programming News      Cache   Translate Page      
  • Open Source Survey Shows Python Love, Security Pain Points

    ActiveState published results of a survey conducted to examine challenges faced by developers who work with open source runtimes, revealing love for Python and security pain points.

  • Study Finds Lukewarm Corporate Engagement With Open Source

    Companies expect developers to use open source tools at work, but few make substantial contributions in return

    Developers say that nearly three-quarters of their employers expect them to use open source software to do their jobs, but that those same companies’ contribution to the open source world is relatively low, with only 25 percent contributing more than $1,000 (£768) a year to open source projects.

    Only a small number of employers, 18 percent, contribute to open source foundations, and only 34 percent allow developers to use company time to make open source contributions, according to a new study.

    The study follows IBM’s announcement last week that it plans to buy Linux maker Red Hat for $34 billion (£26bn) in order to revitalise its growth in the cloud market, an indication of the importance of open source in the booming cloud industry.

    The report by cloud technology provider DigitalOcean, based on responses from more than 4,300 developers around the world, is the company’s fifth quarterly study on developer trends, with this edition focusing entirely on open source.

  • On learning Go and a comparison with Rust

    I spoke at the AKL Rust Meetup last month (slides) about my side project doing data mining in Rust. There were a number of engineers from Movio there who use Go, and I've been keen for a while to learn Go and compare it with Rust and Python for my data mining side projects, so that inspired me to knuckle down and learn Go.

    Go is super simple. I was able to learn the important points in a couple of evenings by reading GoByExample, and I very quickly had an implementation of the FPGrowth algorithm in Go up and running. For reference, I also have implementations of FPGrowth in Rust, Python, Java and C++. (A naive brute-force frequent-itemset baseline, for comparison, is sketched after this list.)

  • anytime 0.3.2

    A new minor release of the anytime package arrived on CRAN this morning. This is the thirteenth release, and the first since July as the package has gotten feature-complete.

    anytime is a very focused package aiming to do just one thing really well: to convert anything in integer, numeric, character, factor, ordered, … format to either POSIXct or Date objects – and to do so without requiring a format string. See the anytime page, or the GitHub README.md for a few examples.
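Returning to the FPGrowth item above: as a point of reference for what an algorithm like FP-Growth has to beat, here is a deliberately naive brute-force frequent-itemset counter in Python. It is not FP-Growth (there is no FP-tree and no conditional pattern base); it simply counts every 1- and 2-item combination, which is exactly the combinatorial blow-up FP-Growth is designed to avoid. The example data is made up.

```python
from collections import Counter
from itertools import combinations

def frequent_itemsets(transactions, min_support=2, max_size=2):
    """Brute-force baseline: count every itemset up to max_size, keep the ones meeting min_support."""
    counts = Counter()
    for items in transactions:
        unique = sorted(set(items))
        for size in range(1, max_size + 1):
            counts.update(combinations(unique, size))
    return {itemset: n for itemset, n in counts.items() if n >= min_support}

baskets = [["bread", "milk"], ["bread", "beer", "milk"], ["beer", "milk"], ["bread", "milk"]]
print(frequent_itemsets(baskets))
# ('bread', 'milk') appears 3 times; FP-Growth reaches the same answer without
# materializing every candidate combination.
```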

read more


          IBM/Red Hat Latest      Cache   Translate Page      

read more


          Events: Hacktoberfest 2018, 2018 China Open Source Conference, Qt World Summit 2018 Boston      Cache   Translate Page      
  • Observations on Hacktoberfest 2018

    This term I'm teaching two sections of our Topics in Open Source Development course. The course aims to take upper-semester CS students into open source projects, and get them working on real-world software.

    My usual approach is to put the entire class on the same large open source project. I like this method, because it means that students can help mentor each other, and we can form a shadow-community off to the side of the main project. Typically I've used Mozilla as a place to do this work.

    [...]

    Finally, a theme I saw again and again was students beginning to find their place within the larger, global, open source community. Many students had their blogs quoted or featured on social media, and were surprised that other people around the world had seen them and their work.

  • COSCon Bridges East & West, Open Source Powers Now & Future

    The OSI was honored to participate in the 2018 China Open Source Conference (COSCon'18) hosted by OSI Affiliate Member KAIYUANSHE in Shenzhen, China. Over 1,600 people attended the exciting two-day event, with almost another 10,000 watching via live-stream online. The conference boasted sixty-two speakers from twelve countries, with 11 keynotes (including OSI Board alum Tony Wasserman), 67 breakout sessions, 5 lightning talks (led by university students), 3 hands-on camps, and 2 specialty forums on Open Source Education and Open Source Hardware.

    COSCon'18 also served as an opportunity to make several announcements, including the publication of "The 2018 China Open Source Annual Report", the launch of "KCoin Open Source Contribution Incentivization Platform", and the unveiling of KAIYUANSHE's "Open Hackathon Cloud Platform".

    Since its foundation in October of 2014, KAIYUANSHE has continuously helped open source projects and communities thrive in China, while also contributing back to the world by "bringing in and reaching out". COSCon'18 is one more way KAIYUANSHE serves to: raise awareness of, and gain experience with, global open source projects; build and incentivise domestic markets for open source adoption; study and improve open source governance across industry sectors; promote and serve the needs of local developers; and identify and incubate top-notch local open source projects.

  • Qt World Summit 2018 Boston

    This year’s North American stop on the Qt World Summit world tour was in Boston, held during the Red Sox’s World Series win. Most of us were glad to be flying home before celebration parades closed the streets! The Qt community has reason to celebrate too, as there’s an unprecedented level of adoption and support for our favorite UX framework. If you didn’t get a chance to attend, you missed the best Qt conference on this continent. We had a whole host of KDABians onsite, running training sessions and delivering great talks alongside all the other excellent content. For those of you who missed it, don’t worry – there’s another opportunity coming up! The next stop on Qt’s European tour will be with Qt World Summit Berlin December 5-6. Be sure to sign up for one of the training sessions now before they’re sold out.

read more


          DarkCyber for November 6, 2018, Is Now Available: Part Two, Amazon's Disruptive Thrust      Cache   Translate Page      

DarkCyber for November 6, 2018, is now available at www.arnoldit.com/wordpress and on Vimeo at https://vimeo.com/298831585. - - In this program, DarkCyber explains how Amazon is using open source software and proprietary solutions to reinvent IBM's concept of vendor lock in. Decades ago, IBM used mainframes and their proprietary hardware and software to create a barrier to change for government agencies using the systems. Amazon's approach is to provide a platform which makes use of open s...

Read the full story at https://www.webwire.com/ViewPressRel.asp?aId=230166


          OpenPGP.js - OpenPGP implementation for JavaScript      Cache   Translate Page      
OpenPGP.js is a JavaScript implementation of the OpenPGP protocol. It aims to provide an Open Source OpenPGP library in JavaScript so it can be used on virtually every device. Users do not need gpg installed on their machines in order to use the library. The idea is to implement all the needed OpenPGP functionality in a JavaScript library that can be reused in other projects that provide browser extensions or server applications. It should allow you to sign, encrypt, decrypt, and verify any kind of text - in particular e-mails - as well as manage keys.

          Mautic - Open Source Marketing Automation Software      Cache   Translate Page      
Mautic is marketing automation software (email, social & more). It provides support for Social Media Marketing, Contact Management, Email Marketing, Campaigns, Forms, Reports.

          Compress files faster using Snappy      Cache   Translate Page      
Snappy is the fast compression/decompression library from Google. It does not aim for the smallest compressed size; it aims for very fast compression and decompression. Many open source products use Snappy.
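A minimal usage sketch, assuming the python-snappy binding (imported as snappy) is installed:

```python
import snappy  # assumes the python-snappy binding to Google's Snappy library

original = b"open source " * 1000          # highly repetitive data compresses well
compressed = snappy.compress(original)
restored = snappy.decompress(compressed)

assert restored == original
print(len(original), "->", len(compressed), "bytes")  # speed, not ratio, is Snappy's goal
```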

          LXer: Advance Your Open Source Skills with These Essential Articles, Videos, and More      Cache   Translate Page      
Published at LXer: Recent industry events have underscored the strength of open source in today’s computing landscape. With billions of dollars being spent, the power of open source development,...
          LXer: Cryptomator Cloud Storage Encryption Tool 1.4.0 Released With FUSE / Dokany Support      Cache   Translate Page      
Published at LXer: Cryptomator, a free and open source, cross-platform client-side encryption tool for cloud files, was updated to version 1.4.0. With this release, Cryptomator can use FUSE on...
          LXer: GraphQL API Specification Moving Forward with Independent Foundation      Cache   Translate Page      
Published at LXer: The open source GraphQL specification was originally developed by Facebook as an alternative to REST, providing organizations with a new way to enable more complex and dynamic...
          LXer: Red Hat - How An Open Source Software Company Became 34,000 Million Dollars Company      Cache   Translate Page      
Published at LXer: In a historic milestone, Red Hat, the company that triumphed with Linux and open source, has been acquired by IBM for $34 billion, the largest transaction in history...
          LXer: We love Kubernetes, but it's playing catch-up with our Service Fabric, says Microsoft Azure exec      Cache   Translate Page      
Published at LXer: Jason Zander on cloud native, Red Hat, and figuring out open source. Interview: A curious feature of Microsoft's cloud platform is that it has two fundamentally different platforms...
          LXer: OPNids Integrates Machine Learning Into Open Source Suricata IDS
Published at LXer: New open source project gets underway integrating the Suricata Intrusion Detection System (IDS) with the DragonFly Machine Learning Engine, which uses a streaming data analytics...
          Announcing .NET Standard 2.1

Since we shipped .NET Standard 2.0 about a year ago, we’ve shipped two updates to .NET Core 2.1 and are about to release .NET Core 2.2. It’s time to update the standard to include some of the new concepts as well as a number of small improvements that make your life easier across the various implementations of .NET.

Keep reading to learn more about what’s new in this latest release, what you need to know about platform support, governance and coding.

What’s new in .NET Standard 2.1?

In total, about 3k APIs are planned to be added in .NET Standard 2.1. A good chunk of them are brand-new APIs while others are existing APIs that we added to the standard in order to converge the .NET implementations even further.

Here are the highlights:

  • Span<T>. In .NET Core 2.1 we’ve added Span<T> which is an array-like type that allows representing managed and unmanaged memory in a uniform way and supports slicing without copying. It’s at the heart of most performance-related improvements in .NET Core 2.1. Since it allows managing buffers in a more efficient way, it can help in reducing allocations and copying. We consider Span<T> to be a very fundamental type as it requires runtime and compiler support in order to be fully leveraged. If you want to learn more about this type, make sure to read Stephen Toub’s excellent article on Span<T>.
  • Foundational APIs working with spans. While Span<T> is available as a .NET Standard compatible NuGet package (System.Memory) already, adding this package cannot extend the members of .NET Standard types that deal with spans. For example, .NET Core 2.1 added many APIs that allow working with spans, such as Stream.Read(Span<Byte>). Part of the value proposition to add span to .NET Standard is to add these companion APIs as well.
  • Reflection emit. To boost productivity, the .NET ecosystem has always made heavy use of dynamic features such as reflection and reflection emit. Emit is often used as a tool to optimize performance as well as a way to generate types on the fly for proxying interfaces. As a result, many of you asked for reflection emit to be included in the .NET Standard. Previously, we’ve tried to provide this via a NuGet package but we discovered that we cannot model such a core technology using a package. With .NET Standard 2.1, you’ll have access to Lightweight Code Generation (LCG) as well as Reflection Emit. Of course, you might run on a runtime that doesn’t support running IL via interpretation or compiling it with a JIT, so we also exposed two new capability APIs that allow you to check for the ability to generate code at all (RuntimeFeature.IsDynamicCodeSupported) as well as whether the generated code is interpreted or compiled (RuntimeFeature.IsDynamicCodeCompiled). This will make it much easier to write libraries that can exploit these capabilities in a portable fashion.
  • SIMD. .NET Framework and .NET Core have had support for SIMD for a while now. We’ve leveraged them to speed up basic operations in the BCL, such as string comparisons. We’ve received quite a few requests to expose these APIs in .NET Standard as the functionality requires runtime support and thus cannot be provided meaningfully as a NuGet package.
  • ValueTask and ValueTask<T>. In .NET Core 2.1, the biggest feature was improvements in our fundamentals to support high-performance scenarios, which also included making async/await more efficient. ValueTask<T> already exists and allows returning results if the operation completed synchronously, without having to allocate a new Task<T>. With .NET Core 2.1 we’ve improved this further, which made it useful to have a corresponding non-generic ValueTask that allows reducing allocations even for cases where the operation has to be completed asynchronously, a feature that types like Socket and NetworkStream now utilize. Exposing these APIs in .NET Standard 2.1 enables library authors to benefit from these improvements both as consumers and as producers.
  • DbProviderFactories. In .NET Standard 2.0 we added almost all of the primitives in ADO.NET to allow O/R mappers and database implementers to communicate. Unfortunately, DbProviderFactories didn’t make the cut for 2.0 so we’re adding it now. In a nutshell, DbProviderFactories allows libraries and applications to utilize a specific ADO.NET provider without knowing any of its specific types at compile time, by selecting among registered DbProviderFactory instances based on a name, which can be read from, for example, configuration settings.
  • General Goodness. Since .NET Core was open sourced, we’ve added many small features across the base class libraries such as System.HashCode for combining hash codes or new overloads on System.String. There are about 800 new members in .NET Core and virtually all of them got added in .NET Standard 2.1.

For more details, you might want to check out the full API diff between .NET Standard 2.1 and .NET Standard 2.0. You can also use apisof.net to quickly check whether a given API will be included with .NET Standard 2.1.

.NET platform support

In case you missed our Update on .NET Core 3.0 and .NET Framework 4.8, we’ve described our support for .NET Framework and .NET Core as follows:

.NET Framework is the implementation of .NET that’s installed on over one billion machines and thus needs to remain as compatible as possible. Because of this, it moves at a slower pace than .NET Core. Even security and bug fixes can cause breaks in applications because applications depend on the previous behavior. We will make sure that .NET Framework always supports the latest networking protocols, security standards, and Windows features.

.NET Core is the open source, cross-platform, and fast-moving version of .NET. Because of its side-by-side nature it can take changes that we can’t risk applying back to .NET Framework. This means that .NET Core will get new APIs and language features over time that .NET Framework cannot. At Build we showed a demo of how the file APIs are faster on .NET Core. If we put those same changes into .NET Framework we could break existing applications, and we don’t want to do that.

Given many of the API additions in .NET Standard 2.1 require runtime changes in order to be meaningful, .NET Framework 4.8 will remain on .NET Standard 2.0 rather than implement .NET Standard 2.1. .NET Core 3.0 as well as upcoming versions of Xamarin, Mono, and Unity will be updated to implement .NET Standard 2.1.

Library authors who need to support .NET Framework customers should stay on .NET Standard 2.0. In fact, most libraries should be able to stay on .NET Standard 2.0, as the API additions are largely for advanced scenarios. However, this doesn’t mean that library authors cannot take advantage of these APIs even if they have to support .NET Framework. In those cases they can use multi-targeting to compile for both .NET Standard 2.0 and .NET Standard 2.1. This allows writing code that can expose more features or provide a more efficient implementation on runtimes that support .NET Standard 2.1 while not giving up on the bigger reach that .NET Standard 2.0 offers.
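As a sketch of what that multi-targeting looks like in practice (an illustrative SDK-style project fragment, not text from the announcement), a library can list both standards and branch on the implicit NETSTANDARD2_1 compilation symbol where it wants to light up the newer APIs:

    <!-- Illustrative csproj fragment: build the same library for both standards. -->
    <PropertyGroup>
      <TargetFrameworks>netstandard2.0;netstandard2.1</TargetFrameworks>
    </PropertyGroup>

Code guarded by #if NETSTANDARD2_1 can then use the newer members, while the netstandard2.0 build keeps the older code path for .NET Framework consumers.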

For more recommendations on targeting, check out the brand new documentation on cross-platform targeting.

Governance model

The .NET Standard 1.x and 2.0 releases focused on exposing existing concepts. The bulk of the work was on the .NET Core side, as this platform started with a much smaller API set. Moving forward, we’ll often have to standardize brand-new technologies, which means we need to consider the impact on all .NET implementations, not just .NET Core, including those managed in other communities such as Mono or Unity. Our governance model has been updated to take all of these considerations into account:

A .NET Standard review board. To ensure we don’t end up adding large chunks of API surface that cannot be implemented, a review board will sign off on API additions to the .NET Standard. The board comprises representatives from the .NET platform, Xamarin and Mono, Unity, and the .NET Foundation, and will be chaired by Miguel de Icaza. We will continue to strive to make decisions based on consensus and, when needed, will leverage Miguel’s extensive expertise and experience building .NET implementations that are supported by multiple parties.

A formal approval process. The .NET Standard 1.x and 2.0 versions were largely mechanically derived by computing which APIs existing .NET implementations had in common, which means the API sets were effectively a computational outcome. Moving forward, we are implementing an editorial approach:

  • Anyone can submit proposals for API additions to the .NET Standard.
  • New members on standardized types are automatically considered. To prevent accidental fragmentation, we’ll automatically consider all members added by any .NET implementation on types that are already in the standard. The rationale here is that divergence at the member level is not desirable, and unless there is something wrong with the API it’s likely a good addition.
  • Acceptance requires:
    • A sponsorship from a review board member. That person will be assigned the issue and is expected to shepherd the issue until it’s either accepted or rejected. If no board member is willing to sponsor the proposal, it will be considered rejected.
    • A stable implementation in at least one .NET implementation. The implementation must be licensed under an open source license that is compatible with MIT. This will allow other .NET implementations to jump-start their own implementations or simply take the feature as-is.
  • .NET Standard updates are planned and will generally follow a set of themes. We avoid releases with a large number of tiny features that aren’t part of a common set of scenarios. Instead, we try to define a set of goals that describe what kind of feature areas a particular .NET Standard version provides. This simplifies answering the question of which .NET Standard a given library should depend on. It also makes it easier for .NET implementations to decide whether it’s worth implementing a higher version of .NET Standard.
  • The version number is subject to discussion and is generally a function of how significant the new version is. While we aren’t planning on making breaking changes, we’ll rev the major version if the new version adds large chunks of APIs (like when we doubled the number of APIs in .NET Standard 2.0) or has sizable changes in the overall developer experience (like the compatibility mode for consuming .NET Framework libraries that we added in .NET Standard 2.0).

For more information, take a look at the .NET Standard governance model and the .NET Standard review board.

Summary

The definition of .NET Standard 2.1 is ongoing. You can watch our progress on GitHub and still file requests.

If you want to quickly check whether a specific API is in .NET Standard (or any other .NET platform), you can use apisof.net. You can also use the .NET Portability Analyzer to check whether an existing project or binary can be ported to .NET Standard 2.1.

Happy coding!


          Michael Snoyman: Proposal: Stack Code of Conduct

Technical disagreement is not only an inevitable part of software development and community building. It is a necessary and healthy component thereof. Respectful conversations around technical trade-offs are vital to improving software. And these conversations need to be handled with proper care.

I have failed at this in the past, and I’ve apologized for having done so. I’ve strived to do better since. I have encouraged—and continue to encourage—my friends, colleagues, and fellow developers to do the same, and I believe it has been successful.

I think the other extreme—avoiding all discussions of disagreements—is actively harmful. And unfortunately, I see significant confusion in the general Haskell community around what is and is not an acceptable form of technical disagreement.

Outside of my position on the Core Libraries Committee, I do not hold any official position in the leadership of Haskell. I’ve written open source libraries and educational material. I’ve worked on and maintained build tooling. I run a team of Haskell engineers at FP Complete. None of those positions grant me any authority to demand compliance from individuals with any code of conduct.

That said, at many points in the past, I have reached out to individuals whom I’ve been told (or saw myself) were behaving in a non-ideal way. I’ve attempted to remedy those situations privately wherever possible, and have on rare occasions asked people to discontinue such behavior publicly.

I’ve tried to keep this process informal because, as mentioned, I’m not any official authority. The exception to this would be my leadership at FP Complete, but I have a strong policy of avoiding the usage of my position to force those reporting to me to behave a certain way. I do not believe it is appropriate for me to leverage what essentially comes down to someone’s livelihood to demand their compliance with my wishes.

There’s been a slow burn of public discussion over the years that has made me consider making a more formal process. But recently, I had a “straw that broke the camel’s back” moment. Someone I like and respect expressed opinions that I think are antithetical to healthy community growth. I won’t call anyone out or refer to those discussions, I’m just explaining why I’m doing this now.

My proposal

I will not in any way claim to have an authority position for the Haskell community. As a result, I’m not going to make any statement for the Haskell community. As the founder of the Stack project, I think I have more of a right (and responsibility) to speak for that project. So I’m going to refer here to the Haskell Stack community. I’m hesitant to do so given that it may be seen as divisive, but I believe the alternative—trying to speak for the entire Haskell community—is inappropriate. My intent is not to be divisive.

I also have no ability to enforce any kind of behavior on Stack users or contributors. Stack is open source: no matter what statements I make, anyone can use it. I could ask the rest of the maintainer team to block pull requests from specific people, but I believe that’s also overstepping my authority, and so I won’t be doing it.

Instead, I intend to publish two documents, following interaction with people interested in engaging with me on this process. Those two documents will be:

  • A code of conduct. I do not intend to write one myself, but instead use a standard, “off the shelf” CoC. I did this about a year ago for Yesod, and it worked out well.
  • A recommended communication guide. This is something I do intend to write myself, with feedback from others. I intend to write this as an open source project, on a Github repo, accepting issues and pull requests. I intend this to address specific concerns of how the communication I’ve been involved with has broken down.

I am not demanding anything of anyone here. I’m not expecting anyone in other communities (whether within Haskell or outside of it) to participate or change their behavior. And as stated, I’m not willing or able to demand changes from within the Stack community.

What I am hoping is that by clarifying these points:

  • People unsure of how to communicate in certain situations have some clarity
  • We have some better ideas of how to communicate on sensitive technical topics
  • If someone within the Stack community is seen as misbehaving, there’s a mechanism to politely and confidentially raise the issue, and hopefully have the situation improved

Next steps

I’ve created a Github repo. This repo is on my personal Github account, not commercialhaskell, fpco, or haskell. Again, this is my own personal set of content. I encourage others to participate if they feel invested in the topic. I’m considering having a sign-up sheet for the repo after we have a first version, so that people can state that:

  • They’ve reviewed a certain version of the repo
  • They agree with what it says
  • They will strive to follow its recommendations

What to avoid

We’ll get into far more details in that repo itself, but since I anticipate this blog post itself kicking off some discussion, I’m going to make some requests right now:

  • Avoid ad hominems. (Yes, I’ve made mistakes here in the past.) This applies both to someone’s personal history, and to any organizations they are members of, or who they are friendly with.
  • Avoid offensive language. This is actually more complicated than it seems; one of the ways I’ve upset people is my highly sarcastic-by-nature communication style. I’ve worked hard on tamping that down. I point this out because what’s offensive to one person may be normal to others. This is especially true in a global community.
  • We will inevitably have to address some old behavior to know what to include in our recommendations. This is not an invitation to air grievances, as tempting as that may be for some people. Keep the comments non-personal, speak in general terms, and try to avoid any grandstanding. This will be a tough line to walk. But a simple example:
    • Good: “I’ve seen people discredit some people’s opinion because of who their friends are, I believe we should address that in the guidelines.”
    • Bad: “Two months ago, in this Reddit thread, person X said Y about Z. This is horrible, and X needs to either apologize or be removed from the community. They need to be mentioned by name in the guidelines as an offender.”

This is my first time trying to have a discussion quite like this, and I’m guessing the first time many people involved will be participating in one. As one final note, I’d like to request people make an assumption of good will. Mistakes will be made, but we can try to minimize those mistakes, and move on from them when they occur.


          Robber - Tool For Finding Executables Prone To DLL Hijacking

Robber is a free open source tool developed using Delphi XE2 without any 3rd party dependencies.
What is DLL hijacking ?!
Windows has a search path for DLLs in its underlying architecture. If you can figure out what DLLs an executable requests without an absolute path (triggering this search process), you can then place your hostile DLL somewhere higher up the search path so it'll be found before the real version is, and Windows will happily feed your attack code to the application.

So, let's pretend Windows's DLL search path looks something like this:
A) . <-- current working directory of the executable, highest priority, first check
B) \Windows
C) \Windows\system32
D) \Windows\syswow64 <-- lowest priority, last check
and some executable "Foo.exe" requests "bar.dll", which happens to live in the syswow64 (D) subdir. This gives you the opportunity to place your malicious version in A), B) or C) and it will be loaded into the executable.
As stated before, even an absolute full path can't protect against this, if you can replace the DLL with your own version.
Microsoft Windows protects system paths like System32 using the Windows File Protection mechanism, but the best way to protect executables from DLL hijacking in enterprise solutions is:
  • Use absolute path instead of relative path
  • If you have a code-signing certificate, sign your DLL files and verify the signature in your application before loading the DLL into memory; otherwise, check the hash of the DLL file against the original DLL's hash.
And of course, this isn't really limited to Windows either. Any OS which allows for dynamic linking of external libraries is theoretically vulnerable to this.
Robber uses a simple mechanism to figure out which DLLs are prone to hijacking:
  1. Scan the import table of the executable and find the DLLs linked to it
  2. Search for DLL files placed alongside the executable that match the linked DLLs (as stated before, the current working directory of the executable has the highest priority)
  3. If any such DLL is found, scan its export table
  4. Compare the import table of the executable with the export table of the DLL; if any match is found, the executable and the matched common functions are flagged as DLL hijack candidates (a rough sketch of this matching idea follows below).
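The first two steps can be sketched outside of Delphi as well; for example, assuming the Python pefile package (an illustrative sketch, not part of Robber itself), one can list the DLL names an executable imports and check whether a same-named DLL sits next to the binary:

    # Rough sketch of Robber's first two steps using pefile (pip install pefile).
    import os
    import sys

    import pefile

    exe_path = sys.argv[1]
    exe_dir = os.path.dirname(os.path.abspath(exe_path))

    pe = pefile.PE(exe_path)
    for entry in getattr(pe, "DIRECTORY_ENTRY_IMPORT", []):
        dll_name = entry.dll.decode(errors="replace")
        local_copy = os.path.join(exe_dir, dll_name)
        # A DLL with the same name in the executable's own directory is a hijack candidate.
        if os.path.exists(local_copy):
            print(f"{dll_name}: local copy found at {local_copy}")

The full tool goes further, comparing the candidate DLL's export table against the function names the executable imports before flagging it.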
Features:
  • Ability to select scan type (signed/unsigned applications)
  • Determine the executable's signer
  • Determine which referenced DLLs are candidates for hijacking
  • Determine exported method names of candidate DLLs
  • Configure rules to determine which hijacks are the best or a good choice for use, and show them in different colors
Find the latest Robber executable here

Download Robber

          Looking back and stepping forward: Real community spirit with the Open Source Initiative at MozFest 2018

For the third year in a row, Adblock Plus participated in the leading event that builds upon the incredible movement for an open web: MozFest. This year, we had a clear mission to connect, share, and celebrate 20 years of open source together with the Open Source Initiative (OSI). It was the perfect occasion to share our appreciation for an open Internet and to actively support and participate in the broader movement that exists around the issues of the web.


          Symbolic Mathematics for Chemists: A Guide for Maxima Users

 

An essential guide to using Maxima, a popular open source symbolic mathematics engine to solve problems, build models, analyze data and explore fundamental concepts

Symbolic Mathematics for Chemists offers students of chemistry a guide to Maxima, a popular open source symbolic mathematics engine that can be used to solve problems, build models, analyze data, and explore fundamental chemistry concepts. The author — a noted expert in the field — focuses



Read More...

          today's leftovers
  • QEMU 3.1 Begins Its Release Dance With 3.1.0-RC0

    The initial release candidate of the upcoming QEMU 3.1 is now available for this important piece of the open-source Linux virtualization stack.

  • Planet KDE Twitter Feed

    Some years ago I added an embedded Twitter feed to the side of Planet KDE.  This replaced the earlier manually curated feeds from identi.ca and Twitter which people had added but which had since died out (in the case of identi.ca) or been blocked (in the case of Twitter).  That embedded Twitter feed used the #KDE tag and, while there was the odd off-topic or abusive post, for the most part it was an interesting way to browse what the people of the internet were saying about us.  However, Twitter shut that off a few months ago, which you could well argue is what happens with closed proprietary services.

    We do now have a Mastodon account but my limited knowledge and web searching on the subject doesn’t give a way to embed a hashtag feed and the critical mass doesn’t seem to be there yet, and maybe it never will due to the federated-with-permissions model just creating more silos.

  • VyOS 1.2.0-rc6 is available for download

    As usual, every week we make a new release candidate so that people interested in testing can test the changes quickly and people who reported bugs can confirm they are resolved or report further issues.

  • SUSECON Global Open Source Conference Opens Registrations
  • Marvell, TUXEDO Computers Sponsor openSUSE Project

    Two companies were recently added to the openSUSE Sponsors page thanks to the companies’ generous donations to the openSUSE Project.

    Both Marvell and TUXEDO Computers have provided tangible support through donations to openSUSE to promote the use and development of Linux.

    “We are thoroughly pleased to have Marvell and TUXEDO Computers as sponsors of the openSUSE Project,” said Richard Brown, chairman of the openSUSE Board. “The sponsorships support and encourage open-software development. Multiple Linux distributions and the open-source community will benefit greatly from the equipment.”

  • TeX Live/Debian updates 20181106 00:51

    All around updates in the TeX Live on Debian world: Besides the usual shipment of macro and font packages, we have uploaded a new set of binaries checked out from current svn, as well as the latest and shiniest version of biber to complement the macro update of biblatex.

  • Kai-Chung Yan: My Open-Source Activities from September to October 2018

  • Intel 6th and 7th Gen box PCs offer PCIe graphics expansion

    Aaeon launched a rugged, Linux-friendly line of “Boxer-6841M” industrial computers based on 6th or 7th Gen Core CPUs with either a PCIe x16 slot for Nvidia GPU cards or 2x PCIe x8 slots for frame grabbers.

    The Boxer-6841M line of six industrial box PCs is designed for edge AI and machine vision applications. Like last year’s Boxer-6839, the rugged, wall-mountable computers run Linux (Ubuntu 16.04) or Windows on Intel’s 6th Generation “Skylake” and 7th Generation “Kaby Lake” Core and Xeon processors with 35W to 73W TDPs. The systems use T and TE branded Core CPUs and Intel H110 PCH or C236 PCH chipsets.

read more


          OSS and Sharing Leftovers
  • Mapillary launches an open-source Software Development Kit

    Mapillary, the street-level imagery platform that uses computer vision to improve maps for the likes of HERE, the World Bank, and automotive companies, launched an open-source Software Development Kit (SDK) to allow developers to integrate the company’s image capture functionality into their own apps. The release makes it easier than ever for developers everywhere to incorporate a street-level capture component in their own apps with custom features, spearheading efforts to update maps and make them more detailed.

  • Open Source Machine Learning Tool Could Help Choose Cancer Drugs

    The selection of a first-line chemotherapy drug to treat many types of cancer is often a clear-cut decision governed by standard-of-care protocols, but what drug should be used next if the first one fails?

    That’s where Georgia Institute of Technology researchers believe their new open source decision support tool could come in. Using machine learning to analyze RNA expression tied to information about patient outcomes with specific drugs, the open source tool could help clinicians choose the chemotherapy drug most likely to attack the disease in individual patients.

    In a study using RNA analysis data from 152 patient records, the system predicted the chemotherapy drug that had provided the best outcome 80 percent of the time. The researchers believe the system’s accuracy could further improve with inclusion of additional patient records along with information such as family history and demographics.

  • Open source the secret sauce in secure, affordable voting tech

    The fastest, most cost-effective way to secure direct-record electronic voting machines in the United States, according to cybersecurity experts, is to stop using them. Switch to paper ballots and apply risk-limiting audits to ensure that vote tallies are conducted properly. And over the long term, consider switching to the cheaper—and more advanced and secure—voting technology that cybersecurity expert Ben Adida is dedicating his next career move to developing.

    Adida’s new company, which he publicly announced at the Context Conversations event here Monday evening, which The Parallax co-sponsored, is the nonprofit VotingWorks. The company, which currently is hosted by another nonprofit that Adida declined to name in a conversation after the event and which eventually will become its own 501(c)(3), has one goal: to build a secure, affordable, open-source voting machine for use in general, public elections.

  • The Incorporation of Open Source
  • ADAF – an Open Source Digital Standards Repository for Africa – Launches in Kenya

    Digital assets provide an easy and more secure way of doing business and Africa is quickly benefiting from these new economies of trade.

    With over 1 billion people and a combined GDP of over $3.4 trillion, Africa has some of the fastest growing economies in the world.

    The African Continental Free Trade Area (AfCFTA), which was signed by 44 African countries earlier this year, is expected to boost intra-Africa trade by driving industrialisation, economic diversification and development across the African continent.

  • Infosys Launches Open Source DevOps Project

    Ravi Kumar, president and deputy COO for Infosys, said the global systems integrator has spent the last several years turning each element of its DevOps platform into a set of microservices based on containers that can now be deployed almost anywhere. That decision made it more practical for Infosys to then offer up the entire DevOps framework it relies on to drive thousands of projects employing more than 200,000 developers as a single open source project, he said. That framework relies on an instance of the open source Jenkins continuous integration/continuous deployment (CI/CD) framework at its core.

  • Real World Data App: FDA Releases Open Source Code

    The US Food and Drug Administration (FDA) on Tuesday released computer code and a technical roadmap to allow researchers and app developers to use the agency’s newly created app that helps link real world data with electronic health data supporting clinical trials or registries.

    FDA said the app and patient data storage system can be reconfigured by organizations conducting clinical research and can be rebranded by researchers and developers who would like to customize and rebrand it.

    Among the features of the app is a secure data storage environment that supports auditing necessary for compliance with 21 CFR Part 11 and the Federal Information Security Management Act, so it can be used for trials under Investigational New Drug oversight.

  • Texas State should switch to open source textbooks
  • Big Boost For Open Access As Wellcome And Bill & Melinda Gates Foundation Back EU's 'Plan S'

    Although a more subtle change, it's an important one. It establishes unequivocally that anyone, including companies, may build on research financed by Wellcome. In particular, it explicitly allows anyone to carry out text and data mining (TDM), and to use papers and their data for training machine-learning systems. That's particularly important in the light of the EU's stupid decision to prevent companies in Europe from carrying out either TDM or training machine-learning systems on material to which they do not have legal access unless they pay an additional licensing fee to publishers. This pretty much guarantees that the EU will become a backwater for AI compared to the US and China, where no such obstacles are placed in the way of companies.

  • Jono Bacon: Video: 10 Avoidable Career Mistakes (and How to Conquer Them)
  • 10 avoidable career mistakes (and how to conquer them)

read more


          EEE, Openwashing and Surveillance as Linux Foundation 'Thing'

read more


          GraphQL API Specification Moving Forward with Independent Foundation

eWEEK: The open source GraphQL specification was originally developed by Facebook as an alternative to REST, providing organizations with a new way to enable more complex and dynamic data driven APIs.


          CPod: An Open Source Podcast App for Linux

itsFOSS: CPod is an open source podcast application for Linux, Windows and MacOS.


          Asp .Net Core Development Company

The ASP.NET Framework is one of the leading frameworks for developing applications, but it only permits developers to build applications for Windows. Now, as one of the top .NET Core framework development agencies, we can help you develop applications for Windows, Mac, and Linux from the same code using the open source .NET Core framework, supported […]

The post Asp .Net Core Development Company appeared first on .


          Edgeworx ioFog Platform Drills Software Down to the Edge
The company’s platform is based on the Eclipse ioFog open source project and targets developers looking to deploy and manage any application or containerized microservice at the edge.
          Alliance for Open Chatbot: Creating interactions between the different solutions
Several chatbot solution specialists have created the Alliance For Open Chatbot in order to develop an open source interface standard that allows chatbots to interact with each other.
          This MIT PhD Wants to Replace America's Broken Voting Machines with Open Source Software, Chromebooks, and iPads
In 2006, Ben Adida wrote a 254-page PhD dissertation on "cryptographic voting systems." Now, he wants to fix America's broken voting machines.
          Real World Data App: FDA Releases Open Source Code - Regulatory Focus

Real World Data App: FDA Releases Open Source Code
Regulatory Focus
The US Food and Drug Administration (FDA) on Tuesday released computer code and a technical roadmap to allow researchers and app developers to use the agency's newly created app that helps link real world data with electronic health data supporting ...


          DigitalOcean survey: more than one in two developers contributes to open source, but their companies don't support the community as much
DigitalOcean survey: more than one in two developers contributes to open source
but their companies don't support the community as much

For the fifth edition of its developer-trends report, called Currents, DigitalOcean surveyed more than 4,300 people around the world about the state of open source. The US cloud infrastructure provider looked in particular at how developers and their companies use open source, and at what...
          Zoosk: Software Engineer - JavaScript

(San Francisco)

Are you looking to help build out products that can have a positive, real-life impact on millions of people? At Zoosk, we have the tremendous and rewarding challenge of facilitating meaningful connections between over 40 million users worldwide and growing. Our users span 80 countries worldwide and exchange millions of messages in 25 different languages every single day.
 
To match the demands of our increasing user base, our Browser team is tackling unique challenges in scaling our Front-End architecture. As a member of Browser Engineering, you'll help us optimize a substantial Angular codebase and build out a fully responsive Single Page Web Application that better serves our members and helps our Product team make better, faster decisions.
 
We’re looking for a sharp, smart, passionate team player to help us maintain and expand our applications further.

What you’ll do:

    • Be technically responsible for the projects you take on and choose the best method or technology to make them successful.
    • Collaborate with product managers to define clear requirements, deliverables, and milestones.
    • Build new features and applications to better serve our users.
    • Contribute to the Front-End by building and iterating on site-wide features and changes, discussing new technologies, patterns and tools, and maintaining best practices.
    • Working closely with UI and UX designers to iterate on our customer experience.

What we're looking for in you:

    • You are a great communicator and are comfortable sharing your thoughts in writing and in-person.
    • You have 2+ years of related experience in Front-End development with JavaScript and the Angular 2+ framework.
    • Bachelor's Degree in Computer Science or equivalent experience.
    • You know the subtle nuances of JavaScript - e.g. hoisting, the difference between "undefined" and "null", not being surprised that "typeof []" returns "object".
    • You have a very good grasp of Computer Science fundamentals - you know what a Collision in a Hashmap is, what the Builder pattern in OO is, and you can write a Ternary search function by yourself without searching for the solution in Stack Overflow.
    • You work well on small teams and are equally comfortable on fast-paced and continuous deployment projects as you are working on large-scale initiatives.

It’s cool if you have:

    • Experience with Node.js.
    • CSS experience using Sass.
    • Contributed to Open Source projects and have a showcase in your Github account.
Zoosk Core Values: Don’t stop caring. Be a doer. We’re better together. Speak up. We say no to the status quo. If there’s fun to be had, then have it.
 
While there is a lot to do, we also have a very open and collaborative culture and know how to have a lot of fun; frequent team and company outings, Hackathons, happy hour every Thursday, daily catered lunches, health and wellness benefits, 401K matching, generous maternity/paternity leave policies, and flexible time off are just some of the ways we support our teams.
 
Zoosk is proud to be an equal opportunity workplace committed to building a team culture that celebrates diversity and inclusion.

Job Perks: Team & company outings, Hackathons, weekly happy hour, catered lunch, health & wellness benefits, 401K matching, generous maternity/paternity leave policies, & unlimited flexible time off.

Apply:


          Percona Live Europe 2018: What’s Up for Wednesday
Welcome to Wednesday at Percona Live Europe 2018, the Open Source Database Conference! Today is the final day! Check out all of the excellent sessions to attend. Please see the important updates below. Download the conference app: if you haven’t already downloaded the app, go to the app store and download the official Percona Live App! You can view the schedule, […]
          vScaler integrates RAPIDS for accelerated data science toolchains
vScaler has incorporated NVIDIA’s new RAPIDS open source software into its cloud platform for on-premise, hybrid, and multi-cloud environments. Deployable via its own Docker container in the vScaler...
       

          Open Source Ball / Open Minds award ceremony: 5×2 tickets to be won!
This Friday the Open Source Ball 2018 takes place at the Palais Wertheim in Vienna. datenschmutz is giving away 5x2 tickets worth €250!
          Ruby/Rails digest #23: Ruby 2.5.3 released, Hanami updated to version 1.3.0, the Action Text framework for Ruby on Rails 6

Hi everyone!

In October the Ruby community delivered plenty of good news. First of all, updated versions of the Ruby language and of the popular Hanami framework were released. The Ruby community is actively working on the Action Text framework, which will be included in Ruby on Rails 6 (don't miss the roundup of Rails 6 news from bogdanvlviv). Also note that CircleCI has added support for GitHub Checks.

Reading

Introducing Action Text for Rails 6 — what the Action Text framework, to be included in Ruby on Rails 6, is all about.

What is new in Rails 6.0 — a roundup of the latest news about Ruby on Rails 6.

Upgrading GitHub from Rails 3.2 to 5.2 — Eileen Uchitelle from the GitHub team talks about upgrading the project to Ruby on Rails 5.2.1.

Cache Invalidation Complexity: Rails 5.2 and Dalli Cache Store — how to avoid problems with cache keys when using Rails 5.2.

Working with ActiveRecord Callbacks — the author shares tips on using ActiveRecord callbacks in Rails applications.

Code Audit: How to Provide the Best Quality for Your Ruby on Rails Application — how to carry out a code audit of a Ruby on Rails application.

Microservices vs spaghetti code are not your only options — the author considers a possible way to standardize the Passenger architecture, following the example of Kubernetes.

Meet Yabeda: Modular framework for instrumenting Ruby applications — meet Yabeda, a framework for collecting metrics in Ruby applications.

How Devise keeps your Rails app passwords safe — a detailed look at how the popular Devise gem works.

CircleCI launches support for GitHub Checks — the CircleCI tool now supports GitHub Checks.

Where Ruby/Sinatra falls short — what to keep in mind when developing applications with Sinatra.

Ruby Method Lookup, RubyVM.stat and Global State — the author describes in detail how and why to avoid defining global methods and constants.

Some notes on what’s going on in ActiveStorage — this article helps you understand how Active Storage works in Ruby on Rails.

Pair With Me: Rubocop Cop that Detects Duplicate Array Allocations — learn to use the popular RuboCop linter to improve the performance of Rails applications.

12 Factor CLI Apps — an introduction to the 12 factor app methodology, developed at Heroku, applied to building CLI applications.

Destructuring Methods in Ruby — how to destructure methods in Ruby.

Rails Parts — the author shares his experience restructuring a Rails application.

Ruby gotchas for the JavaScript developer — what a JavaScript developer should pay attention to when learning Ruby.

Ruby Plotting with Galaaz: An example of tightly coupling Ruby and R in GraalVM — learn to build plots in the R language from Ruby applications using the Galaaz library.

A selection from AppSignal

The Magic of Class-level Instance Variables — what class-level instance variables make possible in Ruby development.

The innards of a RubyGem — the author shows how to create a gem without using Bundler.

Building a Ruby C Extension From Scratch — a short guide to writing Ruby extensions in C.

A selection from BigBinary

Ruby 2.6 adds RubyVM::AST module — a look at the RubyVM::AST module in Ruby 2.6.

Ruby 2.6 Range#cover? now accepts Range object as an argument — the Range#cover? method in Ruby 2.6 now accepts Range objects as arguments.

Rails 5.2 adds DSL for configuring Content Security Policy header — Rails 5.2 adds a DSL for configuring the Content Security Policy header.

Rails 5.2 disallows raw SQL in dangerous Active Record methods preventing SQL injections — version 5.2 of Ruby on Rails disallows raw SQL in dangerous Active Record methods to prevent SQL injections.

Skip devise trackable module for API calls to avoid users table getting locked — the author shares his experience solving a problem with the trackable module when using the popular Devise gem.

A selection from Bozhidar Batsov

A Better Way to Compare Versions in Ruby — how to compare versions when developing in Ruby.

A Safer RuboCop — the author talks about safe autocorrection in RuboCop.

A selection from Igor Springer

5 security issues in Ruby on Rails apps from real life — drawing on his own experience, the author lists five vulnerabilities found in Ruby on Rails applications.

How to log HTTParty requests — how to log all requests sent by the httparty gem.

`ActiveSupport::StringInquirer` magic — how and why to use the ActiveSupport::StringInquirer class.

`ActiveSupport::ArrayInquirer` and even more Rails magic — a look at the ActiveSupport::ArrayInquirer class.

A selection from Jason Swett

What Exactly Makes "Bad" Code Bad? — the author shares his view on what makes code bad and why.

How to See Your Feature Specs Run in the Browser — how to run feature tests in the browser.

Factories and Fixtures in Rails — an overview of three ways to generate test data in Rails applications.

A selection from Josef Strzibny

Debugging silently failing compilation aka Webpacker can’t find application.js in public/packs/manifest.json — the author shares his experience debugging compilation errors when using Webpacker.

Building auto login for fast Rails development with Sorcery — a simple way to speed up development using automatic login.

A selection from Mehdi Farsi

5 Ruby Tips You Probably Don’t Know — the author describes five Ruby features that many developers don't know about.

The Evolution of Ruby Strings from 1.8 to 2.5 — a refresher on how the String class changed from Ruby 1.8 to 2.5.

Why the Ruby community encourages Duck Typing — the author shares his view on why the Ruby community encourages duck typing.

The short guide to learning how Classes work in Ruby — this short guide will help beginners learn how classes work in Ruby.

A selection from reinteractive

How to structure JavaScript code when using AJAX in Rails — two ways to structure JavaScript code when using AJAX in Rails applications.

To Microservice or Monolith, that is the question — the author shares his thoughts on choosing an application architecture.

A selection from RubyGuides

How to Check If a Variable is Defined in Ruby — how to check whether a variable has been initialized in Ruby.

Understanding The Differences Between Puts, Print & P — a refresher on the difference between the three ways of printing output in Ruby.

How to Use RSpec Mocks (Step-By-Step Tutorial) — a step-by-step tutorial on using mocks in the RSpec test framework.

How to Use the Ruby Grep Method (With Examples) — the author shows how to use the grep method in Ruby, with detailed examples.

How to Read & Parse CSV Files With Ruby — how to read and write CSV files, and which converters and gems exist for working with them.

How to Use Ruby Any, All, None & One — a detailed look at these four Enumerable methods.

What is Ruby on Rails? — an overview of Ruby on Rails: the framework's philosophy, reasons to become a Rails developer, and how to start learning Rails.

How to Use The Ruby Map Method (With Examples) — how to use the map method in Ruby.

Understanding Method Visibility In Ruby — the difference between public, private, and protected methods in Ruby.

How To Delegate Methods in Ruby — the author shows several ways to delegate methods in Ruby.

A selection from thoughtbot

Tab completion in GNU Readline: Ruby edition — how to implement command-line tab completion with GNU Readline in Ruby.

Writing Less Error-Prone Code — the author shares tips for writing higher-quality code.

Tutorials

How to Build Chat into Ruby on Rails Applications — how to implement live chat in a Ruby on Rails application.

Simplifying internal validations using Dry-Validation — learn to separate data validation from business logic using the dry-validation gem.

How we halved our memory consumption in Rails with jemalloc — the author shows how to reduce memory usage in a Rails application using the jemalloc memory allocator.

Scale Out Multi-Tenant Apps based on Ruby on Rails — a guide to horizontally scaling multi-tenant Ruby on Rails applications.

Using Ruby on Rails 5.2 Active Storage — how to set up Active Storage when working with version 5.2 of Ruby on Rails.

How to Use Repository Pattern with Active Record — the author shows how to use the repository pattern with Active Record in Rails applications.

How to: Execute RSpec in parallel locally — how to run RSpec tests in parallel in a local environment.

Launching Your Own Ruby Gem — Part 1: Build It — the first part of a detailed guide to creating a gem.

Testing Ruby’s CGI — how to test CGI in Ruby.

Ruby async await — the author explains how to implement async await functionality in Ruby.

Custom URLs in Ruby on Rails: How you can use descriptive slugs instead of IDs — how to implement custom URLs in a Rails application.

Ruby and Rack: The beginning — a look at how Rack interacts with the Webrick, Mongrel, Thin and Puma web servers.

Handling exceptions in Rails API applications — the author shares his experience handling exceptions in Rails API applications.

How to use HMAC-SHA256 to connect to a REST API like Ticketmatic — a short guide to connecting to a REST API that uses the HMAC-SHA256 algorithm.

A series of articles on building a data API with Ruby on Rails 5:

Releases

Ruby 2.5.3 — version 2.5.3 of the Ruby language has been released.

Hanami 1.3.0 — version 1.3.0 of the popular Hanami framework has been released.

Ruby Gems

minitest-mock_expectations — a gem for asserting method calls when working with the Minitest framework.

Salus — a tool for coordinating the work of vulnerability scanners.

Enkrip — a gem that encrypts and decrypts Active Record model attributes.

OurPC — an experimental implementation of a gRPC client and server.

Events

Ruby Meditation #24 — Ruby Meditation #24 takes place in Kyiv on November 3. Topics include domain-driven design in Rails, the runtime model in Ruby, and Capybara optimization.

Conferences

RubyConf 2018 — the RubyConf 2018 conference runs from November 13 to 15 in Los Angeles (USA); it is opened by Ruby creator Yukihiro ’Matz’ Matsumoto.

Listening

The Bike Shed

172: What I Believe About Software — the host and guest discuss the main ingredients of the software development process: what story points are, when to refactor and do code review, and so on.

175: Tell Me When It’s Real — the panel discusses the latest trends in the world of web development.

Ruby Rogues

RR 382: "When to Build... When to Buy" with The Panelists — the panel discusses whether it is worth building new tools or buying third-party solutions.

RR 383: "Rbspy: A New(ish) Ruby Profiler!" with Julia Evans — the main topic of the episode is the rbspy profiler.

RR 384: "Sonic Pi" with Sam Aaron — the hosts are joined by Sam Aaron, developer of the Sonic Pi music programming environment.

RR 385: "Ruby/Rails Testing" with Jason Swett — Ruby Rogues welcomes Jason Swett, host of The Ruby Testing podcast.

RR 386: Web Console Internals with Genadi Samokovarov — the main topic of the episode is using the web console to debug Ruby applications.

RWpod

Ruby on Rails Podcast

246: Trust Arts, Trust Rails with Patrick FitzGerald and Danielle Greaves — the panel discusses their favorite aspects of the Ruby on Rails framework.

247: Introducing Action Text for Rails 6 with Javan Makhmali — this episode is devoted to the Action Text framework, which will be included in Ruby on Rails 6.

248: Diving Into Ruby Weekly with Peter Cooper — the host talks with Peter Cooper, editor of the Ruby Weekly newsletter.

The Ruby Testing Podcast

013 — The Balance Between Testing and Feature Development with Dave Kimura — how to find a balance between writing feature code and writing tests.

014 — Chris Oliver, Creator of GoRails — the host and guest discuss a range of topics, including integration and unit tests as well as the Cucumber test framework.

016 — Fast Tests with Vladimir Dementyev — the panel discusses how to make Ruby test runs faster.

Remote Ruby

What else can Rails add by default? — the podcast discusses GitHub's upgrade to Rails 5.2, the Action Text framework, and the need to add full-featured built-in authentication to Rails.

The Yak Shave

4: Folks are in a Stink — the host and guest discuss the importance of documentation in software development and share tips on working with databases, APIs, and so on.

5: A Series of Anecdotes — the panel discusses the importance of feedback when developing open source software.

6: The Podcast After the Last Podcast — listen to learn what WebAssembly is and how it can be used in web development.

Watching

Alpha preview: Action Text for Rails 6 — Ruby on Rails creator David Heinemeier Hansson talks about the Action Text framework, which will be included in Ruby on Rails 6.

The October episodes of GoRails, in which the host continues his screencast series on nested comments and ElasticSearch, shows how to build an application based on Slack slash commands, and demonstrates how to use the name_of_person gem:

October's paid screencasts from Drifting Ruby:

October's paid screencast episodes from Ruby Tapas:


If you have topics, materials or events that should be included in the next digest, leave a comment or write to volodymyr.vorobiov@rubygarage.org. Thanks to the RubyGarage team for their help in preparing this digest.


← Previous issue: Ruby digest #22


          Mozilla Firefox Developer Edition, Portable 64b6 (web browser) Released

PortableApps.com is proud to announce the release of Mozilla Firefox® Developer Edition, Portable 64b6. It's the developer's edition of the Mozilla Firefox web browser bundled with a PortableApps.com Launcher as a portable app, so you can do web development on the go. Both the 32-bit and 64-bit versions are included. And it's open source and completely free.

Mozilla®, Firefox® and the Firefox logo are registered trademarks of the Mozilla Foundation and used under license.

Update automatically or install from the portable app store in the PortableApps.com Platform.


          Mozilla Firefox, Portable Edition 64.0 Beta 6 (web browser) Released

PortableApps.com is proud to announce the release of Mozilla Firefox®, Portable Edition 64.0 Beta 6. Beta releases won't affect your standard local or portable Firefox install. It's packaged in PortableApps.com Format so it can easily integrate with the PortableApps.com Platform. And, as always, it's open source and completely free.

Mozilla®, Firefox® and the Firefox logo are registered trademarks of the Mozilla Foundation and are used under license.

Update automatically or install from the portable app store (advanced apps enabled) in the PortableApps.com Platform.


          Weak self-scrambling SSDs open up Windows BitLocker
Adding software encryption recommended to boost BitLocker security. (Image: Crucial MX300.)

Users who believe the data on their drives are protected with Microsoft's Windows Bitlocker could be in for lengthy workarounds, after researchers showed that the default hardware-based encryption on solid state storage isn't secure.

Carlo Meijer and Bernard van Gastel of Radboud University, Netherlands, detailed in their paper [pdf] how techniques known to be used by the US National Security Agency (NSA) can get around encryption that looks strong and impenetrable on paper.

This is a problem for Bitlocker, which defaults to hardware encryption on SSDs as per the Trusted Computing Group Opal Self-Encrypting Drive (SED) specification.

Bitlocker can be coaxed into using software encryption with the Windows Group Policy tool, if users have admin rights on the computers in question.

However, on Bitlocked drives that are already using the default hardware encryption, changing Group Policy settings has no effect.

"Only an entirely new installation, including setting the Group Policy correctly and securely erasing the internal drive, enforces software encryption," the researchers noted.

As a workaround to boost Bitlocker security, the researchers suggested using an open source utility such as VeraCrypt along with the SSD hardware encryption.
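As a quick way to check which mode is in effect on a given volume, the built-in manage-bde utility reports the encryption method in use; if it indicates hardware encryption, the volume is relying on the SSD's self-encryption that the researchers bypassed (exact output wording varies between Windows versions):

    manage-bde -status C: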

Using different techniques such as Joint Test Action Group (JTAG) industry standard debugging ports, and modified firmware or password validation, the researchers found that they could bypass full disk encryption on the following solid-state drives:

  • Crucial MX100
  • Crucial MX200
  • Crucial MX300
  • Samsung 840 EVO
  • Samsung 850 EVO
  • Samsung T3
  • Samsung T5

Crucial and Samsung were given six months by the researchers to issue fixed firmware for the SSDs; while the former company updated all three models, Samsung only issued new firmware for the T3 and T5, and recommends software encryption for the 840 and 850 EVO drives.

With improved cryptographic hardware in modern processors the main reason for using only the built-in encryption feature in SSDs - improved performance - no longer applies, the researchers said.

Instead, they suggested a combination of the two for users to keep their data on SSDs secure.

"One should not rely solely on hardware encryption as offered by SSDs for confidentiality," they said.

"We recommend users that depend on hardware encryption implemented in SSDs to employ also a software full-disk encryption solution, preferably an open-source and audited one.


          A first look at changes coming in ASP.NET Core 3.0      Cache   Translate Page      
While we continue to work on finalizing the next minor version of ASP.NET Core, we’re also working on major updates to our next release that will include some changes in how projects are composed with frameworks, tighter .NET Core integration, and 3rd party open source integration, all with the goal of making it easier and... Read more
          Open Source in the Era of 5G      Cache   Translate Page      

Last week, I had the pleasure of speaking once again at All Things Open 2018. If you’re not familiar with All Things Open (ATO), you should be! Billed as “a conference exploring open source, open tech and the open web in the enterprise,” this was the sixth iteration of ATO—and wow, has it ever grown!

The post Open Source in the Era of 5G appeared first on VMware Open Source Blog.


          Apache Struts 2.3.x vulnerable to two year old RCE flaw      Cache   Translate Page      

The Apache Software Foundation is urging users that run Apache Struts 2.3.x to update the Commons FileUpload library to close a serious vulnerability that could be exploited for remote code execution attacks. The problem: Apache Struts 2 is a widely-used open source web application framework for developing Java EE web applications. The Commons FileUpload library is used to add file upload capabilities to servlets and web applications. The vulnerability (CVE-2016-1000031) is present in Commons FileUpload … More

The post Apache Struts 2.3.x vulnerable to two year old RCE flaw appeared first on Help Net Security.


          The seventh edition of A Coruña's big maker gathering arrives: the OSHWDem fair      Cache   Translate Page      
This weekend, on Saturday the 10th, A Coruña will have a chance to see technologies, talk about them and even make them. The reason? The celebration, at the Domus, of the OSHWDem fair (Open Source Hardware Demonstration ... - Source: www.codigocero.com
          Open Source Intelligence (OSINT) Analyst, JBLM WA Job - SAIC - Fort Lewis, WA      Cache   Translate Page      
For information on the benefits SAIC offers, see My SAIC Benefits. Headquartered in Reston, Virginia, SAIC has annual revenues of approximately $4.5 billion....
From SAIC - Fri, 05 Oct 2018 02:47:27 GMT - View all Fort Lewis, WA jobs
          How to Install Nginx with Virtual Hosts and SSL Certificate      Cache   Translate Page      
Nginx (short for Engine-x) is a free, open source, powerful, high-performance and scalable HTTP and reverse proxy server, a mail and standard TCP/UDP proxy server. It is easy to use and configure, with a...
          Consultancy Services for Understanding Risk - NRIP      Cache   Translate Page      
Jamaica Social Investment Fund invites tenders for the Consultancy Services for Understanding Risk - NRIP project
being funded by IBRD


Scope of Work
The National Risk Information Platform (NRIP) includes the integration of a Coastal Risk Atlas (CRA). The
Consultant Firm is to provide a modern open source platform for interaction and sharing of multi-hazard risk data
among diverse users, engaging national stakeholders in the process. This involves design, development and
deployment of the platform. Training is to be developed and executed for the users of the platform. Maintenance
and support of the implemented platform is also required.

Tender documents are available upon payment, by cash or certified/manager's cheque, of a non-refundable fee in the amount of $0
Tenders should be submitted no later than 4:00 PM, 2018-11-20 to:
Ground Floor The Dorchester
11 Oxford Road
Kingston 5, Jamaica
For more information contact Suzette Livermore at
Phone: 968-4545
Email: contracting@jsif.org
or visit Jamaica Social Investment Fund's website at http://www.jsif.org


          An Open Memo to IBM: With Great Power Comes Great Responsibility      Cache   Translate Page      

IBM’s recent acquisition of Red Hat came as a shock to many in the industry. It does have some overarching positives. It validates the mainstream acceptance of open source by enterprises. It proves hybrid cloud is as critical as public cloud to these enterprises, if not more so. And it lends credibility to a fourth public [...]

Read More...

The post An Open Memo to IBM: With Great Power Comes Great Responsibility appeared first on NGINX.


          displaycal 3.7.1.1-1 x86_64      Cache   Translate Page      
Open Source Display Calibration and Characterization powered by Argyll CMS (Formerly known as dispcalGUI)
          Update - Freeware - Cryptomator v1.4.0      Cache   Translate Page      
Cryptomator provides you with free, Open Source client-side encryption for your cloud files. It works with any cloud provider, including Dropbox, Google Drive, OneDrive and any other storage service.....
          Open Source: Facebook to hand GraphQL over to the Linux Foundation      Cache   Translate Page      
The GraphQL query language, initiated by Facebook, will in future be managed by a collaboration project under the umbrella of the Linux Foundation, as is already the case with Node.js. The project is backed by Airbnb, GitHub and Twitter, among others. (Linux Foundation, Social Networks)
          Open Source by Default | LINUX Unplugged 274      Cache   Translate Page      
Have the revolutionaries won the war against proprietary software? That’s the argument being made. And we argue, what else did you expect?
          Re: Example?      Cache   Translate Page      
Open Source: Facebook to hand GraphQL over to the Linux Foundation
          Example?      Cache   Translate Page      
Open Source: Facebook to hand GraphQL over to the Linux Foundation
          Capping (v8.0.19)      Cache   Translate Page      
Change Log:
--------------------
Capping
v8.0.19 (2018-11-06)
Full Changelog Previous releases

Update deDE, closes #17
Fix Eye of the Storm RBG, closes #16
Always hide queue bars when inside a pvp instance.


Description:
--------------------
https://cdn-wow.mmoui.com/preview/pvw68420.png
Please support my work on Patreon!

Battleground timers and other PvP features.

Configure by right-clicking the anchor or by typing /capping

Features

All battlegrounds/arenas have queue timers
Arenas - Shadow Sight timer, arena time remaining
Alterac Valley - Node timers, auto quest turnins
Arathi Basin - Node timers and final score estimation
Eye of the Storm - Flag respawn timer, final score estimation
Isle of Conquest - Node timers and siege engine timer
Warsong Gulch - Flag respawn timer
Wintergrasp - Wall attack alerts
Battle for Gilneas - Node timers and final score estimation
Deepwind Gorge - Node timers and final score estimation


Capping is open source and development is done on GitHub. You can contribute code, localization, and report issues there: https://github.com/BigWigsMods/Capping
          IBM Will Acquire Open-Source Software Company Red Hat In $34 Billion Deal      Cache   Translate Page      
In what may be the most significant tech acquisition of the year, IBM says it will acquire open-source software company Red Hat for approximately $34 billion. Under the terms of the deal announced Sunday, IBM will acquire Red Hat for $190 a share — a premium of more than 60 percent over Red Hat's closing price of $116.68 on Friday. "The acquisition of Red Hat is a game-changer. It changes everything about the cloud market," Ginni Rometty, IBM chairman, president and chief executive officer said in a statement . "This is the next chapter of the cloud." Raleigh, N.C.–based Red Hat makes software for the open-source Linux operating system, an alternative to proprietary software made by Microsoft. It sells features and support on a subscription basis to its corporate customers. Red Hat President and CEO Jim Whitehurst called the deal "a banner day for open source" and pointed to his company's long history of partnering with IBM. He said the acquisition would give Red Hat greater scale and
          Improving the GoDaddy User Experience with Elastic Machine Learning      Cache   Translate Page      

This post is a recap of a community talk given at Elastic{ON} 2018. Interested in seeing more talks like this? Check out the conference archive or find out when the Elastic{ON} Tour is coming to a city near you.

GoDaddy is known for web hosting and domain management, as anyone that’s watched the Super Bowl in recent years would know. But with over 17 million customers, 75 million domains, and 10 million hosted sites, they’re also well versed in big data. Keeping sites running smoothly requires insight into every piece of their infrastructure, from virtual server patch level to network hiccups to malicious attacks. This could be difficult with over 200,000 messages coming in every second (DNS queries, system logs, business events, and more), but with its speed at scale, the Elastic Stack is up to the task.

GoDaddy’s introduction to Elasticsearch was a lot like other companies that use open source software. Disparate teams throughout the company set up their own clusters to handle their own specific needs. It got the job done, but this unmanaged deployment model led to hundreds of clusters running on varying versions of Elasticsearch analyzing siloed data. Knowing there was a better way, they formed a team around managing the deployment of Elasticsearch in 2014. This team now manages over 60 Elasticsearch clusters spanning 700+ Docker containers, with feeds coming in from teams all over the company. These clusters account for over 270 TB of data from their (11 PB) HDFS environment.

One of the first use cases their new Elasticsearch team tackled was managing patch compliance throughout their entire ecosystem. In the pre-Beats world of 2014, GoDaddy developed Windows and Linux agents (similar to Auditbeat and Winlogbeat) to send system data to Elasticsearch. With these agents installed on all of their servers (bare metal and virtual), GoDaddy was able to gain valuable insight into patching levels and compliance throughout their entire infrastructure. And by utilizing different dashboards and visualizations within Kibana, they were able to easily provide fine-grain patch information to admins and engineers, as well as high-level business reports to executives — all while accessing the same centralized data so everyone is on the same page.
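
To make the hand-off concrete, here is a minimal sketch of the kind of document such an agent might ship to Elasticsearch. The index name, field names, and cluster address are illustrative assumptions (not GoDaddy's actual schema), and it assumes a 7.x-style elasticsearch-py client.

    # Sketch: push one host's patch status into Elasticsearch (assumed index and fields).
    from datetime import datetime, timezone
    import platform
    import socket

    from elasticsearch import Elasticsearch  # pip install elasticsearch

    es = Elasticsearch(["http://localhost:9200"])  # assumed cluster address

    doc = {
        "@timestamp": datetime.now(timezone.utc).isoformat(),
        "host": socket.gethostname(),
        "os": platform.platform(),
        "pending_patches": 3,   # a real agent would query yum/apt/WSUS for this
        "compliant": False,
    }

    # One document per host per run; Kibana dashboards aggregate over the index.
    es.index(index="patch-compliance", body=doc)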

Maintaining server patch levels is important for keeping site traffic flowing, and that flow helps keep users engaged. If a website is loading slowly, visitors will go somewhere else. So, with the experience of their millions of customers in mind, GoDaddy knew they needed to track how well data centers were performing and how their performance impacted visitors. They already had all of the data they needed, as every component of their systems generated logs, but they needed a way to view it holistically.

Centralized Logging with Machine Learning for Anomaly Detection

GoDaddy needed to centralize and analyze their various performance and engagement data sets, and the Elastic Stack was the answer. By sending netflow data, sFlow data, real user management (RUM), and peering relationship and routing data to Elasticsearch, they were able to get a much more detailed view of user experience and system performance data — a level of detail that can only be seen by analyzing all of the different data sources at once. And since then, GoDaddy has begun to take that data even further with the help of Elastic machine learning features.

Having centralized access to mountains of system data is great, but tracking down problems can be difficult. GoDaddy tracks every user click and website interaction, but with millions of pages operating around the world, there’s no way any team of humans could sift through all that data. Fortunately, Elastic machine learning features make anomaly detection a simple task. Working with machine learning experts at Elastic, GoDaddy has been able to implement RUM-focused machine learning jobs that have made anomaly detection easy.

“In terms of the overall effort, leverage your Elastic team. They are extremely helpful. We've had a very close partnership and very frequent calls, a completely open line of communication around all the updates, you will get stuck, use them for that. That's really what they're good at.” - Felix Gorodishter, Principal Architect, GoDaddy

By specifying a threshold for page load times and parameters around page traffic, the GoDaddy team lets Elastic machine learning features handle the job of learning what’s normal and what’s anomalous, and then letting them know whenever there’s a problem. Machine learning cuts through the noise so GoDaddy can focus on what’s important.
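
As an illustration only (not GoDaddy's actual configuration), an anomaly detection job along those lines can be created through Elasticsearch's machine learning API. The job name, field names, and bucket span below are assumptions, the endpoint prefix depends on the version (_xpack/ml on 6.x, _ml on 7.x), and a machine learning license is required.

    # Sketch: an anomaly detection job for unusually high page load times, split per site.
    import requests

    ES = "http://localhost:9200"   # assumed cluster address

    job = {
        "description": "RUM page load time anomalies (illustrative)",
        "analysis_config": {
            "bucket_span": "15m",
            "detectors": [
                # high_mean flags buckets where the average load time is anomalously high;
                # partitioning by site lets each property learn its own baseline.
                {
                    "function": "high_mean",
                    "field_name": "page_load_ms",
                    "partition_field_name": "site",
                }
            ],
        },
        "data_description": {"time_field": "@timestamp"},
    }

    r = requests.put(f"{ES}/_xpack/ml/anomaly_detectors/rum-page-load", json=job)
    r.raise_for_status()
    print(r.json())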

Learn about how GoDaddy is leveraging Elastic machine learning features to monitor hosted site performance by watching Stories from the Trenches at GoDaddy: How Big Data Insights Equal Big Money from Elastic{ON} 2018. You’ll also get a peek into the interesting ways they’re using machine learning to monitor business KPIs around product adoptions and hear about the lessons they’ve learned along the way.


          Grant Evaluation Analyst and Coordinator - Lawrence University of Wisconsin - Appleton, WI      Cache   Translate Page      
Proficiency with web-based survey software (Qualtrics), open source web content management systems (Drupal), and web enabled enterprise reporting systems...
From Lawrence University of Wisconsin - Sat, 06 Oct 2018 06:49:40 GMT - View all Appleton, WI jobs
          Assistant Director of Research Administration - Lawrence University of Wisconsin - Appleton, WI      Cache   Translate Page      
Proficiency with web-based survey software (Qualtrics), open source web content management systems (Drupal), and web enabled enterprise reporting systems...
From Lawrence University of Wisconsin - Fri, 14 Sep 2018 06:52:02 GMT - View all Appleton, WI jobs
          LISWire: Equinox and Above the Treeline Announce Koha Integration with Edelweiss+Analytics      Cache   Translate Page      

Duluth, Ga., November 6, 2018: As part of their ongoing commitment to supporting public libraries, Above the Treeline and the Equinox Open Library Initiative are excited to announce a tool which facilitates export of data from Koha into Edelweiss, allowing for detailed collection analysis. Libraries who use Koha will be able to easily configure scheduled data exports from their Koha ILS into Edelweiss+Analytics.

“We are very excited to extend Edelweiss+ support to Koha users,” said Mike Rylander, president and founder of Equinox. “Equinox is always looking for ways to improve library services through partnerships, and Above the TreeLine has been extremely easy to work with."

Equinox Open Library Initiative is a nonprofit founded in 2007. They currently help libraries and consortia of all types leverage open source solutions to best serve their patrons. They are passionate about the ongoing development of open source technologies for libraries. The addition of Edelweiss+Analytics functionality provides Koha libraries with increased flexibility and resources.

“The teams at Equinox and Above the Treeline are both committed to supporting the work of library professionals,” said John Rubin, Founder and CEO of Above the Treeline, the company behind Edelweiss+. “With the technical setup accelerated and simplified, libraries can quickly start using tools to help them manage their collection and maximize their budgets.”

The Edelweiss+ platform is used by over 100,000 book professionals to sell, discover, and order new titles. It hosts digital catalogs from all major publishers and over 95% of the US frontlist titles. Edelweiss+Analytics is an add-on module that allows libraries to simplify collection management and benchmark their performance. It helps librarians identify circulation trends, weed stale titles, select faster-circulating titles, compare location and collection performance, benchmark to libraries and retailers across the country, and more.

The partnership between Equinox and Above the Treeline started as a result of partnering to integrate Edelweiss+Analytics and Evergreen ILS, another open source solution supported by Equinox. The functionality for Evergreen was so popular with libraries that the team decided to expand the awesomeness to Koha.

Interested libraries using Koha should inquire with Above the Treeline to learn more about the platform and how it works here: http://www.abovethetreeline.com/koha.

About Equinox Open Library Initiative
Equinox Open Library Initiative is a nonprofit company engaging in literary, charitable, and educational endeavors serving cultural and knowledge institutions. As the successor to Equinox Software, Inc., the Initiative carries forward a decade of service and experience with Evergreen and other open source library software. At Equinox OLI we help you empower your library with open source technologies. To learn more, please visit https://www.equinoxinitiative.org/.

About Koha
Created in 1999 by Katipo Communications for the Horowhenua Library Trust in New Zealand, Koha is the first open source Integrated Library System to be used worldwide. The software is a full-featured ILS with a dual-database design (search engine and RDBMS) built to be compliant with library standards. Koha’s OPAC, staff, and self-checkout interfaces are all web applications. Distributed under the General Public License (GPL), libraries are free to use and install Koha themselves or to purchase support and development services. For more information on Koha, please visit http://koha-community.org.

About Above the Treeline
We connect the world with books. Edelweiss+, powered by Above the Treeline, gives over 100,000 book professionals a single source of information to sell, discover, and order new titles. Our analytical and workflow tools have helped publishers, booksellers, libraries, and reviewers “work better, read more” for over 15 years. We proudly serve thousands of retail outlets, libraries, and publishing houses across the United States and Europe.

Above the Treeline is a privately-owned company based in Ann Arbor, MI. It was founded in 2002 by John Rubin, the son of an Indie bookseller and grandson of a librarian, who saw an opportunity to increase business efficiency in the book industry with technical solutions. While our product functionality has grown significantly since then, we continue to be dedicated to serving book professionals. For more information, go to http://www.abovethetreeline.com
###


          Internship: HoloLens graduation assignment in Veenendaal      Cache   Translate Page      
Converting DICOM to 3D surface models for HoloLens and AR

Does it sound interesting to work with the HoloLens during your graduation internship? During your assignment, will you automate the conversion of DICOM images into 3D surface models that can be loaded directly into the HoloLens?

The HoloLens, and Augmented Reality in general, will set off a revolution in healthcare. In hospitals, and operating rooms in particular, 3D models of the patient and their organs are used more and more. At the moment this data is still projected in 2D on (flat) screens. Surgeons have indicated that with AR (mixing virtual 3D images with the real world) they can save hours on some operations and raise the quality of the result. That lowers costs and increases the success of the operation for the patient.

One application is showing tumors so that the surgeon can remove them efficiently. Another is fitting a prosthesis onto a hologram of a patient, so the operation can be prepared better and up to 2 hours can be saved on placing screws correctly. Various applications in healthcare education and patient information are also conceivable.

Research
During your graduation assignment you will automate the conversion of DICOM images into 3D surface models that can be loaded directly into the HoloLens. Parts of several open source packages may have to be combined to reach the end result. Since DICOM images vary widely, the process must be parameterizable.
There are also sometimes differences in the locations of the various images. The software must be able to handle this and apply corrections to the position and orientation of the result, so that the images line up in exactly the same place. Sometimes it is impossible to extract a desired object from the scan based on a bit range. In that case the application must allow a region of an image to be marked manually, which is then used as a boundary for the further 3D conversion.

You will work out this assignment further together with one of our engineers, with the goal of arriving at a definitive graduation assignment, complete with deliverables, that suits you. ...
          MapR Solutions Architect - Perficient - National, WV      Cache   Translate Page      
Design and develop open source platform components using Spark, Java, Oozie, Kafka, Python, and other components....
From Perficient - Wed, 03 Oct 2018 20:48:20 GMT - View all National, WV jobs
          (USA-OH-Columbus) Senior DevOps Engineer      Cache   Translate Page      
Senior DevOps Engineer Senior DevOps Engineer - Skills Required - Devops, AWS, Cloud, Agile, Build and CI automation tools, Scripting languages- Python/Bash/Go, Docker Containers/containers, Linux System Administration, Collaboration tools - Jira/Confluence/Slack If you are a DevOps Engineer with 5+ years of experience, please read on! --CANDIDATES WHO FILL OUT THE SKILLS AND QUESTIONS PORTION OF THE APPLICATION WILL RECEIVE TOP PRIORITY-- With our HQ in Europe, and multiple offices in the US, we are a leading developer, manufacturer and supplier of RFID products and IoT solutions. In other words...we make products that help businesses identify, authenticate, track and complement product offerings. We are growing at an extremely high rate and are looking to add a talented Senior DevOps Engineer in our Columbus, OH office who has over 5 years of relevant experience, with strong AWS and Automation experience. The engineer will create cloud formation templates to build AWS services and maintain development, staging, demo, and production environments. This is a unique, special opportunity to get in early with a rapidly growing company. Sound like you? Email your most updated resume ASAP to neda.talebi@cybercoders.com! **What You Will Be Doing** - Review and analyze existing cloud applications (AWS) - Design and implement solutions and services in the cloud - Provide guidance and expertise on DevOps, migrations, and cloud technologies to developers and in some cases partners or customers - Be devoted to knowing the latest trends in the Cloud space and solutions - Demonstrated usage of Agile and DevOps processes and activities on project execution - Ability to use wide variety of open source technologies and cloud services - Experience w/ automation and configuration management w/in DevOps environment **What You Need for this Position** - 5+ years relevant experience - Strong demonstrated usage of Agile and DevOps processes on project execution - Strong AWS experience - Previous experience maintaining applications in the cloud - required - Strong knowledge of Build & CI automation tools - Strong knowledge of docker containerization - Strong knowledge of source code management tools & artifact management (GitHub) - Good knowledge of scripting languages - Python, Bash, Go - Good knowledge of linux system administration - Good knowledge of infrastructure as Code - AWS CloudFormation - General knowledge of collaboration tools - Jira, Confluence, Slack - Strong communication skills - Strong problem solving skills - Takes initiative/lead by example -AWS certification is a plus! **What's In It for You** - Competitive base salary (DOE) - PTO - Health insurance - top of the line - 401k w/ matching - Huge room for growth - Unique, special opportunity to get in early with a growing company So, if you are a DevOps Engineer with 5+ years of experience, please apply today! --CANDIDATES WHO FILL OUT THE SKILLS AND QUESTIONS PORTION OF THE APPLICATION WILL RECEIVE TOP PRIORITY-- Applicants must be authorized to work in the U.S. **CyberCoders, Inc is proud to be an Equal Opportunity Employer** All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, disability, protected veteran status, or any other characteristic protected by law. 
**Your Right to Work** – In compliance with federal law, all persons hired will be required to verify identity and eligibility to work in the United States and to complete the required employment eligibility verification document form upon hire. *Senior DevOps Engineer* *OH-Columbus* *NT2-1492892*
          Moscow Apache Ignite Meetup #5      Cache   Translate Page      
Hi everyone!

On November 14 we invite you to the next Apache Ignite meetup in Moscow. It will be of interest to architects and developers interested in Apache Ignite, the open source platform for distributed applications.

Program

18:30 — 19:00 — Guests arrive, welcome coffee

Talks:

  • Measuring Apache Ignite performance: how we do benchmarks — Ilya Suntsov (GridGain)
  • Apache Ignite TeamCity Bot: fighting flaky tests in an open source community — Dmitry Pavlov (GridGain) and Nikolay Kulagin (Sberbank Technologies)
  • Transparent Data Encryption: the story of developing a major feature in a large open source project — Nikolay Izhikov, Apache Ignite Committer

22:00 — 22:30 — Raffle of useful books and informal networking

The event is free; registration is required.
          COSCon Bridges East & West, Open Source Powers Now & Future      Cache   Translate Page      

The OSI was honored to participate in the 2018 China Open Source Conference (COSCon'18) hosted by OSI Affiliate Member KAIYUANSHE in Shenzhen, China. Over 1,600 people attended the exciting two-day event, with almost another 10,000 watching via live-stream online. The conference boasted sixty-two speakers from twelve countries, with 11 keynotes (including OSI Board alum Tony Wasserman), 67 breakout sessions, 5 lightning talks (led by university students), 3 hands-on camps, and 2 specialty forums on Open Source Education and Open Source Hardware.

COSCon'18 also served as an opportunity to make several announcements, including the publication of "The 2018 China Open Source Annual Report", the launch of "KCoin Open Source Contribution Incentivization Platform", and the unveiling of KAIYUANSHE's "Open Hackathon Cloud Platform".

Since its foundation in October of 2014, KAIYUANSHE has continuously helped open source projects and communities thrive in China, while also contributing back to the world by "bringing in and reaching out". COSCon'18 is one more way KAIYUANSHE serves to: raise awareness of, and gain experience with, global open source projects; build and incentivize domestic markets for open source adoption; study and improve open source governance across industry sectors; promote and serve the needs of local developers; and identify and incubate top-notch local open source projects.

In addition to all of the speakers and attendees, KAIYUANSHE would like to thank their generous sponsors for all of their support in making COSCon'18 a great success.

2018 China Open Source Annual Report - Created by KAIYUANSHE volunteers over the past six months, the 2018 Open Source Annual Report describes the current status, and unique dynamics, of Open Source Software in China. The report provides a global perspective with contributions from multiple communities, and is now available on GitHub: contributions welcome.

KCoin - Open Source Contribution Incentivization Platform - KCoin, an open source, blockchain-based, contribution incentivization mechanism, was launched at COSCon'18. KCoin is currently used by three projects including KFCoding--a next generation interactive developer learning community, ATN--an AI+Blockchain-based open source platform, and Dao Planet--a contribution-based community incentive infrastructure.

Open Hackathon Platform Donation Ceremony - Open Hackathon Platform is a one-stop cloud platform for hosting or participating online in hackathons. Originally developed by and run internally for Microsoft development, the platform was officially donated to KAIYUANSHE by Microsoft during the conference. Since May of 2015 the open source platform has hosted more than 10 hackathons and other collaborative development efforts including hands-on camps and workshops, and is the first project to be contributed by a leading international corporation to a Chinese open source community. Ulrich Homann, Distinguished Architect at Microsoft, who presided over the dedication, offered: “We are looking forward to contributions from the KAIYUANSHE community which will make the Open Hackathon Cloud Platform an even better platform for your needs. May the source be with you!”

Open Source 20-Year Anniversary Celebration Party - Speakers, sponsors, community and media partners, and KAIYUANSHE directors and officers came together to celebrate the 20-year anniversary of Open Source Software and the Open Source Initiative. The evening was hosted by OSI Board Director Tony Wasserman, and Ross Gardler of the Apache Software Foundation, who both shared a few thoughts about the long journey and success of Open Source Software. Other activities included a "20 Years of Open Source Timeline", where attendees added their own memories and milestones, and "Open-Source-Awakened Jedi" cosplay, with Kaiyuanshe directors and officers serving OSI 20th Anniversary cake as Jedi warriors (including cutting the cake with light sabers!).

The celebration also provided an opportunity to recognize the outstanding contributions to KAIYUANSHE and open source by two exceptional individuals. Cynthia Xin and Junbo Wang were both awarded the "Open Source Star" trophy. Cynthia was recognized for her work as both the Event Team Lead and Community Partnership Team Lead, while Junbo Wang was recognized for contributions as the Open Hackathon Cloud Platform Infrastructure Team Lead and KCoin Project Lead.

"May the source be with you!" Fun for all at the 20th Anniversary of Open Source party during COSCon'18.

 

Other highlights included:

  • A "Fireside Chat" with Nat Friedman, GitHub CEO, and Ted Liu, Kaiyuanshe Chairman
  • Apache Project Incubation
  • Implementing Open Source Governance at Scale
  • Executive Roundtable: "Collision of Cultures"
  • 20 years of open source: Where can we do better?
  • How to grow the next generation of university talent with open source.
  • Open at GitLab: discussions and engagement.
  • Three communities--Open Source Software (OSS), Open Source Hardware (OSHW) and Creative Commons (CC)--on stage, sharing and brainstorming.
  • Made in China, "Xu Gu Hao": open source hardware and education for the fun of creating!
Former OSI Board Director Tony Wasserman presents at COSCon'18

 

COSCon'18 organizers would like to recognize and thank the international and domestic communities for their support: the Apache Software Foundation (ASF), Open Source Initiative (OSI), GNOME, Mozilla, FreeBSD and another 20+ domestic communities. As of Oct. 23rd, articles published for COSCon'18 and shared by the domestic communities had drawn more than 120,000 views, with more shares to come from the international communities. We are grateful for these lovely community partners. The board of the GNOME Foundation also sent a greeting video for the conference.

Many attendees also offered their thoughts on the event...

COSCon was a great opportunity to meet developers and learn how GitHub can better serve the open source community in China. It is exciting to see how much creativity and passion there is for open source in China.
---- Nat Friedman, CEO, GitHub

COSCon is the meetup place for open source communities. No matter where you are, on stage or in the audience crowd, the spirits of openness, freedom, autonomy and collaboration run through the entire conference. Technologies rises and falls, only the ecosystem sustains over the community.
---- Tao Jiang, Founder of CSDN

When I visited China in 2015, I said "let's build the bridge together", in 2018 China Open Source Conference, I say "let's cross the bridge together!"
---- Ross Gardler, Executive Vice President, Apache Software Foundation

The conference was an excellent opportunity to learn about "adoption and use of FOSS from industry leaders in China and around the world."
---- Tony Wasserman, OSI Board Member Alumni, Professor of Carnegie Mellon University

I'm very glad to see the increasing influence power of KAIYUANSHE and wish it gets better and better.
---- Jerry Tan, Baidu Open Source Lead & Deep Learning Evangelist

It is a great opportunity to share Microsoft’s Open source evolution with the OSS community in China through the 2018 ConsCon conference. I am honored to officially donate the Microsoft Open Hackathon platform to the Kayuanshe community. Contributing over boundaries of space and time is getting more important than ever – an open platform like the Microsoft Open Hackathon environment can bring us together wherever we are, provide a safe online environment enabling us to solve problems, add unique value and finally have lots of fun together.
---- Ulrich Homann,Distinguished Architect, Microsoft

I was impressed by the vibrant interest in the community for OSS and The Apache Software Foundation, particularly by young developers.
---- Dave Fisher, Apache Incubator PMC member & mentor

Having the China Open Source Conference is a gift for the 20-year anniversary of the birth of open source from the vast number of Chinese open source fans. In 2016, OSI officially announced that Kaiyuanshe becomes an OSI affiliate member in recognizing Kaiyuanshe's contribution in promoting open source in China. Over the years, the influence of Kaiyuanshe has been flourishing, and many developers have participated & contributed to its community activities. In the future, Huawei Cloud is willing to cooperate with Kaiyuanshe further to contribute to software industry growth together.
---- Feng Xu, founder & general manager of DevCloud, Huawei Cloud


          (USA-CA-San Jose) Principal GUI Developer      Cache   Translate Page      
Principal GUI Developer (Widgets, Enterprise) Principal GUI Developer (Widgets, Enterprise) - Skills Required - Widget design, JavaScript, UX, HTML5, CSS3, REST API, UI, Enterprise Products, REST APIs If you are a Senior GUI Developer (Enterprise, Storage, Data Center) with experience, please read on! **What You Will Be Doing** -As a key member of the GUI team, ensure success in building Robin's Web application for various platforms, browsers, and resolutions. -Work closely with the other developers in the team. -Build reusable widgets libraries for future use. -Optimize Application for performance and response time. -Utilize latest Web technologies, tools and frameworks such as: JavaScript, HTML5, CSS3, JQuery, REST APIs. -Research, use and customize open source widgets. **What You Need for this Position** -Experience with HTML, CSS, JavaScript, and jQuery. -Experience working with GIT. -Love to learn and a quick self-learner. -Strong team and communication skills. -Strong attention to detail and good organizational skills. -Passion for GUI development. -Experienced software engineer or an enthusiastic college grad. Preferred skills -Experience with CodeIgniter. -Experience with PHP. -Experience with cross browser development. **What's In It for You** Highly competitive job offers and equity!!! Work with an amazing team!!! So, if you are a Senior GUI Developer (Enterprise, Storage, Data Center) with experience, please apply today! Applicants must be authorized to work in the U.S. **CyberCoders, Inc is proud to be an Equal Opportunity Employer** All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, disability, protected veteran status, or any other characteristic protected by law. **Your Right to Work** – In compliance with federal law, all persons hired will be required to verify identity and eligibility to work in the United States and to complete the required employment eligibility verification document form upon hire. *Principal GUI Developer* *CA-San Jose* *JG2-1493051*
          (USA-NC-Raleigh) Embedded Engineer - Linux, C++, JavaScript      Cache   Translate Page      
Embedded Engineer - Linux, C++, JavaScript Embedded Engineer - Linux, C++, JavaScript - Skills Required - Linux, C++, JavaScript, Android, GIT, Embedded, communication Protocol If you are a Embedded Engineer with experience, please read on! **Top Reasons to Work with Us** - HUGE Room for Growth - Great Work/Life Balance/Autonomy - Competitive Pay **What You Will Be Doing** Design and implement software of embedded devices and systems from requirements to production and deployment Design, develop, code, test and debug system software Review code and design Analyze and enhance efficiency, stability, and scalability of system resources Integrate and validate new product designs Support software QA and optimize I/O performance Interface with multiple teams to design and develop Assess third party and open source software **What You Need for this Position** At Least 3 Years of experience and knowledge of: - Linux - C+- JavaScript - Android - GIT - Embedded - communication Protocol **What's In It for You** - Vacation/PTO - Medical - Dental - Vision - Relocation - Bonus - 401k So, if you are a Embedded Engineer with experience, please apply today! Applicants must be authorized to work in the U.S. **CyberCoders, Inc is proud to be an Equal Opportunity Employer** All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, disability, protected veteran status, or any other characteristic protected by law. **Your Right to Work** – In compliance with federal law, all persons hired will be required to verify identity and eligibility to work in the United States and to complete the required employment eligibility verification document form upon hire. *Embedded Engineer - Linux, C++, JavaScript* *NC-Raleigh* *DN2-1492983*
          (USA-NY-New York) Senior Data Analyst      Cache   Translate Page      
Senior Data Analyst Senior Data Analyst - Skills Required - Saas/e-commerce, Data Analyst, SQL, Python, ETL Job title: Senior Data Analyst Job Location: New York, NY We are a marketplace that provides licensed cannabis retailers the ability to order from their favorite brands, as well as a suite of software tools for those brands to manage and scale their operation. **Top Reasons to Work with Us** With over 2,000+ dispensaries and more than 500+ leading brands in Colorado, Washington, California, Oregon, Nevada, Maryland, and Arizona, we are setting the industry standard for how cannabis brands and retailers work together. Our team, backed by funding from leading VC's, is poised to define the cannabis wholesale market. Just named one of Fast Company's "Top 10 Most Innovative Companies in Enterprise", joining the ranks of Amazon, Slack, and VMWare - and we're just getting started! **What You Will Be Doing** - Implementing a Business Intelligence tool to help us drive business decisions. - Propose, build, and own our data warehouse architecture and infrastructure. - Own the design and development of automated dashboards. - Drive internal business decisions and directions through establishment of core KPIs - Create industry-defining metrics and standard reporting to help drive the cannabis industry. - Support ad-hoc data analysis or reporting needs from teammates or customers. - Proactively conduct ad-hoc analyses to discover new product and business opportunities. - Develop new metrics to better identify trends and key performance drivers within a particular area of the business. **What You Need for this Position** - 5+ years working at an established SaaS or E-Commerce company - 5+ years working with SQL and other data insight tools - Proficiency in at least one programming language such as Python - Strong grasp of statistics and experience with open source big data technology - 3+ years of working crossing functional to create KPIs which drive or support strategy decisions - 2+ years with ETL processes - Has lead business intelligence implementations efforts such as Periscope, Looker, Tableau, or Domo - Has published industry reports or worked closely with a marketing team to showcase powerful data insights to establish thought leadership **What's In It for You** - Healthcare matching - 3 weeks paid vacation a year - Fun office environment - Competitive salary - Benefit Matching (medical, dental, vision) - Generous stock options - Team events Interviews are ongoing, please complete questions and you will move straight to hiring manager for possible interviews. Incomplete applications will move through HR first. Please complete applications to expedite process. Applicants must be authorized to work in the U.S. **CyberCoders, Inc is proud to be an Equal Opportunity Employer** All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, disability, protected veteran status, or any other characteristic protected by law. **Your Right to Work** – In compliance with federal law, all persons hired will be required to verify identity and eligibility to work in the United States and to complete the required employment eligibility verification document form upon hire. *Senior Data Analyst* *NY-New York* *BZ2-1493027*
          (USA-NY-New York City) Sr. Full Stack Engineer      Cache   Translate Page      
Sr. Full Stack Engineer Sr. Full Stack Engineer - Skills Required - Java, Scala, C#, REACT, D3, SQL, SPARK, SQL Server As the technological hub between lenders and capital markets, my client provides all market participants with unprecedented data transparency, insight, and analytics. To date, my client has offered investors insight into over $15 billion of securitizations and more than $64 billion of consumer, small business, real estate, auto, and student loans from the largest online lenders, including LendingClub, Prosper, and SoFi. But that's just the beginning. The work doesn't stop until we've built a comprehensive end-to-end solution that facilitates investment across capital markets and asset classes. That's at least $13 trillion of data to bring into our data pipeline and make dynamic for our users. Join us if you're ready for the challenge. **What You Will Be Doing** As a Senior Full Stack Engineer, you'll have +5 years of experience with Java, Scala or C# and a strong understanding of functional programing to build out products as part of a unified platform for advanced quantitative analytics and reporting that will help prevent the next crisis in these assets. -You will work with our modern open source stack -Develop experiencing complex products and architecture -Interact with a diverse team -Gain knowledge of consumer lending markets **What You Need for this Position** More Than 5 Years of experience and knowledge of: -Java, Scala or C# -Functional programming -React -D3 -SQL Server -Spark **What's In It for You** - Vacation/PTO - Medical - Dental - Vision - Bonus - 401k So, if you are a Sr. Full Stack Engineer with experience, please apply today! Applicants must be authorized to work in the U.S. **CyberCoders, Inc is proud to be an Equal Opportunity Employer** All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, disability, protected veteran status, or any other characteristic protected by law. **Your Right to Work** – In compliance with federal law, all persons hired will be required to verify identity and eligibility to work in the United States and to complete the required employment eligibility verification document form upon hire. *Sr. Full Stack Engineer* *NY-New York City* *AM11-1492944*
          25 Most Active Open Source Projects at Microsoft's CodePlex      Cache   Translate Page      

Originally posted on: http://brustblog.net/archive/2007/08/23/25-Most-Active-Open-Source-Projects-at-Microsofts-CodePlex.aspx

eWeek has put together a nice slide show, including screen shots, of the top 25 projects on CodePlex based on activity.

This would be a nice feature for CodePlex: browsing based on screen shots or other project-contributed images instead of just a list mode.


          IBM Invests Billions To Purchase Popular Red Hat Linux      Cache   Translate Page      
IBM has recently announced what is to be the largest open source acquisition in history.  They're buying Red Hat for a staggering $34 billion.  This, as the saying goes, changes everything. ...
          treo/treopim (1.19.0)      Cache   Translate Page      
TreoPIM. Open source PIM application.
          Language Server Protocol, by NSHipster.com      Cache   Translate Page      

This is arguably the most important decision Apple has made for Swift since releasing the language as open source in 2014. It’s a big deal for app developers, and it’s an even bigger deal for Swift developers on other platforms.

To understand why, this week’s article will take a look at what problem the Language Server Protocol solves, how it works, and what its long-term impacts may be.


          OPSTECH (Open Source Technology) CMS - MULTI SQL INJECTION      Cache   Translate Page      
Topic: OPSTECH (Open Source Technology) CMS - MULTI SQL INJECTION Risk: Medium Text: = [+] Title :- OPSTECH (Open Source Technology) CMS - SQL INJECTION [+] Date ... - Source: cxsecurity.com
          SQLMAP - Automatic SQL Injection Tool 1.2.11      Cache   Translate Page      
sqlmap is an open source command-line automatic SQL injection tool. Its goal is to detect and take advantage of SQL injection vulnerabilities in web applications. Once it detects one or more SQL injec ... - Source: packetstormsecurity.com
          Comment on Firefox: The Internet’s Knight in Shining Armor by Aravind      Cache   Translate Page      
I've been thinking in the same way since few months and using Firefox in Ubuntu. What are your thoughts on Brave, a Chromium based privacy focused open source project?
          BGP Hijacks: Two More Papers Consider the Problem      Cache   Translate Page      

CircleID: The security of the global Default Free Zone (DFZ) has been a topic of much debate and concern for the last twenty years (or more). Two recent papers have brought this issue to the surface once again — it is worth looking at what these two papers add to the mix of what is known, and what solutions might be available. The first of these —

Demchak, Chris, and Yuval Shavitt. 2018. "China's Maxim — Leave No Access Point Unexploited: The Hidden Story of China Telecom's BGP Hijacking." Military Cyber Affairs 3 (1). https://doi.org/10.5038/2378-0789.3.1.1050.

— traces the impact of Chinese "state actor" effects on BGP routing in recent years. Whether these are actual attacks, or mistakes from human error for various reasons, generally cannot be known, but the potential, at least, for serious damage to companies and institutions relying on the DFZ is hard to overestimate. This paper lays out the basic problem, and then works through a number of BGP hijacks in recent years, showing how they misdirected traffic in ways that could have facilitated attacks, whether by mistake or intentionally. For instance, quoting from the paper:

  • Starting from February 2016 and for about 6 months, routes from Canada to Korean government sites were hijacked by China Telecom and routed through China.
  • In October 2016, traffic from several locations in the USA to a large Anglo-American bank headquarters in Milan, Italy was hijacked by China Telecom to China.
  • Traffic from Sweden and Norway to the Japanese network of a large American news organization was hijacked to China for about 6 weeks in April/May 2017.

What impact could such a traffic redirection have? If you can control the path of traffic while a TLS or SSL session is being set up, you can place your server in the middle as an observer. This can, in many situations, be avoided if DNSSEC is deployed to ensure the certificates used in setting up the TLS session are valid, but DNSSEC is not widely deployed, either. Another option is to simply gather encrypted traffic and either attempt to break the key or use data analytics to understand what the flow is doing (a side channel attack).

What can be done about these kinds of problems? The "simplest" — and most naïve — answer is "let's just secure BGP." There are many, many problems with this solution. Some of them are highlighted in the second paper under review —

Bonaventure, Olivier. n.d. "A Survey among Network Operators on BGP Prefix Hijacking — Computer Communication Review." Accessed November 3, 2018. https://ccronline.sigcomm.org/2018/ccr-january-2018/a-survey-among-network-operators-on-bgp-prefix-hijacking/.

— which illustrates the objections providers have to the many forms of BGP security that have been proposed to this point. The first is, of course, that it is expensive. The ROI of the systems proposed thus far is very low; the cost is high, and the benefit to the individual provider is rather low. There is both a race to perfection problem here, as well as a tragedy of the commons problem. The race to perfection problem is this: we will not design, nor push for the deployment of, any system which does not "solve the problem entirely." This has been the mantra behind BGPSEC, for instance. But not only is BGPSEC expensive — I would say to the point of being impossible to deploy — it is also not perfect.

The second problem in the ROI space is the tragedy of the commons. I cannot do much to prevent other people from misusing my routes. All I can really do is stop myself and my neighbors from misusing other people's routes. What incentive do I have to try to make the routing in my neighborhood better? The hope that everyone else will do the same. Thus, the only way to maintain the commons of the DFZ is for everyone to work together for the common good. This is difficult. Worse than herding cats.

A second point — not well understood in the security world — is this: a core point of DFZ routing is that when you hand your reachability information to someone else, you lose control over that reachability information. There have been a number of proposals to "solve" this problem, but it is a basic fact that if you cannot control the path traffic takes through your network, then you have no control over the profitability of your network. This tension can be seen in the results of the survey above. People want security, but they do not want to release the information needed to make security happen. Both realities are perfectly rational!

Part of the problem with the "more strict," and hence (considered) "more perfect" security mechanisms proposed is simply this: they are not quite enough. They expose far too much information. Even systems designed to prevent information leakage ultimately leak too much.

So… what do real solutions on the ground look like?

One option is for everyone to encrypt all traffic, all the time. This is a point of debate, however, as it also damages the ability of providers to optimize their networks. One point where the plumbing allegory for networking breaks down is this: all bits of water are the same. Not all bits on the wire are the same.

Another option is to rely less on the DFZ. We already seem to be heading in this direction, if Geoff Huston and other researchers are right. Is this a good thing, or a bad one? It is hard to tell from this angle, but a lot of people think it is a bad thing.

Perhaps we should revisit some of the proposed BGP security solutions, reshaping some of them into something that is more realistic and deployable? Perhaps — but first the community is going to have to let go of the "but it's not perfect" line of thinking and start developing some practical, deployable solutions that don't leak so much information.

Finally, there is a solution Leslie Daigle and I have been tilting at for a couple of years now: finding a way to build a set of open source tools that will allow any operator or provider to quickly and cheaply build an internal system to check the routing information available in their neighborhood on the 'net, and mix local policy with that information to do some bare bones work to make their neighborhood a little cleaner. This is a lot harder than "just build some software" for various reasons; the work is often difficult — as Leslie says, it is largely a matter of herding cats, rather than inventing new things.
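
As a very rough sketch of what that "check your own neighborhood" idea can look like in practice, the snippet below compares the origin ASes currently observed for a prefix against a locally maintained expectation. The RIPEstat endpoint and response fields are assumptions based on its public documentation, and the prefix/ASN pairs are purely illustrative.

    # Sketch: warn when a prefix appears to be originated by an unexpected AS.
    import requests

    EXPECTED = {"193.0.0.0/21": {3333}}   # illustrative prefix -> expected origin ASNs

    def observed_origins(prefix):
        url = "https://stat.ripe.net/data/prefix-overview/data.json"
        data = requests.get(url, params={"resource": prefix}, timeout=10).json()
        # Field names assumed from the RIPEstat "prefix-overview" data call.
        return {entry["asn"] for entry in data["data"].get("asns", [])}

    for prefix, expected in EXPECTED.items():
        seen = observed_origins(prefix)
        unexpected = seen - expected
        if unexpected:
            print(f"WARNING: {prefix} seen with unexpected origin(s): {sorted(unexpected)}")
        else:
            print(f"OK: {prefix} originated by {sorted(seen)}")

Local policy would obviously be richer than a hard-coded dictionary, but the shape of the tool is the point: cheap, scriptable checks over routing data that is already public.
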
Written by Russ White, Network Architect at LinkedIn. Follow CircleID on Twitter. More under: Access Providers, Cybersecurity, Networks

The post BGP Hijacks: Two More Papers Consider the Problem appeared first on iGoldRush Domain News and Resources.


          More than 15,000 websites running WordPress have been hacked      Cache   Translate Page      

WordPress is a popular target because most websites use it to manage and publish their content. According to a Sucuri report on compromised websites, 78% of the 21,821 sites studied use this content management system.

Over time, WordPress remains at the top with a large share of compromised websites, at a rate of 74%.

The report focuses on four popular open source content management systems: it also covers Joomla with 14% of compromised sites, Magento with 5% and Drupal with 2%.
When it comes to out-of-date software, a common vector for cybercriminals, the report found that 55% of WordPress installations were out of date, while Joomla (86% of sites out of date), Drupal (84%) and Magento (96%) remain the undisputed leaders in out-of-date or vulnerable versions.

Year over year, WordPress saw a 1% decrease in out-of-date or infected websites, while Drupal had a 3% increase. Joomla and Magento continue to show the most outdated versions of each platform.

Extensions with poor security give cybercriminals initial access most of the time. Sucuri found that, on average, WordPress installations had 12 plugins installed at any given time.

The top three vulnerable WordPress plugins that contributed to the hacked WordPress sites: Gravity Forms, TimThumb and RevSlider.

Worryingly, only 18% of infected websites had been blacklisted by the major search engines and web protection services, leaving 82% of infected websites unflagged and posing a potential risk to users.

The most successful effort at blocking these sites came from Google Safe Browsing, with 52% of sites blocked. Norton Safeweb managed to find 38%, while McAfee SiteAdvisor found only 11% of hacked websites.

Source: Infosecurity
          Cloud Security Developer - RBC - Toronto, ON      Cache   Translate Page      
Use Jenkins, Open Source Tools like CFNNAG, Security Test Automation Tools like Server Spec, Inspec to develop a CICD Security Testing Pipeline....
From RBC - Mon, 22 Oct 2018 19:27:08 GMT - View all Toronto, ON jobs
          Adventures in Open Source: #OSMC 2018 – Day 0: Prometheus Training      Cache   Translate Page      

To most people, monitoring is not exciting, but it seems lately that the most exciting thing in monitoring is the Prometheus project. As a project endorsed by the Cloud Native Computing Foundation, Prometheus is getting a lot of attention, especially in the realm of cloud applications and things like monitoring Kubernetes.

At this year’s Open Source Monitoring Conference they offered a one day training course, so I decided to take it to see what all the fuss was about. I apologize in advance that a lot of this post will be comparing Prometheus to OpenNMS, but in case you haven’t guessed I’m biased (and a bit jealous of all the attention Prometheus is getting).

The class was taught by Julien Pivotto, who is both a Prometheus user and a decent instructor. The environment consisted of 15 students with laptops set up on a private network to give us something to monitor.

Prometheus is written in Go (I’m never sure if I should call it “Go” or if I need to say “Golang”) which makes it compact and fast. We installed it on our systems by downloading a tarball and simply executing the application.

Like most applications written in the last decade, the user interface is accessed via a browser. The first thing you notice is that the UI is incredibly minimal. At OpenNMS we get a lot of criticism of our UI, but the Prometheus interface is one step above the Google home page. The main use of the web page is for querying collected metrics, and a lot of the configuration is done by editing YAML files from the command line.

Once Prometheus was installed and running, the first thing we looked at was monitoring Prometheus itself. There is no real magic here. Metrics are exposed via a web page that simply lists the variables available and their values. The application will collect all of the values it finds and store them in a time series database called simply the TSDB.
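
To make the exposition side concrete, here is a minimal sketch (not from the class materials) using the official prometheus_client Python library; the metric names and port are invented for illustration:

```python
# Minimal sketch of exposing metrics for Prometheus to scrape.
# Assumes the official Python client: pip install prometheus_client
import random
import time

from prometheus_client import Counter, Gauge, start_http_server

REQUESTS = Counter("demo_requests_total", "Total requests handled by the demo app")
QUEUE_DEPTH = Gauge("demo_queue_depth", "Current depth of the demo work queue")

if __name__ == "__main__":
    # Serves a plain-text metrics page at http://localhost:8000/metrics
    start_http_server(8000)
    while True:
        REQUESTS.inc()                           # count an event
        QUEUE_DEPTH.set(random.randint(0, 10))   # record a point-in-time value
        time.sleep(1)
```

Prometheus then scrapes that page on its regular interval and stores whatever it finds in the TSDB.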

The idea of exposing metrics on a web page is not new. Over a decade ago we at OpenNMS were approached by a company that wanted us to help them create an SNMP agent for their application. We asked them why they needed SNMP and found they just wanted to expose various metrics about their app to monitor its performance. Since it ran on a Linux system with an embedded web server, we suggested that they just write the values to a file, put that in the webroot, and we would use the HTTP Collector to retrieve and store them.

The main difference between that method and Prometheus is that the latter expects the data to be presented in a particular format, whereas the OpenNMS method was more free-form. Prometheus will also collect all values presented without extra configuration, whereas you’ll need to define the values of interest within OpenNMS.

In Prometheus there is no real auto-discovery of devices. You edit a file in which you create a "job", in our case the job was called "Prometheus", and then you add "targets" based on IP address and port. As we learned in the class, for each different source of metrics there is usually a custom port. Prometheus's own stats are on port 9090, node data is exposed on port 9100 via the node_exporter, etc. When there is an issue, this can be reflected in the status of the job. For example, if we added all 15 Prometheus instances to the job "Prometheus" and one of them went down, then the job itself would show as degraded.
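
For reference, the status of a job's targets can also be pulled out programmatically; a rough sketch against the Prometheus HTTP API (assuming a default install listening on localhost:9090):

```python
# Minimal sketch: list scrape targets and their health via the Prometheus HTTP API.
# Assumes the 'requests' package and a Prometheus server on localhost:9090.
import requests

resp = requests.get("http://localhost:9090/api/v1/targets", timeout=5)
resp.raise_for_status()

for target in resp.json()["data"]["activeTargets"]:
    job = target["labels"].get("job", "unknown")
    health = target["health"]          # "up", "down", or "unknown"
    print(f"{job:12} {health:8} {target['scrapeUrl']}")
```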

After we got Prometheus running, we installed Grafana to make it easier to display the metrics that Prometheus was capturing. This is a common practice these days and a good move since more and more people are becoming familiar with it. OpenNMS was the first third-party datasource created for Grafana, and the Helm application brings bidirectional functionality for managing OpenNMS alarms and displaying collected data.

After that we explored various "components" for Prometheus. While a number of applications are exposing their data in a format that Prometheus can consume, there are also other components that can be installed, such as the node_exporter, which exposes server-related metrics, to provide data that isn't otherwise natively available.

The rest of the class was spent extending the application and playing with various use cases. You can "federate" Prometheus to aggregate some of the collected data from multiple instances under one, and you can separate out your YAML files to make them easier to read and manage.

The final part of the class was working with the notification component called the “alertmanager” to trigger various actions based on the status of metrics within the system.

One thing I wish we could have covered was the “push” aspect of Prometheus. Modern monitoring is moving from a “pull” model (i.e. SNMP) to a “push” model where applications simply stream data into the monitoring system. OpenNMS supports this type of monitoring through the telemetryd feature, and it would be interesting to see if we could become a sink for the Prometheus push format.
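
For reference, the usual way to get push-style metrics into Prometheus today is its separate Pushgateway component, which Prometheus then scrapes as normal; a minimal sketch with the Python client (the job name and gateway address are placeholders):

```python
# Minimal sketch: push a batch-job metric to a Prometheus Pushgateway.
# Assumes prometheus_client and a Pushgateway listening on localhost:9091.
from prometheus_client import CollectorRegistry, Gauge, push_to_gateway

registry = CollectorRegistry()
last_success = Gauge(
    "demo_batch_last_success_unixtime",
    "Unix timestamp of the last successful demo batch run",
    registry=registry,
)
last_success.set_to_current_time()

# The Pushgateway holds the sample until Prometheus scrapes it on its next pass.
push_to_gateway("localhost:9091", job="demo_batch", registry=registry)
```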

Overall I enjoyed the class but I fail to see what all the fuss is about. It’s nice that developers are exposing their data via specially formatted web pages, but OpenNMS has had the ability to collect data from web pages for over a decade, and I’m eager to see if I can get the XML/JSON collector to work with the native format of Prometheus. Please don’t hate on me if you really like Prometheus – it is 100% open source and if it works for you then great – but for something to manage your entire network (including physical servers and especially networking equipment like routers and switches) you will probably need to use something else.


          Global integration specialist Yenlo becomes WSO2's first Value-Added Reseller partner      Cache   Translate Page      
Yenlo and WSO2 announced today that Yenlo will act as a WSO2 Value-Added Reseller (VAR) with immediate effect. This step follows a long-standing partnership between WSO2, the leading provider of open source integration, and Yenlo, a global integration specialist and Certified Premier Partner of WSO2. Yenlo has more than 11 years of experience delivering solutions for enterprise applications, […]
          Adventures in Open Source: #OSMC 2018 – Day -1      Cache   Translate Page      

The annual Open Source Monitoring Conference (OSMC) held in Nürnberg, Germany each year brings together pretty much everyone who is anyone in the free and open source monitoring space. I really look forward to attending, and so do a number of other people at OpenNMS, but this year I won the privilege, so go me.

The conference is a lot of fun, which must be the reason for the hell trip to get here this year. Karma must be trying to bring things into balance.

As an American Airlines frequent flier whose home airport is RDU, most of my trips to Europe involve Heathrow airport (American has a direct flight from RDU to LHR that I’ve taken more times than I can count).

I hate that airport with the core of my being, and try to avoid it whenever possible. While I could have taken a flight from LHR directly to Nürnberg on British Airways, I decided to fly to Philadelphia and take a direct American flight to Munich. It is just about two hours by train from MUC to Nürnberg Hbf and I like trains, so combine that with getting to skip LHR and it is a win/win.

But it was not to be.

I got to the airport and watched as my flight to PHL got delayed further and further. Chris, at the Admiral’s Club desk, was able to re-route me, but that meant a flight through Heathrow (sigh). Also, the Heathrow flight left five hours later than my flight to Philadelphia, and I ended up waiting it out at the airport (Andrea had dropped me off and I didn’t want to ask her to drive all the way back to get me just for a couple of hours).

Because of the length of this trip I had to check a bag, and I had a lot of trepidation that my bag would not be re-routed properly. Chris even mentioned that American had actually put it on the Philadelphia flight but he had managed to get it removed and put on the England flight, and American’s website showed it loaded on the plane.

That also turns out to be the last record American has on my bag, at least on the website I can access.

American Tracking Website

The flight to London was uneventful. American planes tend to land at Terminal 3 and most other British Airways planes take off from Terminal 5, so you have to make your way down a series of long corridors and take a bus to the other terminal. Then you have to go through security, which is usually when my problems begin.

I wear contact lenses, and since my eyes tend to react negatively to the preservatives found in saline solution I use a special, preservative-free brand of saline. Unfortunately, it is only available in 118ml bottles. As most frequent fliers know, the limit for the size of liquid containers for carry on baggage is 100ml, although the security people rarely notice the difference. When they do I usually just explain that I need it for my eyes and I’m allowed to bring it with me. That is, everywhere except Heathrow airport. Due to the preservative-free nature of the saline I can’t move it to another container for fear of contamination.

Back in 2011 was the first time that my saline was ever confiscated at Heathrow. Since then I've carried a doctor's note stating that it is "medically necessary", but even then I had it confiscated once a few years later at LHR because the screener didn't like the fact that my note was almost a year old. That said, I have gone through that airport many times with no one noticing the slightly larger size of my saline bottle, but on this trip it was not to be.

When your carry on items get tagged for screening at Heathrow’s Terminal 5, you kind of wait in a little mob of people for the one person to methodically go through your stuff. Since I had several hours between flights it was no big deal for me, but it is still very annoying. Of course when the screener got to my items he was all excited that he had stopped the terrorist plot of the century by discovering my saline bottle was 18ml over the limit, and he truly seemed disappointed when I produced my doctor’s note, freshly updated as of August of this year.

Screeners at Heathrow are not imbued with much decision making ability, so he literally had to take my note and bottle to a supervisor to get it approved. I was then allowed to take it with me, but I couldn’t help thinking that the terrorists had won.

The rest of my stay at the world’s worst airport was without incident, and I squeezed into my window seat on the completely full A319 to head to Munich.

Once we landed I breezed through immigration (Germans run their airports a bit more efficiently than the British) and waited for my bag. And waited. And waited.

When I realized it wouldn’t be arriving with me, I went to look for a BA representative. The sign said to find them at the “Lost and Found” kiosk, but the only two kiosks in the rather small baggage area were not staffed. I eventually left the baggage area and made my way to the main BA desk, where I managed to meet Norbert. After another 15 minutes or so, Norbert brought me a form to fill out and promised that I would receive an e-mail and a text message with a “file number” to track the status of my bag.

I then found the S-Bahn train which would take me to the Munich Hauptbahnhof where I would get my next train to Nürnberg.

I had made a reservation for the train to ensure I had a seat, but of course that was on the 09:55 train which I would have taken had I been on the PHL flight. I changed that to a 15:00 train when I was rerouted, and apparently one change is all you get with Deutsche Bahn, but Ronny had suggested I buy a "flexpreis" ticket so I could take any train from Munich to Nürnberg that I wanted. I saw there were a number of "Inter-City Express (ICE)" trains available, so I figured I would just hop on the first one I found.

When I got to the station I saw that a train was leaving from Platform (Gleis) 20 at 15:28. It was now 15:30 so I ran and boarded just before it pulled out of the station.

It was the wrong train.

Well, not exactly. There are a number of types of trains you can take. The fastest are the ICE trains that run non-stop between major cities, but there are also “Inter-City (IC)” trains that make more stops. I had managed to get on a “Regional Bahn (RB)” train which makes many, many stops, turning my one hour trip into three.

(sigh)

The man who took my ticket was sympathetic, and told me to get off at Ingolstadt and switch to an ICE train. I was chatting on Mattermost with Ronny most of this time, and he was able to verify the proper train and platform I needed to take. That train was packed, but I ended up sitting with some lovely people who didn’t mind chatting with me in English (I so love visiting Germany for this reason).

So, about seven hours later than I had planned I arrived at my hotel, still sans luggage. After getting something to eat I started the long process of trying to locate my bag.

I started on Twitter. Both the people at American and British Airways asked me to DM them. The AA folks said I needed to talk with the BA folks and the BA folks still have yet to reply to me. Seriously BA, don’t reach out to me if you don’t plan to do anything. It sets up expectations you apparently can’t meet.

Speaking of not doing anything, my main issue was that I needed a "file reference" in order to track my lost bag, but despite Norbert's promise I never received a text or e-mail with that information. I ended up calling American, and the woman there was able to tell me that she showed the bag was in the hands of BA at LHR. That was at least a start, so she transferred me to BA customer support, who in turn transferred me to BA delayed baggage, who told me I needed to contact American.

(sigh)

As calmly as I could, I reiterated that I started there, and then the BA agent suggested I visit a particular website and complete a form (similar to the one I did for Norbert I assume) to get my “file reference”. After making sure I had the right URL I ended the call and started the process.

I hit the first snag when trying to enter in my tag number. As you can see from the screenshot above, my tag number starts with “600” and is ten digits long. The website expected a tag number that started with “BA” followed by six digits, so my AA tag was not going to work.

BA Tracking Website - wrong number

But at least this website had a different number to call, so I called it and explained my situation once again. This agent told me that I should have a different tag number, and after looking around my ticket I did find one in the format they were after, except starting with “AA” instead of “BA”. Of course, when I entered that in I got an error.

BA Tracking Website - error

After I explained that to the agent I remained on the phone for about 30 minutes until he was able to, finally, give me a file reference number. At this point I was very tired, so I wrote it down and figured I would call it a night and go to sleep.

But I couldn’t sleep, so I tried to enter that number into the BA delayed bag website. It said it was invalid.

(sigh)

Then I got a hint of inspiration and decided to enter in my first name as my last, and voila! I had a missing bag record.

BA Tracking Website - missing bag

That site said they had found my bag (the agent on the phone had told me it was being “traced”) and it also asked me to enter in some more information about it, such as the brand of the manufacturer.

BA Tracking Website - information required

Of course when I tried to do that, I got an error.

BA Tracking Website - system error

Way to go there, British Airways.

Anyway, at that point I could sleep. As I write this the next morning nothing has been updated since 18:31 last night, but I hold out hope that my bag will arrive today. I travel a lot so I have a change of clothes with me along with all the toiletries I need to not offend the other conference attendees (well, at least with my hygiene), but I can't help but be soured on the whole experience.

This year I have spent nearly US$20,000 with American Airlines (they track that for me on their website). I paid them for this ticket and they really could have been more helpful instead of just washing their hands and pointing their fingers at BA. British Airways used to be one of the best airlines on the planet, but lately they seemed to have turned into Ryanair but without that airline’s level of service. The security breach that exposed the personal information of their customers, stories like this recent issue with a flight from Orlando, and my own experience this trip have really put me off flying them ever again.

Just a hint BA – from a customer service perspective – when it comes to finding a missing bag all we really want (well, besides the bag) is for someone to tell us they know where it is and when we can expect to get it. The fact that I had to spend several hours after a long trip to get something approximating that information is a failure on your part, and you will lose some if not all of my future business because of it.

I also made the decision to further curtail my travel in 2019, because frankly I’m getting too old for this crap.

So, I’m now off to shower and to get into my last set of clean clothes. Here’s hoping my bag arrives today so I can relax and enjoy the magic that is the OSMC.


          MJS 084: Henry Zhu      Cache   Translate Page      

Panel: Charles Max Wood

Guest: Henry Zhu

This week on My JavaScript Story, Charles speaks with Henry Zhu who is working full-time on Babel! They discuss Henry’s background, past/current projects, Babel, and Henry’s new podcast. Check-out today’s episode to hear more!

In particular, we dive pretty deep on:

0:00 – Advertisement: Get A Coder Job!

1:00 – Chuck: Today we are talking with Henry Zhu! You are the maintainer of Babel – and we have had you on the show before. Anything else?

1:25 – Henry: I used to work with Adobe and now live in NY.

1:44 – Chuck: Episode 321 we talked to you and you released Babel 7. Tell us about Babel, please.

2:01 – Henry: It’s a translator for programming languages and it’s a compiler. It only translates JavaScript to JavaScript. You would do this because you don’t know what your users’ are using. It’s an accessibility thing as well.

3:08 – Chuck: Later, we will dive into this some more. Let’s back-up: how did you get into programming?

3:22 – Henry: I think I was in middle school and I partnered with a friend for science class and we made a flash animation about earthquakes. Both of my parents worked in the field, too. They never really encouraged me to do it, but here I am.

4:07 – Chuck: How did you get into Java?

4:11 – Henry: I made some games and made a Chinese card game. Then in college I went to a bunch of Hackathons. In college I didn’t major into computer science, but I took a bunch of classes for fun. I learned about Bootstrap and did a bunch of things with that.

5:12 – Chuck: How did you settle on JavaScript?

5:28 – Henry: It was my experience – you don’t have to download anything. You can just open things up in the console and it’s easy to share. I think I like the visual part of it and their UI.

6:07 – Chuck: At some point you ran across Babel – how did you get into that?

6:17 – Henry: After college I wanted to do software. I threw out my degree of industrial engineering. I tried to apply to Google and other top companies. I applied to various places and picked something that was local. I met Jonathan Neal and he got me into open source. Through that, I wanted to contribute to Angular, but it was hard for me. Then I found a small issue with a linting error. After that I made 30 commits to Angular. I added a space here and there. JSES is the next thing I got involved with. There is one file for the rule itself and one for the test and another for the docs. I contributed there and it was easy. I am from Georgia and a year in I get an email through Adobe. They asked if I wanted to work through Enhance in Adobe. I moved to NY and started working here. I found JS LINT, and found out about Babel JS LINT. And that’s how I found about Babel.

9:24 – Chuck: Was Sebastian still running the project at the time?

9:33 – Henry.

10:53 – Chuck: It seems like when I talk with people that you are the LEAD on Babel?

11:07 – Henry: I guess so, because I am spending the most time on it. I also quit the job to work on it. However, I want people to know that there are other people out there to give you help, too.

11:45 – Chuck: Sebastian didn’t say: this is the guy that is the lead now. But how did that crystalize?

12:12 – Henry: I think it happened by accident. I stumbled across it. By people stepping down they stepped down a while ago and others were helping and making changes. It was weird because Sebastian was going to come back.

It’s hard when you know that the person before had gotten burnt-out.

14:28 – Chuck: What is it like to go fulltime on an open source project and how do you go about it?

14:34 – Henry: I don’t want to claim that you have to do it my way. Maybe every project is different. Maybe the focus is money. That is a basic issue. If your project is more of a service, then direct it towards that. I feel weird if I made Babel a service. For me it feels like an infrastructure thing I didn’t want to do that.

I think people want to do open source fulltime, but there are a lot of things to take into consideration.

16:38 – Chuck.

16:50 – Guest.

16:53 – Henry.

16:55 – Chuck: How do you pay the bills?

17:00 – Henry: Unlike Kickstarter, Patreon is to help donate money to people who are contributing content.

If you want to donate a lot then we can tweak it.

19:06 – Chuck: Is there something in particular that you’re proud of?

19:16 – Henry: I worked on JS ES – I was a core team member of that. Going through the process of merging them together was quite interesting. I could write a whole blog post about that. There are a lot of egos and people involved. There are various projects.

Something that I have been thinking about...

20:53 – Chuck: What are you working on now?

20:58 – Henry: We released 7 a while ago and 7.1. Not sure what we are going to do next. Trying to figure out what’s important and to figure out what we want to work on. I have been thinking long-term; for example how do we get reviewers, among other things. I can spend a lot of time fixing bugs, but that is just short-term. I want to invest ways to get more people in. There is a lot of initiatives but maybe we can do something new. Maybe pair with local universities. Maybe do a local Meetup? Learning to be okay with not releasing as often. I don’t want to put fires out all day. Trying to prioritize is important.

23:17 – Chuck.

23:2 – Henry: Twitter and other platforms.

23:37 – Chuck: Picks!

23:38 – Advertisement – Fresh Books! 30-Day Trial!

24:45 – Picks.

Links:

Sponsors:

Picks:

Henry

  • My own podcast – releasing it next week
  • Podcast about Faith and Open Source

Charles

  • Ruby Rogues’ cohost + myself – Data Podcast – DevChat.Tv
  • Reworking e-mails

          Meet Franz, an open source messaging aggregator      Cache   Translate Page      

If you are like me, you use several different chat and messaging services during the day. Some are for work and some are for personal use, and I find myself toggling through a number of them as I move from apps to browser tabs—here, there, and everywhere.

The Franz website explains, “Being part of different communities often requires you to use different messaging platforms. You end up with lots of different apps and browser windows trying to stay on top of your messages and chats. Driven by that, we built Franz, a one-step solution to the problem.”

Read more

read more


          How open source in education creates new developers      Cache   Translate Page      

Like many programmers, I got my start solving problems with code. When I was a young programmer, I was content to code anything I could imagine—mostly games—and do it all myself. I didn't need help; I just needed less sleep. It's a common pitfall, and one that I'm happy to have climbed out of with the help of two important realizations:

First, the software that impacts our daily lives the most isn't made by an amazingly talented solo developer. On a large scale, it's made by global teams of hundreds or thousands of developers. On smaller scales, it's still made by a team of dedicated professionals, often working remotely. Far beyond the value of churning out code is the value of communicating ideas, collaborating, sharing feedback, and making collective decisions.

Read more

read more


          Reviewed: New Name, Logo, and Identity for Pay.UK by SomeOne      Cache   Translate Page      

“Above my Pay Grade”

New Name, Logo, and Identity for Pay.UK by SomeOne

Established in 2017 as New Payment System Operator ("NPSO"), the newly renamed Pay.UK is the UK's leading retail payments service provider that allows individuals and businesses to, among other things, get their salaries, pay their bills, and make online and mobile banking payments. In 2017, £6.7 trillion moved through their system, enabling £17.5 billion in payments every single day. Pay.UK is sort of the new parent brand of pre-existing payment services like Bacs Direct Credit, Direct Debit, Faster Payments, cheques and Paym. This past October, Pay.UK introduced its new identity, designed by London, UK-based SomeOne, who also conceived the name.

The feedback we got from stakeholder research was that a new name had to be 'short and snappy', and 'literally describe what we do'. With this in mind we renamed the organisation Pay.UK in recognition of its essential national service and digital future.

SomeOne project page

New Name, Logo, and Identity for Pay.UK by SomeOne
Logo, static.
The identity itself had not only to be solid and simple, but also be contemporary enough to attract new fintechs entering the UK market. We wanted to get across the idea that Pay.UK was constantly 'in motion' and not rigid as an institution.

To go with the new name, SomeOne created a continuously moving brand mark representing the organisation's 'gears in motion' and desire to be the 'beating heart' of UK payments. When seen statically, the mark should never be the same, reflecting the unique payment patterns of every UK individual.

For this to be truly dynamic, the digital team at SomeOne developed a generative logo creator - an application where anyone could create their own version of the Pay.UK symbol.

SomeOne project page

Logo, animated.
Icon animation.
Logo generator.

The old name sounded more like a category than an organization's name -- "New Payment System Operator" sounded more like "Telecommunications Service Provider" than "AT&T". Pay.UK is so unbelievably simple it's a surprise they were able to take it. To a certain degree it almost sounds like a government agency and I don't know if that's a little misleading in giving the organization more apparent power than it has or than it should have. Anyway: good name, yo. The old logo didn't help the name much, looking like a Nutrition Fact line item. The new logo is... different. As I'm trying to write this paragraph, I find myself starting and stopping on trying to assess the icon. One part of me appreciates how different it is -- in either static or animated form -- and trying to create almost a new category of icon. Another part of me finds it, particularly in motion, pretty disturbing. It's like the inside of the mouth of a villain's robot mascot that eats things. The individual shapes of the icon are not pleasant either. It's really weird. I'm very much looking forward to reading the comments because I'm so unsure on this one.

The wordmark is actually nice, especially given that it's a font straight out of the box. In its simplicity it helps ground the icon and the size relationship in the lock-up makes the name the most visible element.

The word mark and brand's typography all use Source Sans, Adobe's first open source typeface family, primarily designed for user interfaces. As part of the brand world, this sits alongside a no-nonsense black and white colour palette and colour photography depicting the 'real life' end users - people across the UK benefitting from fast, secure payments.

SomeOne project page

New Name, Logo, and Identity for Pay.UK by SomeOne
Business cards.
New Name, Logo, and Identity for Pay.UK by SomeOne
New Name, Logo, and Identity for Pay.UK by SomeOne
Booklet covers and spreads.

The booklet covers achieve a very interesting, almost United-Nations-like look, that makes the identity feel very authoritative but, as paired with the customer photography, also accessible.

New Name, Logo, and Identity for Pay.UK by SomeOne
Brand pattern.
New Name, Logo, and Identity for Pay.UK by SomeOne
Lanyards.
New Name, Logo, and Identity for Pay.UK by SomeOne
Website.
New Name, Logo, and Identity for Pay.UK by SomeOne
Lobby.
New Name, Logo, and Identity for Pay.UK by SomeOne
Conference room. Spoiler: Icon does not work in single color.
New Name, Logo, and Identity for Pay.UK by SomeOne
Mug.

Overall, there is a confusing sense of what the exact positioning of this is: is it a government agency? A business-to-business corporate brand? A direct-to-consumer retail brand? Am I supposed to fear it like the IRS or embrace it like Shopify? Sorry, more questions than answers today.


          How to Find the Best Umbraco 7.12.3 Hosting in Europe ?      Cache   Translate Page      

The Best, Cheap Umbraco 7.12.3 hosting award is selected by the BestWindowsHostingASP.NET professional review team based on price, server reliability, loading speed, features, customer support, and guarantee. Because Umbraco is so easy to use, many people ask our team to recommend Umbraco 7.12.3 hosting services. For that reason, we are announcing our Best, Cheap Umbraco 7.12.3 Hosting recommendation. 
http://www.bestwindowshostingasp.net/2018/07/flash-sale-recommended-umbraco-7110.html

Umbraco CMS is a fully-featured open source content management system with the flexibility to run anything from small campaign or brochure sites right through to complex applications for Fortune 500's and some of the largest media sites in the world.  Umbraco is easy to learn and use, making it perfect for web designers, developers and content creators alike.

How to Find the Best Umbraco 7.12.3 Hosting in Europe ?

HostForLIFE.eu - HostForLIFE.eu is recognized as one of the Best, Cheap Umbraco 7.12.3 Hosting Providers. Their plans start from €2.97/month, and this plan supports Umbraco 7.12.3 with a one-click installer that takes less than 5 minutes. They provide cheap, instantly activated Umbraco hosting accounts with UNLIMITED bandwidth, disk space, and domains. Their data center maintains the highest possible standards for physical security. They have invested a great deal of time and money to ensure you get excellent uptime and optimal performance.

http://hostforlife.eu/European-Umbraco-761-Hosting

At HostForLIFE.eu, customers can also experience fast Umbraco 7.12.3 hosting. The company has invested a lot of money to ensure the best and fastest performance of its data centers, servers, network and other facilities. Its data centers are equipped with top equipment such as cooling systems, fire detection, and high-speed Internet connections. That is why HostForLIFE.eu guarantees 99.98% uptime for Umbraco 7.12.3, and its engineers perform regular maintenance and monitoring to ensure its Umbraco 7.12.3 hosting is secure and always up. 

The Umbraco 7.12.3 hosting solution offers a comprehensive feature set that is easy to use for new users, yet powerful enough for the most demanding ecommerce expert. Written in .NET, Umbraco is simple, powerful and runs beautifully on HostForLIFE.eu's Umbraco 7.12.3 hosting packages.


Their top priority is to deliver the ultimate customer experience, and they strongly believe that you’ll love their service - so much so that if for any reason you’re unhappy in your first 30 days as a customer, you’re more than welcome to request your money back.

HostForLIFE.eu currently operates data center located in Amsterdam (NL), London (UK), Seattle (US), Paris (FR) and Frankfurt (DE). All their data center offers complete redundancy in power, HVAC, fire suppression, network connectivity, and security.

Reason Why You Should Choose HostForLIFE.eu as Umbraco 7.12.3 Hosting

HostForLIFE's Umbraco 7.12.3-optimized hosting infrastructure features independent email, web, database, DNS and control panel servers and lightning-fast servers, ensuring your site loads super quickly! Reasons why you should choose cheap and reliable HostForLIFE to host your Umbraco 7.12.3 site:
  • Top of the line servers optimized for your Umbraco 7.12.3 installation
  • 24/7/365 Technical support from Umbraco 7.12.3 hosting experts
  • HostForLIFE.eu provide full compatibility with Umbraco 7.12.3 hosting and all popular plug-in.
  • Free professional installation of Umbraco 7.12.3.
  • HostForLIFE.eu has an excellent knowledge base and professional support team
http://hostforlife.eu/European-Umbraco-781-Hosting

HostForLIFE.eu is the Best Umbraco 7.12.3 Hosting in Europe

We recommend it for people who want secure, high-performance hosting on a budget. If you plan to launch a new site or move away from a terrible web host, HostForLIFE.eu is a good option. To learn more about HostForLIFE.eu, visit their homepage:
http://hostforlife.eu/European-Umbraco-216-Hosting


          IBM Invests Billions To Purchase Popular Red Hat Linux      Cache   Translate Page      
IBM has recently announced what is to be the largest open source acquisition in history. They're buying Red Hat for a staggering $34 billion. This, as the saying goes, changes everything. ...
          Senior Full Stack Engineer, Axon Records - Axon - Seattle, WA      Cache   Translate Page      
You follow the latest in open source technologies and can intuit the fine line between a promising new practice and an overhyped fad....
From Axon - Thu, 01 Nov 2018 23:17:40 GMT - View all Seattle, WA jobs
          Front End Engineer, Axon Records - Axon - Seattle, WA      Cache   Translate Page      
You follow the latest in open source technologies and can intuit the fine line between a promising new practice and an overhyped fad....
From Axon - Wed, 31 Oct 2018 23:17:46 GMT - View all Seattle, WA jobs
          Back End Engineer, Axon Records - Axon - Seattle, WA      Cache   Translate Page      
You follow the latest in open source technologies and can intuit the fine line between a promising new practice and an overhyped fad....
From Axon - Mon, 22 Oct 2018 23:17:46 GMT - View all Seattle, WA jobs
          Full Stack Engineer, Axon Records - Axon - Seattle, WA      Cache   Translate Page      
You follow the latest in open source technologies and can intuit the fine line between a promising new practice and an overhyped fad....
From Axon - Sun, 21 Oct 2018 23:17:43 GMT - View all Seattle, WA jobs
          Software Engineering Manager, Axon Records - Axon - Seattle, WA      Cache   Translate Page      
You follow the latest in open source technologies and can intuit the fine line between a promising new practice and an overhyped fad....
From Axon - Wed, 17 Oct 2018 23:17:47 GMT - View all Seattle, WA jobs
          Senior Back End Engineer, Axon Now - Axon - Seattle, WA      Cache   Translate Page      
You follow the latest in software engineering and open source technologies, and can intuit the fine line between promising new practice and overhyped fad....
From Axon - Fri, 05 Oct 2018 23:18:37 GMT - View all Seattle, WA jobs
          Senior Full Stack Engineer, Axon Now - Axon - Seattle, WA      Cache   Translate Page      
You follow the latest in software engineering and open source technologies, and can intuit the fine line between promising new practice and overhyped fad....
From Axon - Fri, 05 Oct 2018 23:18:02 GMT - View all Seattle, WA jobs
          Senior Back End Engineer, Axon Records - Axon - Seattle, WA      Cache   Translate Page      
You follow the latest in open source technologies and can intuit the fine line between a promising new practice and an overhyped fad....
From Axon - Mon, 16 Jul 2018 05:18:50 GMT - View all Seattle, WA jobs
          Liter of Light      Cache   Translate Page      
https://www.aneddoticamagazine.com/wp-content/uploads/Logo_of_Liter_of_Light_.jpeg

Liter of Light is a global, grassroots movement committed to providing affordable, sustainable solar light to people with limited or no access to electricity. Through a network of partnerships around the world, Liter of Light volunteers teach marginalized communities how to use recycled plastic bottles and locally sourced materials to illuminate their homes, businesses and streets. Liter of Light has installed more than 350,000 bottle lights in more than 15 countries and taught green skills to empower grassroots entrepreneurs at every stop. Liter of Light’s open source technology has been recognized by the UN and adopted for use in some UNHCR camps.



– Always use sheet metal scissors when cutting round edges to make sure that the hole in the roof is as close to the diameter of the bottle as possible (you will have to glue the underside!)

– Sand the bottle very well until the shiny part is removed; otherwise the glue will not stick to the plastic.

– The top portion where the bottle meets the plastic is where most leaks happen. You must use a tougher glue than just an elastomeric sealant. Use a SikaFlex 11FC glue or an epoxy sealant to make 100 percent sure this is sealed properly.

– Always use PET soda bottles ( 1 L, 1.5 L, 2 L ) to make sure that it does not easily break. DO NOT USE plastic water bottles.

– Use a guide when cutting the roof to ensure that the bottle fits snugly through the roof. You will have to seal this from the underside, so don't cut the hole freehand.

-Use a guide plastic bottle filled with cement to open up the cut metal sheet so you will not have to damage the empty plastic bottle.

– Seal the underside of the bottle cap once you placed distilled water and bleach to prevent leaks.

-Place a 1 1/4 inch pipe covering the top and around the plastic bottle cap as this is prone to cracking with the infrared rays of the sun.


Good luck and be safe installing a Liter of Light.


http://literoflight.org



 


Aneddotica Magazine - Collaborative Blog since 2012 https://www.aneddoticamagazine.com/liter-of-light/
          vScaler Cloud Adopts RAPIDS Open Source Software for Accelerated Data Science      Cache   Translate Page      

vScaler has incorporated NVIDIA’s new RAPIDS open source software into its cloud platform for on-premise, hybrid, and multi-cloud environments. Deployable via its own Docker container in the vScaler Cloud management portal, the RAPIDS suite of software libraries gives users the freedom to execute end-to-end data science and analytics pipelines entirely on GPUs. "The new RAPIDS library offers Python interfaces which will leverage the NVIDIA CUDA platform for acceleration across one or multiple GPUs. RAPIDS also focuses on common data preparation tasks for analytics and data science. This includes a familiar DataFrame API that integrates with a variety of machine learning algorithms for end-to-end pipeline accelerations without paying typical serialization costs. RAPIDS also includes support for multi-node, multi-GPU deployments, enabling vastly accelerated processing and training on much larger dataset sizes."
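
As a rough illustration of what that DataFrame API looks like in practice, here is a minimal cuDF sketch (it assumes a working RAPIDS/cuDF install and an NVIDIA GPU; the column names are invented):

```python
# Minimal sketch of the RAPIDS cuDF DataFrame API running on the GPU.
import cudf

# Build a small GPU-resident DataFrame.
df = cudf.DataFrame({
    "sensor": ["a", "b", "a", "b"],
    "value": [1.0, 2.5, 3.0, 4.5],
})

# A typical data-prep step: group and aggregate entirely on the GPU.
means = df.groupby("sensor")["value"].mean()
print(means)
```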

The post vScaler Cloud Adopts RAPIDS Open Source Software for Accelerated Data Science appeared first on insideHPC.


          An opportunity for open hardware: the right to repair      Cache   Translate Page      

At a time when large parts of our technical devices process data, the demand for transparent technology carries great weight: how does the smartphone sense its surroundings, which data is collected, and where does it end up? Open source, meaning the disclosure of the technical design and the use of free licenses, is the familiar watchword here. While open source is a widespread topic in the software world and free software enriches large parts of software development, open source hardware is still in its infancy. Worse still: anyone who remembers the 1950s and 60s knows that the schematic was often an integral part of the devices you bought. Today, devices increasingly disappear into glued-shut enclosures that are meant to give as few reasons as possible to open them, and the technical documentation provides only the bare essentials.

If we do not want to decouple ourselves further from technology, and if we want to reduce its potential for domination, there must fundamentally be a way to understand it. That requires open documentation, but also a design of the devices themselves that makes opening them possible. These demands clash with the conventional understanding of how technical devices are marketed: if open documentation exists, the fear is great that others will market the product just the same. Manufacturers such as Arduino, XYZ Cargo and others show, however, that it can be done differently. The suspicion is that the will to develop new business models is small; people prefer to rely on the tried and true.

Open hardware: what exactly is it about?

Open technologies shape discussions about distribution, security and innovation, and patent questions come up again and again. Legally, technical devices are not initially covered by copyright; only a patent prevents reuse by third parties. So is open hardware simply about patenting? No: as with software, it is the documentation that makes it possible to understand a product completely, with the difference that for hardware, reproducibility also depends on the documentation: "Open source hardware is hardware whose design is made publicly available so that anyone can study, modify, distribute, make, and sell the design or hardware based on that design" (Open Source Hardware Association).
The question of what exactly this disclosure of a hardware design should look like is far harder to answer than for software; too many parameters play a role in reproducibility. The research project "OPEN! Methods and tools for community-based product development" has developed concrete, practical guidance for documenting open hardware. The Open Source Hardware Association provides answers to the question of the right license.

The right to repair makes open hardware relatable

Many of the switches thus seem to be set for putting open hardware into practice. Why does so little happen anyway? One answer is the lack of public awareness: open hardware is a niche topic that is hard to communicate.
The right to repair can change that. It can carry open hardware out of the bubble of the technically involved and make the problem tangible for laypeople. Here, consumers experience very practically which hurdles arise from closed systems and how hard it is to bring defective devices back to working order: glued enclosures prevent opening, missing documentation prevents understanding, and barely available spare parts prevent repair. Put radically: the device does not really belong to the buyer. To be able to call it their property, buyers must also have the right and the means to repair it.

More public attention for the EU Ecodesign package!

At the EU level there are efforts, currently barely noticed, to give the right to repair a structure. We must use and fuel this debate to communicate open hardware in a tangible way and to anchor it legally. Concretely, this is about the EU Ecodesign package, whose adoption was until recently to be postponed indefinitely. An open letter and a petition helped prevent that. Nevertheless, more pressure is needed so that the decision is made in the interest of consumers. By enriching the debate with the experience and demands from the open hardware community, we help ensure that the right to repair does not stop at demands for more modular products and the stocking of spare parts. The goal is open documentation.


          AMD Unveils World's First 7nm Datacenter GPUs with PCIe 4.0 Interconnect      Cache   Translate Page      
AMD unveiled the world's first lineup of 7nm GPUs for the datacenter, which will utilize an all-new version of the ROCm open software platform for accelerated computing. "The AMD Radeon Instinct MI60 and MI50 accelerators feature flexible mixed-precision capabilities, powered by high-performance compute units that expand the types of workloads these accelerators can address, including a range of HPC and deep learning applications." They are specifically designed to tackle datacenter workloads such as rapidly training complex neural networks, delivering higher levels of floating-point performance while exhibiting greater efficiencies.

The new "Vega 7nm" GPUs are also the world's first GPUs to support the PCIe 4.0 interconnect, which is twice as fast as other x86 CPU-to-GPU interconnect technologies, and they feature AMD Infinity Fabric Link GPU interconnect technology that enables GPU-to-GPU communication six times faster than PCIe Gen 3. The AMD Radeon Instinct MI60 Accelerator is also the world's fastest double precision PCIe accelerator, with 7.4 TFLOPS of peak double precision (FP64) performance.

"Google believes that open source is good for everyone," said Rajat Monga, engineering director, TensorFlow, Google. "We've seen how helpful it can be to open source machine learning technology, and we're glad to see AMD embracing it. With the ROCm open software platform, TensorFlow users will benefit from GPU acceleration and a more robust open source machine learning ecosystem."

ROCm software version 2.0 provides updated math libraries for the new DLOPS; support for 64-bit Linux operating systems including CentOS, RHEL and Ubuntu; optimizations of existing components; and support for the latest versions of the most popular deep learning frameworks, including TensorFlow 1.11, PyTorch (Caffe2) and others.
          The Indian Express Script | Firstpost Script      Cache   Translate Page      
Our India Times clone is developed mainly to help people take their news-portal business online. It provides a brand-new, professional news-portal script with advanced features and functionality that gives users easier access to the latest trends, and it also helps new entrepreneurs who would like to run an online business and provide a trusted, reliable and robust news service. This India Times script makes it much easier for users to access the site without any technical knowledge, because the script is built to be user-friendly. The Indian Express Script is designed on an open source PHP platform to make the script as efficient as possible for the user. It can be customized as global or local so users can reach a worldwide audience, and a new user can simply register an account with a valid email id and password for an authenticated account.
          Why you should use Gandiva for Apache Arrow      Cache   Translate Page      

Over the past three years Apache Arrow has exploded in popularity across a range of different open source communities. In the Python community alone, Arrow is being downloaded more than 500,000 times a month. The Arrow project is both a specification for how to represent data in a highly efficient way for in-memory analytics, as well as a series of libraries in a dozen languages for operating on the Arrow columnar format.
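
As a tiny illustration of that columnar format through one of those libraries (pyarrow, the Python implementation; the column contents are invented):

```python
# Minimal sketch: build an Arrow table in memory with pyarrow.
import pyarrow as pa

# Columnar, typed arrays rather than rows of Python objects.
ids = pa.array([1, 2, 3], type=pa.int64())
names = pa.array(["ada", "grace", "linus"])

table = pa.Table.from_arrays([ids, names], names=["id", "name"])
print(table.schema)      # id: int64, name: string
print(table.num_rows)    # 3
```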

In the same way that most automobile manufacturers OEM their transmissions instead of designing and building their own, Arrow provides an optimal way for projects to manage and operate on data in-memory for diverse analytical workloads, including machine learning, artificial intelligence, data frames, and SQL engines.

To read this article in full, please click here

(Insider Story)
          What’s new in NativeScript 5.0      Cache   Translate Page      

NativeScript, a framework for native mobile application development leveraging JavaScript technologies, now has Version 5.0 available. 

Featuring a set of cross-platform abstractions and runtimes, the open source NativeScript lets you  develop native mobile apps with JavaScript, TypeScript, or Angular. A NativeScript runtime translates between JavaScript, TypeScript, and Angular and the native APIs in Apple iOS and Google Android, letting developers write an application just once to support both platforms. 

To read this article in full, please click here

(Insider Story)
          ifacesoft/ice (1.16.23)      Cache   Translate Page      
Ice Open Source PHP Framework
          TiDB: Architecture and Use Cases of A Cloud-Native NewSQL Database      Cache   Translate Page      
TiDB is an open source cloud-native distributed database that handles hybrid transactional and analytical processing (HTAP) workloads–a member of the NewSQL class of databases that’s reinventing how a relational database can be designed, built, and deployed at massive scale. (“Ti” for those of you wondering stands for “Titanium”.) Since PingCAP started building TiDB just three and a half years ago, it has become one of the fastest-growing databases in the world, supported by a strong, vibrant community (at time of writing: 15,000+ GitHub stars, 200+ contributors, 7200+ commits, 2000+ forks).
          Study Finds Lukewarm Corporate Engagement With Open Source      Cache   Translate Page      
Companies expect developers to use open source tools at work, but few make substantial contributions in return
          Domoticz - open source home automation system - part 3      Cache   Translate Page      
Replies: 11853 Last poster: BoschR at 07-11-2018 14:04 Topic is Open Sp33dFr34k wrote on Tuesday 2 October 2018 @ 14:09: I see there is a DIY version of the Milight controller: https://github.com/sidoh/esp8266_milight_hub Has anyone successfully linked this to Domoticz, and if so, how do you like it? This is exactly my question as well. I can't really find anything about it yet and/or figure out how to integrate it into Domoticz. I have been less active with home automation for a while (over the summer). I now have 2 systems running (Domoticz and HA), but this is not exactly convenient. I am not particularly enthusiastic about either, but least of all about HA. Turning HA off is not really an option, though, because that is what I use to control the ESP8266 Milight hub, among other things, and also the MQTT broker, for example.
          Luke Knoble replied to Luke Knoble's discussion Senior design team - Looking to create open source swarm software..Ideas on where to start?      Cache   Translate Page      
Luke Knoble replied to Luke Knoble's discussion Senior design team - Looking to create open source swarm software..Ideas on where to start?

          JAVA AND OPEN SOURCE DEVELOPER MERELBEKE (GENT) - Xplore Group - Kontich      Cache   Translate Page      
Preferably you have obtained a bachelor's or master's degree, or completed training as a Java programmer at the VDAB....
From Bonque - Mon, 29 Oct 2018 08:49:06 GMT - View all jobs in Kontich
          Critical Encryption Bypass Flaws in Popular SSDs Compromise Data Security      Cache   Translate Page      

Vulnerabilities in Samsung, Crucial storage devices enable data recovery without a password or decryption key, researchers reveal.

The full disk hardware encryption available on some widely used storage devices is so poorly implemented there may as well not be any encryption on them at all, say security researchers at Radboud University in the Netherlands.

Hardware full disk encryption is generally perceived to be as good as, or even better, protection than software encryption. Microsoft's BitLocker encryption software for Windows even defaults to hardware encryption if the feature is available in the underlying hardware.

But when the researchers tested self-encrypting solid state drives (SSDs) from two major manufacturers ― Samsung and Crucial ― they found fundamental vulnerabilities in many models that make it possible for someone to bypass the encryption entirely.

The flaws allow anyone with the requisite know-how and physical access to the drives to recover encrypted data without the need for any passwords or decryption keys.

"We didn't expect to get these results. We are shocked," says Bernard van Gastel, an assistant professor at Radboud University and one of the researchers who uncovered the flaws. "I can't imagine how somebody would make errors like this" in implementing hardware encryption.

Together, Samsung and Crucial account for some half of all SSDs currently sold in the market. But based on how difficult it is to get full disk encryption at the hardware level right, it wouldn't be surprising if similar flaws exist in SSDs from other vendors as well, van Gastel says. "We didn't look at other models, but it is logical to assume that Samsung and Crucial are not the only ones with the problems," he notes.

Many of the problems have to do with how difficult it is for vendors to correctly implement the requirements of TCG Opal, a relatively new specification for self-encrypting drives, van Gastel says. The standard is aimed at improving encryption protections at the hardware level, but it can be complex to implement and easy to misinterpret, resulting in errors being made, he adds.

One fundamental flaw that van Gastel and fellow researcher Carlo Meijer discovered in several of the Samsung and Crucial SSDs they inspected was a failure to properly bind the disk encryption key (DEK) to a password. "Normally when you set up hardware encryption in an SSD, you enter a password. Using the password, an encryption key is derived, and using the encryption key, the disk is encrypted," van Gastel says.

What the researchers found in several of the SSDs was an absence of such linking. Instead of the encryption key being derived from the password, all the information required to recover the encrypted data was stored in the contents of the drive itself. Because the password check existed in the SSD, the researchers were able to show how someone could modify the check so it would always pass any password that was entered into it, thereby making data recovery trivial.

Another fundamental flaw the researchers discovered allows for a disk encryption key to be recovered from an SSD even after a user sets a new master password for it. In this case, the vulnerability is tied to a property of flash memory in SSDs called "wear leveling," which is designed to prolong the service life of the devices by ensuring data erasures and rewrites are distributed evenly across the medium, van Gastel says. Several of the devices that the researchers inspected stored cryptoblobs in locations that made it possible to recover the DEK even if a new master password is set for it.

In total, the researchers discovered six potential security issues with hardware encryption in the devices they inspected. The impacted devices are the Crucial MX100 (all form factors); Crucial MX200 (all form factors); Crucial MX300 (all form factors); Samsung 840 EVO; Samsung 850 EVO; and the Samsung T3 and T5 USB drives.

The key takeaway for organizations is to not rely on hardware encryption as the sole mechanism for protecting data, van Gastel says. Where possible, it is also vital to employ software full-disk encryption. He recommends using open source software, such as Veracrypt, which is far likelier to have been fully audited for security issues than a proprietary encryption tool.

Organizations using BitLocker should adjust their group policy settings to enforce software encryption in all situations. Such changes, however, will make little difference on already-deployed drives, van Gastel notes.

In a brief consumer advisory, Samsung acknowledged the issues in its self-encrypting SSDs. The company advised users to install encryption software in the case of nonportable SSDs and to update their firmware for portable SSDs. Crucial has so far not commented publicly on the issue.

For the industry at large, the issues that were discovered in the Samsung and Crucial drives highlight the need for a reference implementation of the Opal spec, van Gastel says. Developers need to have a standard way of implementing Opal that is available for public scrutiny and auditing, he says.

Related Content:
  • Data Encryption: 4 Common Pitfalls
  • New Method Proposed for Secure Government Access to Encrypted Data
  • IEEE Calls for Strong Encryption
  • 7 Critical Criteria for Data Encryption In The Cloud




          Acquia, BigCommerce Partner to Accelerate Content for Commerce Initiatives      Cache   Translate Page      

Partnership Focused on Helping Fast-growing Brands Deliver Better Customer Experiences

BOSTON -- November 6, 2018 -- Acquia and BigCommerce today announced a partnership to help merchants develop and launch online ecommerce solutions with powerful, on-brand customer experiences. This partnership will enable fast-growing brands to take advantage of industry-leading ecommerce solutions with world-class open source content management.

Today’s retailers naturally struggle to deliver on all of their omni-channel initiatives. No one shops on a single medium, and research shows that convenience and price are the biggest reasons why consumers shop online. Brands need to work hard to earn the loyalty of repeat customers. That means investing in content-rich, fully personalized experiences that work across every channel, helping provide consumers with a great experience at every touchpoint.

Acquia and BigCommerce are collaborating to make it easier for brands to take advantage of cloud-based commerce and content solutions. Through a series of go-to-market initiatives, merchants can now tap into the leading open source content management system Drupal and BigCommerce’s leading ecommerce platform for catalog, order, and customer data.

“Brands are under intense pressure to establish trust and foster a loyal customer base, or else risk being disrupted by dominant global retailers and low-cost competitors. That is why it’s so important to elevate your brand story at every opportunity and affirm your value in every interaction,” said Michael Sullivan, Acquia CEO. “We’re working with BigCommerce to help fast-track the development of content for commerce initiatives for ambitious, mid-market brands. Our combined approach provides enormous opportunities to build brand storefronts and connect with consumers across every channel.”

Brands that rely on the Acquia Experience Platform along with Drupal for content management will discover opportunities to create more powerful, branded content for commerce with BigCommerce. Working with BigCommerce and Acquia, brands can very quickly set up an online store that is part of their digital experience.


“Working with Acquia, BigCommerce is expanding what’s possible for merchants and retailers that are looking to provide a unique site experience by empowering them to make content for commerce a strategic part of their marketing and retail strategies,” said Brent Bellm, BigCommerce CEO. “BigCommerce’s ecommerce platform offers a best-in-class commerce solution paired with the Acquia Experience Platform, enabling merchants to launch the world’s most engaging, content-rich commerce sites.”

BigCommerce and BORN will co-present Commerce Meets Customer Experience: How Brands Are Keeping Up With Consumer Demands at Acquia Engage, Acquia’s annual North American user conference, on Thursday, Nov. 8. To register visit engage.acquia.com.

About BigCommerce

BigCommerce is the world’s leading cloud ecommerce platform for established and rapidly-growing businesses. Combining enterprise functionality, an open architecture and app ecosystem, and market-leading performance, BigCommerce enables businesses to grow online sales with 80% less cost, time and complexity than on-premise software. BigCommerce powers B2B and B2C ecommerce for more than 60,000 brands, 2,000+ mid-market businesses, 30 Fortune 1000 companies and industry-leading brands, including Assurant, Ben & Jerry’s, Paul Mitchell, Skullcandy, Sony and Toyota. For more information, visit www.bigcommerce.com.

About Acquia

Acquia is the open source digital experience company. We provide the world’s most ambitious brands with technology that allows them to embrace innovation and create customer moments that matter. At Acquia, we believe in the power of community - giving our customers the freedom to build tomorrow on their terms. To learn more, visit acquia.com.

- ### -

All logos, company and product names are trademarks or registered trademarks of their respective owners.


          GPL Initiative Expands With 16 Additional Companies Joining Campaign For Greater Predictability In Open Source Licensing      Cache   Translate Page      


          Customer Chain Administrator (Middleware DBA) (MK)      Cache   Translate Page      
Start date: 1-1-2019 End date: 31-3-2019 Indicative intake interview: asap Indicative decision: asap Closing date: 12-11-2018 Closing time: 17:00 Assignment description: As a Customer Chain Administrator (Middleware DBA) you carry out a wide variety of administration tasks, in which in-depth technical knowledge of Oracle in particular, but also of Open Source products, and good interpersonal skills are of great value...
          2018 International Library Automation Perceptions Survey      Cache   Translate Page      
Library Technology Guides is inviting libraries to participate in the 2018 International Library Automation Perceptions Survey:
"Please take this opportunity to register the perceptions of the library automation system used in your library, its vendor, and the quality of support delivered. The survey also probes at considerations for migrating to new systems, involvement in discovery products, and the level of interest in open source ILS. While the numeric rating scales support the statistical results of the study, the comments offered also provide interesting insights into the current state of library automation satisfaction."

"Note: If you have responded to previous editions of the survey, please give your responses again this year. By responding to the survey each year, you help identify long-term trends in the changing perceptions of these companies and products.
As with the previous versions of the survey, only one response per library is allowed and any individual can respond only for one library. These restrictions ensure that no single organization or individual can skew the statistics."
The annual survey has been conducted every year since 2007. The results of all previous surveys are available on the Library Technology Guides website, which is maintained by Marshall Breeding, a well-known library automation expert.
          HoST — Journal of History of Science and Technology      Cache   Translate Page      

The new issue of HoST — Journal of History of Science and Technology is now online


HoST — Journal of History of Science and Technology is a peer-reviewed, open access journal, available online and published in English by De Gruyter, the result of a partnership between four Portuguese research units (CIUHCT, CIDEHUS, Instituto de Ciências Sociais, and Instituto de História Contemporânea).

CONTENTS OF THIS ISSUE


          LXer: How to install Hadoop on Ubuntu 18.04 Bionic Beaver Linux      Cache   Translate Page      
Apache Hadoop is an open source framework used for distributed storage as well as distributed processing of big data on clusters of computers built from commodity hardware. Hadoop stores data in the Hadoop Distributed File System (HDFS), and this data is processed using MapReduce. YARN provides an API for requesting and allocating resources in the Hadoop cluster.
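As a rough illustration of the MapReduce model (not taken from the linked how-to), the classic word count can be written as one small Python script and run under Hadoop Streaming once HDFS and YARN are up; the streaming jar path and HDFS directories in the comment are assumptions that depend on your installation.

#!/usr/bin/env python3
# wordcount.py -- acts as mapper or reducer for Hadoop Streaming, e.g. (paths are assumptions):
#   hadoop jar $HADOOP_HOME/share/hadoop/tools/lib/hadoop-streaming-*.jar \
#       -input /user/hadoop/input -output /user/hadoop/output \
#       -mapper "python3 wordcount.py map" -reducer "python3 wordcount.py reduce" \
#       -file wordcount.py
import sys

def mapper():
    # Emit "word<TAB>1" for every word on stdin
    for line in sys.stdin:
        for word in line.split():
            print(f"{word}\t1")

def reducer():
    # Input arrives sorted by key, so counts for one word are contiguous
    current, count = None, 0
    for line in sys.stdin:
        word, _, value = line.rstrip("\n").partition("\t")
        if word != current:
            if current is not None:
                print(f"{current}\t{count}")
            current, count = word, 0
        count += int(value)
    if current is not None:
        print(f"{current}\t{count}")

if __name__ == "__main__":
    mapper() if sys.argv[1:] == ["map"] else reducer()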
          Comment on FS224 Lass uns reich werden mit toten Menschen by Sven Dean      Cache   Translate Page      
A follow-up on the topic of diabetes: a phone held to an NFC measuring device on the arm has the disadvantage that you still have to take readings actively. For quite a while now there have been devices with Bluetooth technology that measure the value continuously and raise an alarm on fluctuations and sharp drops or rises. Another very exciting topic is an open source project that implements a so-called closed loop system. Here, insulin pumps are controlled by software that uses the blood glucose measurement methods mentioned above and an algorithm to continuously adjust the insulin supply, an artificial pancreas, so to speak. Best regards, Sven
          Using open source data to measure street walkability and bikeability in China: a case of four cities - Gu P, Han Z, Cao Z, Chen Y, Jiang Y.       Cache   Translate Page      
Whether people would choose to walk or ride a bike for their daily travel is affected by how desirable the environment is for walking and biking. To better inform urban planning and design practices, studies on measuring walkability and bikeability have em...
          Cloud Security Developer - RBC - Toronto, ON      Cache   Translate Page      
Use Jenkins, Open Source Tools like CFNNAG, Security Test Automation Tools like Server Spec, Inspec to develop a CICD Security Testing Pipeline....
From RBC - Mon, 22 Oct 2018 19:27:08 GMT - View all Toronto, ON jobs
          Compensation Consultant - Elastic - Seattle, WA      Cache   Translate Page      
At Elastic, we have a simple goal: to take on the world's data problems with products that delight and inspire. As the company behind the popular open source...
From Elastic - Thu, 01 Nov 2018 14:29:27 GMT - View all Seattle, WA jobs
          Open source machine learning tool could help choose cancer drugs      Cache   Translate Page      
(Georgia Institute of Technology) Using machine learning techniques, a new open source decision support tool could help clinicians choose cancer therapy drugs by analyzing RNA expression tied to information about patient outcomes with specific drugs.
          Open Source Intelligence (OSINT) Analyst, JBLM WA Job - SAIC - Fort Lewis, WA      Cache   Translate Page      
For information on the benefits SAIC offers, see My SAIC Benefits. Headquartered in Reston, Virginia, SAIC has annual revenues of approximately $4.5 billion....
From SAIC - Fri, 05 Oct 2018 02:47:27 GMT - View all Fort Lewis, WA jobs
          Home Assistant - Open source Python3 home automation      Cache   Translate Page      
Replies: 7480 Last poster: symen89 at 07-11-2018 15:52 Topic is Open. CodeIT wrote on Wednesday 7 November 2018 @ 12:49: [...] The code above seems to result in an infinite loop. Your action triggers the same automation every time. Do you really want to use a dim-plus and a dim-minus script? Otherwise I would give input_number.slider1 a range from 0 to 5. Then an automation with the trigger above. In the actions you can then set the brightness using a template {{ states.input_number.slider1.state * 55 }}. I believe this also ensures the lamp is off when the input_number is at value 0. If you want to keep using the scripts, you can call your script via a script_template. Which script to call can be worked out with 'trigger.from_state.state' and 'trigger.to_state.state'. I wouldn't know how else to approach this than with a dim+ and dim- script. When I use trigger.from_state.state I get an error in my automation. code:
- alias: Dim light +
  initial_state: 'on'
  trigger:
    - platform: state
      entity_id: input_number.slider1
  condition:
    - condition: template
      value_template: '{{ trigger.from_state.state < trigger.to_state.state }}'
  action:
    - service: script.lighttube_dimplus
          Domoticz - open source home automation system - part 3      Cache   Translate Page      
Replies: 11863 Last poster: Mardox at 07-11-2018 15:38 Topic is Open. Toppe wrote on Wednesday 7 November 2018 @ 15:21: Your data will always be read out over a wire (after all, you connect the P1 port of your smart meter to your Pi via USB), so in any case no data is lost. There are of course also wireless solutions for reading out the data from your smart meter. I use this one myself: http://www.esp8266thingies.nl/ And I have to say that apart from a few initial problems it has now been running stably for a year (without a single reboot)! Nor do I have the impression that I am missing values in Domoticz.
          Technoslavia 2.5: Open Source Topography      Cache   Translate Page      

Open source has always been a part of the data science landscape (as we explored here in 2015 and here in 2017). And while every tool in Technoslavia has some free offerings, we’re taking this chance to highlight the dominating force of open source in the ecosystem.


          Continuous Integration and Delivery with Ansible      Cache   Translate Page      

Ansible is a very powerful open source automation language. What makes it unique among management tools is that it is also a deployment and orchestration tool. In many respects, it aims to provide large productivity gains across a wide variety of automation challenges. While Ansible provides more productive drop-in replacements for many core capabilities found in other automation solutions, it also seeks to solve other major IT challenges.
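One hedged sketch of what the CI/CD side can look like in practice: a pipeline step that runs a playbook through the ansible-runner Python library and fails the build if the play fails. The directory layout and playbook name below are placeholders, not anything prescribed by Ansible itself.

# pip install ansible-runner
import sys
import ansible_runner

result = ansible_runner.run(
    private_data_dir="deploy",   # directory holding project/, inventory/, env/ (placeholder)
    playbook="site.yml",         # the playbook to execute (placeholder)
)

print("status:", result.status, "return code:", result.rc)
sys.exit(result.rc)              # non-zero exit fails the CI job if the playbook failed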


          Introduction to the Community Data License Agreement      Cache   Translate Page      
CSIG (Cognitive Systems Institute Group) Talk — Thursday Nov 8, 2018 - 10:30-11am US Eastern 

Talk Title: "Introduction to the Community Data License Agreement: An "Open Source" Agreement Specifically Designed for Data and Content Sharing and Analysis in the Big Data world

Speaker: Christopher O'Neill, IBM 

Abstract: This talk will provide a general introduction the Community Data License Agreement ("CDLA") family of agreements, published  by The Linux Foundation in late 2017. Topics will include the general structure of the agreements, with a particular focus on their unique  data analysis terms and other aspects in which they are designed to address the unique characteristics of data from an Intellectual  Property perspective. 

Bio: Christopher O'Neill is Associate General Counsel — Intellectual Property Law at IBM Corporation, based in Armonk, New York. In over  25 years at IBM, Mr. O'Neill has held a variety of positions, both in IBM's product businesses and in its litigation group. In his current role,  Mr. O'Neill has responsibility for a variety of matters, including IP indemnity matters, data rights issues, adversely-held patent matters,  and open source issues. He graduated from New York University School of Law in 1987 and is a member of the New York bar. 

Zoom meeting Link: https://zoom.us/j/7371462221; Zoom Call-in: (415) 762-9988 or (646) 568-7788 Meeting id 7371462221 
Zoom International Numbers: https://zoom.us/zoomconference 
Check http://cognitive-science.info/community/weekly-update/ for recordings & slides, and for any date & time changes 

Join LinkedIn Group: https://www.linkedin.com/groups/6729452/ (Cognitive Systems Institute) to receive notifications 
Thu, Nov 8, 10:30am US Eastern • https://zoom.us/j/7371462221 
More Details Here : http://cognitive-science.info/community/weekly-update/   (Also slides and talk recording will be placed here) 

More at: https://www.linuxfoundation.org/press-release/2017/10/linux-foundation-debuts-community-data-license-agreement/  #CSIGnews #opentechai #AI @KarolynSchalk @mattganis @jwaup @MishiChoudhary @t_streinz @hyurko


          (USA-VA-Arlington) Operations Research Analyst      Cache   Translate Page      
The Civil and San Diego (CSD) Operations is responsible for a broad range analytical, engineering, independent assessments, strategic planning, and technology research, development, testing and evaluation (RDT&E;) programs in support of new SPA markets. Requirements * Bachelors’ Degree in Mathematics, Operations Research, Statistics, Computer Science, Engineering, quantitative social sciences, or a related discipline. * Some related experience (including academic projects), preferably in national security related programs. * Experience in any of the following programming languages: Python, R, MATLAB, Java, C++, VBA, SQL. * Familiar with visualization software such as Qlik or Tableau. * Experience with ArcGIS or other Geospatial Information systems. * Familiar with common open source and commercial data science and operations research tools and software. * Proven ability to prioritize and execute workload to meet requirements and deadlines. * Experience developing compelling written and visual communications that are clear, concise, and suited to the audience. * Proficient with using Microsoft Office Suite (e.g., Excel, Access, Word, PowerPoint). * Possess an active SECRET clearance with the ability to obtain TS. **Desired ** * Master’s Degree. * Experience providing analytical support to the Department of Homeland Security. * 2+ years’ experience performing quantitative analysis. * Experience within cybersecurity and/or critical infrastructure domains. * Active TS clearance. * DHS Suitability. Job Description: The Junior Operations Research Analyst will provide advanced analytical support to the Department of Homeland Security (DHS) and apply rational, systematic, science-based techniques, and critical thinking to help inform and improve the Department’s decision-making abilities for preparation, response and recovery to manmade and natural disasters and impacts on the Nation’s critical infrastructure. Provide interdisciplinary analytics expertise and skills to support and conduct descriptive, predictive, and prescriptive projects. Participate in a wide range of optimization and related modeling activities, including, but not limited to, model implementation and operations, model use documentation, and optimization and operations research capability. Responsibilities ** * Assist in identifying, prioritizing, and executing programs to help demonstrate and expand analytical capabilities * Develop and use operations research techniques such as mathematical optimization, decision analysis, and statistical analysis. * Implement data exploration techniques, analytical methods, operations research, and modeling and simulation tools to support program execution * Develop plans of action and timetables in order to meet established deadlines * Assist in the development of briefings, memos, and Contract Deliverables * Assist with data integration requirements development Candidates must be able to work independently and as part of a team. Successful candidates will be subject to a security investigation and must meet eligibility requirements for access to classified information. U.S. citizenship required. Systems Planning and Analysis, Inc. 
Attention: Human Resources 2001 North Beauregard Street Alexandria, VA 22311-1739 FAX: 703-399-7350 *SPA is an Equal Opportunity Employer (Minorities/Females/Disabled/Veterans)* In addition to professionally rewarding challenges, SPA employees also enjoy an excellent benefits package that includes insurance, leave, parking, and more, plus a generous qualified 401(k) plan. *Req Code:* CD18-292 *Job Type:* Operations Research Analyst *Location:* Arlington, VA *Clearance Level:* Secret
          Yoast SEO 9.1: The Hacktoberfest Edition      Cache   Translate Page      

October is Hacktoberfest month! Hacktoberfest is all about contributing to open source projects together. This year, we welcomed a nice crowd at Yoast HQ to help us improve our own and other people's open source projects. Some of the fixes and enhancements made by this fine group of people made it into Yoast SEO 9.1. […]

The post Yoast SEO 9.1: The Hacktoberfest Edition appeared first on Yoast.


          (USA-VA-Quantico) Senior Threat Analyst (Counterintelligence Analyst)      Cache   Translate Page      
The Joint Cyber/Information Security, Intelligence Acquisition Group (JCAG) supports the Office of the Under Secretary of Defense for Research and Engineering’s (OUSD(R&E;’s) Joint Acquisition Protection and Exploitation Cell (JAPEC) which is led in partnership with the Office of the Under Secretary of Defense for Intelligence (OUSD(I)). The Counterintelligence Analyst position will be located at the Russell Knox Building in Quantico, VA but may be required to attend meetings in the National Capital Region (e.g., Mark Center, Pentagon) and occasional Chantilly, VA. Requirements * Bachelor’s Degree. * 10+ years of counterintelligence experience. * 2+ years in a management role in at least one of the following areas: Science and Technology, DoD Production Centers, Counterintelligence (Offensive, Analysis & Production, Intelligence or Counterintelligence to support Research Development & Acquisitions (RDA), Supply Chain Risk Management, Advanced Persistent Threat or Open Source Intelligence. * 3+ years of demonstrated work experience of intelligence capabilities other than in the area the candidate’s management experience. * 3+ years of experience providing direct RDA support to a DoD acquisition program. * Basic Knowledge, Skill, or Ability in the following: Defense Intelligence Policy, Intelligence Support to Research, Development and Acquisitions or Production of Intelligence Collection requirements. * Demonstrated ability to conduct analysis linking intelligence, acquisition, engineering/technical information. Analytic ability may be demonstrated with any combination of work products and education. * Experience using Intelligence Community message handling and retrieval systems such as M3, TAC Ability to work both independently and with personnel from multiple organizations. * Excellent oral and written communication skills. * Active Top Secret Clearance with SCI eligibility. **Desired Qualifications:** * Undergraduate Degree in a Technical Discipline. * Graduate Degree. * Experience in Lean Six Sigma. * Familiarity with Palantir. * Military experience. * Demonstrated ability to work from high-level, limited and evolving guidance in order to produce timely and effective results. Job Description: Responsible for conducting analysis to identify threats to critical acquisition programs and technologies. Apply the principles of intelligence research to analyze, evaluate, and interpret raw data collected from established sources to produce finished intelligence. Produce integrated products for the JAPEC Case Manager(s). Develop and maintain data in the subject field and for the production of reports and assessments. Prepare, present and defend portions of finished intelligence products. Provide inputs to JAPEC documentation: (1) Integrated Intelligence and Counterintelligence Support to Acquisitions Program Plan; (2) Joint Collection, Analysis; and Production Plan for Horizontal Threats to Critical Technology, including classification guides, if needed; and (3) JAPEC Strategic Engagement Plan. Candidates must be able to work independently and as part of a team. Successful candidates will be subject to a security investigation and must meet eligibility requirements for access to classified information. U.S. citizenship required. Systems Planning and Analysis, Inc. 
Attention: Human Resources 2001 North Beauregard Street Alexandria, VA 22311-1739 FAX: 703-399-7350 *SPA is an Equal Opportunity Employer (Minorities/Females/Disabled/Veterans)* In addition to professionally rewarding challenges, SPA employees also enjoy an excellent benefits package that includes insurance, leave, parking, and more, plus a generous qualified 401(k) plan. *Req Code:* JD18-380 *Job Type:* Military Operations Analyst *Location:* Quantico, VA *Clearance Level:* Top Secret/SCI
          Reservoir computing approaches for representation and classification of multivariate time series. (arXiv:1803.07870v2 [cs.NE] UPDATED)      Cache   Translate Page      

Authors: Filippo Maria Bianchi, Simone Scardapane, Sigurd Løkse, Robert Jenssen

Classification of multivariate time series (MTS) has been tackled with a large variety of methodologies and applied to a wide range of scenarios. Among the existing approaches, reservoir computing (RC) techniques, which implement a fixed and high-dimensional recurrent network to process sequential data, are computationally efficient tools to generate a vectorial, fixed-size representation of the MTS that can be further processed by standard classifiers. Despite their unrivaled training speed, MTS classifiers based on a standard RC architecture fail to achieve the same accuracy of other classifiers, such as those exploiting fully trainable recurrent networks. In this paper we introduce the reservoir model space, an RC approach to learn vectorial representations of MTS in an unsupervised fashion. Each MTS is encoded within the parameters of a linear model trained to predict a low-dimensional embedding of the reservoir dynamics. Our model space yields a powerful representation of the MTS and, thanks to an intermediate dimensionality reduction procedure, attains computational performance comparable to other RC methods. As a second contribution we propose a modular RC framework for MTS classification, with an associated open source Python library. By combining the different modules it is possible to seamlessly implement advanced RC architectures, including our proposed unsupervised representation, bidirectional reservoirs, and non-linear readouts, such as deep neural networks with both fixed and flexible activation functions. Results obtained on benchmark and real-world MTS datasets show that RC classifiers are dramatically faster and, when implemented using our proposed representation, also achieve superior classification accuracy.
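As a loose illustration of the underlying idea only (this is not the authors' library, their reservoir model space, or their evaluation setup), a fixed random reservoir can turn variable-length multivariate series into fixed-size vectors that any standard classifier could then consume; all sizes and constants below are arbitrary.

import numpy as np

rng = np.random.default_rng(0)

N_RES, N_FEAT = 100, 3                            # reservoir size and input channels (arbitrary)
W_in = rng.uniform(-0.5, 0.5, size=(N_RES, N_FEAT))
W = rng.normal(size=(N_RES, N_RES))
W *= 0.9 / np.abs(np.linalg.eigvals(W)).max()     # rescale to spectral radius 0.9

def reservoir_states(X):
    # Run the fixed random reservoir over a (T, N_FEAT) multivariate time series
    h = np.zeros(N_RES)
    states = np.empty((len(X), N_RES))
    for t, x in enumerate(X):
        h = np.tanh(W_in @ x + W @ h)
        states[t] = h
    return states

def embed(X, ridge=1e-2):
    # Fixed-size embedding: fit a ridge readout that predicts the next input from the
    # reservoir state and use its flattened weights as the feature vector (a crude
    # stand-in for the "model space" idea described in the abstract)
    S = reservoir_states(X)
    A, B = S[:-1], X[1:]
    W_out = np.linalg.solve(A.T @ A + ridge * np.eye(N_RES), A.T @ B)
    return W_out.ravel()

# Two series of different lengths map to vectors of identical dimensionality
a = embed(rng.normal(size=(200, N_FEAT)))
b = embed(rng.normal(size=(150, N_FEAT)))
print(a.shape, b.shape)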


          Private Data Objects: an Overview. (arXiv:1807.05686v2 [cs.CR] UPDATED)      Cache   Translate Page      

Authors: Mic Bowman, Andrea Miele, Michael Steiner, Bruno Vavala

We present Private Data Objects (PDOs), a technology that enables mutually untrusted parties to run smart contracts over private data. PDOs result from the integration of a distributed ledger and Intel Secure Guard Extensions (SGX). In particular, contracts run off-ledger in secure enclaves using Intel SGX, which preserves data confidentiality, execution integrity and enforces data access policies (as opposed to raw data access). A distributed ledger verifies and records transactions produced by PDOs, in order to provide a single authoritative instance of such objects. This allows contracting parties to retrieve and check data related to contract and enclave instances, as well as to serialize and commit contract state updates. The design and the development of PDOs is an ongoing research effort, and open source code is available and hosted by Hyperledger Labs [5, 7].


          (USA-CA-Sunnyvale) Software Engineer, Network Systems      Cache   Translate Page      
LinkedIn was built to help professionals achieve more in their careers, and every day millions of people use our products to make connections, discover opportunities and gain insights. Our global reach means we get to make a direct impact on the world’s workforce in ways no other company can. We’re much more than a digital resume – we transform lives through innovative products and technology. Searching for your dream job? At LinkedIn, we strive to help our employees find passion and purpose. Join us in changing the way the world works. LinkedIn is looking to hire Software engineer focused on hyperscale DataCenter and Backbone networking. Our Global Network Software Engineering team supports the production network that serves LinkedIn. throughout world. As a member of this team, you will design and develop new products leveraging open data plane hardware for building highly scalable systems. The goal is to deliver scale and reliability to our global network, covering datacenter, edge and backbone core. These products which define the LinkedIn Platform will be built drawing expertise from cross-disciplinary areas including large scale distributed systems, fault-tolerant systems, auto-remediation and concurrent systems, operating systems and high performance data analytics. If you are excited about innovating and building systems from scratch for massive scale, then our team might be the right place for you. You will have wide latitude in making contributions in several areas. In this highly visible role, you will work within the various members of the architecture and engineering teams providing guidance in technologies, product feature definitions, implementation and tradeoffs and architecture definition. You will be responsible in design and developing various networking OS and controller functional features. Additional responsibilities include influencing open source community in white-box switching and SDN controller developments. You will utilize your excellent communication skills interacting with peer groups & drive technical presentations. Responsibilities: •Develop Software Infrastructure to improve our ability to provisioning and monitor large-scale network environments. •Build and invent new ways of operating LinkedIn’s Next-Generation Network. •Work closely with our Network Engineering teams to deliver fast, smooth roll-out of new designs and products. •Reduce the end-to-end cost of delivering packets •Work in a large scale Linux/Unix environment. Basic Qualifications: •Bachelors in Electrical Engineering/Computer Science or equivalent practical experience •1+ years of design, development experience of Networking Switches/Routers Operating System (NOS) •1+ years of development experience in L3 control plane and data plane •1+ years of development experience in Linux networking stack •1+ years of software development using Python or C or C++ Preferred Qualifications: •Development experience such as Layer 3 routing protocols such as BGP/OSPF/ISIS •Development experience in MPLS, Segment Routing, Config Mgmt •Development experience with at least one of these Open Source routing protocols: Quagga/FRR •Industry knowledge about SDN technologies is desirable •1+ year software development using Go language is a plus.
          Radxa Launching the Rock Pi SBC, Mender.io Collaborating with Google Cloud IoT Core, Parasoft's New Initiative to Support Open-Source Projects, New Foundation Formed for GraphQL and Keeper Security Announces BreachWatch Dark Web Monitoring Product       Cache   Translate Page      

News briefs for November 7, 2018.

Radxa is launching a Raspberry Pi clone called the Rock Pi that runs Linux or Android on a hexa-core Rockchip RK3399 SoC. LinuxGizmos writes that the Rock Pi will closely match the RPi 3 layout and "may be the most affordable RK3399 based SBC yet, starting at $39 with 1GB RAM".

Mender.io, the open-source update manager for IoT, announces its collaboration with Google Cloud IoT Core "to create a reference integration enabling rapid detection and updates of issues in IoT devices". Thomas Ryd, CEO of Northern.tech, the company behind the Mender.io project says, "Almost daily news stories circulate about bricked devices due to poor home-built update tools. We are inspired to address this common problem with an open-source project." The collaboration has "resulted in a tutorial and reference integration to easily detect issues with Cloud IoT Core and the ability to correct those issues via updates to IoT devices with Mender. Users of Cloud IoT Core now have a secure and robust way to keep their Linux devices securely updated." See the Google blog post for more details.

Parasoft announces a new initiative to support open-source projects and communities. The company plans to offer free access to its tool suite "enabling developers to leverage test automation software, deep code analysis, and security capabilities for their open-source projects". To be eligible, developers must "prove they are an active contributor and vital to an open-source project that is recognized within the global open-source community. The free user licenses will be valid for one year." Send email to opensource@parasoft.com for more information.

The Linux Foundation is forming a new foundation to support the open-source GraphQL specification. eWeek reports that "the move to create a new vendor-neutral independent foundation under the Linux Foundation will help further advance the development of GraphQL". GraphQL started out as an internal project at Facebook for its newsfeed API and was open-sourced in 2015. Currently, the specification is used "beyond Facebook by web properties including GitHub, Shopify, Twitter and Airbnb, among others".

Keeper Security announces its new BreachWatch dark web monitoring product. BreachWatch searches the dark web for user accounts from compromised websites and notifies users when it finds their account information, alerting them to update their credentials. BreachWatch is available for iOS, Android and Linux. See the press release for more information.


          Vote Democratic Today. The World Will Be A Better And Happier Place      Cache   Translate Page      


Vote Democratic Today!

Buy the books at
 www.amazon.com/author/paulbabicki
====================================================

11/05/2018 05:05 AM EST

Original release date: November 05, 2018
The US-CERT Cyber Security Bulletin provides a summary of new vulnerabilities that have been recorded by the National Institute of Standards and Technology (NIST) National Vulnerability Database (NVD) in the past week. The NVD is sponsored by the Department of Homeland Security (DHS) National Cybersecurity and Communications Integration Center (NCCIC) / United States Computer Emergency Readiness Team (US-CERT). For modified or updated entries, please visit the NVD, which contains historical vulnerability information.
The vulnerabilities are based on the CVE vulnerability naming standard and are organized according to severity, determined by the Common Vulnerability Scoring System (CVSS) standard. The division of high, medium, and low severities correspond to the following scores:
·        High - Vulnerabilities will be labeled High severity if they have a CVSS base score of 7.0 - 10.0
·        Medium - Vulnerabilities will be labeled Medium severity if they have a CVSS base score of 4.0 - 6.9
·        Low - Vulnerabilities will be labeled Low severity if they have a CVSS base score of 0.0 - 3.9
Entries may include additional information provided by organizations and efforts sponsored by US-CERT. This information may include identifying information, values, definitions, and related links. Patch information is provided when available. Please note that some of the information in the bulletins is compiled from external, open source reports and is not a direct result of US-CERT analysis.
The NCCIC Weekly Vulnerability Summary Bulletin is created using information from the National Institute of Standards and Technology (NIST) National Vulnerability Database (NVD). In some cases, the vulnerabilities in the Bulletin may not yet have assigned CVSS scores. Please visit NVD for updated vulnerability entries, which include CVSS scores once they are available.
­­­­­­­­­­­­­­­+++++++++++++++++++++++++++++++++++++++++++++++++++++
===================================  
Good Netiquette And A Green Internet To All!  =====================================================================Tabula Rosa Systems - Tabula Rosa Systems (TRS) is dedicated to providing Best of Breed Technology and Best of Class Professional Services to our Clients. We have a portfolio of products which we have selected for their capabilities, viability and value. TRS provides product, design, implementation and support services on all products that we represent. Additionally, TRS provides expertise in Network Analysis, eBusiness Application Profiling, ePolicy and eBusiness Troubleshooting

We can be contacted at:

sales@tabularosa.net  or 609 818 1802.
 ===============================================================
In addition to this blog, Netiquette IQ has a website with great assets which are being added to on a regular basis. I have authored the premiere book on Netiquette, “Netiquette IQ - A Comprehensive Guide to Improve, Enhance and Add Power to Your Email". My new book, “You’re Hired! Super Charge Your Email Skills in 60 Minutes. . . And Get That Job!” has just been published and will be followed by a trilogy of books on Netiquette for young people. You can view my profile, reviews of the book and content excerpts at:

 www.amazon.com/author/paulbabicki

Anyone who would like to review the book and have it posted on my blog or website, please contact me paul@netiquetteiq.com.

In addition to this blog, I maintain a radio show on BlogtalkRadio  and an online newsletter via paper.li.I have established Netiquette discussion groups with Linkedin and  Yahoo I am also a member of the International Business Etiquette and Protocol Group and Minding Manners among others. I regularly consult for the Gerson Lehrman Group, a worldwide network of subject matter experts and I have been contributing to the blogs Everything Email and emailmonday . My work has appeared in numerous publications and I have presented to groups such as The Breakfast Club of NJ and  PSG of Mercer County, NJ.


Additionally, I am the president of Tabula Rosa Systems, a “best of breed” reseller of products for communications, email, network management software, security products and professional services.  Also, I am the president of Netiquette IQ. We are currently developing an email IQ rating system, Netiquette IQ, which promotes the fundamentals outlined in my book.

          Netiquette Security Bulletin - SB18-309: Vulnerability Summary for the Week of October 29, 2018      Cache   Translate Page      

Vote Democratic Tomorrow!




11/05/2018 05:05 AM EST

Original release date: November 05, 2018
The US-CERT Cyber Security Bulletin provides a summary of new vulnerabilities that have been recorded by the National Institute of Standards and Technology (NIST) National Vulnerability Database (NVD) in the past week. The NVD is sponsored by the Department of Homeland Security (DHS) National Cybersecurity and Communications Integration Center (NCCIC) / United States Computer Emergency Readiness Team (US-CERT). For modified or updated entries, please visit the NVD, which contains historical vulnerability information.
The vulnerabilities are based on the CVE vulnerability naming standard and are organized according to severity, determined by the Common Vulnerability Scoring System (CVSS) standard. The division of high, medium, and low severities correspond to the following scores:
·        High - Vulnerabilities will be labeled High severity if they have a CVSS base score of 7.0 - 10.0
·        Medium - Vulnerabilities will be labeled Medium severity if they have a CVSS base score of 4.0 - 6.9
·        Low - Vulnerabilities will be labeled Low severity if they have a CVSS base score of 0.0 - 3.9
Entries may include additional information provided by organizations and efforts sponsored by US-CERT. This information may include identifying information, values, definitions, and related links. Patch information is provided when available. Please note that some of the information in the bulletins is compiled from external, open source reports and is not a direct result of US-CERT analysis.
The NCCIC Weekly Vulnerability Summary Bulletin is created using information from the National Institute of Standards and Technology (NIST) National Vulnerability Database (NVD). In some cases, the vulnerabilities in the Bulletin may not yet have assigned CVSS scores. Please visit NVD for updated vulnerability entries, which include CVSS scores once they are available.




Buy the books at
 www.amazon.com/author/paulbabicki
====================================================

          Set up JavaScript THREE.js browser-based project in WebStorm with NPM and testing      Cache   Translate Page      
Convert existing demo code of the open source THREE.js Periodic table to use JavaScript modules via NPM and WebPack, along with WebStorm as IDE. See attachments for requirements and important job information. (Budget: $30 - $250 USD, Jobs: 3D Animation, HTML5, Javascript, node.js, Test Automation)
          Comment on Vcpkg: A tool to build open source libraries on Windows by Jean-Christophe Fillion-Robin      Cache   Translate Page      
Formatting issues have been addressed.
          Comment on Vcpkg: A tool to build open source libraries on Windows by Jean-Christophe Fillion-Robin      Cache   Translate Page      
Thanks Anton, we converted the examples and references to variables and command line arguments to be in fixed font. Let us know if that works better. It also looks like there is an issue with the rendering of "greater than" and "smaller than" in the examples; this will be fixed shortly.
          Comment on Vcpkg: A tool to build open source libraries on Windows by Dženan Zukić      Cache   Translate Page      
We will fix it, thanks!
          Comment on Vcpkg: A tool to build open source libraries on Windows by Anton      Cache   Translate Page      
Hi! Thank you for such a great article. One small detail could be fixed though: "U+2013 : EN DASH"(–) is used instead of double "U+002D : HYPHEN-MINUS"(--), which prevents code from being executed properly (when copy-pasted).
          HeroPress: Accidental Activist      Cache   Translate Page      

I never meant to become an activist. I swear. It was an accident.

And yet here I am, celebrating my one year anniversary of leading the Diversity Outreach Speaker Training working group in the WordPress Community team. We are causing waves in the number of women and other underrepresented groups who are stepping up to become speakers at WordPress Meetups, WordCamps, and events. Pretty cool, right?

How did this happen?

Let’s start this story with how I got into WordPress. Back in 2011, I was looking for a practicum placement for the New Media Design and Web Development college program I was in in Vancouver, BC. We had touched on WordPress only briefly in class. I was curious about it, so I got a Practicum placement working on a higher education website that was built in WordPress. (It was in BuddyPress, even! Ooh. Aah.) As a thank you, my practicum advisor bought me a ticket to WordCamp Vancouver 2011: Developer’s Edition. That event was the start of my love affair with WordPress and I began taking on freelancing gigs. I’ve been a WordPress solopreneur for most of the time since.

The following year my practicum advisor, who had become a client, was creating the first ever BuddyCamp for BuddyPress. He asked me to be on his organizing team. (Side note: I was especially excited to moderate a panel with Matt Mullenweg and other big names on it!) I was noticed and I was invited to be on the core organizing team for the next year’s WordCamp Vancouver by the lead organizer. I was thrilled. It was quite an honour!

This is where the real story begins… after an important disclaimer.

Disclaimer: For simplicity in this story, I’ll be using the terms women and men, though in reality gender is not a simple binary and is actually a wide spectrum of different identities.

The Real Beginning

There were three of us—myself and two men—and it was our first time any of us were organizing a WordCamp. We were having dinner in one of our apartments and we had 40 speaker applications spread out before us. The plan was to pick 14 to speak. It was hard. They were all really good.

The lead organizer grabbed 6 out of the 7 that came from the women and said, “Well, we are accepting all of these.”

At this point I didn’t know that not many women were applying to speak at tech conferences at the time.

So I was the one saying, “Wait, wait. Who cares what gender they are? Let’s go through them and pick based on the topics that would fit the conference’s flow.”

They both said that the 6 of the women’s pitches were really good, fit with the flow, and frankly, we needed to accept as many as we could or we’d get called out. (This is embarrassing to say now, but that was the conversation back in 2013.)

Here’s how it went down:

After we accepted the six, two of the women dropped out for family emergencies. (Guess how many men dropped out for family emergencies?) Also we had added a third speakers’ track. Now there were only 4 women out of 28 speakers. Only 1 in 7. That is 14%, my friends. That is not great.

So not great, in fact, that we did get called out. People did talk about it, question us about it, and even wrote blog posts about it.

More Experience

So when later that year I went to WordCamp San Francisco—the biggest WordCamp at the time (before there was a WordCamp US)—I took the opportunity to chat with other organizers at a WordCamp organizer brunch.

I found out that many of the organizers had trouble getting enough women presenters.

I was surprised to find that we actually had a high number of women applicants in comparison to others, as many of them had zero! They were asking me how we got such a high number. They all said they would happily accept more if only more would apply.

So then I needed to know, why was this happening? Why weren’t we getting more women applicants? I started researching, reading, and talking to people.

Though this issue is complex, one thing that came up over and over was that when we would ask the question, “Hey, will you speak at my conference?” we would get two answers:

  • “What would I talk about?”
  • “I’m not an expert on anything. I don’t know enough about anything to give a talk on it.”

That’s when the idea happened.

The Idea

As it goes, the idea happened while I was at a feminist blanket-fort slumber party. Yes, you heard right. And as one does at all feminist blanket-fort slumber parties, we talked about feminist issues.

When I brought up my issue about the responses we were getting, one of the ladies said, “Why don’t you get them in a room and have them brainstorm topics?”

And that was it. That set me on the path.

I became the lead of a small group creating a workshop in Vancouver. In one of the exercises, we invited the participants to brainstorm ideas and show them that they have literally a hundred ideas. (Then the biggest problem becomes picking one. 🙂 )

In our first iteration, we covered:

  • Why it matters that women (added later: diverse groups) are in the front of the room
  • The myths of what it takes to be the speaker at the front of the room (aka beating impostor syndrome)
  • Different speaking formats, especially story-telling
  • Finding and refining a topic
  • Tips to becoming a better speaker
  • Practising leveling up speaking in front of the group throughout the afternoon

Other cities across North America got wind of our workshop and started running it as well, and they added their own material.

Our own participants wanted more support, so the next year we added material created from the other cities as well as generated more of our own:

  • Coming up with a great title
  • Writing a pitch that is more likely to get accepted
  • Writing a bio
  • Creating an outline
  • Creating better slides

We did it! In 2014—in only one year since we started—we had 50% women speakers and 3 times the number of women applicants! Not only that, but it was a Developer's Edition. It's much more challenging to find women developers in general, let alone those who will step up to speak.

Building On

Impressive as that is, the reason I am truly passionate about this work is because of what happened next:

  • Some of the women who did our workshop and started publicly speaking stepped up to be leaders in our community and created new things for us. For example, a couple of them created a new Meetup track with a User focus.
  • A handful of others became WordCamp organizers. One year Vancouver had an almost all-female organizing team – 5 out of 6!
  • It also influenced local businesses. One local business owner loved what one of the women speakers said so much that he hired her immediately. She was the first woman developer on the team, and soon after she became the Senior Developer.

It is results like these that ignited my passion. I’ve now seen time and again what happens when different kinds of folks speak in the front of the room. More kinds of people feel welcome in the community. The speakers and the new community members bring new ideas and new passions that help to make the technology we are creating more inclusive as well as generate new ideas that benefit everyone.

This workshop has been so successful, with typical results of 40-60% women speakers at WordCamps, that the WordPress Community Team asked me to promote it and train it for women and all diverse groups around the world. We created the Diversity Outreach Speaker Training working group. I started creating and leading it in late 2017.

Thanks to our group, our workshop has been run in 17 cities so far this year, 32 have been trained to run it, and 53 have expressed interest in 24 countries. Incredible!

I love this work so much that I’m now looking at how to do this for a living. I’m proud of how the human diversity represented on the stage adds value not only to the brand but also in the long-term will lead to the creation of a better product. I’m inspired by seeing the communities change as a result of the new voices and new ideas at the WordPress events.

“Jill’s leadership in the development and growth of the Diversity Outreach Speaker Training initiative has had a positive, measurable impact on WordPress community events worldwide. When WordPress events are more diverse, the WordPress project gets more diverse — which makes WordPress better for more people.”

– Andrea Middleton, Community organizer on the WordPress open source project

I’m exploring sponsorships, giving conference and corporate trainings, and looking at other options so that I can be an Accidental Activist full-time and make a bigger impact. Imagine a world where more kinds of people are speaking up. That’s a world I’m excited to see.

Resources:

Workshop: http://diversespeakers.info/

More info and please let us know if you use it or would like help using it: https://tiny.cc/wpwomenspeak

Diversity Outreach Speaker Training Team—Join us! https://make.wordpress.org/community/2017/11/13/call-for-volunteers-diversity-outreach-speaker-training/

How to build a diverse speaker roster: Coming soon. Contact Jill for it.

The post Accidental Activist appeared first on HeroPress.


          The rise of local mapping communities      Cache   Translate Page      
Members of the mapping community in Kinshasa, DR Congo plan the collection of field data for the Kisenso neighborhood. (Courtesy of OpenDRI)

There is a unique space where you can encounter everyone from developers of self-driving cars in Silicon Valley to city planners in Niamey to humanitarian workers in Kathmandu Valley: the global OpenStreetMap (OSM) community. It comprises a geographically and experientially diverse network of people who contribute to OSM, a free and editable map of the world that is often called the “Wikipedia of maps.”  

What is perhaps most special about this community is its level playing field. Anyone passionate about collaborative mapping can have a voice from anywhere in the world. In the past few years, there has been a meteoric rise of locally organized mapping communities in developing countries working to improve the map in service of sustainable development activities.

The next opportunity to see the OSM community in action will be the November 14th mapathon hosted by the Global Facility for Disaster Reduction and Recovery (GFDRR)’s Open Data for Resilience Initiative (OpenDRI). Mapathons bring together volunteers to improve the maps of some of the world’s most vulnerable areas, not only easing the way for emergency responders when disaster strikes, but also helping cities and communities plan and build more resiliently for the future.

GFDRR’s engagement with local OSM communities

The 2010 Haiti earthquake served as a wake-up call about the need for access to better quality information for reducing vulnerability to natural hazards and climate change impacts. In the years since, OpenDRI has turned to the OSM platform as an important way to bring people together to create open data, learn new skills, and support the human networks that eventually become key actors for resilience. We can gather people in a room around something exciting, like a mapathon, and start a conversation about sharing information for the benefit of everyone.

Changes in the mapped areas in OpenStreetMap for Kampala, Uganda, from 2016 to 2018. (Courtesy of OpenDRI and OSM)
Any data, technology, or tool is only as valuable as the way and the extent to which people use it, and that's why building sustainable mapping communities is so critical for this work. Even as we engage governments to promote the use of open data and open source tools, OpenDRI also strives to nurture local communities of OSM users and developers from universities, NGOs, and innovation hubs. To that end, OpenDRI supports local OSM communities and conferences like "State of the Map" whenever possible, particularly by funding scholarships for attendees who would not otherwise get to attend, learn, and share knowledge.

Participatory mapping in Asia and Africa
A member of the local mapping community in Uganda collects field data for OSM. (Courtesy of OpenDRI)


OpenDRI started its work with OSM by supporting the growth of local mapping communities in Indonesia, the Philippines, Nepal, Bangladesh, and Sri Lanka, including through the Open Cities Project. Many of these communities were quick on their feet to respond to the devastating 2015 earthquakes in Nepal. More than 6,000 volunteers helped add data to the OSM platform, mapping up to 80 percent of affected zones, an effort which continues to provide invaluable information to emergency response and preparedness efforts. In the years since, OSM communities across Asia have come together to exchange knowledge and build connections at a series of open source mapping conferences. The fourth "State of the Map Asia" conference will take place in Bangalore, India, this month.

In Africa, the stakes for OSM are even higher, because it is often the only digital map available for many locations. Recent years have brought a rise of participatory mapping communities across Africa, which now total more than 30 active local OSM groups. Africa’s first-ever “State of the Map” conference was held in Kampala, Uganda in 2017.

Building on that momentum, OpenDRI recently launched the Open Cities Africa project, currently supporting the development of teams in 11 cities across Africa. These teams are taking the lead in collecting data remotely and on-the-ground through participatory mapping, thus building mapping capacity in their local OSM communities. They are also collaborating with World Bank teams to use the new OSM data to help address a range of development challenges, from urban flooding in Kinshasa to coastal risk management in Senegal. Drawing on our experiences in Asia, we are incorporating novel approaches in our engagement in Africa, including online learning, gender integration, disruptive technologies, and design research.

What’s next for local OSM communities?



Local OSM communities are hopeful that the future will see a larger and more diverse population of mappers worldwide – this will be key to improving the “Wikipedia of maps” even further. As technology giants join the global OSM community, we are now exploring how new machine learning mapping techniques might complement and amplify the work of local OSM communities.

Over the past seven years, the OpenDRI team has been hard at work to create local communities around open-source mapping as part of our drive to promote open data for resilience, and that effort will continue.

To discover the OSM community for yourself and learn more about the benefits of using geospatial data for addressing the world’s most critical development challenges, join us on Wednesday, November 14 for the OpenDRI mapathon at the World Bank.  
 
READ MORE
          Python Application and Platform Developer - Plotly - Montréal, QC      Cache   Translate Page      
Engage our Open Source communities and help take stewardship of our OSS projects. Duties & Responsibilities:....
From Plotly - Thu, 06 Sep 2018 15:03:02 GMT - View all Montréal, QC jobs
          Radxa Launching the Rock Pi SBC, Mender.io Collaborating with Google Cloud IoT Core, Parasoft's New Initiative to Support Open-Source Projects, New Foundation Formed for GraphQL and Keeper Security Announces BreachWatch Dark Web Monitoring Product       Cache   Translate Page      

News briefs for November 7, 2018.

Radxa is launching a Raspberry Pi clone called the Rock Pi that runs Linux or Android on a hexa-core Rockchip RK3399 SoC. LinuxGizmos writes that the Rock Pi will closely match the RPi 3 layout and "may be the most affordable RK3399 based SBC yet, starting at $39 with 1GB RAM".

Mender.io, the open-source update manager for IoT, announces its collaboration with Google Cloud IoT Core "to create a reference integration enabling rapid detection and updates of issues in IoT devices". Thomas Ryd, CEO of Northern.tech, the company behind the Mender.io project says, "Almost daily news stories circulate about bricked devices due to poor home-built update tools. We are inspired to address this common problem with an open-source project." The collaboration has "resulted in a tutorial and reference integration to easily detect issues with Cloud IoT Core and the ability to correct those issues via updates to IoT devices with Mender. Users of Cloud IoT Core now have a secure and robust way to keep their Linux devices securely updated." See the Google blog post for more details.

Parasoft announces a new initiative to support open-source projects and communities. The company plans to offer free access to its tool suite "enabling developers to leverage test automation software, deep code analysis, and security capabilities for their open-source projects". To be eligible, developers must "prove they are an active contributor and vital to an open-source project that is recognized within the global open-source community. The free user licenses will be valid for one year." Send email to opensource@parasoft.com for more information.

The Linux Foundation is forming a new foundation to support the open-source GraphQL specification. eWeek reports that "the move to create a new vendor-neutral independent foundation under the Linux Foundation will help further advance the development of GraphQL". GraphQL started out as an internal project at Facebook for its newsfeed API and was open-sourced in 2015. Currently, the specification is used "beyond Facebook by web properties including GitHub, Shopify, Twitter and Airbnb, among others".

Keeper Security announces its new BreachWatch dark web monitoring product. BreachWatch searches the dark web for user accounts from compromised websites and notifies users when it finds their account information, alerting them to update their credentials. BreachWatch is available for iOS, Android and Linux. See the press release for more information.


          VMware Acquires Heptio, Mining Bitcoin Requires More Energy Than Mining Gold, Fedora Turns 15, Microsoft's New Linux Distros and ReactOS 0.4.10 Released      Cache   Translate Page      

News briefs for November 6, 2018.

VMware has acquired Heptio, which was founded by Joe Beda and Craig McLuckie, two of the creators of Kubernetes. TechCrunch reports that the terms of the deal aren't being disclosed and that "this is a signal of the big bet that VMware is taking on Kubernetes, and the belief that it will become an increasing cornerstone in how enterprises run their businesses." The post also notes that this acquisition is "also another endorsement of the ongoing rise of open source and its role in cloud architectures".

The energy needed to mine one dollar's worth of bitcoin is reported to be more than double the energy required to mine the same amount of gold, copper or platinum. The Guardian reports on recent research from the Oak Ridge Institute in Cincinnati, Ohio, that "one dollar's worth of bitcoin takes about 17 megajoules of energy to mine...compared with four, five and seven megajoules for copper, gold and platinum".

Happy 15th birthday to Fedora! Fifteen years ago today, November 6, 2003, Fedora Core 1 was released. See Fedora Magazine's post for a look back at the Fedora Project's beginnings.

Microsoft announced the availability of two new Linux distros for Windows Subsystem for Linux, which will coincide with the Windows 10 1809 release. ZDNet reports that the Debian-based Linux distribution WLinux is available from the Microsoft Store for $9.99 currently (normally it's $19.99). Also, OpenSUSE 15 and SLES 15 are now available from the Microsoft Store as well.

ReactOS 0.4.10 was released today. The main new feature is "ReactOS' ability to now boot from a BTRFS formatted drive". See the official ChangeLog for more details.


          Time for Net Giants to Pay Fairly for the Open Source on Which They Depend      Cache   Translate Page      

Net giants depend on open source: so where's the gratitude?

Licensing lies at the heart of open source. Arguably, free software began with the publication of the GNU GPL in 1989. And since then, open-source projects are defined as such by virtue of the licenses they adopt and whether the latter meet the Open Source Definition. The continuing importance of licensing is shown by the periodic flame wars that erupt in this area. Recently, there have been two such flarings of strong feelings, both of which raise important issues.

First, we had the incident with Lerna, "a tool for managing JavaScript projects with multiple packages". It came about as a result of the way the US Immigration and Customs Enforcement (ICE) has been separating families and holding children in cage-like cells. The Lerna core team was appalled by this behavior and wished to do something concrete in response. As a result, it added an extra clause to the MIT license, which forbade a list of companies, including Microsoft, Palantir, Amazon, Motorola and Dell, from being permitted to use the code:

For the companies that are known supporters of ICE: Lerna will no longer be licensed as MIT for you. You will receive no licensing rights and any use of Lerna will be considered theft. You will not be able to pay for a license, the only way that it is going to change is by you publicly tearing up your contracts with ICE.

Many sympathized with the feelings about the actions of the ICE and the intent of the license change. However, many also pointed out that such a move went against the core principles of both free software and open source. Freedom 0 of the Free Software Definition is "The freedom to run the program as you wish, for any purpose." Similarly, the Open Source Definition requires "No Discrimination Against Persons or Groups" and "No Discrimination Against Fields of Endeavor". The situation is clear cut, and it didn't take long for the Lerna team to realize their error, and they soon reverted the change:


          Weekend Reading: FOSS Projects      Cache   Translate Page      

FOSS Project Spotlights provide an opportunity for free and open-source project team members to show Linux Journal readers what makes their project compelling. Join us this weekend as we explore some of the latest FOSS projects in the works.

 

FOSS Project Spotlight: Nitrux, a Linux Distribution with a Focus on AppImages and Atomic Upgrades

by Nitrux Latinoamericana S.C.

Nitrux is a Linux distribution with a focus on portable application formats like AppImages. Nitrux uses KDE Plasma 5 and KDE Applications, and it also uses our in-house software suite Nomad Desktop.

 

FOSS Project Spotlight: Tutanota, the First Encrypted Email Service with an App on F-Droid

by Matthias Pfau

Seven years ago, Tutanota was being built, an encrypted email service with a strong focus on security, privacy and open source. Long before the Snowden revelations, the Tutanota team felt there was a need for easy-to-use encryption that would allow everyone to communicate online without being snooped upon.

 

FOSS Project Spotlight: LinuxBoot

by David Hendricks

Linux as firmware.

The more things change, the more they stay the same. That may sound cliché, but it's still as true for the firmware that boots your operating system as it was in 2001 when Linux Journal first published Eric Biederman's "About LinuxBIOS". LinuxBoot is the latest incarnation of an idea that has persisted for around two decades now: use Linux as your bootstrap.

 

FOSS Project Spotlight: CloudMapper, an AWS Visualization Tool

by Scott Piper

Duo Security has released CloudMapper, an open-source tool for visualizing Amazon Web Services (AWS) cloud environments.

When working with AWS, it's common to have a number of separate accounts run by different teams for different projects. Gaining an understanding of how those accounts are configured is best accomplished by visually displaying the resources of the account and how these resources can communicate. This complements a traditional asset inventory.

 

FOSS Project Spotlight: Ravada

by Francesc Guasch


          How to Create Grayscale Heightmaps for 3D Textures      Cache   Translate Page      

In a previous blog post I discussed the new 3D Textures functionality that was added to SOLIDWORKS 2019 (see How to Use 3D Texture in SOLIDWORKS). One thing discussed in that blog post was that though you can use any type of image for 3D textures, what works best are grayscale heightmap images. With this in mind I wanted to share a post that was originally presented during the beta program for SOLIDWORKS 2019. This post explores one way that you can create grayscale heightmap images. Again, thanks to my colleague Xiao Liu for his help producing this post and the video (https://youtu.be/z_jh_PrZzOs).

 

There are many software packages that can be utilized to generate grayscale heightmaps. Here we will use Blender, which is a free and open source 3D creation suite, to create grayscale heightmaps. An example will be described step-by-step to illustrate the generation process of a grayscale heightmap image.

 

Please note that this blog is not an advertisement for Blender, and the DS SOLIDWORKS Co. doesn't have any collaborations with Blender. However, I am very grateful to the Blender team for providing such a useful free tool. If you use Blender, you may consider making a donation to Blender's development fund.

 

The Outline for how to create the grayscale heightmaps:

 

  1. Generate a SOLIDWORKS solid body and save it as STL file
  2. Import the STL file into Blender
  3. Adjust the location of the 3D model in the Blender
  4. Adjust the orientation and location of the camera
  5. Generate grayscale heightmap using the node editor
  6. Generate a 3D model

 

 

Generate a 3D polyhedron model (Image 1) in SOLIDWORKS.

 

To have a proper orientation of the model after importing it into Blender, the bottom surface of the polyhedron should be located on the front plane in the SOLIDWORKS part file. Note that the 3D model needs to be saved as STL format or other formats which can be imported into Blender. Image 2 shows the resultant grayscale heightmap of this polyhedron.

 


Image 1. A SOLIDWORKS body


Image 2. A height map of the SOLIDWORKS body

 

Importing the 3D model into Blender

 

Open Blender and follow the following steps:

File >> Import >> Stl(.stl) >> Change the directory to where the 3D body has been saved


Image 3. Process for import STL file into Blender

 

Adjusting the location of the 3D model in the Blender

 

  1. Right-click the 3D model and press Shift+Ctrl+Alt+C key to show the coordinates of the polyhedron’s geometric center
  2. Left-click the graphics area and press N key to open Transform window
  3. Move the geometric center of the model to the origin by adjusting Rotation and Location as shown in the following image


Image 4. The settings for relocated model

 

Relocate and setup the camera

 

  1. Left-click the camera in Outline and move the camera above the polyhedron model by changing the values of Location and Rotation (step 2 in the following image)
  2. Press 0 key to obtain the field of view of the camera (step 3)
  3. Click the Data tab and choose Orthographic for the lens and adjust the value for Orthographic Scale (step 4)
  4. Adjust the values in the Clipping section to ensure the camera captures the view of the entire model (step 5)

Note that with an orthographic lens, objects always appear at their actual size, regardless of distance. This means that parallel lines appear parallel and do not converge. For the clipping interval, only objects within the limits are visible. Thus, users need to adjust the values of Clipping Start and End to have the entire model in the clipping interval.

5. Click the Render tab and Set the Resolution (The resolution of the final heightmap) as shown in step 6

Note that for models with a simple geometry, a resolution of 512 x 512 pixels will be fine. For a model with many details, a higher resolution setting is more suitable. However, a higher resolution grayscale heightmap image will raise the computational cost of 3D texture generation.


Image 5. Process to setup the camera

 

Generate a grayscale heightmap using Node Editor

 

  1. Expand the bottom window and change the editor type to Node Editor
  2. Click Compositing and enable Use Nodes
  3. Add a Normalize node and an Invert color node and link them
  4. Expand the Color Management in the Scene and change Display Device from sRGB to None
  5. Press the F12 key, then the heightmap of the mesh model will be generated as shown in Image 7.


Image 6. Node editor setting


Image 7. Height map for 3D model

 

 

Again, please watch the following video to see a demonstration of generating a grayscale heightmap for 3D Texture in SOLIDWORKS 2019:

 

 

https://youtu.be/z_jh_PrZzOs

 

Thank you,

-Xiao Liu

-Marlon Banta


          VirtualBox Guest-to-Host escape 0day and exploit released online      Cache   Translate Page      

Independent vulnerability researcher Sergey Zelenyuk has made public a zero-day vulnerability he discovered in VirtualBox, the popular open source virtualization software developed by Oracle.

About the vulnerability

The vulnerability affects VirtualBox 5.2.20 and earlier, and is present on the default VM configuration. "The only requirement is that a network card is Intel PRO/1000 MT Desktop (82540EM) and a mode is NAT," Zelenyuk says. Along with the details about the flaw, which allows attackers to escape … More

The post VirtualBox Guest-to-Host escape 0day and exploit released online appeared first on Help Net Security.


          MapR Solutions Architect - Perficient - National, WV      Cache   Translate Page      
Design and develop open source platform components using Spark, Java, Oozie, Kafka, Python, and other components....
From Perficient - Wed, 03 Oct 2018 20:48:20 GMT - View all National, WV jobs
          LF Commerce, an open source ecommerce dashboard. ReactJS + ExpressJS      Cache   Translate Page      
LF Commerce

An ecommerce dashboard written in ReactJS + ExpressJS.

Test account

test@test.com

123

Installation

Yarn

yarn install

NPM

npm install

How to run this?

Yarn

yarn client

NPM

npm run client

Unit Test

For every main directory (components, containers etc.), there should be a __tests__ directory for all unit test cases.

yarn test [test_directory]

How to contribute to this project?

Your contribution is appreciated. For the purpose of having good project management, I encourage you to understand the project structure and way of working before you start to contribute to this project.

├── client                    # The web frontend written in ReactJS
│   ├── public                # Static public assets and uploads
│   ├── src                   # ReactJS source code
│   │   ├── actions           # Actions and Action creators of Redux
│   │   ├── apis              # Files for REST APIs
│   │   │   ├── mocks         # Mocked API response
│   │   ├── components        # React components
│   │   |   ├── __tests__     # Unit test for components
│   │   ├── containers        # React containers
│   │   |   ├── __tests__     # Unit test for containers
│   │   ├── reducers          # React reducers
│   │   |   ├── __tests__     # Unit test for reducers
│   │   ├── sagas             # Redux saga files
│   │   |   ├── __tests__     # Unit test for sagas
│   │   ├── translations      # All language translation .json files
│   │   └── App.css           # Your customized styles should be added here
│   │   └── App.js            # ** Where React webapp routes configured.
│   │   └── index.js          # React webapp start point
└── .travis.yml               # Travis CI config file
└── .eslintrc.json            # **Don't change settings here.
└── package.json              # All project dependencies
└── app.js                    # Restful APIs written in ExpressJS
└── README.md                 # **Don't change contents here.

1. Always work on your own feature or bugfix branch.

You will need to follow the naming convention if it's a new feature: feature/xxx-xxx-xx

or fix/xxx-xxx-xx if it's a bug or other type of fixing branch.

2. Always run eslint

Before creating a PR, you should run:

yarn lint:client

to make sure all formatting or other issues have been properly fixed.

... TBC

License

LF Commerce is Apache-2.0 licensed.


          [Freelancer] Mobile app for the Opencart 3 platform      Cache   Translate Page      
From Freelancer // Hi, Looking at someone to build a mobile application for the Opencart 3.x.x.x platform. Fully Native Android / iOS App Android / iOS Support Open source & Highly Customizable Push Notification Real-Time...
          The Indian Express Script | Firstpost Script      Cache   Translate Page      
Our India Times clone is mainly developed to help people take their news-portal business online. It provides a brand-new professional news-portal script with advanced features and functionality that make the latest trends easier for users to access, and it will also help new entrepreneurs who would like to run an online business and provide a trusted, reliable and robust news service. This India Times script makes it much easier for users to access the site without any technical knowledge, because our script is made to be user-friendly. The Indian Express Script is designed on an open source PHP platform to make the script as efficient as possible for the user. The script can be customized as global or local to reach users worldwide, and a new user can simply register an account with a valid mail id and password to create an authenticated account.
          LXer: How open source in education creates new developers      Cache   Translate Page      
Like many programmers, I got my start solving problems with code. When I was a young programmer, I was content to code anything I could imagine—mostly games—and do it all myself. I didn't need help; I just needed less sleep. It's a common pitfall, and one that I'm happy to have climbed out of with the help of two important realizations:

read more
          TuxMachines: Gitbase: Exploring Git repos with SQL      Cache   Translate Page      

Git has become the de-facto standard for code versioning, but its popularity didn't remove the complexity of performing deep analyses of the history and contents of source code repositories.

SQL, on the other hand, is a battle-tested language to query large codebases as its adoption by projects like Spark and BigQuery shows.

So it is just logical that at source{d} we chose these two technologies to create gitbase: the code-as-data solution for large-scale analysis of git repositories with SQL.

Gitbase is a fully open source project that stands on the shoulders of a series of giants which made its development possible, this article aims to point out the main ones.
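
As a concrete illustration of the "code as data" idea described above (this example is not from the article itself): gitbase speaks the MySQL wire protocol, so any stock MySQL client can query it. The sketch below assumes a gitbase server running locally with its documented schema, default credentials, and the pymysql package installed.

# Hedged sketch: query a local gitbase server over the MySQL protocol.
# Assumes gitbase is listening on 127.0.0.1:3306 with repositories already
# indexed, and that the `pymysql` package is installed.
import pymysql

connection = pymysql.connect(host="127.0.0.1", port=3306, user="root", db="gitbase")
try:
    with connection.cursor() as cursor:
        # Count commits per repository, a typical large-scale analysis query.
        cursor.execute(
            "SELECT repository_id, COUNT(*) AS commit_count "
            "FROM commits "
            "GROUP BY repository_id "
            "ORDER BY commit_count DESC"
        )
        for repository_id, commit_count in cursor.fetchall():
            print(repository_id, commit_count)
finally:
    connection.close()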

Read more

read more


          Meet Franz, an open source messaging aggregator      Cache   Translate Page      

If you are like me, you use several different chat and messaging services during the day. Some are for work and some are for personal use, and I find myself toggling through a number of them as I move from apps to browser tabs—here, there, and everywhere.


read more
          How open source in education creates new developers      Cache   Translate Page      
Learning to program

Like many programmers, I got my start solving problems with code. When I was a young programmer, I was content to code anything I could imagine—mostly games—and do it all myself. I didn't need help; I just needed less sleep. It's a common pitfall, and one that I'm happy to have climbed out of with the help of two important realizations:


read more
          Developer - Java J2EE      Cache   Translate Page      
DC-Washington, Developer – Java J2EE Washington, DC MUST : Java Developer 2- 3 years' experience of hands on development experience with open source Java Stack Deep Understanding and experience with designing and implementing highly scalable web applications in a cloud environment required 2-3 years' Experience with continuous integration, continuous delivery, and cloud solutions such as Amazon Web Services requ
          React Already Did That at All Things Open 2018      Cache   Translate Page      
All Things Open is a large, community-created open source conference in Raleigh, North Carolina, with nearly 4,000 attendees and 20 concurrent sessions. At this year’s event, I was invited to deliver a talk similar to one I had presented at JSConf titled “React Already Did That.” The session itself is not actually about React, but […]
           MOL chooses Finlands's NAPA performance monitoring and analysis      Cache   Translate Page      
Finland's NAPA, the leading maritime software, services and data analysis provider, has announced that it has signed an agreement with MOL (Mitsui OSK Lines Ltd) to provide performance analysis and reporting for 80 of MOL's time-chartered bulk carriers with NAPA Fleet Intelligence, according to Shipping Gazette.

NAPA Fleet Intelligence combines NAPA's naval architectural expertise and proprietary data modelling with open source data and MOL's own reports to produce industry-leading analytics and actionable insights with zero onboard installation.

Through its use of NAPA Fleet Intelligence, MOL will receive fleet-wide hull performance analysis, voyage-by-voyage performance reports, and voyage planning that's entirely cloud-based.

This will allow MOL to further enhance its business efficiency by planning its fleet operations with better understanding of individual ship performance, more accurate fuel consumption estimates, and greater clarity on arrival times and voyage duration.

NAPA's voyage reporting software is the first to use the data from noon-reports as well as remote analytics to precisely categorise fuel use into individual categories such as calm sea consumption. It can also identify the causes of increased fuel use, including environmental effects such as hull fouling.

The noon-reports are further combined with remote-sensed data such as AIS, chart data and environmental data. This is processed through algorithms and hydrodynamic calculations.

Said MOL managing executive officer Toshiaki Tanaka: "NAPA Fleet Intelligence provides us with the tools we need to accurately assess the technical and commercial performance of our fleet to a level of detail that was previously unattainable for chartered ships."

Said NAPA vice president Naoki Mizutani: "With current freight rates where they are, NAPA's Fleet Intelligence provides charterers with the opportunity to evaluate their fleet, fuel use, and prospective voyage routes and their relative profitability in a single platform with no disruption, or additional costs caused by sensor installations."

NAPA operates from 11 offices across Asia, Europe and the Americas supported by its Helsinki headquarters. To date, NAPA has over 400 user organisations for its design solutions and over 2,500 installations onboard vessels.

Source: Transportweekly
          GPL Initiative Expands with 16 Additional Companies Joining Campaign for Greater Predictability in Open Source Licensing      Cache   Translate Page      

          Dropsolid: Dropsolid at Drupal Europe      Cache   Translate Page      
Dropsolid-booth at Drupal Europe

Drupal Europe


Last September Dropsolid sponsored and attended Drupal Europe. Compared to the North American conferences, getting Europeans to travel to another location is challenging, certainly when it has to compete with so many high-quality conferences such as Drupalcamps, Drupal Dev Days, Frontend United, Drupalaton, Drupaljam and Drupal Business Days. I'm happy for the team that they succeeded in making Drupal Europe profitable; this is a huge accomplishment and it also sends a strong signal to the market!

Knowing these tendencies, it was amazing to see that there is a huge market fit for the kind of conference that Drupal Europe filled in. It is also a great sign for Drupal as a base technology and for the growth of Drupal. Hence, for Dropsolid it was a must to attend, help and sponsor such an event, not only because it helps us gain visibility in the developer community, but also to connect with the latest technologies surrounding the Drupal ecosystem.

The shift to decoupled projects is a noticeable one for Dropsolid, and even the Dropsolid platform itself is a decoupled Drupal project using Angular as our frontend. Next to that, we had a demo at our booth that showed a web VR environment in our Oculus Rift where the content came from a Drupal 8 application.

 

People trying our VR-demo at Drupal Europe

 

On top of that, Drupal Europe was so important to us that our CTO helped the content team as a volunteer, selecting the sessions related to DevOps & Infrastructure. Nick has been closely involved in this area and we're glad to donate his time to help curate and select quality sessions for Drupal Europe.

None of this would have been possible without the support of our own Government who supports companies like Dropsolid to be present at these international conferences. Even though Drupal Europe is a new concept, it was seen and accepted as a niche conference that allows companies like Dropsolid to get brand awareness and knowledge outside of Belgium. We thank them for this support!

 

Afbeeldingsresultaat voor flanders investment and trade

 

From Nick: "One of the most interesting sessions for me was the keynote about the "Future of the open web and open source". The panel included, next to Dries, Barb Palser from Google, DB Hurley from Mautic and Heather Burns. From what we gathered, Matt Mullenweg was also supposed to be there but he wasn't present. Too bad, as I was hoping to see such a collaboration and discussion. The discussion that got me the most was about the "creepifying" of our personal data and how this could be reversed. How can one gain control over access to one's own data, and how can one revoke such access? Just imagine how many companies have your personal name and email, and how technology could disrupt such a world where an individual controls what is theirs. I recommend watching the keynote in any case!"

 

 

We’ve also seen how Drupal.org could look like with the announced integration with Gitlab. I can’t recall myself being more excited when it comes to personal maintenance pain. In-line editing of code being one of the most amazing ones. More explanation can be found at https://dri.es/state-of-drupal-presentation-september-2018.

 

 

From Nick: 
“Another session that really caught our eye and is worthy of a completely separate blogpost is the session of Markus Kalkbrenner about Advanced Solr. Perhaps to give you some context, I’ve been working with Solr for more than 9 years. I can prove it with a commit even!  https://cgit.drupalcode.org/apachesolr_ubercart/commit/?id=b950e78. This session was mind blowing. Markus used very advanced concepts from which I hardly knew the existence of, let alone found an application for it. 

One of the use cases is a per-user sort based on the favorites of a user. The example Markus used was a recipe site where you can rate recipes. Obviously you could sort on the average rating, but what if you want to sort the recipes by "your" rating? This might seem trivial, but it is a very hard problem to solve as you have to normalize a dataset in Solr, which is by default a denormalized dataset.

Now, what if you want to use this data to get personalized recommendations? This means we have to learn about the user and use this data on the fly to get these recommendations based on the votes the user applied to recipes. Watch how this works in Markus's recording and be prepared to have your mind blown."

 

 

There were a lot of other interesting sessions; most of them were recorded and their details can be found and viewed at https://www.drupaleurope.org/program/schedule. If you are interested in the future of the web and how Drupal plays an important role in this, we suggest you take a look. If you are more into meeting people in real time and being an active listener, there is Drupalcamp Ghent (http://drupalcamp.be) on the 23rd and 24th of November. Dropsolid is also a proud sponsor of this event.

And an additional tip: Markus’s session will also be presented there ;-)


          Mozilla Addons Blog: Friend of Add-ons: Jyotsna Gupta      Cache   Translate Page      

Our newest Friend of Add-ons is Jyotsna Gupta! Jyotsna first became involved with Mozilla in 2015 when she became a Firefox Student Ambassador and started a Firefox club at her college. She has contributed to several projects at Mozilla, including localization, SuMo, and WebMaker, and began exploring Firefox OS app development after attending a WoMoz community meetup in her area.

In 2017, a friend introduced Jyotsna to browser extension development. Always curious and interested in trying new things, she created PrivateX, an extension that protects user privacy by opening websites that ask for critical user information in a private browsing window and removing Google Analytics tracking tokens. With her newfound experience developing extensions, Jyotsna began mentoring new extension developers in her local community, and joined the Featured Extensions Advisory Board.

After wrapping up two consecutive terms on the board, she served on the judging panel for the Firefox Quantum Extensions Challenge, evaluating more than 100 extensions to help select finalists for each award category. Currently, she is an add-on content reviewer on addons.mozilla.org and a Mozilla Rep. She frequently speaks about cross-browser extension development at regional events.

When asked about her experience contributing to Mozilla, Jyotsna says, “It has been a wonderful learning experience for me as a Mozillian. When I was a student, Mozilla was something that I could add to my profile to enhance my resume. There was a time when I refrained myself from speaking up, but today, I’m always ready to speak in front of a huge number of people. Getting involved with Mozilla helped me in meeting like-minded people around the globe, working with diverse teams, learned different cultures, gained global exposure and a ton of other things. 
And I’m fortunate enough to have wonderful mentors around me, boosting me up to see a brighter side in every situation.”

Jyotsna also has advice for newcomers to open source projects. “To the contributors who are facing imposter syndrome, trust me, you aren’t alone. We were all there once. We are here for you. May the force be with you.”

Thank you so much for your many wonderful contributions, Jyotsna!

To learn more about how to get involved in the add-ons community, please take a look at our wiki to see current contribution opportunities.

The post Friend of Add-ons: Jyotsna Gupta appeared first on Mozilla Add-ons Blog.


          Comment on A first look at changes coming in ASP.NET Core 3.0 by Bertasoft      Cache   Translate Page      
I know you hate those technologies, but please add support for WebForms, like you did for WinForms. There are a lot of projects built on it that cannot be rewritten and must be supported and expanded. Don't you want to do this? Open source it and move it to GitHub.
          Domoticz - open source domotica systeem - deel 3      Cache   Translate Page      
Replies: 11884 Last poster: darklord007 at 07-11-2018 18:06 Topic is Open. Since I started running Domoticz (with an RFXCOM 433 and a P1 cable), the RFXCOM stops working every so often. I then get the following message: 2018-11-07 17:44:00.530 Error: RFXtrx433e hardware (4) nothing received for more than 30 Minutes!.... 2018-11-07 17:44:01.531 Error: Restarting: RFXtrx433e. The P1 keeps working fine. If I unplug the RFXCOM and plug it back in, it works again... Does anyone have an idea what could trigger this and how I can fix it?
          Wayne Beaton: Eclipse Foundation Specification Process, Part I: The EDP      Cache   Translate Page      

The Eclipse Foundation Specification Process (EFSP) was authored as an extension to the Eclipse Development Process (EDP). With this in mind, before we can discuss the EFSP, we’ll start with a quick EDP primer.

At a high (and very simplified) level, the EDP looks a little something like this:

[Diagram: the EDP lifecycle]

 

All open source projects at the Eclipse Foundation start life as a proposal. A proposal literally proposes the creation of a new open source project: the proposal document suggests a name for the new project, and defines many things, including a description and scope of work. The proposal also serves as the nomination and election of all project committers and project leads.

The proposal is posted for community feedback for a minimum of two weeks; during that time, the Eclipse Foundation staff works behind the scenes to ensure that the project’s name can be claimed as a trademark, a mentor has been identified, the licensing scheme works, and more. The community feedback period ends with a creation review which lasts for a minimum of one week. The creation review is the last opportunity for the community and the members of the Eclipse Foundation to provide feedback and express concerns regarding the project.

After successful completion of the creation review, and the project resources have been provisioned by the Eclipse Webmaster team, the project team engages in development. Project committers push code to into the project’s source code repositories, and produce and disseminate milestone (snapshot) builds to solicit feedback as part of an iterative development process.

When the time comes to deliver a formal release, the project team produces release candidates and engages in a release review. A release review provides an opportunity for the project team to demonstrate to their Project Management Committee (PMC) that their content is ready for release, work with the Eclipse Intellectual Property Team to ensure that all of the required IP due diligence has been completed successfully, and give the community and membership a final opportunity to provide feedback and express concerns. Following a successful release review, the project team will push out their final (GA) build and announce the official release to their community via established channels.

The proposal serves as the first plan for the new open source project. Subsequent releases start with the creation of some sort of plan before reengaging in the development (release) cycle. The level of formality in the planning process varies by project. For many projects, the plan is little more than an acknowledgement that further development is needed. But for some projects, planning is a well-defined open process by which the committers work with their communities to identify themes and issues that will be addressed by the release.

In my next post, I'll discuss how this process is extended by the EFSP. Then, I'll start digging into the details.

You can find the community draft of the Eclipse Foundation Specification Process here.


          Home Assistant - Open source Python3 home automation      Cache   Translate Page      
Replies: 7483 Last poster: alexswart at 07-11-2018 17:48 Topic is Open. I often switch my lights with a KaKu (KlikAanKlikUit) remote control using the group button. I can't get the status in Home Assistant to update when I do; I have already added aliases but without success. Has anyone gotten this working?
          FDA unveils open source code for collecting patient data - Healthcare IT News      Cache   Translate Page      


FDA unveils open source code for collecting patient data
Healthcare IT News
The U.S. Food and Drug Administration on Tuesday posted open source code for its MyStudies App to enable researchers to collect patient-provided data. WHY IT MATTERS. FDA explained that after going through a pilot test, the MyStudies App is not what's ...
United States: FDA Puts Liquid Nitrogen Use On Ice In New Food Code Interpretation - Mondaq News Alerts


          Why open source isn't just about code - TechRepublic      Cache   Translate Page      


Why open source isn't just about code
TechRepublic
TechRepublic's Dan Patterson asks Abby Cabunoc Mayes of the Mozilla Foundation to discuss how open source is about code, but equally about culture. The following is an edited transcript of the interview. Dan Patterson: Open source code makes the ...


          How to add app icons and splash screens to a React Native app in staging and production      Cache   Translate Page      


React Native was designed to be “learn once, write anywhere,” and it is usually used to build cross platform apps for iOS and Android. And for each app that we build, there are times we need to reuse the same code, build and tweak it a bit to make it work for different environments. For example, we might need multiple skins, themes, a free and paid version, or more often different staging and production environments.

And the task that we can’t avoid is adding app icons and splash screens to our apps.

In fact, adding a staging and production environment and adding app icons requires us to use Xcode and Android Studio, and we do it the same way we would with native iOS or Android projects.

Let's call our app MyApp and bootstrap it with react-native init MyApp . There are, of course, tons of libraries to help us with managing different environments.

In this post, we will do it just like we would with native apps, so that we know the basic steps.

Build configuration, target, build types, production flavor, and build variant

There is some terminology we need to remember. In iOS, debug and release are called build configurations, and staging and production are called targets.

A build configuration specifies a set of build settings used to build a target’s product in a particular way. For example, it is common to have separate build configurations for debug and release builds of a product.

A target specifies a product to build and contains the instructions for building the product from a set of files in a project or work-space. A target defines a single product; it organizes the inputs into the build system — the source files and instructions for processing those source files — required to build that product. Projects can contain one or more targets, each of which produces one product

In Android, debug and release are called build types, and staging and production are called product flavors. Together they form build variants.

For example, a “demo” product flavor can specify different features and device requirements, such as custom source code, resources, and minimum API levels, while the “debug” build type applies different build and packaging settings, such as debug options and signing keys. The resulting build variant is the “demoDebug” version of your app, and it includes a combination of the configurations and resources included in the “demo” product flavor, “debug” build type, and main/ source set.

Staging and production targets in iOS

Open MyApp.xcodeproj inside ios using Xcode. Here is what we get after bootstrapping:


React Native creates iOS and tvOS apps, and two test targets. In Xcode, a project can contain many targets, and each target means a unique product with its own build settings — Info.plist and app icons.

Duplicate target

If we don’t need the tvOS app, we can delete the MyApp-tvOS and MyApp-tvOSTests . Let’s use MyApp target as our production environment, and right click -> Duplicate to make another target. Let’s call it MyApp Staging.


Each target must have unique bundle id. Change the bundle id of MyApp to com.onmyway133.MyApp and MyApp Staging to com.onmyway133.MyApp.Staging.


Info.plist

When we duplicate MyApp target , Xcode also duplicates Info.plist into MyApp copy-Info.plist for the staging target. Change it to a more meaningful name Info-Staging.plist and drag it to the MyApp group in Xcode to stay organised. After dragging, MyApp Staging target can’t find the plist, so click Choose Info.plist File and point to the Info-Staging.plist.


Scheme

Xcode also duplicates the scheme when we duplicate the target, so we get MyApp copy:


Click Manage Schemes in the scheme drop-down to open Scheme manager:


I usually delete the generated MyApp copy scheme, then I create a new scheme again for the MyApp Staging target. You need to make sure that the scheme is marked as Shared so that it is tracked into git.


For some reason, the staging scheme does not have all the things set up like the production scheme. You can run into issues like ‘React/RCTBundleURLProvider.h’ file not found or RN: ‘React/RCTBridgeModule.h’ file not found . It is because React target is not linked yet.

To solve it, we must disable Parallelise Build and add React target and move it above MyApp Staging.


Staging and production product flavors in Android

Open the android folder in Android Studio. By default there are only debug and release build types:


They are configured in the app module build.gradle:

buildTypes {
    release {
        minifyEnabled enableProguardInReleaseBuilds
        proguardFiles getDefaultProguardFile("proguard-android.txt"), "proguard-rules.pro"
    }
}

First, let’s change application id to com.onmyway133.MyApp to match iOS. It is not required but I think it’s good to stay organised. Then create two product flavors for staging and production. For staging, let’s add .Staging to the application id.

From Android Studio 3, “all flavors must now belong to a named flavor dimension” — normally we just need default dimensions. Here is how it looks in build.gradle for our app module:

android {
    compileSdkVersion rootProject.ext.compileSdkVersion
    buildToolsVersion rootProject.ext.buildToolsVersion
    flavorDimensions "default"

defaultConfig {
        applicationId "com.onmyway133.MyApp"
        minSdkVersion rootProject.ext.minSdkVersion
        targetSdkVersion rootProject.ext.targetSdkVersion
        versionCode 1
        versionName "1.0"
        ndk {
            abiFilters "armeabi-v7a", "x86"
        }
    }
    splits {
        abi {
            reset()
            enable enableSeparateBuildPerCPUArchitecture
            universalApk false  // If true, also generate a universal APK
            include "armeabi-v7a", "x86"
        }
    }
    buildTypes {
        release {
            minifyEnabled enableProguardInReleaseBuilds
            proguardFiles getDefaultProguardFile("proguard-android.txt"), "proguard-rules.pro"
        }
    }

productFlavors {
        staging {
            applicationIdSuffix ".Staging"
        }

        production {

        }
    }
}

Click Sync Now to let gradle do the syncing job. After that, we can see that we have four build variants:


How to run staging and production

To run the Android app, we can specify a variant like react-native run-android --variant=productionDebug, but I prefer to go to Android Studio, select the variant, and run.

To run the iOS app, we can specify the scheme like react-native run-ios --simulator='iPhone X' --scheme="MyApp Staging" . As of react-native 0.57.0 this does not work. But it does not matter as I usually go to Xcode, select the scheme, and run.

Add app icon for iOS

According to the Human Interface Guideline, we need app icons of different sizes for different iOS versions, device resolutions, and situations (notification, settings, Spring Board). I’ve crafted a tool called IconGenerator, which was previously mentioned in Best Open Source Tools For Developers. Drag the icon that you want — I prefer those with 1024x1024 pixels for high resolution app icons — to the Icon Generator MacOS app.


Click Generate and we get AppIcon.appiconset . This contains app icons of the required sizes that are ready to be used in Asset Catalog. Drag this to Asset Catalog in Xcode. That is for production.

For staging, it’s good practice to add a “Staging” banner so that testers know which is staging, and which is production. We can easily do this in Sketch.


Remember to set a background, so we don’t get a transparent background. For an app icon with transparent background, iOS shows the background as black which looks horrible.
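
If you would rather script this step than use Sketch, the snippet below is one possible approach using Python and Pillow (not part of the original workflow): it flattens the 1024x1024 icon onto a solid background and stamps a "Staging" banner on it. The file names and the font path are placeholders.

# Hedged sketch: flatten the icon background and add a "Staging" banner.
# Assumes the Pillow package is installed and a TTF font file is available;
# icon-1024.png / icon-staging-1024.png are placeholder file names.
from PIL import Image, ImageDraw, ImageFont

SIZE = 1024
icon = Image.open("icon-1024.png").convert("RGBA").resize((SIZE, SIZE))

# Flatten onto a solid background so iOS never renders transparency as black.
flattened = Image.new("RGBA", (SIZE, SIZE), (255, 255, 255, 255))
flattened.alpha_composite(icon)

# Draw a banner across the bottom with the word "Staging".
draw = ImageDraw.Draw(flattened)
banner_height = SIZE // 5
draw.rectangle([(0, SIZE - banner_height), (SIZE, SIZE)], fill=(200, 30, 30, 255))
font = ImageFont.truetype("DejaVuSans.ttf", banner_height // 2)
draw.text((SIZE // 20, SIZE - banner_height + banner_height // 5), "Staging",
          font=font, fill=(255, 255, 255, 255))

flattened.convert("RGB").save("icon-staging-1024.png")

The resulting icon-staging-1024.png can then be dropped into IconGenerator exactly like the production icon, as described below.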

After exporting the image, drag the staging icon to the IconGenerator the same way we did earlier. But this time, rename the generated appiconset to AppIcon-Staging.appiconset. Then drag this to Asset Catalog in Xcode.

For the staging target to use staging app icons, open MyApp Staging target and choose AppIcon-Staging as App Icon Source.


Add app icon for Android


I like to switch to Project view, as it is easier to change app icons. Click res -> New -> Image Asset to open Asset Studio. We can use the same app icons that we used in iOS:


Android 8.0 (API level 26) introduced Adaptive Icons so we need to tweak the Resize slider to make sure our app icons look as nice as possible.

Android 8.0 (API level 26) introduces adaptive launcher icons, which can display a variety of shapes across different device models. For example, an adaptive launcher icon can display a circular shape on one OEM device, and display a squircle on another device. Each device OEM provides a mask, which the system then uses to render all adaptive icons with the same shape. Adaptive launcher icons are also used in shortcuts, the Settings app, sharing dialogs, and the overview screen. — Android developers

We are doing for production first, which means the main Res Directory. This step will replace the existing placeholder app icons generated by Android Studio when we bootstrapped React Native projects.


Now that we have production app icons, let’s make staging app icons. Android manages code and assets via convention. Click on src -> New -> Directory and create a staging folder. Inside staging, create a folder called res . Anything we place in staging will replace the ones in main — this is called source sets.

You can read more here: Build with source sets.

You can use source set directories to contain the code and resources you want packaged only with certain configurations. For example, if you are building the “demoDebug” build variant, which is the crossproduct of a “demo” product flavor and “debug” build type, Gradle looks at these directories, and gives them the following priority:
src/demoDebug/ (build variant source set)
src/debug/ (build type source set)
src/demo/ (product flavor source set)
src/main/ (main source set)

Right click on staging/res -> New -> Image Asset to make app icons for staging. We use the same staging app icons as in iOS, but this time we choose staging as the Res Directory. This way Android Studio knows to generate a different ic_launcher set and put it into staging.

Add launch screen for iOS

The splash screen is called a Launch Screen in iOS, and it is important.

A launch screen appears instantly when your app starts up. The launch screen is quickly replaced with the first screen of your app, giving the impression that your app is fast and responsive

In the old days, we needed to use static launch images with different sizes for each device and orientation.

Launch Screen storyboard

Nowadays the recommended way is to use a Launch Screen storyboard. The iOS project from React Native comes with LaunchScreen.xib, but xib is a thing of the past. Let's delete it and create a file called Launch Screen.storyboard.

Right click on the MyApp folder -> New and choose Launch Screen. Add it to both targets, as we usually show the same splash screen for both staging and production.

Image Set

Open asset catalog, right click and select New Image Set . We can name it anything. This will be used in the Launch Screen.storyboard.

Open Launch Screen.storyboard and add a UIImageView. If you are using Xcode 10, click the Library button in the upper right corner and choose Show Objects Library.

Set the image for the Image View, and make sure Content Mode is set to Aspect Fill, as this ensures that the image always covers the full screen (although it may be cropped). Then pin the Image View to the View, not the Safe Area, using constraints. You do this by Control-dragging from the Image View (splash) to the View.

Constraints without margin

Click into each constraint and uncheck Relative to Margin. This pins our Image View to the very edges of the view, with no margin at all.

Now go to both targets and select Launch Screen.storyboard as Launch Screen File:

On iOS, the launch screen is often cached, so you probably won’t see the changes. One way to avoid that is to delete the app and run it again.

Add a launcher theme for Android

There are several ways to add a splash screen on Android, from using a launcher theme to a dedicated splash Activity or a timer. For me, a reasonable splash screen for Android should be a very minimal image.

As there are many Android devices with different ratios and resolutions, a full-screen splash image will probably not scale correctly on every device, so keep it simple. This is purely a UX consideration.

For the splash screen, let’s use the launcher theme with splash_background.xml .

Learn about Device Metrics

There is no single splash image that suits all Android devices. A more logical approach is to create multiple splash images for all common resolutions in portrait and landscape, or to design a minimal splash image that works everywhere. You can find more info here: Device Metrics.

Here is how to add a splash screen in 4 easy steps:

Add splash image

We usually need a common splash screen for both staging and production. Drag an image into main/res/drawable. Android Studio seems to have a problem with recognising some jpg images for the splash screen, so it's best to choose png images.

Add splash_background.xml

Right click on drawable -> New -> Drawable resource file. Name it whatever you want; I chose splash_background.xml. Choose layer-list as the root element:

A Layer List means "a Drawable that manages an array of other Drawables. These are drawn in array order, so the element with the largest index is drawn on top". Here is what splash_background.xml looks like:

<?xml version="1.0" encoding="utf-8"?>

<!-- The android:opacity="opaque" line is critical in preventing a flash of black as your theme transitions. -->
<layer-list xmlns:android="http://schemas.android.com/apk/res/android"
    android:opacity="opaque">

    <!-- The background color, preferably the same as your normal theme -->
    <item android:drawable="@android:color/white"/>

    <!-- Your splash image -->
    <item>
        <bitmap
            android:src="@drawable/iron_man"
            android:gravity="center"/>

    </item>

</layer-list>

Note that we point to the splash image we added earlier with android:src="@drawable/iron_man".

Declare style

Open styles.xml and add SplashTheme:

<style name="SplashTheme" parent="Theme.AppCompat.NoActionBar">
    <item name="android:windowBackground">@drawable/splash_background</item>
</style>

Use SplashTheme

Go to AndroidManifest.xml and change the theme of the launcher activity, which has the category android:name="android.intent.category.LAUNCHER". Change it to android:theme="@style/SplashTheme". For React Native, the launcher activity is usually MainActivity. Here is how AndroidManifest.xml looks:

<manifest xmlns:android="http://schemas.android.com/apk/res/android"
    package="com.myapp">

    <uses-permission android:name="android.permission.INTERNET" />
    <uses-permission android:name="android.permission.SYSTEM_ALERT_WINDOW"/>

    <application
      android:name=".MainApplication"
      android:label="@string/app_name"
      android:icon="@mipmap/ic_launcher"
      android:allowBackup="false"
      android:theme="@style/AppTheme">
      <activity
        android:name=".MainActivity"
        android:label="@string/app_name"
        android:configChanges="keyboard|keyboardHidden|orientation|screenSize"
        android:theme="@style/SplashTheme"
        android:windowSoftInputMode="adjustResize">
        <intent-filter>
            <action android:name="android.intent.action.MAIN" />
            <category android:name="android.intent.category.LAUNCHER" />
        </intent-filter>
      </activity>
      <activity android:name="com.facebook.react.devsupport.DevSettingsActivity" />
    </application>

</manifest>

Run the app now and you should see the splash screen showing when the app starts.

Managing environment configurations

So far, the differences between staging and production are just app names, application ids, and app icons. In practice, we probably also use different API keys and backend URLs for staging and production.

Right now the most popular library to handle these scenarios is react-native-config, which is said to “bring some 12 factor love to your mobile apps”. It requires lots of steps to get started, and I hope there is a less verbose solution.
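
As a rough illustration (not from the setup above), here is a minimal sketch of how react-native-config is typically consumed once its installation steps are done; the variable names API_URL and API_KEY and the .env values are hypothetical:

// .env.staging (hypothetical contents)
// API_URL=https://api-staging.example.com
// API_KEY=staging-key

// Anywhere in the JavaScript code:
import Config from 'react-native-config';

const api = {
  baseUrl: Config.API_URL, // resolved at build time from the .env file for the chosen flavor/scheme
  apiKey: Config.API_KEY,  // hypothetical variable, shown only for illustration
};

console.log('Using backend', api.baseUrl);

The point is that each scheme or product flavor can point at a different .env file (for example .env.staging vs .env.production), which lines up with the staging/production split built in this post.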

Where to go from here

In this post, we touched Xcode and Android Studio more than Visual Studio Code, but this was inevitable. I hope this post was useful to you. Here are some more links to read more about this topic:

Original post https://medium.freecodecamp.org/how-to-add-app-icons-and-splash-screens-to-a-react-native-app-in-staging-and-production-d1dab615e7c6


          What I learned building my first slack bot.       Cache   Translate Page      

I built a Slack bot which is now live in Slack's App Directory, and realized that I had to face some challenges I wasn't prepared for.

About our app:
Olaph is a bot which helps you take part in your stand-ups (or dailies, weeklies, or however you call them at your company :) a quick meeting where team members update each other on their current progress and challenges).
You can create a simple standup where you define the members, days, time, and questions which should be asked of your team, and Olaph will send those questions to your team members and broadcast their answers to a specified channel.
We don't use it as a replacement for our dailies, but as a reminder where you can think about what you actually did before the meeting starts (who doesn't know those thinking breaks when someone doesn't remember what they did yesterday).
Also, you can comment on the answers of others without interrupting the daily.

First of all:
The Slack support is the best I have worked with so far: they respond fast and precisely, gave me tips, and acted on feedback properly, e.g. they changed the documentation in some places which I found confusing.

Slack suggests some third-party SDKs from developers who built wrappers for different languages, but none of them really suited our needs, so we started from scratch. (We will publish our wrapper as well, but to make it open source we have to change and extend some things.)

The start was pretty straightforward: we created a Slack app, registered our endpoints, and tested it in a dev workspace.

When we were in our finishing phase, things started to become interesting.
To submit your app to the Slack App Directory, you have to create a fresh workspace with your app installed and test users, and give Slack access to it.
They have a submission checklist where all necessary steps are described: https://api.slack.com/docs/slack-apps-checklist.
So far so good: we clicked submit, checked all points, and submitted Olaph.
After some days, the Slack team had some minor change requests (just the naming of slash commands and command parameters).
And then we were allowed to publish our app!

Said and done: Olaph was live and some workspaces installed it.
The first week was horrible, errors here and there, thankfully no crashes, but some users were not able to use Olaph properly.
Some of you might know this feeling: you built something, tried to test every part of your code, and suddenly nothing works.
And then I made the first mistake: I started to freak out.
"I tested the whole flow, I created a blank workspace, installed the app, used it, everything worked on my development system, how in the world is it possible that there are so many errors?"
I got a mail from a nice woman who asked for help because she wasn't able to create a standup.
That calmed my mind a bit; there are actually users that ask for help instead of uninstalling everything and ignoring my app forever.

The challenge was that Slack sends different data to your backend when it's used in enterprise environments.
There is some additional/different data, mixed user ids, and things I didn't know before, because I didn't test our app on enterprise workspaces. Of course I didn't: I don't even know what that costs, because for enterprise plans you have to contact Slack.

Well, for companies it might be expensive (or maybe not; as mentioned, I don't know the prices), but I am a developer, and I found out that you can request an enterprise test instance for developers.
So I immediately requested an instance and... waited.
In the meantime I had to fix the errors I saw in our logs, tell the users that the error was fixed, and wait for the next one.

It took about a week until I could set up my test enterprise and test the app there. After I fixed every error and understood how enterprise workspaces work, I could publish a new release and our users could finally use Olaph properly.

What I miss at Slack are statistics and numbers about my app: when you publish your app to the App Directory, you don't get any information about clicks, installs, uninstalls, how often slash commands are used, or anything like that.
Of course, you are able to collect that data yourself, but I would have to invest time in that which could instead be used to improve or extend the product.

Slack has a public roadmap for developers to give insight into their current priorities, but the cards are not very detailed, so I don't really know whether we will get real statistics and numbers. They are pretty active though, and I am pretty confident that Slack will give us tools to improve our apps.

So far I have really enjoyed working with Slack and their API, and I am looking forward to further changes and improvements.


          Progress and roadblocks: a journey into open-source      Cache   Translate Page      

Enhancing generators for ES6 in the Sequelize CLI

This is the story of my discovery of ES6-class style model definitions with Sequelize (see Using ES6 classes for Sequelize 4 models). The realisation that the current Sequelize CLI model generator didn’t support it (as is expected since it’s not the most common way of doing it). Finally, the impetus to just implement it myself.

In other words, I started using a style of Sequelize models that the Sequelize CLI didn’t support. I didn’t get an answer on a “feature request” issue, so I just forked it, implemented it and published it.

The Free Open Source Software movement has its ups and downs, its hype and abandoned projects. One of the great aspects of it is that you can always (depending on the license) just fork something and publish it 🙂.

Some links:

Live-tweet storms:

Table of contents:

Dealing with a GitHub fork repository 🍴

  1. Fork using the GitHub UI
  2. Clone your fork git clone git@github.com:USERNAME/REPONAME.git
    • eg. git clone git@github.com:HugoDF/sequelize-cli.git
  3. Create upstream remote
    1. git remote add upstream https://github.com/sequelize/cli
  4. (To sync with upstream) fetch upstream remote and merge it into local branch (from GitHub Help Syncing a Fork)
    1. Get latest from upstream git fetch upstream
    2. Merge changes git merge upstream/master
    3. Send upstream changes to fork git push

Following setup instructions 📐

Getting a project set up can be daunting; luckily, Sequelize CLI has some steps in CONTRIBUTING.md. They're the following:

  1. npm install
    • Didn’t work on Node 10, something to do with node-gyp and SQLite
  2. npm test
    • takes 10-15 minutes
    • failed on latest master

My goals 🛒

I wanted to:

  1. Update sequelize model:create to support a second template, which uses an ES6 class model 👍 when passed a --class flag (see the sketch just after this list); the tasks are as follows:
    1. Add support for a --class flag, somewhere in model_generate.js
    2. Check for args.class in model_helper.js generateFileContent, and switch templates depending on that
    3. Create a class-model.js file which is a template similar to src/assets/models/model.js but uses ES6 class syntax for the Sequelize model
    4. Use class-model.js instead of model.js in model_helper#generateFileContent when args.class is set (ie. true)
  2. Update the migration generator (these are totally arbitrary choices, but things I've found myself wanting to do in Sequelize projects):
    1. use object shorthand notation for functions instead of arrow functions
    2. (Ended up not doing this, because it’s trivial and nitpicky) use (sequelize, DataTypes) -> Promise function signature, ie. rename queryInterface and Sequelize to sequelize and DataTypes respectively
  3. (Won’t do) Update model/index.js to a form that is more explicit and more in line with what I would use in production
    • Why? This isn’t something that is compatible with current behaviour of the template, you would lose auto-loading of all models in the models folder.

Getting tests to pass 🚫

  1. Failing locally…
  2. Was working for https://github.com/sequelize/cli

Why?

2.1/ Day 2 of trying to hack on Sequelize CLI

I’ve got green tests… it turns out master is a fake ✅ passing build.

It loads latest sequelize-cli from npm, soooo isn’t testing the code in master.

Hard reset to latest released npm version fixed the test 👍

— Hugo Di Francesco (@hugo__df) 31 October 2018

Sequelize CLI Continuous Integration isn’t set up how you expect it to be set up:

  • What you expect: checkout the commit’s code and test it
  • What it does: fetch sequelize-cli from npm, and test that… so it’s testing the latest published release, not the latest code in the repo.

Tests still take a while ⏳

  • they’re mostly integration tests: writing files, writing to the DB and having to setup/teardown every so often
  • on my machine a bunch of tests run in 2-3 seconds each and the total time is about 10-15 minutes

Releasing the updates 🚀

Trying to contribute back upstream

A GitHub issue created on the 1st of August 2018 seems to have been lost (not even a “this isn’t something we want to do”).

It’s not the nicest experience but it’s totally fine by me, OSS maintainers don’t owe us anything after all.

This might be a niche use-case; most people seem to be happy to write models in the module.exports = () => { /* do some stuff in the closure */ return }; fashion, although looking at the number of people who read “Using ES6 classes for Sequelize 4 models”, it's a valid way to write an application using Sequelize.

I went ahead and created a PR but I don’t expect it to get merged:

Publishing a fork as an npm scoped module

I always thought this is more a niche/variation type of thing, so publishing it as a fork makes sense

Update package.json to have the following; publishConfig is very important if you're publishing a scoped module:

{
  "name": "@hugodf/sequelize-cli",
  "publishConfig": {
    "access": "public"
  }
}

Use np 🙂:

  1. npx np and go through the dialog
  2. Since the tests take 10-15 minutes to run, I’ve found using the --yolo option useful

Lessons about the Open Source Movement 📖

People who are paid make mistakes; people who work for free will also make (similar) mistakes

Just like at the day job, it's important to have empathy for “interesting things” going on in the code, e.g. the tests running against something other than the code in the repo.

If anything, the maintainers have created this project for free, in their spare time, so expect more “interesting things” in the codebase.

Technical debt is a sign that the open source project has reached some level of maturity.

Remember that your situation is the following:

  1. Aware of the project’s existence
  2. Using the project
  3. Finding a use-cases where the project doesn’t quite fit your needs
  4. That use-case might not be something the project even wants to solve
  5. Attempting to contribute back to the project

You’re in a very specific situation where the project has reached some level of critical mass/awareness/scale. To get here, just like at a startup attempting to scale, the maintainers have had to move relatively fast and fly by the seat of their pants, maybe put some “hacks” in some places.

No one owes you anything

As I mentioned, I opened both an issue and a PR and I don’t think either one of them is going to get a response.

That’s why I just went ahead and published the fork as a module (scoped modules are awesome for this kind of use-case 👍 npm team).

Do what you can

Every little helps:

  • Mentioning something that stumped you during setup (eg. master being a fake green ✅ build)
  • Documentation updates
  • Opening random PRs with super-opinionated new feature 🙄
  • Write blog posts documenting interesting usage of a library

Everyone is just trying to help.

Ruediger Theiselmann


          Python Application and Platform Developer - Plotly - Montréal, QC      Cache   Translate Page      
Engage our Open Source communities and help take stewardship of our OSS projects. Duties & Responsibilities:....
From Plotly - Thu, 06 Sep 2018 15:03:02 GMT - View all Montréal, QC jobs
          Haivision Publishes SRT Open Source Specification - Protocol for Low...      Cache   Translate Page      

SRT (Secure Reliable Transport) technical overview aims to foster understanding, adoption, and to pave the path to official standardization

(PRWeb November 07, 2018)

Read the full story at https://www.prweb.com/releases/haivision_publishes_srt_open_source_specification_protocol_for_low_latency_video_streaming_and_file_transfer/prweb15900385.htm


          Safe in the cloud: popular open source encryption tool gets its biggest update yet      Cache   Translate Page      
Data on any system with Internet access is always at risk of being stolen by hackers, and cloud storage is no exception. The only protection is to encrypt the data before uploading it. That is exactly what the open source software Cryptomator does, and it has now received its biggest update so far
          Games: Vilmonic, Galaxy of Pen & Paper, Lutris, Surviving Mars: Space Race, Tropico      Cache   Translate Page      

          LogDNA Announces Advanced Self Hosted and Multi-Cloud Logging Platforms      Cache   Translate Page      
...locality and performance requirements. LogDNA designed the Self Hosted, Multi-Cloud solution utilizing insights shared by customers into productivity and budget challenges with existing logging platforms.   The Self Hosted solution exceeds the capabilities of competitors and DIY open source options in ...

          MapR Solutions Architect - Perficient - National, WV      Cache   Translate Page      
Design and develop open source platform components using Spark, Java, Oozie, Kafka, Python, and other components....
From Perficient - Wed, 03 Oct 2018 20:48:20 GMT - View all National, WV jobs
          Episode 6: Conferences and Community      Cache   Translate Page      

          N. Iaroci - My story with Python and Open Source      Cache   Translate Page      
          Cloud Security Developer - RBC - Toronto, ON      Cache   Translate Page      
Use Jenkins, Open Source Tools like CFNNAG, Security Test Automation Tools like Server Spec, Inspec to develop a CICD Security Testing Pipeline....
From RBC - Mon, 22 Oct 2018 19:27:08 GMT - View all Toronto, ON jobs
           OS time: working on open source during company time      Cache   Translate Page      
We get to contribute to open source software at Phusion, like through maintaining the frontapp gem. Initially scratching our own itch, outside contributions make working on it really worth the while: https://blog.phusion.nl/2018/10/31/os-time-working-on-open-source-in-the-boss-time/

          How To Select Between Fast Ring And Slow Ring In Windows Insider Program?      Cache   Translate Page      

Fast Ring and Slow rings are two different rings or categories of Windows Insider Program. Windows Insider program is an open source software testing platforms that can be used by developers, tech lovers, enterprise testers, etc. With Windows Insider program, a user can try and test the different Windows operating system builds much before they […]

The post How To Select Between Fast Ring And Slow Ring In Windows Insider Program? appeared first on My Windows Hub.


          Acquia and BigCommerce say partnership will accelerate content for commerce initiatives      Cache   Translate Page      

Acquia and Bigcommerce have announced a new partnership to help merchants develop and launch online ecommerce solutions with effective, on-brand customer experiences. They say that this partnership will enable fast-growing brands to take advantage of industry-leading ecommerce solutions with world-class open source content management. They note in a press release that retailers naturally struggle to […]
          A VMware Perspective on Open Source in China – Part One, Cloud Native      Cache   Translate Page      

By Alan Ren I lead VMware’s Advanced Technology and Evangelism Center—or simply, VMware’s R&D efforts—in China and over a series of four posts, I want to share our perspective on the state of open source in China and where VMware, in particular, is making an impact. This is an exciting time for the Chinese open

The post A VMware Perspective on Open Source in China – Part One, Cloud Native appeared first on VMware Open Source Blog.


          13 Linux and Open Source Conferences Worth Watching in 2019      Cache   Translate Page      

Sometimes a how-to talk can save you a week of work. A panel discussion can help you discover one element of building an enterprise open source strategy. Of course, you can learn from books or from GitHub, but there is nothing better than hearing someone who has already done the work explain how they solved the same problem you are facing. The way open source projects operate means that people regularly communicate and collaborate to create excellent projects (cloud native computing, for example), and a technology you have not even heard of today may help you tomorrow.

With so many conferences in 2019, how do you choose? Some cover broad open source topics; others may be specific to your technology stack.

Here, listed in chronological order, are 13 of the best open source conferences of 2019 to help your career, your skills, and your business.


Southern California Linux Expo (SCALE)

Website: http://www.socallinuxexpo.org/scale/16x

Dates: March 7-10, 2019

SCALE is the largest community-run open source and free software conference in North America. It offers sessions and workshops for everyone from beginners to experts.

For example, on the more advanced side, the 2018 conference had talks on microservice architecture and on how to quickly get Debian/Ubuntu applications done. Meanwhile, people less familiar with Linux could attend sessions on container and virtual machine basics and on how to keep Ubuntu secure. SCALE also offers sessions on less common but critical topics; last year, for example, noted open source lawyer Karen Sandler ran a session on employment contracts for open source programmers.

Linux Foundation Open Source Leadership Summit

Website: https://events.linuxfoundation.org/events/open-source-leadership-summit-2019/register/

Dates: March 12-14, 2019

The Linux Foundation's Open Source Leadership Summit is an invitation-only conference.

This is not a conference for application developers or system administrators. It is aimed at open source community managers and at project and company leaders. The program includes high-level panel discussions and presentations on topics such as how to assess the viability of an open source project, best practices for open source contribution, and how to handle patents, licensing, and other open source intellectual property issues.

SUSECon

Website: https://www.susecon.com/

Dates: April 1-5, 2019

For those who build their IT stack around SUSE Linux Enterprise Server (SLES), SUSECon is a must. Like Red Hat, SUSE is building its own cloud stack around OpenStack, so if OpenStack interests you, you should pay attention to this conference as well.

You can get the news on the latest SUSE releases. You will also find sessions on how to get the most out of SUSE's features and programs, such as the Ceph-based SUSE Storage 5.5, how to manage servers with YaST, and how to manage high availability on SLES.

Open Networking Summit

Website: https://events.linuxfoundation.org/events/open-networking-summit-north-america-2019/attend/

Dates: April 2-5, 2019

Does your job require you to understand 21st-century networking technologies such as software-defined networking (SDN), network functions virtualization (NFV), and related technologies? If so, do not miss this conference.

With so many SDN/NFV projects (such as OpenDaylight, Open Network Operating System, Open Platform for Network Virtualization, and Tungsten Fabric), it is hard to keep track of them all. The LF Networking Fund appeared in 2018; how is it doing now? This conference will give you the answer.

If you want to go deeper into SDN/NFV, the Open Networking Summit North America is not to be missed. Besides sessions and conversations about SDN, there will also be training on NFV and on OpenFlow, the grandfather of SDN and NFV technologies.

Cloud Foundry Summit

Website: https://www.cloudfoundry.org/event/nasummit2019/

Dates: April 2-4, 2019

As IT organizations move from servers to containers, from the data center to the cloud, and from legacy programs to cloud native, understanding Platform as a Service (PaaS) is a must. Cloud Foundry is an open source PaaS cloud platform that bridges the gap between traditional software and cloud native programs.

If your company is building infrastructure with these tools, the Cloud Foundry Summit is a great conference. It gives you access to the project's movers and shakers, plus a deeper understanding of how Cloud Foundry works. At next year's conference, look for deep dives into containers, IoT, machine learning, Node.js, and serverless computing.

LinuxFest Northwest

Website: https://linuxfestnorthwest.org/conferences/2019

Dates: April 28-29, 2019

LinuxFest Northwest is the longest-running community open source conference and turns 20 in 2019. Like SCALE, it has something for everyone. For example, last year's conference included sessions such as an introduction to Git (even for non-developers), lessons learned about hacking, and constraints and trade-offs in technical design.

OpenStack Summit

Website: https://www.openstack.org/summit/denver-2019/

Dates: April 29 - May 2, 2019

The author is very bullish on OpenStack's future as an Infrastructure-as-a-Service cloud. But that also means OpenStack is very complex. The summit covers not only the headline topics but also how to get the most out of the many components that make up OpenStack.

Here are some highlights from the last OpenStack conference: "What we learned building a Zuul CI/CD cloud", "Kubernetes administration 101: from zero to (junior) hero", and "Google, please create a VM for me", all of them humorous yet highly technical talks.

Red Hat Summit

Website: https://www.redhat.com/en/summit/2019

Dates: May 7-9, 2019

Does your company use RHEL? Or Fedora, or CentOS? If you use any of them (and for most enterprise IT departments the answer is yes), then the Red Hat Summit is a must. Besides getting the latest news about Red Hat products and services, the conference is also a convenient place to get training for many Red Hat certifications, such as performance tuning, microservice architectures with Java EE, and OpenStack administration. There are also many hands-on labs and expert talks.

You can also expect best practices, tools, and frameworks for container and Kubernetes developers, DevOps with Ansible in the enterprise, and sessions on tuning Red Hat Gluster Storage. In short, it is a gathering for programmers, system administrators, and the people who bridge the gap between the two.

O'Reilly's Open Source Convention (OSCON)

Website: https://conferences.oreilly.com/oscon/oscon-or

Dates: July 15-18, 2019

OSCON focuses on open source as a catalyst for business and social change. So while OSCON does explore and explain the hot languages, tools, and development practices, it also puts open source in its social context.

At a conference this large, you will find plenty of sessions from subject-matter experts on the ins and outs of the code behind today's hottest open source technologies. You can also expect cutting-edge topics such as blockchain, emerging languages such as Kotlin, Go, and Elm, and the Spark, Mesos, Akka, Cassandra, and Kafka (SMACK) stack for big data.

This conference is for everyone, from developers to CxOs to hackers and geeks.

Open Source Summit North America

Website: https://events.linuxfoundation.org/events/open-source-summit-north-america-2019/

Dates: August 21-23, 2019

The Linux Foundation's Open Source Summit is the showcase event for open source. Besides keynotes from industry heavyweights, it includes plenty of other business and technical content.

Beyond Linux, container, and cloud fundamentals, the program also covers networking, serverless, edge computing, and AI. There are training courses and hands-on workshops on technologies such as Docker and rkt containers, as well as Kubernetes and Prometheus container monitoring. As long as it is open source, it is covered.

In the sessions, you will learn what is happening in the Linux development world by listening to panels of top Linux kernel developers, and hear how experienced practitioners and the programmers writing the code actually use Linux. Whether you are new to open source or a seasoned veteran, you will find something useful.

ApacheCon

Website: https://www.apachecon.com/frontpage.html

Dates: September 10-12, 2019 (tentative)

Does your company rely on Apache software? If so, you need to attend ApacheCon. It is a small conference, perhaps 500 attendees, but if you depend heavily on Tomcat, CloudStack, Struts, or almost any open source big data software, it is your best choice.

To get a rough idea of the program, look at last year's event in Montreal, which showcased the upcoming Apache Tomcat release, the state of HTTP/2 and TLS/SSL on the Apache web server, and how to deploy CloudStack in a major migration. For people using Apache software, it is a very practical, hands-on conference.

Open Source Summit Europe

Website: https://events.linuxfoundation.org/upcoming-events/

Dates: October 28-30, 2019

This conference covers every open source topic. For example, last year in Edinburgh there was a session on how log messages travel through a distributed data streaming pipeline, one on how to build a fault-tolerant custom resource controller on Kubernetes, and one on best practices for using GitHub for enterprise open source software.

KubeCon and CloudNativeCon

Website: https://events.linuxfoundation.org/events/kubecon-cloudnativecon-north-america-2019/

Dates: November 18-21, 2019

Kubernetes has become the essential cloud container orchestration project. With AWS adopting Kubernetes, every major cloud now supports it. If you are using containers in the cloud, you have to know Kubernetes.

Cloud native computing technologies are becoming more and more popular. As with containers and Kubernetes, cloud native programming skills are increasingly important in today's cloud-based IT world.

The conference is highly focused on what is happening right now and on how to use the latest tools. You can expect sessions on how to run production-ready Kubernetes, how to handle container build manifests, and how to scale AI workloads with GPUs and Kubernetes. The conference is aimed at people who already have some cloud native and Kubernetes knowledge.

Original article:

https://www.hpe.com/us/en/insights/articles/the-top-linux-and-open-source-conferences-in-2019-1810.html


          Small, Sharp, Software Tools: Harness the Combinatoric Power of Command-Line Too ...      Cache   Translate Page      

November 07, 2018

Welcome to November! There are fifty-four days left in the year. What will you do with them? How about digging into a new book and a new issue of this month's magazine?

The command-line interface is making a comeback. That's because developers know that all the best features of your operating system are hidden behind a user interface designed to help average people use the computer. But you're not the average user.

Small, Sharp, Software Tools: Harness the Combinatoric Power of Command-Line Tools and Utilities is now in beta. Come get your own copy today from pragprog.com/book/bhcldev .

And because it's November, a new issue of PragPub is out!

Read on for details.

/\ndy

SeaGL Conference Nov 9-10

In Seattle? We're proud to be a media sponsor for SeaGL. SeaGL is a grassroots technical conference dedicated to spreading awareness and knowledge about the GNU/Linux community and free/libre/open-source software/hardware. Our goal for SeaGL is to produce an event that is as enjoyable and informative for those who spend their days maintaining hundreds of servers as it is for students who have only just started exploring technology options. More information at https://seagl.org.

Small, Sharp, Software Tools: Harness the Combinatoric Power of Command-Line Tools and Utilities


No matter what language or platform you're using, you can use the CLI to create projects, run servers, and manage files. You can even create new tools that fit right in with grep, sed, awk, and xargs. You'll work with the Bash shell and the most common command-line utilities available on macOS, Windows 10, and many flavors of Linux.

Create files without opening a text editor. Manage complex directory structures and move around your entire file system without touching the mouse. Diagnose network issues and interact with APIs. Chain several commands together to transform data, and create your own scripts to automate repetitive tasks. Make things even faster by customizing your environment, creating shortcuts, and integrating other tools into your environment. Hands-on activities and exercises will cement your newfound knowledge and give you the confidence to use the CLI to its fullest potential. And if you're worried you'll wreck your system, this book walks you through creating an Ubuntu virtual machine so you can practice worry-free.

Dive into the CLI and join the thousands of other devs who use it every day.

Now in beta from pragprog.com/book/bhcldev .

November PragPub Magazine

In November PragPub traditionally celebrates the craft of writing, especially technical writing. This month we have writing advice from regular columnist Russ Olsen and Pragmatic Bookshelf Senior Acquisitions Editor Brian MacDonald. Whether you’ve got a book itching to get written or you’ve been hankering to write an article for PragPub or you want to improve your blogging or documentation writing or you’re just curious how the process works, both authors will entertain and enlighten you.

Mostly, though, this issue of PragPub is about software development, which is a different kind of writing. This month we have a meaty, code-rich article by Aaron Bedra on security. Aaron explains how multi-factor authentication (MFA) works, focusing on one of the most common second factors, Time-based One-Time Password, or TOTP .

TOTP authentication uses a combination of a secret and the current time to derive a predictable multi-digit value. The secret is shared between the issuer and the user in order to compare generated values to determine if the user in fact possesses the required secret. Aaron walks you through how to implement server side TOTP token issuing, and discusses its security requirements.
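
As a rough illustration of the mechanism (not code from the article), here is a minimal TOTP sketch in JavaScript; a production implementation should use a vetted library and also handle secret provisioning, verification windows for clock drift, and constant-time comparison:

const crypto = require('crypto');

// Derive a TOTP code from a shared secret (raw bytes) and the current time (RFC 6238).
function totp(secret, digits = 6, periodSeconds = 30) {
  // Counter = number of time steps since the Unix epoch
  const counter = Math.floor(Date.now() / 1000 / periodSeconds);
  const message = Buffer.alloc(8);
  message.writeBigUInt64BE(BigInt(counter));

  // HMAC the counter with the shared secret, then apply dynamic truncation (RFC 4226)
  const digest = crypto.createHmac('sha1', secret).update(message).digest();
  const offset = digest[digest.length - 1] & 0x0f;
  const code = digest.readUInt32BE(offset) & 0x7fffffff;

  return String(code % 10 ** digits).padStart(digits, '0');
}

// Hypothetical secret; in practice it is generated per user and shared via a QR code.
console.log(totp(Buffer.from('12345678901234567890')));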

The environment in which you develop software can have a big influence on how productive, creative, and effective you can be. Marcus Blankenship draws on rich experience to explore working remotely. His key insight is that remote working is not one thing, and understanding which kind of remote working fits your personal style is crucial.

Your editor continues his computer history series, this month looking at the events leading up to and following the release of the MITS Altair 8800, by some accounts the first personal computer. Antonio Cangiano is here again with the latest tech books, Mike has the latest tech news, and John Shade asks, “if we can’t have Asimov’s Three Laws of Robotics, what consolation can we have instead?”

Oh, and there’s a puzzle. We hope you enjoy this November issue of PragPub

Now available from theprosegarden.com .

Upcoming Author Appearances

2018-11-09 VM Brasseur, SeaGL
2018-11-10 Andy Lester, Milwaukee Code Camp
2018-11-22 James O. Coplien, Val Research, Kōenji, Tokyo, Japan
2018-12-01 Fred Hebert, ElixirConf Mexico

You Could Be a Published Author

Is there a tech topic you are deeply passionate about and want to share with the rest of us? You could become a published Pragmatic Bookshelf author! Take a look at our pragprog.com/write-for-us page for details, including our 50% royalty (yes, for real!) and world-class development editors .

Don't Get Left Out

Are your friends jealous that you get these spiffy email newsletters and they don't? Clue them in that all they need to do is create an account on pragprog.com (email address and password is all it takes) and select the checkbox to receive newsletters.

Are you following us on Twitter and/or Facebook? Here's where you can find us and keep up with the latest news and commentary, and occasional discounts:

Tell your friends!

Follow us on Twitter: @pragprog , @pragpub , Andy Hunt @PragmaticAndy .

Coming Soon:

Programming WebAssembly with Rust: Unified Development for Web, Mobile, and Embedded Applications, in beta
Build Reactive Websites with RxJS: Master Observables and Wrangle Events, in print
Modern Systems Programming with Scala Native: Write Lean, High-Performance Code without the JVM, in beta

Recently Released:

Web Development with ReasonML
Programming Kotlin
Xcode Treasures
Forge Your Future with Open Source

Thanks for your continued support,

Andy Hunt Publisher, Pragmatic Bookshelf

Books eBooks PragPub Magazine Audiobooks and Screencasts

PragProg.com
          redo: a top-down software build system (designed by djb)      Cache   Translate Page      
redo: a top-down software build system

redo is a competitor to the long-lived, but sadly imperfect, make program. There are many such competitors, because many people over the years have been dissatisfied with make's limitations. However, of all the replacements I've seen, only redo captures the essential simplicity and flexibility of make, while avoiding its flaws. To my great surprise, it manages to do this while being simultaneously simpler than make, more flexible than make, and more powerful than make.

Although I wrote redo and I would love to take credit for it, the magical simplicity and flexibility comes because I copied verbatim a design by Daniel J. Bernstein (creator of qmail and djbdns, among many other useful things). He posted some very terse notes on his web site at one point (there is no date) with the unassuming title, " Rebuilding target files when source files have changed ." Those notes are enough information to understand how the system is supposed to work; unfortunately there's no code to go with it. I get the impression that the hypothetical "djb redo" is incomplete and Bernstein doesn't yet consider it ready for the real world.

I was led to that particular page by random chance from a link on The djb way , by Wayne Marshall.

After I found out about djb redo, I searched the Internet for any sign that other people had discovered what I had: a hidden, unimplemented gem of brilliant code design. I found only one interesting link: Alan Grosskurth, whose Master's thesis at the University of Waterloo was about top-down software rebuilding, that is, djb redo. He wrote his own (admittedly slow) implementation in about 250 lines of shell script.

If you've ever thought about rewriting GNU make from scratch, the idea of doing it in 250 lines of shell script probably didn't occur to you. redo is so simple that it's actually possible. For testing, I actually wrote an even more minimal version, which always rebuilds everything instead of checking dependencies, in 210 lines of shell (about 4 kbytes).

The design is simply that good.

My implementation of redo is called redo for the same reason that there are 75 different versions of make that are all called make . It's somehow easier that way. Hopefully it will turn out to be compatible with the other implementations, should there be any.

My extremely minimal implementation, called do , is in the minimal/ directory of this repository.

(Want to discuss redo? See the bottom of this file for information about our mailing list.)

Install

Install to /usr:

./redo test && sudo ./redo install

Install to $HOME:

./redo test && PREFIX=$HOME ./redo install

License

My version of redo was written without ever seeing redo code by Bernstein or Grosskurth, so I own the entire copyright. It's distributed under the GNU LGPL version 2. You can find a copy of it in the file called LICENSE.

minimal/do is in the public domain so that it's even easier to include inside your own projects for people who don't have a copy of redo.

What's so special about redo?

The theory behind redo is almost magical: it can do everything make can do, only the implementation is vastly simpler, the syntax is cleaner, and you can do even more flexible things without resorting to ugly hacks. Also, you get all the speed of non-recursive make (only check dependencies once per run) combined with all the cleanliness of recursive make (you don't have code from one module stomping on code from another module).

(Disclaimer: my current implementation is not as fast as make for some things, because it's written in python. Eventually I'll rewrite it in C and it'll be very, very fast.)

The easiest way to show it is with an example.

Create a file called default.o.do:

redo-ifchange $2.c
gcc -MD -MF $2.d -c -o $3 $2.c
read DEPS <$2.d
redo-ifchange ${DEPS#*:}

Create a file called myprog.do:

DEPS="a.o b.o"
redo-ifchange $DEPS
gcc -o $3 $DEPS

Of course, you'll also have to create a.c and b.c , the C language source files that you want to build to create your application.

In a.c:

#include <stdio.h>
#include "b.h"

int main() {
    printf(bstr);
}

In b.h:

extern char *bstr;

In b.c:

char *bstr = "hello, world!\n";

Now you simply run:

$ redo myprog

And it says:

redo myprog
redo a.o
redo b.o

Now try this:

$ touch b.h
$ redo myprog

Sure enough, it says:

redo myprog
redo a.o

Did you catch the shell incantation in default.o.do where it generates the autodependencies? The filename default.o.do means "run this script to generate a .o file unless there's a more specific whatever.o.do script that applies."

The key thing to understand about redo is that declaring a dependency is just another shell command. The redo-ifchange command means, "build each of my arguments. If any of them or their dependencies ever change, then I need to run the current script over again."

Dependencies are tracked in a persistent .redo database so that redo can check them later. If a file needs to be rebuilt, it re-executes the whatever.do script and regenerates the dependencies. If a file doesn't need to be rebuilt, redo can calculate that just using its persistent .redo database, without re-running the script. And it can do that check just once right at the start of your project build.

But best of all, as you can see in default.o.do , you can declare a dependency after building the program. In C, you get your best dependency information by trying to actually build, since that's how you find out which headers you need. redo is based on the following simple insight: you don't actually care what the dependencies are before you build the target; if the target doesn't exist, you obviously need to build it. Then, the build script itself can provide the dependency information however it wants; unlike in make , you don't need a special dependency syntax at all. You can even declare some of your dependencies after building, which makes C-style autodependencies much simpler.

(GNU make supports putting some of your dependencies in include files, and auto-reloading those include files if they change. But this is very confusing - the program flow through a Makefile is hard to trace already, and even harder if it restarts randomly from the beginning when a file changes. With redo, you can just read the script from top to bottom. A redo-ifchange call is like calling a function, which you can also read from top to bottom.)

Does it make cross-platform builds easier?

A lot of build systems that try to replace make do it by trying to provide a lot of predefined rules. For example, one build system I know includes default rules that can build C++ programs on Visual C++ or gcc, cross-compiled or not cross-compiled, and so on. Other build systems are specific to ruby programs, or python programs, or Java or .Net programs.

redo isn't like those systems; it's more like make. It doesn't know anything about your system or the language your program is written in.

The good news is: redo will work with any programming language with about equal difficulty. The bad news is: you might have to fill in more details than you would if you just use ANT to compile a Java program.

So the short version is: cross-platform builds are about equally easy in make and redo. It's not any easier, but it's not any harder.

FIXME: Tools like automake are really just collections of Makefile rules so you don't have to write the same ones over and over. In theory, someone could write an automake-like tool for redo, and you could use that.

Hey, does redo even run on windows?

FIXME: Probably under cygwin. But it hasn't been tested, so no.

If I were going to port redo to Windows in a "native" way, I might grab the source code to a posix shell (like the one in MSYS) and link it directly into redo.

make also doesn't really run on Windows (unless you use MSYS or Cygwin or something like that). There are versions of make that do - like Microsoft's version - but their syntax is horrifically different from one vendor to another, so you might as well just be writing for a vendor-specific tool.

At least redo is simple enough that, theoretically, one day, I can imagine it being cross platform.

One interesting project that has appeared recently is busybox-w32 ( https://github.com/pclouds/busybox-w32 ). It's a port of busybox to win32 that includes a mostly POSIX shell (ash) and a bunch of standard Unix utilities. This might be enough to get your redo scripts working on a win32 platform without having to install a bunch of stuff. But all of this needs more experimentation.

One script per file? Can't I just put it all in one big Redofile like make does?

One of my favourite features of redo is that it doesn't add any new syntax; the syntax of redo is exactly the syntax of sh... because sh is the program interpreting your .do file.

Also, it's surprisingly useful to have each build script in its own file; that way, you can declare a dependency on just that one build script instead of the entire Makefile, and you won't have to rebuild everything just because of a one-line Makefile change. (Some build tools avoid that same problem by tracking which variables and commands were used to do the build. But that's more complex, more error prone, and slower.)

See djb's Target files depend on build scripts article for more information.

However, if you really want to, you can simply create a default.do that looks something like this:

case $1 in
  *.o) ...compile a .o file... ;;
  myprog) ...link a program... ;;
  *) echo "no rule to build '$1'" >&2; exit 1 ;;
esac

Basically, default.do is the equivalent of a central Makefile in make. As of recent versions of redo, you can use either a single toplevel default.do (which catches requests for files anywhere in the project that don't have their own .do files) or one per directory, or any combination of the above. And you can put some of your targets in default.do and some of them in their own files. Lay it out in whatever way makes sense to you.

One more thing: if you put all your build rules in a single default.do, you'll soon discover that changing anything in that default.do will cause all your targets to rebuilt - because their .do file has changed. This is technically correct, but you might find it annoying. To work around it, try making your default.do look like this:

. ./default.od

And then put the above case statement in default.od instead. Since you didn't redo-ifchange default.od , changes to default.od won't cause everything to rebuild.

Can I set my dircolors to highlight .do files in ls output?

Yes! At first, having a bunch of .do files in each directory feels like a bit of a nuisance, but once you get used to it, it's actually pretty convenient; a simple 'ls' will show you which things you might want to redo in any given directory.

Here's a chunk of my .dircolors.conf:

.do 00;35
*Makefile 00;35
.o 00;30;1
.pyc 00;30;1
*~ 00;30;1
.tmp 00;30;1

To activate it, you can add a line like this to your .bashrc:

eval `dircolors $HOME/.dircolors.conf`

What are the three parameters ($1, $2, $3) to a .do file?

NOTE: These definitions have changed since the earliest (pre-0.10) versions of redo. The new definitions match what djb's original redo implementation did.

$1 is the name of the target file.

$2 is the basename of the target, minus the extension, if any.

$3 is the name of a temporary file that will be renamed to the target filename atomically if your .do file returns a zero (success) exit code.

In a file called chicken.a.b.c.do that builds a file called chicken.a.b.c , $1 and $2 are chicken.a.b.c , and $3 is a temporary name like chicken.a.b.c.tmp . You might have expected $2 to be just chicken , but that's not possible, because redo doesn't know which portion of the filename is the "extension." Is it .c , .b.c , or .a.b.c ?

.do files starting with default. are special; they can build any target ending with the given extension. So let's say we have a file named default.c.do building a file called chicken.a.b.c . $1 is chicken.a.b.c , $2 is chicken.a.b , and $3 is a temporary name like chicken.a.b.c.tmp .

You should use $1 and $2 only in constructing input filenames and dependencies; never modify the file named by $1 in your script. Only ever write to the file named by $3. That way redo can guarantee proper dependency management and atomicity. (For convenience, you can write to stdout instead of $3 if you want.)

For example, you could compile a .c file into a .o file like this, from a script named default.o.do :

redo-ifchange $2.c
gcc -o $3 -c $2.c

Why not named variables like $FILE, $EXT, $OUT instead of $1, $2, $3?

That sounds tempting and easy, but one downside would be lack of backward compatibility with djb's original redo design.

Longer names aren't necessarily better. Learning the meanings of the three numbers doesn't take long, and over time, those extra few keystrokes can add up. And remember that Makefiles and perl have had strange one-character variable names for a long time. It's not at all clear that removing them is an improvement.

What happens to the stdin/stdout/stderr in a redo file?

As with make, stdin is not redirected. You're probably better off not using it, though, because especially with parallel builds, it might not do anything useful. We might change this behaviour someday since it's such a terrible idea for .do scripts to read from stdin.

As with make, stderr is also not redirected. You can use it to print status messages as your build proceeds. (Eventually, we might want to capture stderr so it's easier to look at the results of parallel builds, but this is tricky to do in a user-friendly way.)

Redo treats stdout specially: it redirects it to point at $3 (see previous question). That is, if your .do file writes to stdout, then the data it writes ends up in the output file. Thus, a really simple chicken.do file that contains only this:

echo hello world

will correctly, and atomically, generate an output file named chicken only if the echo command succeeds.

Isn't it confusing to have stdout go to the target by default?

Yes, it is. It's unlike what almost any other program does, especially make, and it's very easy to make a mistake. For example, if you write in your script:

echo "Hello world"

it will go to the target file rather than to the screen.

A more common mistake is to run a program that writes to stdout by accident as it runs. When you do that, you'll produce your target on $3, but it might be intermingled with junk you wrote to stdout. redo is pretty good about catching this mistake, and it'll print a message like this:

redo zot.do wrote to stdout *and* created $3.
redo ...you should write status messages to stderr, not stdout.
redo zot: exit code 207

Despite the disadvantages, though, automatically capturing stdout does make certain kinds of .do scripts really elegant. The "simplest possible .do file" can be very short. For example, here's one that produces a sub-list from a list:

redo-ifchange filelist
grep ^src/ filelist

redo's simplicity is an attempt to capture the "Zen of Unix," which has a lot to do with concepts like pipelines and stdout. Why should every program have to implement its own -o (output filename) option when the shell already has a redirection operator? Maybe if redo gets more popular, more programs in the world will be able to be even simpler than they are today.

By the way, if you're running some programs that might misbehave and write garbage to stdout instead of stderr (Informational/status messages always belong on stderr, not stdout! Fix your programs!), then just add this line to the top of your .do script:

exec >&2

That will redirect your stdout to stderr, so it works more like you expect.

Can a *.do file itself be generated as part of the build process?

Not currently. There's nothing fundamentally preventing us from allowing it. However, it seems easier to reason about your build process if you aren't auto-generating your build scripts on the fly.

This might change someday.

Do end users have to have redo installed in order to build my project?

No. We include a very short and simple shell script called do in the minimal/ subdirectory of the redo project. do is like redo (and it works with the same *.do scripts), except it doesn't understand dependencies; it just always rebuilds everything from the top.

You can include do with your program to make it so non-users of redo can still build your program. Someone who wants to hack on your program will probably go crazy unless they have a copy of redo though.

Actually, redo itself isn't so big, so for large projects where it matters, you could just include it with your project.

How does redo store dependencies?

At the toplevel of your project, redo creates a directory named .redo . That directory contains a sqlite3 database with dependency information.

The format of the .redo directory is undocumented because it may change at any time. Maybe it will turn out that we can do something simpler than sqlite3. If you really need to make a tool that pokes around in there, please ask on the mailing list if we can standardize something for you.

Isn't using sqlite3 overkill? And un-djb-ish?

Well, yes. Sort of. I think people underestimate how "lite" sqlite really is:

root root 573376 2010-10-20 09:55 /usr/lib/libsqlite3.so.0.8.6

573k for a complete and very fast and transactional SQL database. For comparison, libdb is:

root root 1256548 2008-09-13 03:23 /usr/lib/libdb-4.6.so

...more than twice as big, and it doesn't even have an SQL parser in it! Or if you want to be really horrified:

root root 1995612 2009-02-03 13:54 /usr/lib/libmysqlclient.so.15.0.0

The mysql client library is two megs, and it doesn't even have a database server in it! People who think SQL databases are automatically bloated and gross have not yet actually experienced the joys of sqlite. SQL has a well-deserved bad reputation, but sqlite is another story entirely. It's excellent, and much simpler and better written than you'd expect.

But still, I'm pretty sure it's not very "djbish" to use a general-purpose database, especially one that has a SQL parser in it. (One of the great things about redo's design is that it doesn't ever need to parse anything, so a SQL parser is a bit embarrassing.)

I'm pretty sure djb never would have done it that way. However, I don't think we can reach the performance we want with dependency/build/lock information stored in plain text files; among other things, that results in too much fstat/open activity, which is slow in general, and even slower if you want to run on Windows. That leads us to a binary database, and if the binary database isn't sqlite or libdb or something, that means we have to implement our own data structures. Which is probably what djb would do, of course, but I'm just not convinced that I can do a better (or even a smaller) job of it than the sqlite guys did.

Most of the state database stuff has been isolated in state.py. If you're feeling brave, you can try to implement your own better state database, with or without sqlite.

It is almost certainly possible to do it much more nicely than I have, so if you do, please send it in!

Is it better to run redo-ifchange once per dependency or just once?

The obvious way to write a list of dependencies might be something like this:

for d in *.c; do
    redo-ifchange ${d%.c}.o
done

But it turns out that's very non-optimal. First of all, it forces all your dependencies to be built in order (redo-ifchange doesn't return until it has finished building), which makes -j parallelism a lot less useful. And secondly, it forks and execs redo-ifchange over and over, which can waste CPU time unnecessarily.

A better way is something like this:

for d in *.c; do
    echo ${d%.c}.o
done | xargs redo-ifchange

That only runs redo-ifchange once (or maybe a few times, if there are really a lot of dependencies and xargs has to split it up), which saves fork/exec time and allows for parallelism.

If a target didn't change, how do I prevent dependents from being rebuilt?

For example, running ./configure creates a bunch of files including config.h, and config.h might or might not change from one run to the next. We don't want to rebuild everything that depends on config.h if config.h is identical.

With make , which makes build decisions based on timestamps, you would simply have the ./configure script write to config.h.new, then only overwrite config.h with that if the two files are different. However, that's a bit tedious.
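For comparison, the make-style version of that trick tends to look roughly like this (just a sketch, and it assumes you've already convinced ./configure to write config.h.new instead of config.h):

./configure
if cmp -s config.h.new config.h; then
    rm -f config.h.new          # identical: keep the old timestamp
else
    mv config.h.new config.h    # different: dependents will rebuild
fi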

With redo , there's an easier way. You can have a config.do script that looks like this:

redo-ifchange autogen.sh *.ac
./autogen.sh
./configure
cat config.h configure Makefile | redo-stamp

Now any of your other .do files can depend on a target called config . config gets rebuilt automatically if any of your autoconf input files are changed (or if someone does redo config to force it). But because of the call to redo-stamp, config is only considered to have changed if the contents of config.h, configure, or Makefile are different than they were before.

(Note that you might actually want to break this .do up into a few phases: for example, one that runs aclocal, one that runs autoconf, and one that runs ./configure. That way your build can always do the minimum amount of work necessary.)

What hash algorithm does redo-stamp use?

It's intentionally undocumented because you shouldn't need to care and it might change at any time. But trust me, it's not the slow part of your build, and you'll never accidentally get a hash collision.

Why not always use checksum-based dependencies instead of timestamps?

Some build systems keep a checksum of target files and rebuild dependents only when the target changes. This is appealing in some cases; for example, with ./configure generating config.h, it could just go ahead and generate config.h; the build system would be smart enough to rebuild or not rebuild dependencies automatically. This keeps build scripts simple and gets rid of the need for people to re-implement file comparison over and over in every project or for multiple files in the same project.

There are disadvantages to using checksums for everything automatically, however:

Building stuff unnecessarily is much less dangerous than not building stuff that should be built. Checksums aren't perfect (think of zero-byte output files); using checksums will cause more builds to be skipped by default, which is very dangerous.

It makes it hard to force things to rebuild when you know you absolutely want that. (With timestamps, you can just touch filename to rebuild everything that depends on filename .)

Targets that are just used for aggregation (ie. they don't produce any output of their own) would always have the same checksum - the checksum of a zero-byte file - which causes confusing results.

Calculating checksums for every output file adds time to the build, even if you don't need that feature.

Building stuff unnecessarily and then stamping it is much slower than just not building it in the first place, so for almost every use of redo-stamp, it's not the right solution anyway.

To steal a line from the Zen of Python: explicit is better than implicit. Making people think about when they're using the stamp feature - knowing that it's slow and a little annoying to do - will help people design better build scripts that depend on this feature as little as possible.

djb's (as yet unreleased) version of redo doesn't implement checksums, so doing that would produce an incompatible implementation. With redo-stamp and redo-always being separate programs, you can simply choose not to use them if you want to keep maximum compatibility for the future.

Bonus: the redo-stamp algorithm is interchangeable. You don't have to stamp the target file or the source files or anything in particular; you can stamp any data you want, including the output of ls or the content of a web page. We could never have made things like that implicit anyway, so some form of explicit redo-stamp would always have been needed, and then we'd have to explain when to use the explicit one and when to use the implicit one.

Thus, we made the decision to only use checksums for targets that explicitly call redo-stamp (see previous question).

I suggest actually trying it out to see how it feels for you. For myself, before there was redo-stamp and redo-always, a few types of problems (in particular, depending on a list of which files exist and which don't) were really annoying, and I definitely felt it. Making redo-stamp and redo-always work the way they do made the pain disappear, so I stopped changing things.
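For example, the "which files exist" problem can be handled with a tiny .do file along these lines (a sketch; the sources.list target name is made up):

# sources.list.do
redo-always          # always re-run this script...
ls *.c >$3
redo-stamp <$3       # ...but only count it as changed if the list changed

Anything that does redo-ifchange sources.list then gets rebuilt when .c files are added or removed, but not otherwise.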

Why does 'redo target' always redo the target, even if it's unchanged?

When you run make target , make first checks the dependencies of target; if they've changed, then it rebuilds target. Otherwise it does nothing.

redo is a little different. It splits the build into two steps. redo target is the second step; if you run that at the command line, it just runs the .do file, whether it needs it or not.

If you really want to only rebuild targets that have changed, you can run redo-ifchange target instead.
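In other words:

redo foo.o            # always runs foo.o.do (or default.o.do), no questions asked
redo-ifchange foo.o   # only rebuilds foo.o if its dependencies have changed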

The reasons I like this arrangement come down to semantics:

"make target" implies that if target exists, you're done; conversely, "redo target" in English implies you really want to redo it, not just sit around.

If this weren't the rule, redo and redo-ifchange would mean the same thing, which seems rather confusing.

If redo could refuse to run a .do script, you would have no easy one-line way to force a particular target to be rebuilt. You'd have to remove the target and then redo it, which is more typing. On the other hand, nobody actually types "redo foo.o" if they honestly think foo.o doesn't need rebuilding.

For "contentless#source%3Dgooglier%2Ecom#https%3A%2F%2Fgooglier%2Ecom%2Fpage%2F%2F10000" targets like "test" or "clean", it would be extremely confusing if they refused to run just because they ran successfully last time.

In make, things get complicated because it doesn't differentiate between these two modes. Makefile rules with no dependencies run every time, unless the target exists, in which case they run never, unless the target is marked ".PHONY", in which case they run every time. But targets that do have dependencies follow totally different rules. And all this is needed because there's no way to tell make, "Listen, I just really want you to run the rules for this target right now ."

With redo, the semantics are really simple to explain. If your brain has already been fried by make, you might be surprised by it at first, but once you get used to it, it's really much nicer this way.

Can my .do files be written in a language other than sh?

Yes. If the first line of your .do file starts with the magic "#!/" sequence (eg. #!/usr/bin/python ), then redo will execute your script using that particular interpreter.

Note that this is slightly different from normal Unix execution semantics. redo never execs your script directly; it only looks for the "#!/" line. The main reason for this is so that your .do scripts don't have to be marked executable (chmod +x). Executable .do scripts would suggest to users that they should run them directly, and they shouldn't; .do scripts should always be executed inside an instance of redo, so that dependencies can be tracked correctly.

WARNING: If your .do script is written in Unix sh, we recommend not including the #!/bin/sh line. That's because there are many variations of /bin/sh, and not all of them are POSIX compliant. redo tries pretty hard to find a good default shell that will be "as POSIXy as possible," and if you override it using #!/bin/sh, you lose this benefit and you'll have to worry more about portability.

Can a single .do script generate multiple outputs?

FIXME: Yes, but this is a bit imperfect.

For example, compiling a .java file produces a bunch of .class files, but exactly which files? It depends on the content of the .java file. Ideally, we would like to allow our .do file to compile the .java file, note which .class files were generated, and tell redo about it for dependency checking.

However, this ends up being confusing; if myprog depends on foo.class, we know that foo.class was generated from bar.java only after bar.java has been compiled. But how do you know, the first time someone asks to build myprog, where foo.class is supposed to come from?

So we haven't thought about this enough yet.

Note that it's okay for a .do file to produce targets other than the advertised one; you just have to be careful. You could have a default.javac.do that runs 'javac $2.java', and then have your program depend on a bunch of .javac files. Just be careful not to depend on the .class files themselves, since redo won't know how to regenerate them.
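A minimal sketch of that arrangement (the .javac extension and the file layout are just illustrations):

# default.javac.do
redo-ifchange $2.java
javac $2.java        # drops the .class files next to the source as a side effect
echo compiled >$3    # the .javac target itself is just a marker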

This feature would also be useful, again, with ./configure: typically running the configure script produces several output files, and it would be nice to declare dependencies on all of them.

Recursive make is considered harmful. Isn't redo even more recursive?

You probably mean the 1997 paper Recursive Make Considered Harmful, by Peter Miller.

Yes, redo is recursive, in the sense that every target is built by its own .do file, and every .do file is a shell script being run recursively from other shell scripts, which might call back into redo . In fact, it's even more recursive than recursive make. There is no non-recursive way to use redo.

However, the reason recursive make is considered harmful is that each instance of make has no access to the dependency information seen by the other instances. Each one starts from its own Makefile, which only has a partial picture of what's going on; moreover, each one has to stat() a lot of the same files over again, leading to slowness. That's the thesis of the "considered harmful" paper.

It turns out that non-recursive make should also be considered harmful . The problem is Makefiles aren't very "hygienic" or "modular"; if you're not running make recursively, then your one copy of make has to know everything about everything in your entire project. Every variable in make is global, so every variable defined in any of your Makefiles is visible in all of your Makefiles. Every little private function or macro is visible everywhere. In a huge project made up of multiple projects from multiple vendors, that's just not okay. Plus, if all your Makefiles are tangled together, make has to read and parse the entire mess even to build the smallest, simplest target file, making it slow.

redo deftly dodges both the problems of recursive make and the problems of non-recursive make. First of all, dependency information is shared through a global persistent .redo database, which is accessed by all your redo instances at once. Dependencies created or checked by one instance can be immediately used by another instance. And there's locking to prevent two instances from building the same target at the same time. So you get all the "global dependency" knowledge of non-recursive make. And it's a binary file, so you can just grab the dependency information you need right now, rather than going through everything linearly.

Also, every .do script is entirely hygienic and traceable; redo discourages the use of global environment variables, suggesting that you put settings into files (which can have timestamps and dependencies) instead. So you also get all the hygiene and modularity advantages of recursive make.

By the way, you can trace any redo build process just by reading the .do scripts from top to bottom. Makefiles are actually a collection of "rules" whose order of execution is unclear; any rule might run at any time. In a non-recursive Makefile setup with a bunch of included files, you end up with lots and lots of rules that can all be executed in a random order; tracing becomes impossible. Recursive make tries to compensate for this by breaking the rules into subsections, but that ends up with all the "considered harmful" paper's complaints. redo runs your scripts from top to bottom in a nice tree, so it's traceable no matter how many layers you have.

How do I set environment variables that affect the entire build?

Directly using environment variables is a bad idea because you can't declare dependencies on them. Also, if there were a file that contained a set of variables that all your .do scripts need to run, then redo would have to read that file every time it starts (which is frequently, since it's recursive), and that could get slow.

Luckily, there's an alternative. Once you get used to it, this method is actually much better than environment variables, because it runs faster and it's easier to debug.

For example, djb often uses a computer-generated script called compile for compiling a .c file into a .o file. To generate the compile script, we create a file called compile.do :

redo-ifchange config.sh
. ./config.sh
echo "gcc -c -o \$3 \$2.c $CFLAGS" >$3
chmod a+x $3

Then, your default.o.do can simply look like this:

redo-ifchange compile $2.c
./compile $1 $2 $3

This is not only elegant, it's useful too. With make, you have to always output everything it does to stdout/stderr so you can try to figure out exactly what it was running; because this gets noisy, some people write Makefiles that deliberately hide the output and print something friendlier, like "Compiling hello.c". But then you have to guess what the compile command looked like.

With redo, the command is ./compile hello.c , which looks good when printed, but is also completely meaningful. Because it doesn't depend on any environment variables, you can just run ./compile hello.c to reproduce its output, or you can look inside the compile file to see exactly what command line is being used.

As a bonus, all the variable expansions only need to be done once: when generating the ./compile program. With make, it would be recalculating expansions every time it compiles a file. Because of the way make does expansions as macros instead of as normal variables, this can be slow.
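For completeness, the config.sh mentioned above is nothing magical; it could be a hand-written file (or itself a generated target) as simple as this sketch:

# config.sh
CFLAGS="-O2 -Wall -g"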

How do I write a default.o.do that works for both C and C++ source?

We can upgrade the compile.do from the previous answer to look something like this:

redo-ifchange config.sh
. ./config.sh
cat <<-EOF >$3
	[ -e "\$2.cc" ] && EXT=.cc || EXT=.c
	gcc -o "\$3" -c "\$2\$EXT" -Wall $CFLAGS
	EOF
chmod a+x "$3"

Isn't it expensive to have ./compile doing this kind of test for every single source file? Not really. Remember, if you have two implicit rules in make:

%.o: %.cc
	gcc ...

%.o: %.c
	gcc ...

Then it has to do all the same checks. Except make has even more implicit rules than that, so it ends up trying and discarding lots of possibilities before it actually builds your program. Is there a %.s? A %.cpp? A %.pas? It needs to look for all of them, and it gets slow. The more implicit rules you have, the slower make gets.

In redo, it's not implicit at all; you're specifying exactly how to decide whether it's a C program or a C++ program, and what to do in each case. Plus you can share the two gcc command lines between the two rules, which is hard in make. (In GNU make you can use macro functions, but the syntax for those is ugly.)

Can I just rebuild a part of the project?

Absolutely! Although redo runs "top down" in the sense of one .do file calling into all its dependencies, you can start at any point in the dependency tree that you want.

Unlike recursive make, no matter which subdir of your project you're in when you start, redo will be able to build all the dependencies in the right order.

Unlike non-recursive make, you don't have to jump through any strange hoops (like adding, in each directory, a fake Makefile that does make -C ${TOPDIR} back up to the main non-recursive Makefile). redo just uses filename.do to build filename , or uses default*.do if the specific filename.do doesn't exist.

When running any .do file, redo makes sure its current directory is set to the directory where the .do file is located. That means you can do this:

redo ../utils/foo.o

And it will work exactly like this:

cd ../utils
redo foo.o

In make, if you run

make ../utils/foo.o

it means to look in ./Makefile for a rule called ../utils/foo.o... and it probably doesn't have such a rule. On the other hand, if you run

cd ../utils
make foo.o

it means to look in ../utils/Makefile and look for a rule called foo.o. And that might do something totally different! redo combines these two forms and does the right thing in both cases.

Note: redo will always change to the directory containing the .do file before trying to build it. So if you do

redo ../utils/foo.o

the ../utils/default.o.do file will be run with its current directory set to ../utils. Thus, the .do file's runtime environment is always reliable.

On the other hand, if you had a file called ../default.o.do, but there was no ../utils/default.o.do, redo would select ../default.o.do as the best matching .do file. It would then run with its current directory set to .., and tell default.o.do to create an output file called "utils/foo.o" (that is, foo.o, with a relative path explaining how to find foo.o when you're starting from the directory containing the .do file).

That sounds a lot more complicated than it is. The results are actually very simple: if you have a toplevel default.o.do, then all your .o files will be compiled with $PWD set to the top level, and all the .o filenames passed as relative paths from $PWD. That way, if you use relative paths in -I and -L gcc options (for example), they will always be correct no matter where in the hierarchy your source files are.
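So a toplevel default.o.do along these lines (a sketch; the include/ directory is made up) works the same for source files anywhere in the tree:

redo-ifchange $2.c
gcc -Iinclude -o $3 -c $2.c    # -Iinclude is relative to the top level, wherever $2.c lives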

Can I put my .o files in a different directory from my .c files?

Yes. There's nothing in redo that assumes anything about the location of your source files. You can do all sorts of interesting tricks, limited only by your imagination. For example, imagine that you have a toplevel default.o.do that looks like this:

ARCH=${1#out/}
ARCH=${ARCH%%/*}
SRC=${2#out/$ARCH/}
redo-ifchange $SRC.c
$ARCH-gcc -o $3 -c $SRC.c

If you run redo out/i586-mingw32msvc/path/to/foo.o , then the above script would end up running

i586-mingw32msvc-gcc -o $3 -c path/to/foo.c

You could also choose to read the compiler name or options from out/$ARCH/config.sh, or config.$ARCH.sh, or use any other arrangement you want.

You could use the same technique to have separate build directories for out/debug, out/optimized, out/profiled, and so on.
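For instance, a debug/optimized split could reuse the same trick (a sketch; config.debug.sh and config.optimized.sh are made-up file names):

MODE=${1#out/}
MODE=${MODE%%/*}                 # "debug", "optimized", "profiled", ...
SRC=${2#out/$MODE/}
redo-ifchange $SRC.c config.$MODE.sh
. ./config.$MODE.sh
gcc -o $3 -c $SRC.c $CFLAGS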

Can my filenames have spaces in them?

Yes, unlike with make. For historical reasons, the Makefile syntax doesn't support filenames with spaces; spaces are used to separate one filename from the next, and there's no way to escape these spaces.

Since redo just uses sh, which has working escape characters and quoting, it doesn't have this problem.

Does redo care about the differences between tabs and spaces?

No.

What if my .c file depends on a generated .h file?

This problem arises as follows. foo.c includes config.h, and config.h is created by running ./configure. The second part is easy; just write a config.h.do that depends on the existence of configure (which is created by configure.do, which probably runs autoconf).

The first part, however, is not so easy. Normally, the headers that a C file depends on are detected as part of the compilation process. That works fine if the headers, themselves, don't need to be generated first. But if you do

redo foo.o

There's no way for redo to automatically know that compiling foo.c into foo.o depends on first generating config.h.

Since most .h files are not auto-generated, the easiest thing to do is probably to just add a line like this to your default.o.do:

redo-ifchange config.h
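Combined with the compile-script example from earlier, the whole default.o.do might end up looking like this sketch:

redo-ifchange config.h compile $2.c
./compile $1 $2 $3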

Sometimes a specific solution is much easier than a general one.

If you really want to solve the general case, djb has a solution for his own projects , which is a simple script that looks through C files to pull out #include lines. He assumes that #include <file.h> is a system header (thus not subject to being built) and #include "file.h" is in the current directory (thus easy to find). Unfortunately this isn't really a complete solution, but at least it would be able to redo-ifchange a required header before compiling a program that requires that header.
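If you want to try something in that spirit anyway, a rough (and definitely incomplete) sketch for a default.o.do might be:

# Pull out the locally-included headers and depend on them first.
sed -n 's/^#include "\(.*\)"/\1/p' $2.c | xargs redo-ifchange
redo-ifchange $2.c
gcc -o $3 -c $2.c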

Why doesn't redo by default print the commands as they are run?

make prints the commands it runs as it runs them. redo doesn't, although you can get this behaviour with redo -v or redo -x . (The difference between -v and -x is the same as it is in sh... because we simply forward those options onward to sh as it runs your .do script.)

The main reason we don't do this by default is that the commands get pretty long winded (a compiler command line might be multiple lines of repeated gibberish) and, on large projects, it's hard to actually see the progress of the overall build. Thus, make users often work hard to have make hide the command output in order to make the log "more readable."

The reduced output is a pain with make, however, because if there's ever a problem, you're left wondering exactly what commands were run at what time, and you often have to go editing the Makefile in order to figure it out.

With redo, it's much less of a problem. By default, redo produces output that looks like this:

$ redo t
redo  t/all
redo    t/hello
redo      t/LD
redo      t/hello.o
redo        t/CC
redo    t/yellow
redo      t/yellow.o
redo    t/bellow
redo  t/c
redo    t/c.c
redo      t/c.c.c
redo        t/c.c.c.b
redo          t/c.c.c.b.b
redo  t/d

The indentation indicates the level of recursion (deeper levels are dependencies of earlier levels). The repeated word "redo" down the left column looks strange, but it's there for a reason, and the reason is this: you can cut-and-paste a line from the build script and rerun it directly.

$ redo t/c
redo  t/c
redo    t/c.c
redo      t/c.c.c
redo        t/c.c.c.b
redo          t/c.c.c.b.b

So if you ever want to debug what happened at a particular step, you can choose to run only that step in verbose mode:

$ redo t/c.c.c.b.b -x
redo  t/c.c.c.b.b
* sh -ex default.b.do c.c.c.b .b c.c.c.b.b.redo2.tmp
+ redo-ifchange c.c.c.b.b.a
+ echo a-to-b
+ cat c.c.c.b.b.a
+ ./sleep 1.1
redo  t/c.c.c.b.b (done)

If you're using an autobuilder or something that logs build results for future examination, you should probably set it to always run redo with the -x option.

Is redo compatible with autoconf?

Yes. You don't have to do anything special, other than the above note about declaring dependencies on config.h, which is no worse than what you would have to do with make.

Is redo compatible with automake?

Hells no. You can thank me later. But see next question.

Is redo compatible with make?

Yes. If you have an existing Makefile (for example, in one of your subprojects), you can just call make from a .do script to build that subproject.

In a file called myproject.stamp.do:

redo-ifchange $(find myproject -name '*.[ch]')
make -C myproject all

So, to amend our answer to the previous question, you can use automake-generated Makefiles as part of your redo-based project.

Is redo -j compatible with make -j?

Yes! redo implements the same jobserver protocol as GNU make, which means that redo running under make -j, or make running under redo -j, will do the right thing. Thus, it's safe to mix-and-match redo and make in a recursive build system.

Just make sure you declare your dependencies correctly; redo won't know all the specific dependencies included in your Makefile, and make won't know your redo dependencies, of course.

One way of cheating is to just have your make.do script depend on all the source files of a subproject, like this:

make -C subproject all
find subproject -name '*.[ch]' | xargs redo-ifchange

Now if any of the .c or .h files in subproject are changed, your make.do will run, which calls into the subproject to rebuild anything that might be needed. Worst case, if the dependencies are too generous, we end up calling 'make all' more often than necessary. But 'make all' probably runs pretty fast when there's nothing to do, so that's not so bad.

Parallelism when more than one target depends on the same subdir

Recursive make is especially painful when it comes to parallelism. Take a look at this Makefile fragment:

all: fred bob

subproj:
	touch $@.new
	sleep 1
	mv $@.new $@

fred:
	$(MAKE) subproj
	touch $@

bob:
	$(MAKE) subproj
	touch $@

If we run it serially, it all looks good:

$ rm -f subproj fred bob; make --no-print-directory
make subproj
touch subproj.new
sleep 1
mv subproj.new subproj
touch fred
make subproj
make[1]: 'subproj' is up to date.
touch bob

But if we run it in parallel, life sucks:

$ rm -f subproj fred bob; make -j2 --no-print-directory
make subproj
make subproj
touch subproj.new
touch subproj.new
sleep 1
sleep 1
mv subproj.new subproj
mv subproj.new subproj
mv: cannot stat 'subproj.new': No such file or directory
touch fred
make[1]: *** [subproj] Error 1
make: *** [bob] Error 2

What happened? The sub-make that builds subproj ended up getting run twice at once, because both fred and bob need to build it.

If fred and bob had put in a dependency on subproj, then GNU make would be smart enough to build subproj only once; it can do that kind of ordering inside a single make process. So this example is a bit contrived. But imagine that fred and bob are two separate applications being built from the same toplevel Makefile, and they both depend on the library in subproj. You'd run into this problem if you use recursive make.

Of course, you might try to solve this by using nonrecursive make, but that's really hard. What if subproj is a library from some other vendor? Will you modify all their makefiles to fit into your nonrecursive makefile scheme? Probably not.

Another common workaround is to have the toplevel Makefile build subproj, then fred and bob. This works, but if you don't run the toplevel Makefile and want to go straight to work in the fred project, building fred won't actually build subproj first, and you'll get errors.

redo solves all these problems. It maintains global locks across all its instances, so you're guaranteed that no two instances will try to build subproj at the same time. And this works even if subproj is a make-based project; you just need a simple subproj.do that runs make subproj .
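That wrapper really can be as small as this sketch:

# subproj.do
make subproj    # redo's global lock guarantees only one copy of this runs at a time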

Dependency problems that only show up during parallel builds

One annoying thing about parallel builds is... they do more things in parallel. A very common problem in make is to have a Makefile rule that looks like this:

all: a b c

When you make all , it first builds a, then b, then c. What if c depends on b? Well, it doesn't matter when you're building in serial. But with -j3, you end up building a, b, and c at the same time, and the build for c crashes. You should have said:

all: a b c
c: b
b: a

and that would have fixed it. But you forgot, and you don't find out until you build with exactly the wrong -j option.

This mistake is easy to make in redo too. But it does have a tool that helps you debug it: the --shuffle option. --shuffle takes the dependencies of each target, and builds them in a random order. So you can get parallel-like results without actually building in parallel.
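For example, to run the whole build serially but with dependencies visited in a randomized order:

redo --shuffle all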

What about distributed builds?

FIXME: So far, nobody has tried redo in a distributed build environment. It surely works with distcc, since that's just a distributed compiler. But there are other systems that distribute more of the build process to other machines.

The most interesting method I've heard of was explained (in public, this is not proprietary information) by someone from Google. Apparently, the Android team uses a tool that mounts your entire local filesystem on a remote machine using FUSE and chroots into that directory. Then you replace the $SHELL variable in your copy of make with one that runs this tool. Because the remote filesystem is identical to yours, the build will certainly complete successfully. After the $SHELL program exits, the changed files are sent back to your local machine. Cleverly, the files on the remote server are cached based on their checksums, so files only need to be re-sent if they have changed since last time. This dramatically reduces bandwidth usage compared to, say, distcc (which mostly just re-sends the same preparsed headers over and over again).

At the time, he promised to open source this tool eventually. It would be pretty fun to play with it.

The problem:

This idea won't work as easily with redo as it did with make. With make, a separate copy of $SHELL is launched for each step of the build (and gets migrated to the remote machine), but make runs only on your local machine, so it can control parallelism and avoid building the same target from multiple machines, and so on. The key to the above distribution mechanism is it can send files to the remote machine at the beginning of the $SHELL, and send them back when the $SHELL exits, and know that nobody cares about them in the meantime. With redo, since the entire script runs inside a shell (and the shell might not exit until the very end of the build), we'd have to do the parallelism some other way.

I'm sure it's doable, however. One nice thing about redo is that the source code is so small compared to make: you can just rewrite it.

Can I convince a sub-redo or sub-make to not use parallel builds?

Yes. Put this in your .do script:

unset MAKEFLAGS

The child make